
decaylanguage's Introduction

scikit-hep: metapackage for Scikit-HEP


Project info

The Scikit-HEP project is a community-driven and community-oriented project with the aim of providing Particle Physics at large with an ecosystem for data analysis in Python embracing all major topics involved in a physicist's work. The project started in Autumn 2016 and its packages are actively developed and maintained.

It is not just about providing core and common tools for the community. It is also about improving the interoperability between HEP tools and the Big Data scientific ecosystem in Python, and about improving the discoverability of utility packages and projects.

As far as the overall structure of the project is concerned, it should be seen as a toolset rather than a toolkit.

Getting in touch

There are various ways to get in touch with project admins and/or users and developers.

scikit-hep package

scikit-hep is a metapackage for the Scikit-HEP project.

Installation

You can install this metapackage from PyPI with `pip`:

python -m pip install scikit-hep

or you can use Conda through conda-forge:

conda install -c conda-forge scikit-hep

All the normal best-practices for Python apply; you should be in a virtual environment, etc.

Package version and dependencies

Please check the setup.cfg and requirements.txt files for, respectively, the list of supported Python versions and the list of Scikit-HEP project packages and dependencies included.

For any installed version of scikit-hep, the following displays the actual versions of all installed Scikit-HEP packages and dependencies, for example:

>>> import skhep
>>> skhep.show_versions()

System:
    python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
executable: /srv/conda/envs/notebook/bin/python
   machine: Linux-5.15.0-72-generic-x86_64-with-glibc2.27

Python dependencies:
       pip: 23.1.2
     numpy: 1.24.3
     scipy: 1.10.1
    pandas: 2.0.2
matplotlib: 3.7.1

Scikit-HEP package version and dependencies:
        awkward: 2.2.2
boost_histogram: 1.3.2
  decaylanguage: 0.15.3
       hepstats: 0.6.1
       hepunits: 2.3.2
           hist: 2.6.3
     histoprint: 2.4.0
        iminuit: 2.21.3
         mplhep: 0.3.28
       particle: 0.22.0
          pylhe: 0.6.0
       resample: 1.6.0
          skhep: 2023.06.09
         uproot: 5.0.8
         vector: 1.0.0

Note on the versioning system:

This package uses Calendar Versioning (CalVer).


decaylanguage's Issues

Test notebooks in the CI

Just noticed that this is not done here, whereas we do it in Particle, where it is ideal for keeping the docs up to date.

tox --> nox now?

Hello! I have been using this package over the last few weeks for some preliminary studies of B decays at Belle (II) and have found it very useful! Thank you for making it :)

I also have a few suggestions, which I will try to communicate in the coming weeks. But before that: you mention tox in step 4 of CONTRIBUTING.rst, yet I see that tox.ini was deleted in PR #34.

Am I missing something here or did you move from tox to nox now and CONTRIBUTING.rst needs an update?

Binder Fails

Just discovered that when I launch Binder (through the icon on the main GitHub page), I get an error during the machine build.

Pip subprocess error:
ERROR: Invalid requirement: 'particle=0.6.*' (from line 2 of /home/jovyan/condaenv.lrqydel1.requirements.txt)

Hint: = is not a valid operator. Did you mean == ?
CondaEnvException: Pip failed
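As the hint says, the fix is presumably a one-character change in the pip section of the Binder environment file. A hypothetical snippet (the surrounding file contents are assumed, not taken from the repository):

```yaml
# environment.yml for Binder (illustrative): pip requirements use "==", not "="
dependencies:
  - pip
  - pip:
      - particle==0.6.*   # was: particle=0.6.*  ("=" is not a valid pip operator)
```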

Feedback on particle module

Non-python files in decaylanguage/particle/

I reckon it would be better to move those .txt and .csv files out of the Python submodule,
similarly to what we did in https://github.com/scikit-hep/scikit-hep/tree/master/skhep/data.
At some point one might even consider a data package in Scikit-HEP, BTW ... But this move is just fine for now, I think.

I would also rename mass_width.csv to explicitly include the year, as that's relevant for a user to know that the file provided by the PDG is a decade old. So rename to mass_width_2008.csv.
This way, if we get the courage to update the file ourselves, we can give a hint in the filename.

pdgID_to_latex.txt

It would make sense to add a header with basic information, even if trivial. So, headers for the 3 columns at least, maybe also the date at which the file was created/checked.

particle.py

AmpGen appears even in method names, and it is important to give some reference for it.
Also, the docstring saying "from an AmpGen style name" could be extended to hint at what that style is.

An extra '2' in one line of an LHCb .dec file?

0.000044342 Upsilon pi0 pi0 VVPIPI;2 #[Reconstructed PDG2011]

There seems to be an extra digit 2 in one line of the dec file. It still parses with the current version of decfile.lark, but I believe it should not be there. I was playing around with decfile.lark to incorporate decay lines like this:

Decay xxx
0.5 SOME_MODEL 1 2 3
      4 5 6; # some comments
Enddecay

That is how I found this extra 2.

Update importlib.resources usage

We should use the newer files() interface (Python 3.9+, or 3.8 with the importlib_resources backport); it's much simpler to use. It's also used very unevenly in the current package; hopefully we can use it consistently once we move to the new API.
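A minimal sketch of the files() API, using a stdlib package as a stand-in for the actual package data (decaylanguage's own data paths are not assumed here):

```python
from importlib.resources import files  # Python 3.9+; importlib_resources backport otherwise

# Old API: importlib.resources.read_text("package", "resource.txt")
# New API: a package behaves like a path object (Traversable), so you navigate it:
pkg = files("encodings")          # stand-in for a real data package
resource = pkg / "aliases.py"     # pathlib-style joining
print(resource.is_file())         # True
text = resource.read_text()
```

The main win is that one API covers reading text, bytes, and iterating over directory-like package data, instead of the several read_text/read_binary/contents functions of the old interface.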

_repr_svg_() is deprecated from graphviz v0.19

It looks like graphviz.Digraph's _repr_svg_() is replaced by _repr_mimebundle_(include, exclude) now: see changelog.

This seems to lead to AttributeError when using DecayChainViewer in notebooks.

I am still figuring out how to get the rendered image using _repr_mimebundle_() instead of just SVG string.
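Until DecayChainViewer is updated, one possible compatibility shim (a sketch; the include/exclude signature follows the changelog quoted above) recovers the SVG string across graphviz versions:

```python
def digraph_svg(digraph):
    """Return the SVG rendering of a graphviz Digraph across graphviz versions."""
    if hasattr(digraph, "_repr_mimebundle_"):        # graphviz >= 0.19
        bundle = digraph._repr_mimebundle_(include={"image/svg+xml"})
        return bundle["image/svg+xml"]
    return digraph._repr_svg_()                      # graphviz < 0.19
```

Calling `digraph_svg(viewer.graph)` would then behave the same before and after the deprecation, assuming the object exposes one of the two hooks.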

Add handy class constructors for DecFileParser

This is really handy if one wants to instantiate the parser with a user decay together with all generic decays defined in a master DECAY.DEC. In fact this is likely to be the most useful instantiation.
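A sketch of what such a constructor could look like (class and method names here are illustrative, not the real DecFileParser API):

```python
class DecFileParserSketch:
    """Toy stand-in for a parser that accepts several .dec files at once."""

    def __init__(self, *filenames):
        self.files = list(filenames)

    @classmethod
    def from_user_decay(cls, user_dec, master_dec="DECAY.DEC"):
        # Parse the user decay file on top of the generic master file, so all
        # generic decays and aliases defined in DECAY.DEC are available too.
        return cls(master_dec, user_dec)

parser = DecFileParserSketch.from_user_decay("Bu_JpsiK.dec")
```

The point is that the common case, "my signal decay plus everything generic", becomes a one-liner instead of requiring the user to know which master file to pass first.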

restore AmpGen2GooFit conversion

Hello, I would like to use the functionality to convert AmpGen options files to GooFit code.

However, I'm running into issues. For example, trying to do

python3 -m decaylanguage -G goofit  models/DtoKpipipi_v2.txt 

results in

/* Autogenerated file by AmpGen2GooFit
Generated on  2023-01-31 17:28:16.726714


Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/freiss/.local/lib/python3.10/site-packages/decaylanguage/__main__.py", line 29, in <module>
    main()
  File "/home/freiss/.local/lib/python3.10/site-packages/decaylanguage/__main__.py", line 25, in main
    DecayLanguageDecay.run()
  File "/home/freiss/.local/lib/python3.10/site-packages/plumbum/cli/application.py", line 634, in run
    retcode = inst.main(*tailargs)
  File "/home/freiss/.local/lib/python3.10/site-packages/decaylanguage/__main__.py", line 21, in main
    ampgen2goofit(filename)
  File "/home/freiss/.local/lib/python3.10/site-packages/decaylanguage/modeling/ampgen2goofit.py", line 40, in ampgen2goofit
    printer(colors.bold | seen_factor, ":", *my_lines[0].spinfactors)
  File "/home/freiss/.local/lib/python3.10/site-packages/decaylanguage/modeling/goofit.py", line 167, in spinfactors
    raise LineFailure(
decaylanguage.utils.errors.LineFailure: D0{K(1)(1270)-[GSpline.EFF]{KPi20[FOCUS.Kpi]{K-,pi+},pi-},pi+}: Spinfactors not currently included!: DtoA1P1_A1toN2P2_N2toP3P4

Trying to understand this, I believe this is caused by some particles in the decay chain having NonDefined or Unknown SpinType. From the particle package:

@property
def spin_type(self) -> SpinType:
    """
    Access the SpinType enum.
    Note that this is relevant for bosons only. SpinType.NonDefined is returned otherwise.
    """
    # Non-valid or non-standard PDG IDs
    if self.pdgid.j_spin is None:
        return SpinType.NonDefined

    # Fermions - 2J+1 is always an even number
    if self.pdgid.j_spin % 2 == 0:
        return SpinType.NonDefined

    J = int(self.J)
    if J in {0, 1, 2}:
        if self.P == Parity.p:
            return (SpinType.Scalar, SpinType.Axial, SpinType.Tensor)[J]
        if self.P == Parity.m:
            spin_types = (
                SpinType.PseudoScalar,
                SpinType.Vector,
                SpinType.PseudoTensor,
            )
            return spin_types[J]

    return SpinType.Unknown

One place where I noticed this, are the pipi and Kpi S-waves, which are described as quasi-particles in MintDalitzSpecialParticles.csv. In this case, the KPi20 quasi-particle is given the PDGID 998112. In my understanding, the last digit corresponds to 2*j+1, so its spin is 1/2, which I don't think is intended and results in an undefined SpinType.NonDefined. I believe this could be fixed by changing the last digits of the quasi-particles to 1 to represent spin 0. This requires changing another digit to have unique PDGIDs for all of them and I'm not sure which can be safely changed.
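The 2J+1 reading of the last digit can be checked directly; this tiny helper just extracts that digit from a PDG ID (standard PDG numbering, not decaylanguage code):

```python
# The last digit of a standard PDG ID encodes 2J+1 (spin 0 -> 1, spin 1/2 -> 2, spin 1 -> 3, ...):
def two_j_plus_one(pdgid: int) -> int:
    return abs(pdgid) % 10

print(two_j_plus_one(998112))  # 2, i.e. J = 1/2 (probably unintended for an S-wave)
print(two_j_plus_one(998111))  # 1, i.e. J = 0
print(two_j_plus_one(-211))    # 1, the pi- is indeed spin 0
```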

Another problematic particle is the K(1460), which has an undefined parity Parity.u and therefore ends up as SpinType.Unknown. Here I'm not sure what the cause is or how to fix it.

Any help would be greatly appreciated!

Dead link in readme

Let me quickly report that https://github.com/scikit-hep/decaylanguage/tree/master/decaylanguage/data from the readme is dead.

pydot required but not installed

I just installed decaylanguage via pip3 install decaylanguage and then tried to

from decaylanguage import DecFileParser

but this resulted in

Traceback (most recent call last):
  File "/home/ritter/belle2/software/decfiles/tests/parse_decfiles.py", line 2, in <module>
    from decaylanguage import DecFileParser
  File "/home/ritter/belle2/externals/v01-09-00/Linux_x86_64/common/lib/python3.6/site-packages/decaylanguage/__init__.py", line 14, in <module>
    from .decay import DecayChainViewer
  File "/home/ritter/belle2/externals/v01-09-00/Linux_x86_64/common/lib/python3.6/site-packages/decaylanguage/decay/__init__.py", line 4, in <module>
    from .viewer import DecayChainViewer
  File "/home/ritter/belle2/externals/v01-09-00/Linux_x86_64/common/lib/python3.6/site-packages/decaylanguage/decay/viewer.py", line 21, in <module>
    raise ImportError("You need pydot for this submodule. Please install pydot with for example 'pip install pydot'\n")
ImportError: You need pydot for this submodule. Please install pydot with for example 'pip install pydot'

It seems that pydot is required in practice but is not listed in the requirements. Can we make it a genuinely optional dependency, or else add it to the requirements?

Phasespace generation from decay chains

Making the package compatible with phasespace requires each Particle to know the distribution that parametrizes its mass when it is not fixed (e.g. Breit-Wigner, Gaussian).

Where should this best be added?

Comment on license

A last comment related to the transfer of the package to the Scikit-HEP organisation. As we do for other packages, would you consider changing, in the LICENSE file,

Copyright (c) 2018, Henry Fredrick Schreiner III

to something more open, such as

Copyright (c) 2018, the decaylanguage developers

? This is what we do in scikit-hep, root_numpy, etc.

Thanks for considering this.

AmpGen rename

Several bits of code have AmpGen in their names. They should be renamed, since one does not need to know about or use AmpGen in order to use DecayLanguage. Here are the current names:

Name                          Suggested replacement
ampgen2goofit.py
AmpGen2GooFit
Particle.from_AmpGen
AmplitudeChain.read_ampgen    .read_file

Support for decay descriptors

As part of the LHCb Ntuple Wizard project, we have implemented decay descriptor parsing (and rendering) using pyparsing. I would like to port this over to decaylanguage and make it easy to add different "grammars" used by different experiments/software packages.

Example grammars:

We can also think about doing matching and substitution (which we also have functionality for in the Ntuple Wizard) but there it's less clear to me how easy it is to support different conventions from different experiments/packages.
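As a flavor of what descriptor parsing involves, here is a deliberately tiny tokenizer for LHCb-style descriptors (illustrative only; the real grammar, e.g. the pyparsing one mentioned above, is far richer and handles arrows, CC markers, etc.):

```python
import re

# Split a descriptor into particle names, arrows, and brackets.
# The character class covers typical particle-name characters (letters, digits,
# charges, stars, primes, parentheses, and the "/" in J/psi).
def tokenize(descriptor: str):
    return re.findall(r"->|\[|\]|[A-Za-z0-9_*'()/+-]+", descriptor)

print(tokenize("B+ -> J/psi(1S) K+"))  # ['B+', '->', 'J/psi(1S)', 'K+']
```

Supporting several experiments' conventions would then mostly mean swapping the token definitions and arrow/marker syntax per grammar.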

How to extract branching fractions?

Dear Authors,

Thanks for this project. I am looking at a decay file that looks like:

https://gitlab.cern.ch/lhcb-datapkg/Gen/DecFiles/-/blob/master/dkfiles/Bu_JpsiX,ee=JpsiInAcc.dec

and in particular the section:

(screenshot omitted)

The numbers there are not the absolute fractions; for instance, the B^+ \to J/\psi K^+ process has a BF of 1e-3:

(screenshot omitted)

Is there a way to retrieve the actual branching fractions with this project? I could not see anything like that in the documentation, although I might have missed it. Ideally I would do something like:

import pdg_reader as pdgr

ifile=pdgr.load_decays(decay_path)
block=ifile.read_block('B+sig')
#Get PDG fractions instead of the ones in the file (relative?)
l_bf = block.get_bf(kind='absolute')

Cheers.

General API for decay chain representations

There should be a fairly simple (and documented) way to add a generic decay, so other libraries (like pyhepmc3) can easily use DecayLanguage to build a decay plot. Sharing a bit of code between parts (dec, decay) would be nice too if possible. Also, DecayLanguage's requirements could probably be trimmed (Pandas was mostly for Particle, I expect).

Either or type decay chains

Hi @eduardo-rodrigues @henryiii

I am not sure whether either/or-type chains can be written in decay language. Let me give an example: suppose a meson decays to tau+ tau-, where one tau should go to a muon and the other to three pions, and the charge-conjugate decays should follow the charges but never repeat (say, tau+ -> mu+ together with tau- -> mu- is not allowed). Is this possible here? If so, please help!

ModelAlias support

Hi,

it seems our latest decay file is no longer compatible with decaylanguage as we now have ModelAlias lines in our main decay file to define some aliases for models. For example

ModelAlias SLBKPOLE_Dtoetaplnu SLBKPOLE 1.0 0.281 1.0 2.010;

Supporting the ModelAlias itself is obviously easy enough but then also supporting it to be used as models in the decay files seems currently a bit non-trivial.

Functionality to create multiple decay descriptors from a dict when particles have multiple decay modes

At the moment it is possible to create a DecayChain object from the output of DecFileParser.build_decay_chains() only if every particle has only one decay mode defined (enforced by a line that does assert len(decay_chain_dict.keys()) == 1).

For the purpose of making a tool that searches DecFiles by decay, I need some functionality that extracts all combinations of decaying particles (i.e. multiple DecayChain objects when a particle has multiple decay modes)
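The combinatorics itself is straightforward; a sketch using a simplified {particle: [modes...]} structure as a stand-in for the build_decay_chains() output (the real dict is nested, so the actual implementation needs recursion on top of this idea):

```python
from itertools import product

# Expand a dict of per-particle decay-mode lists into one dict per combination.
def expand(modes_per_particle):
    keys = list(modes_per_particle)
    for combo in product(*(modes_per_particle[k] for k in keys)):
        yield dict(zip(keys, combo))

chains = list(expand({"D0": ["K- pi+", "K- pi+ pi0"], "tau+": ["mu+ nu nu"]}))
print(len(chains))  # 2: one chain per D0 mode
```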

Replace pydot with graphviz

The package pydot is no longer maintained, as obvious from the time (Dec. 2018) of the last commit and the increasing number of open issues. We should switch to graphviz in the near future.

Deal with variable definitions for decay models in .dec files

Some .dec decay files may define decay model parameters via variables. For the sake of example, the line

0.000030 J/psi omega SVV_HELAMP PKHminus PKphHminus PKHzero PKphHzero PKHplus PKphHplus;

in DECAY_LHCB.DEC uses variables defined via statements such as

Define PKHminus 0.612

This is presently not handled by the parser: it does not check whether model parameters that are not numbers have a matching Define statement, in which case it should replace the names with the actual numbers.
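The substitution step itself could look like this sketch (the defines dict values are illustrative, taken from the example above):

```python
# Map from Define'd names to their values, as would be collected while parsing.
defines = {"PKHminus": 0.612}

def resolve(params, defines):
    """Replace string parameters by their Define'd values; fail loudly otherwise."""
    out = []
    for p in params:
        if isinstance(p, str):
            if p not in defines:
                raise KeyError(f"No matching Define statement for parameter {p!r}")
            out.append(defines[p])
        else:
            out.append(p)
    return out

print(resolve(["PKHminus", 0.5], defines))  # [0.612, 0.5]
```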

Linux 2.7 Azure uses Pandas

On Azure, the Python 2.7 job tries to skip Pandas but then downloads it anyway and takes forever building it. It needs to either download a wheel somehow or be properly skipped.

Numbers in decay models are parsed as parameters

For decay models with numbers in the name (e.g. HQET2), the number is parsed as a parameter. For instance, this minimal code

from decaylanguage import DecFileParser
from decaylanguage.dec.dec import get_model_name

s = """Decay B0sig
0.0030  MyD_0*- mu+ nu_mu       PHOTOS  ISGW2;
0.0493  MyD*-   mu+ nu_mu       HQET2 1.207 0.908 1.406 0.853;
0.0493  MyD*-   mu+ nu_mu       HQET 1.207 0.908 1.406 0.853;
Enddecay
"""

dfp = DecFileParser.from_string(s)
dfp.parse()
dms = dfp._find_decay_modes('B0sig')
for dm in dms:
    dm_details = dfp._decay_mode_details(dm, True)

    print('Model name: ', get_model_name(dm))

    for detail in dm_details:
        print(detail)

outputs

Model name:  ISGW
0.003
['MyD_0*-', 'mu+', 'nu_mu']
PHOTOS ISGW
[2.0]
Model name:  HQET
0.0493
['MyD*-', 'mu+', 'nu_mu']
HQET
[2.0, 1.207, 0.908, 1.406, 0.853]
Model name:  HQET
0.0493
['MyD*-', 'mu+', 'nu_mu']
HQET
[1.207, 0.908, 1.406, 0.853]

This indicates that for both HQET2 and ISGW2 the trailing number is taken as a model parameter. I checked that this was the behavior with decaylanguage tag 0.11.2, and @yipengsun checked that it is still the case on the master branch.

Are we doing something wrong or is this a bug?
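This looks like a classic longest-match problem: HQET matches as a prefix of HQET2, and the stray 2 falls through to the parameter list. A sketch of the fix at the matching level (illustrative Python, not the actual lark grammar):

```python
KNOWN_MODELS = ["HQET", "HQET2", "HQET3", "ISGW", "ISGW2"]

def match_model(text):
    # Try longer names first, so "HQET2 1.207 ..." is never read as "HQET" + "2".
    for name in sorted(KNOWN_MODELS, key=len, reverse=True):
        if text.startswith(name) and (len(text) == len(name) or text[len(name)] == " "):
            return name, text[len(name):].split()
    raise ValueError(f"no known model at the start of {text!r}")

print(match_model("HQET2 1.207 0.908 1.406 0.853"))
# ('HQET2', ['1.207', '0.908', '1.406', '0.853'])
```

In a lark grammar the equivalent fix is usually to order or prioritize terminals so longer model names win over their prefixes.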

Enhance test suite to verify all EvtGen models

Ideally, all EvtGen models known to the package should be tested comprehensively: not only should parsing succeed, but dedicated tests should ensure that all model parameters, including model aliases and model options, are parsed correctly.

The presently known list of models is the following (items to be ticked off as dedicated model parsing correctness tests are added):

  • BaryonPCR
  • BC_SMN
  • BC_TMN
  • BC_VHAD
  • BC_VMN
  • BCL
  • BGL
  • BLLNUL
  • BNOCB0TO4PICP
  • BNOCBPTO3HPI0
  • BNOCBPTOKSHHH
  • BS_MUMUKK
  • BSTOGLLISRFSR
  • BSTOGLLMNT
  • BT02PI_CP_ISO
  • BTO3PI_CP
  • BTODDALITZCPK
  • BToDiBaryonlnupQCD
  • BTOSLLALI
  • BTOSLLBALL
  • BTOSLLMS
  • BTOSLLMSEXT
  • BTOVLNUBALL
  • BTOXSGAMMA
  • BTOXELNU
  • BTOXSLL
  • BQTOLLLLHYPERCP
  • BQTOLLLL
  • CB3PI-MPP
  • CB3PI-P00
  • D_DALITZ
  • D_hhhh
  • D0GAMMADALITZ
  • D0MIXDALITZ
  • DToKpienu
  • ETAPRIME_DALITZ
  • ETA_DALITZ
  • ETA_FULLDALITZ
  • ETA_LLPIPI
  • ETA_PI0DALITZ
  • FLATQ2
  • FLATSQDALITZ
  • FOURBODYPHSP
  • GENERIC_DALITZ
  • GOITY_ROBERTS
  • HELAMP
  • HQET3
  • HQET2
  • HQET
  • ISGW2
  • ISGW
  • KS_PI0MUMU
  • Lb2Baryonlnu
  • Lb2plnuLCSR
  • Lb2plnuLQCD
  • LbAmpGen
  • LLSW
  • LNUGAMMA
  • MELIKHOV
  • OMEGA_DALITZ
  • PARTWAVE
  • PHI_DALITZ
  • PHSPDECAYTIMECUT
  • PHSPFLATLIFETIME
  • PHSP
  • PI0_DALITZ
  • PROPSLPOLE
  • PTO3P
  • PVV_CPLH
  • PYCONT
  • PYTHIA
  • RareLbToLll
  • SLBKPOLE
  • SLL
  • SLN
  • SLPOLE
  • SSD_CP
  • SSD_DirectCP
  • SSS_CP_PNG
  • SSS_CP
  • SSS_CPT
  • STS_CP
  • STS
  • SVP_CP
  • SVP_HELAMP
  • SVP
  • SVS_CP_ISO
  • SVS_CPLH
  • SVS_CP
  • SVS_NONCPEIGEN
  • SVS
  • SVV_CPLH
  • SVV_CP
  • SVV_HELAMP
  • SVV_NONCPEIGEN
  • SVVHELCPMIX
  • TAUHADNU
  • TAULNUNU
  • TAUOLA
  • TAUSCALARNU
  • TAUVECTORNU
  • THREEBODYPHSP
  • TSS
  • TVP
  • TVS_PWAVE
  • VLL
  • VSP_PWAVE
  • VSS_BMIX
  • VSS_MIX
  • VSS
  • VTOSLL
  • VUB
  • VVPIPI
  • VVP
  • VVS_PWAVE
  • XLL
  • YMSTOYNSPIPICLEO
  • YMSTOYNSPIPICLEOBOOST

JetSetPar not recognized?

There are lines starting with JetSetPar in my decfile which caused the package to throw an UnexpectedToken exception. Such lines are not relevant to the visualization, but they should be handled.
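Until the grammar knows about JetSetPar, one workaround sketch is to strip such lines before handing the text to the parser (e.g. via a from_string-style entry point; the sample JetSetPar line below is illustrative):

```python
# Drop JetSetPar lines, which only configure the Pythia/JetSet back end and
# carry no information relevant to decay-chain visualization.
def strip_jetsetpar(dec_text: str) -> str:
    return "\n".join(
        line for line in dec_text.splitlines()
        if not line.lstrip().startswith("JetSetPar")
    )

cleaned = strip_jetsetpar("JetSetPar MSTJ(26)=0;\nDecay B0sig\nEnddecay")
print(cleaned)
```

The proper fix remains teaching decfile.lark to accept (and ignore) the statement.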
