sparkx's People

Contributors

hendrik1704, ngoetz, nilssass

sparkx's Issues

Choose uniform naming convention for file names

We should rename the Python files containing the classes, as they differ in their naming scheme. For example, to import the Particle class the user needs to import

from ParticleClass import Particle

while for Histogram it is

from Histogram import Histogram

Remember to change this also in the documentation examples!

OSCAR print to file

Add a print routine to the Oscar class that writes the filtered data set to a file.
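
A minimal sketch of what such a routine could look like, assuming a nested [event][particle] structure and a simple space-separated output format (function and attribute names are illustrative, not the actual sparkx API):

def print_particle_lists_to_file(particle_lists, output_file):
    # particle_lists: [event][particle], e.g. the filtered Oscar data
    with open(output_file, "w") as f:
        for event_index, particles in enumerate(particle_lists):
            # event header with the number of particles left after filtering
            f.write(f"# event {event_index} out {len(particles)}\n")
            for particle in particles:
                # assumes each particle can be iterated to get its output columns
                f.write(" ".join(str(value) for value in particle) + "\n")
            f.write(f"# event {event_index} end\n")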

Oscar time steps

Currently, Oscar cannot handle the time evolution output, where different time steps are separated by an additional comment line. The data should be saved as one np.array per time step, i.e. as

[event][time_step] with the types [python list][np.array]
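
A small illustration of the proposed layout (with dummy numbers), to make the indexing explicit:

import numpy as np

# proposed layout: outer Python list over events, one np.array per time step
time_step_0 = np.array([[0.1, 0.0, 0.0, 0.0],    # dummy particle rows
                        [0.1, 1.0, 0.5, 0.0]])
time_step_1 = np.array([[0.2, 0.1, 0.0, 0.0]])
event_0 = [time_step_0, time_step_1]             # python list of time steps
particle_data = [event_0]                        # python list of events

# particle_data[event][time_step] is an np.array
print(particle_data[0][1].shape)                 # (1, 4)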

Oscar Photon format

At the moment, there is a workaround implemented for the Oscar Extended Photons format, which has a different convention for the comment lines.
This should be made consistent with the other Oscar formats, probably in SMASH and then changed accordingly in this code.

Inconsistent PDG list

The SMASH and SPARKX PDG lists are inconsistent. In SMASH, PDG codes like "9922214" do appear which the SPARKX PDG list does not know. Therefore, such particles cannot be removed from OSCAR objects.

Particle numbering after filter

Currently, if particles are filtered and afterwards printed to a file, they are not renumbered as (1, 2, 3, 4, 5, 6, ...) but leave 'holes' (1, 2, 4, 6, ...). We should think about distinguishing between the original number of a particle (like an id) and a renumbered scheme for the output files; see the sketch below.
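
A minimal sketch of that distinction (attribute and function names are hypothetical):

def renumber_for_output(filtered_particles):
    # keep the original particle number untouched (it acts like an id) and
    # only assign a consecutive number when writing the output file
    output_lines = []
    for output_number, particle in enumerate(filtered_particles, start=1):
        output_lines.append((output_number, particle))   # (1, 2, 3, ...) without holes
    return output_lines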

Implement EventCharacteristics class

This class is supposed to get a list of Particle objects for one event, e.g., from the Oscar class.
From the provided Particle objects in the list, it can compute $\varepsilon_n$ (on the level of particles).

A further enhancement would also include the computation of the energy/baryon/charge/strangeness densities on a lattice and a function to compute the eccentricities on the level of the densities.
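
As a starting point, here is a minimal sketch of the particle-level eccentricity, using the common definition $\varepsilon_n = |\langle r^n e^{in\phi}\rangle| / \langle r^n\rangle$ with transverse positions measured relative to the (weighted) center of the event; this is a sketch, not the final class interface:

import numpy as np

def eccentricity(x, y, n, weights=None):
    # x, y: transverse positions of the particles in one event
    # weights: optional weights, e.g. the particle energies
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    # shift to the weighted center of the event
    x = x - np.average(x, weights=w)
    y = y - np.average(y, weights=w)
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    numerator = np.average(r**n * np.exp(1j * n * phi), weights=w)
    denominator = np.average(r**n, weights=w)
    return np.abs(numerator) / denominator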

@NGoetz I will start to implement a dummy of the class and its constructor in a new branch, where you can put the computation of the eccentricities on the particle level. For the density part, I need to implement a lattice class first. When it's pushed, I will write you a message.

Add a CHANGELOG.md file

We should add a CHANGELOG.md file to keep ourselves posted about the changes and additions to the codebase.

Make check in ParticleClass better

Just as a reminder for us: make the check for the OSCAR and OSCAR Extended files better. At the moment it will fail if the number of columns differs from, e.g., 21.

Missing member baryon_number

Check in the quantity-setting methods whether the length of the accepted input data needs to be corrected due to the newly added baryon_number member.

Already done for set_quantities_OSCAR2013Extended(self,line_from_file)

Analysis class

What do we want?

  • Flow analysis:

  • reaction plane flow (integrated)

  • reaction plane flow (differential)

  • event plane flow (integrated)

  • event plane flow (differential)

  • scalar product flow (integrated)

  • scalar product flow (differential)

  • Q-cumulants flow (integrated)

  • Q-cumulants flow (differential)

  • Cumulant flow (integrated)

  • Cumulant flow (differential)

  • LYZ flow (integrated)

  • LYZ flow (differential)

  • PCA flow (integrated)

  • PCA flow (differential)

  • SC(n,m)

  • NSC(n,m)

  • Eccentricities as a function of time (needs time-dependent OSCAR read-in)

  • Multiplicities (centrality classes)

Create unit tests

This is a kind reminder that we should start preparing unit tests for SPARKX.

This is a list of all needed tests:

  • CentralityClasses
  • EventCharacteristics
  • Filter
  • Histogram
  • JetAnalysis
  • Jetscape
  • Lattice3D
  • Oscar
  • Particle
  • Utilities
  • Flow

Utilities in documentation

There is something wrong with the Utilities section in the documentation. When I compile it on my local machine, it generates an error and the section does not show up in the HTML file.

Implement OSCAR full time evolution readin

Get max_time from the first event (last particle). But if the first event is empty, check whether all events are empty: if yes, set max_time = None; otherwise take the last time from the first non-empty event. A sketch of this logic is given below.
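
A sketch of the described logic (the time attribute name is an assumption):

def determine_max_time(events):
    # events: list of events, each a list of Particle objects ordered in time
    for event in events:
        if event:                       # first non-empty event
            return event[-1].t          # time of the last particle
    return None                         # all events are empty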

Improvements for Jetscape class

  • Filter for hadron status
  • Event cuts, e.g., $E_\mathrm{charged}$. Probably just an event energy filter, as one can perform the charged particles cut before.

Setter functions not working

While creating some tests, I noticed that the example in the Particle class documentation is not working. The setters do not set the members of particle.

Wrong physics when the particle is not known

I just came across some functions in the Particle class and I'm not sure if we handle the new cases with not self.pdg_valid_ correctly. See this function for example:

def is_meson(self):
    """
    Is the particle a meson?

    Returns
    -------
    bool
        True, False
    """
    if not self.pdg_valid_:
        return False
    return PDGID(self.pdg).is_meson

Suppose you have a particle that is invalid for the PDGID package, but this unknown particle is actually a highly excited meson. In this case, the function would give you False even though the particle is a meson. Shouldn't we return None in this case, when no statement about the physics of this particle is possible?
With the given implementation the physics output might be wrong.
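
A sketch of the proposed behaviour for is_meson (and analogously for the other is_* helpers):

def is_meson(self):
    """
    Is the particle a meson?

    Returns
    -------
    bool or None
        True/False if the PDG id is known, None if no statement is possible.
    """
    if not self.pdg_valid_:
        return None
    return PDGID(self.pdg).is_meson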

Enable the import with *

It might be a nice thing to have the possibility to import the package parts with:

from sparkx import *
from sparkx import Jetscape

instead of always using from sparkx.Jetscape import Jetscape

I have a first implementation of this in the branch roch/import_package which still has to be tested.
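
For reference, one way to enable this is an __init__.py that re-exports the classes and defines __all__ (a sketch with assumed module names, not necessarily what is on the branch):

# sparkx/__init__.py
from sparkx.Oscar import Oscar
from sparkx.Jetscape import Jetscape
from sparkx.Histogram import Histogram

# controls what 'from sparkx import *' puts into the user's namespace
__all__ = ["Oscar", "Jetscape", "Histogram"]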

Speed up scripts

Search for places in the code that can be handled better in terms of computational time (we can use AI tools here to get ideas).

Potential issues we found:

Oscar:

  • We go through the complete file once to determine num_output_per_event, num_events, etc. without reading the actual data, and then go through it a second time for the read-in.

Useful:

  • We could use seek to read single lines of a file, e.g. for num_output_per_event. This seems to be much faster (roughly a factor of 20); see the sketch below.
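
A sketch of the idea: remember the byte offsets of the comment lines during a single pass and jump back with seek instead of re-reading the whole file (the file layout is assumed, this is not sparkx code):

def index_comment_lines(filename):
    # one pass: store the byte offset of every comment line
    offsets = []
    with open(filename, "rb") as f:
        position = f.tell()
        for line in iter(f.readline, b""):
            if line.startswith(b"#"):
                offsets.append(position)
            position = f.tell()
    return offsets

def read_line_at(filename, offset):
    with open(filename, "rb") as f:
        f.seek(offset)                  # jump directly to the stored position
        return f.readline().decode()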

Density smearing

Add different options for density smearing in event characterisation.

Class for jet analysis

There is a need for a class which can perform a jet analysis using the fastjet jet finder package.
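
A minimal sketch of the clustering step, assuming the classic (PseudoJet-based) interface of the fastjet Python package; the four-momenta here are dummies and would come from the Particle objects:

import fastjet

jet_definition = fastjet.JetDefinition(fastjet.antikt_algorithm, 0.4)   # anti-kt, R = 0.4
pseudojets = [fastjet.PseudoJet(px, py, pz, e)
              for (px, py, pz, e) in [(10.0, 0.5, 1.0, 10.2), (9.0, -0.3, 0.8, 9.1)]]

cluster = fastjet.ClusterSequence(pseudojets, jet_definition)
jets = cluster.inclusive_jets(5.0)      # jets with transverse momentum above 5 GeV
for jet in jets:
    print(jet.pt(), jet.eta(), jet.phi())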

Zenodo id

We should register this to obtain a DOI.

Adjust github action for documentation

We need to adjust the current GitHub action that builds and publishes the documentation with every new merge on main so that it only builds and publishes the documentation with every tag. Otherwise the documentation will run ahead of the released version.

Prepare for binary output

The binary output is commonly used, especially for high-statistics runs. It should be possible to use it out of the box, too.

Todo Friday before release

  • #97 List types in Oscar / Jetscape
  • #98 SigmaGen in Jetscape
  • #18 Flow test class: GenerateFlow -> on branch roch/GenerateFlow
  • #87 Requirements (Niklas)
  • Finalize flow classes + documentation (ChatGPT)
  • #96 Time step read in (after #97) + #14
  • Switch to the SMASH collaboration (with Alessandro); what happens with our GitHub actions?
  • Tag version 1.0.0
  • Prepare GitHub actions for automatic upload to PyPI (needs a PyPI account)

X-SCAPE output is different

I've just learned that the X-SCAPE hadron output is slightly different from the JETSCAPE output.
The last line has two additional parameters printed. One is the weight, which causes the Jetscape class to throw an error in the current sparkx version.

Installation fails

Using an empty conda environment, the installation fails with the following message:

pip install -e .
Obtaining file:///home/niklas/Desktop/sparkx
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Collecting abc-property==1.0 (from sparkx==1.0.2)
  Using cached abc_property-1.0-py3-none-any.whl (3.4 kB)
Collecting fastjet==3.4.1.2 (from sparkx==1.0.2)
  Downloading fastjet-3.4.1.2.tar.gz (4.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3/4.3 MB 2.9 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting matplotlib==3.7.1 (from sparkx==1.0.2)
  Downloading matplotlib-3.7.1.tar.gz (38.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.0/38.0 MB 13.7 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting numpy==1.23.5 (from sparkx==1.0.2)
  Downloading numpy-1.23.5.tar.gz (10.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.7/10.7 MB 20.4 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [33 lines of output]
      Traceback (most recent call last):
        File "/home/niklas/programs/anaconda3/envs/sparkx_test/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/niklas/programs/anaconda3/envs/sparkx_test/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/niklas/programs/anaconda3/envs/sparkx_test/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 112, in get_requires_for_build_wheel
          backend = _build_backend()
                    ^^^^^^^^^^^^^^^^
        File "/home/niklas/programs/anaconda3/envs/sparkx_test/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
          obj = import_module(mod_path)
                ^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/niklas/programs/anaconda3/envs/sparkx_test/lib/python3.12/importlib/__init__.py", line 90, in import_module
          return _bootstrap._gcd_import(name[level:], package, level)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1304, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 929, in _load_unlocked
        File "<frozen importlib._bootstrap_external>", line 994, in exec_module
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "/tmp/pip-build-env-7cmkytlq/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 16, in <module>
          import setuptools.version
        File "/tmp/pip-build-env-7cmkytlq/overlay/lib/python3.12/site-packages/setuptools/version.py", line 1, in <module>
          import pkg_resources
        File "/tmp/pip-build-env-7cmkytlq/overlay/lib/python3.12/site-packages/pkg_resources/__init__.py", line 2172, in <module>
          register_finder(pkgutil.ImpImporter, find_on_path)
                          ^^^^^^^^^^^^^^^^^^^
      AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Note that this also happens when installing from PyPI. Fixing this is essential for reaching the next milestone.

Improving error handling

An external user might not know about common problems when working with custom OSCAR files.
We should raise helpful error messages, especially if the particle number given in the comment line of the OSCAR file does not equal the actual number of particles. The user should be told that the file does not follow the OSCAR convention.
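
A sketch of the kind of check we could add while reading an event block:

def check_event_block(num_declared, particles_in_event):
    # num_declared: particle number parsed from the '# event ... out N' comment line
    num_read = len(particles_in_event)
    if num_read != num_declared:
        raise ValueError(
            f"Event block declares {num_declared} particles but {num_read} were found; "
            "the file does not follow the OSCAR convention."
        )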

Testing of the anisotropic flow implementation

We need a test case to check the correctness of the anisotropic flow implementations.
This class needs to create an angular distribution in the transverse plane with given flow coefficients, together with a momentum distribution for the particles.
There should be a function that writes the output to a dummy OSCAR/JETSCAPE file, which can then be read in by the corresponding classes.
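
A sketch of sampling the azimuthal angles with prescribed flow coefficients via accept-reject from $dN/d\phi \propto 1 + 2\sum_n v_n \cos(n(\phi - \Psi_n))$; this is a starting point, not the final test class:

import numpy as np

def sample_azimuthal_angles(n_particles, vn, psi_n, seed=None):
    # vn: flow coefficients (v_1, v_2, ...), psi_n: corresponding event-plane angles
    rng = np.random.default_rng(seed)
    vn = np.asarray(vn, dtype=float)
    psi_n = np.asarray(psi_n, dtype=float)
    harmonics = np.arange(1, len(vn) + 1)
    f_max = 1.0 + 2.0 * np.sum(np.abs(vn))          # upper bound for accept-reject
    angles = []
    while len(angles) < n_particles:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        f = 1.0 + 2.0 * np.sum(vn * np.cos(harmonics * (phi - psi_n)))
        if rng.uniform(0.0, f_max) < f:
            angles.append(phi)
    return np.array(angles)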
