pylhc / omc3
Python 3 codes for beam optics measurements and corrections in circular particle accelerators
Home Page: https://pylhc.github.io/omc3/
License: MIT License
For example, if they are int (which is the pandas default).
Related to #11? (I don't know what #11 was referencing... please write more detailed issue descriptions.)
Code example:
import tfs
df = tfs.TfsDataFrame([[1,2,3]])
tfs.write("test.tfs", df)
Crashes with:
Traceback (most recent call last):
File "/media/jdilly/Storage/Repositories/omc3/omc3/udillyties/tests/tfs_writer_test.py", line 3, in <module>
tfs.write("test.tfs", df)
File "/media/jdilly/Storage/Repositories/omc3/omc3/tfs/handler.py", line 173, in write_tfs
colnames_str = _get_colnames_str(data_frame.columns, colwidth)
File "/media/jdilly/Storage/Repositories/omc3/omc3/tfs/handler.py", line 202, in _get_colnames_str
return "* " + fmt.format(*colnames)
ValueError: Unknown format code 's' for object of type 'int'
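Until the writer handles non-string column names, a minimal workaround sketch (plain pandas used for illustration; this assumes the writer only needs string column names):

```python
import pandas as pd

# Workaround sketch: the writer formats column names with the 's' format
# code, so the default integer column names from e.g. DataFrame([[1, 2, 3]])
# crash it. Casting the columns to str before writing avoids the ValueError.
df = pd.DataFrame([[1, 2, 3]])
df.columns = df.columns.map(str)  # RangeIndex 0, 1, 2 -> "0", "1", "2"
```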
We need to update anyway for PTC tracking with AC-Dipole, so let's go for the newest version
Discuss possible options for naming the tunes:
Q1/Q2 vs Qx/Qy
NatQ/DrivenQ vs NatQ/Q
Options for naming Phase / Phase advance / Total phase advance:
PHASE/MU
Automatic documentation is in place... let's use it.
Add a readme to the repository
The name of the module parser in omc3 conflicts with another parser in my anaconda distribution.
Similar problem: https://stackoverflow.com/questions/50795252/python-3-import-conflicts-with-internal-parser
In my opinion, all output files should be fixed as classes in the code, which define the output name and columns as well as the column types (maybe even the headers).
Every reference to a column should then be handled by referring to the column of the respective class as a constant.
This would also limit the problem with the tfs-writer that occurs sometimes when you don't define the column data-type and it tries to guess.
Should be combined somehow with tfs-collection.
Create a task manager similar to DepInjector in GetLLM.py from Beta-Beat.src
Transfer optics measurements and accelerator classes from Beta-Beat.src:
This is a large task, as the codes are very entangled:
To be added to _validate() function in tfs-handler:
maybe len(set(index)) == len(index)
Related pylhc/Beta-Beat.src#142
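The proposed check could be sketched like this (hypothetical helper name; not the actual _validate() code):

```python
import pandas as pd

def validate_unique_index(df: pd.DataFrame) -> None:
    # Sketch of the proposed _validate() addition: reject duplicated indices.
    # len(set(index)) == len(index) holds only if all labels are unique.
    index = df.index
    if len(set(index)) != len(index):
        raise ValueError("TfsDataFrame index contains duplicated entries.")
```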
add a handler for loading of PTC tracking data
Implement chromatic analysis based on 3D excitation
Chromaticity measurement
Chromatic beta-beating measurement
Sphinx doesn't find generic_parser
Split into sdds and tbt package
write more unit tests
optimise for python 3
Pythonise the code as it was more or less copied from Java
Coupling codes have to be rewritten in python3. Related to #13
Currently the rescale factor is the same in both kick_ files; adding the plane would make merging files in some later analysis easier and avoid name clashes.
Change the reference turn for phases of spectral lines to the middle of the data (n_turns / 2); this will remove the phase bias coming from a frequency error.
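The re-referencing itself is a phase rotation; a sketch under the stated idea (function name and units are assumptions, not the harpy implementation):

```python
import numpy as np

def shift_phase_reference(phases, freqs, n_turns):
    """Sketch: re-reference spectral-line phases from the first turn to the
    middle of the data window. A frequency error df biases the phase by
    df * n_ref turns, so choosing n_ref = n_turns / 2 centres that bias.
    Phases and frequencies are in tune units (fractions of 2*pi per turn)."""
    return np.mod(np.asarray(phases) - np.asarray(freqs) * (n_turns / 2), 1.0)
```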
Add some narrower windowing functions
Update the accuracy test
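For comparing candidate windows, a small sketch (illustrative only, not omc3 code) using the equivalent noise bandwidth as a proxy for effective main-lobe width:

```python
import numpy as np

# Sketch: candidate window functions for the frequency analysis. Narrower
# effective bandwidth sharpens tune peaks at the cost of higher side lobes.
n = 64
windows = {
    "rectangular": np.ones(n),
    "hann": np.hanning(n),
    "blackman": np.blackman(n),
}
# Equivalent noise bandwidth in bins: N * sum(w^2) / sum(w)^2.
# Ordering: rectangular (1.0) < hann (~1.5) < blackman (~1.73).
enbw = {name: n * np.sum(w**2) / np.sum(w)**2 for name, w in windows.items()}
```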
See:
https://github.com/Syntaf/travis-sphinx
also the travis.yml
also the /doc folder
also the settings (to define branch for github.io)
also put travis-sphinx in the requirements, so that travis installs it automatically
Todo:
Imports should be still the same!! Awesome!
When using --is_free_kick = True, harpy breaks when using kicker.phase_correction.
Issue: the pandas dataframe does not contain the PK2PK column.
Fails on Travis
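For reference, the missing column boils down to a per-BPM peak-to-peak amplitude; a minimal sketch (assuming rows are BPMs and columns are turns, as elsewhere in harpy):

```python
import pandas as pd

def add_pk2pk(bpm_data: pd.DataFrame) -> pd.Series:
    # Sketch: peak-to-peak amplitude per BPM over all turns.
    # Attaching this as a PK2PK column would satisfy the downstream lookup.
    return bpm_data.max(axis=1) - bpm_data.min(axis=1)
```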
The idea is to allow loading non-LHC/non-SDDS data without previous conversion to SDDS.
Split up tbt/handler.py into data_class.py, containing the class TbtData, and various accel_handler.py modules, which all implement the read_tbt function to read the specific files (possibly also write_tbt etc.).
In hole_in_one.py / _run_harpy, then select the appropriate module to import and read the tbt data.
add a column with kick times (LHC sdds time format) to kick files
potentially also put both planes in one file again
... and make it work with python 3
Extension of entrypoint arguments depends on the type of input, i.e. the way to extend options differs depending on whether a config file or command-line arguments were used.
Frequency analysis is not yet in
Make Entrypoint python3 compatible
Vim creates a tags file for syntax highlighting; add this to .gitignore (optional).
String conversion of a list in f-strings crashes... not necessary... to be removed.
The frequencies output to linx/y files should be from [0, 1) not from [0, 0.5].
This is important for phase alignment of spectral lines with frequencies from [0.5, 1].
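The remapping itself is a one-liner; a sketch (hypothetical helper, not the harpy code):

```python
import numpy as np

def to_unit_interval(freqs):
    # Sketch: map frequencies (e.g. negative lines from [-0.5, 0)) into
    # [0, 1), so a line above 0.5 keeps a phase consistent with its sign
    # instead of being folded into [0, 0.5].
    return np.mod(np.asarray(freqs), 1.0)
```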
Behaviour is unexpected when parsing a config file from "rest-args", as these brackets are not added then. Solution: remove them completely.
With the possibility of personal gitignores we can clean our gitignore of a lot of messy entries,
especially coding-environment-related ones like:
.vscode
tags (from vim)
*.plist
adding IOTA accelerator class
IOTA model
hdf5 to sdds converter
Currently, we have a different unit for the orbit in the measurement (mm) and in the model (m) from MAD-X. This should be unified to metres (as agreed with Rogelio and Josch).
The affected codes are:
harpy and some optics_measurements modules (dpp, dispersion, kick and rdt).
The change should also include corresponding changes of input options.
Usage is similar to command.run, but way easier to read.
With bumpversion like tfs and sdds: https://twiki.cern.ch/twiki/bin/view/BEABP/Git#Workflow_Release_of_Python_packa
Todo:
e.g. beta_from_phase.py, line 159:
beta_df = beta_df.loc[beta_df["NCOMB"] > 0]
see # TODO
Crash when calling harpy (run_per_bunch) with default params from harpy_params():
Lines 332 to 417 in 398bed5
with error message:
TypeError: '>' not supported between instances of 'float' and 'dict'
raised in /media/awegsche/HDD/omc3/omc3/harpy/clean.py, line 103, in _detect_bpms_with_spikes:

def _detect_bpms_with_spikes(bpm_data, max_peak_cut):
    """ Detects BPMs with spikes > max_peak_cut """
    too_high = bpm_data[bpm_data.max(axis=1) > max_peak_cut].index
    too_low = bpm_data[bpm_data.min(axis=1) < -max_peak_cut].index
    bpm_spikes = too_high.union(too_low)
    if bpm_spikes.size:
        LOGGER.debug(f"Spikes > {max_peak_cut} detected. BPMs removed: {bpm_spikes.size}")
    return bpm_spikes
Full error log:
Traceback (most recent call last):
  File "do_petra_analysis.py", line 47, in <module>
    lin_files.append(handler.run_per_bunch(TbtData([data], None, [0], 13000), hp))
  File "/media/awegsche/HDD/omc3/omc3/harpy/handler.py", line 43, in run_per_bunch
    bpm_data, usvs[plane], bad_bpms[plane], bpm_res = clean.clean(harpy_input, bpm_data, model)
  File "/media/awegsche/HDD/omc3/omc3/harpy/clean.py", line 39, in clean
    bpm_data, bad_bpms_clean = _cut_cleaning(harpy_input, bpm_data, model)
  File "/media/awegsche/HDD/omc3/omc3/harpy/clean.py", line 56, in _cut_cleaning
    bpm_spikes = _detect_bpms_with_spikes(bpm_data, harpy_input.max_peak)
  File "/media/awegsche/HDD/omc3/omc3/harpy/clean.py", line 103, in _detect_bpms_with_spikes
    too_high = bpm_data[bpm_data.max(axis=1) > max_peak_cut].index
  File "/home/awegsche/anaconda3/envs/pythonthree/lib/python3.7/site-packages/pandas/core/ops.py", line 1766, in wrapper
    res = na_op(values, other)
  File "/home/awegsche/anaconda3/envs/pythonthree/lib/python3.7/site-packages/pandas/core/ops.py", line 1649, in na_op
    result = method(y)
TypeError: '>' not supported between instances of 'float' and 'dict'
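A minimal repro under the assumption that the max_peak option arrived as a per-plane dict instead of a single float:

```python
# Assumed option shape: if harpy's max_peak is parsed into a per-plane dict
# instead of a float, the elementwise '>' in _detect_bpms_with_spikes
# compares each float against a dict and fails with exactly this TypeError.
max_peak = {"X": 20.0, "Y": 20.0}

try:
    1.0 > max_peak  # what the Series comparison reduces to per element
    raised = False
except TypeError:
    raised = True

# Selecting the per-plane value first restores a valid float comparison.
spike = 25.0 > max_peak["X"]
```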
The line (line 44 in 398bed5) has an issue:
With std being a list, the max function will result in np.max(limit, np.array([some_value])).
If some_value is higher than limit, the result is [some_value], which is a list and cannot be compared by < with np.abs(...).
Proposed solution: change to
mask = np.logical_and(mask, np.abs(y_orig - avg) < np.max(np.append(nsig * std, limit)))
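A small check of the core of the proposed expression (illustrative values; nsig, std and limit as in the snippet above, mask term omitted for brevity):

```python
import numpy as np

# Sketch: take the scalar maximum over the per-BPM sigma cuts and the hard
# limit, then compare elementwise. np.max over the appended array always
# yields a scalar, so the '<' comparison is well-defined.
nsig, limit = 3.0, 0.5
std = np.array([0.1, 0.4])                  # assumed per-BPM standard deviations
cut = np.max(np.append(nsig * std, limit))  # max(0.3, 1.2, 0.5) = 1.2 (scalar)
y_orig = np.array([0.2, 2.0])
avg = 0.0
mask = np.abs(y_orig - avg) < cut           # elementwise boolean mask
```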