bcalden / ClusterPyXT
The Galaxy Cluster ‘Pypeline’ for X-ray Temperature Maps
License: BSD 3-Clause "New" or "Revised" License
When the fitting procedure is run in parallel mode, it appears to have a memory leak. Within an hour or two, the user has to restart the procedure. The pipeline allows for this without refitting already completed regions, but the leak should not occur in the first place.
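A common mitigation for per-worker memory growth, until the leak itself is found, is to recycle worker processes with `maxtasksperchild`. This is a sketch only, not ClusterPyXT's actual code; `fit_region` is a hypothetical stand-in for the real per-region fit:

```python
import multiprocessing as mp

def fit_region(region_id):
    # Hypothetical stand-in for the real per-region spectral fit.
    return region_id * region_id

def fit_all_regions(region_ids, num_cpus=4):
    # maxtasksperchild recycles each worker after 10 tasks, which bounds
    # memory growth even if the fitting code itself leaks.
    with mp.Pool(processes=num_cpus, maxtasksperchild=10) as pool:
        return pool.map(fit_region, region_ids)
```

This trades a little process-startup overhead for a hard cap on how much any single worker can leak.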
Add screenshots to README.md to further assist the user with the pipeline flow.
New user here... getting to grips with ClusterPyXT for data processing. I've set everything up according to the instructions on the main page, but when I set up a cluster for processing, the pipeline freezes during the download.
For context, I'm running CIAO 4.15 (installed via the instructions at https://cxc.cfa.harvard.edu/ciao/download/ with the full CALDB) on OSX 10.15.7 (Catalina), and CIAO runs successfully (I ran a different pipeline yesterday). After launching the virtualenv and running python3 clusterpyxt.py, the GUI boots up correctly, but after entering the information for my cluster target and clicking "Run Stage 1" I get the rainbow wheel and the download message doesn't move from 0%, even after waiting upwards of half an hour.
The download stage of the other pipeline only took a few minutes per dataset, so I don't understand what the problem is. Any help would be appreciated!
The pipeline should allow for the inclusion of ACIS-S observations.
After downloading the cluster data from the Chandra Data Archive, while preparing to merge the observations:
Reprocessing Abell 85
Reprocessing Abell 85/4881
Traceback (most recent call last):
File "clusterpyxt.py", line 117, in <module>
menu.make_menu()
File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 301, in make_menu
display_menu(main_menu)
File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 198, in display_menu
selected_action['function']()
File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 171, in continue_cluster
ciao.start_from_last(current_cluster)
File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 1085, in start_from_last
success = function_steps[pypeline_progress_index](cluster)
File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 406, in merge_observations
reprocess_cluster(cluster)
File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 230, in reprocess_cluster
copy_event_files(observation.reprocessing_directory, observation.analysis_directory)
File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 278, in copy_event_files
evt2_filename = evt2_filename[-1]
IndexError: list index out of range
ps: Computer system is Linux Ubuntu 64-bit, ciao-4.11.
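The IndexError on `evt2_filename[-1]` suggests the search for reprocessed event files came back empty (e.g. chandra_repro failed or wrote elsewhere). A defensive sketch with a hypothetical helper name; the filename pattern is an assumption based on chandra_repro's naming convention:

```python
import glob
import os

def find_evt2_file(repro_dir):
    # Glob for the reprocessed level-2 event file; the *_repro_evt2.fits
    # pattern is assumed from chandra_repro's output naming.
    matches = sorted(glob.glob(os.path.join(repro_dir, "*_repro_evt2.fits")))
    if not matches:
        raise FileNotFoundError(
            f"No *_repro_evt2.fits found in {repro_dir}; "
            "chandra_repro likely failed or wrote to another directory.")
    return matches[-1]
```

Failing with an explicit message here would point straight at the reprocessing step instead of crashing later in `copy_event_files`.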
When multiple observations (around 10 or more) are used, there is a significant slowdown during multiple stages of the pipeline.
A potential fix is to reprocess the observations individually instead of passing the full list of observations.
I will be using ClusterPyXT (https://github.com/bcalden/ClusterPyXT/tree/dev-CIAO-4.12) to produce spectral maps for Chandra data on a Linux Ubuntu 18.04.5 machine.
To do so I had to install CIAO (version 4.12) using anaconda3 and Python 3.8.3.
The system downloads the observation data and then presents the following message:
The data have been reprocessed.
Start your analysis with the new products in
/home/user/Doutorado/A2199/10748/repro
Running ccd_sort on A2199.
Working on A2199/10748
evt1 : ['/home/user/Doutorado/A2199/10748/secondary/acisf10748_001N002_evt1.fits']
evt2 : /home/user/Doutorado/A2199/10748/repro/acisf10748_repro_evt2.fits
detname : ACIS-01236
A2199/10748: Making level 2 event file for ACIS Chip id: 0
A2199/10748: Making level 2 event file for ACIS Chip id: 1
A2199/10748: Making level 2 event file for ACIS Chip id: 2
A2199/10748: Making level 2 event file for ACIS Chip id: 3
A2199/10748: Making level 2 event file for ACIS Chip id: 6
Running ciao_back on A2199.
Finding background for /home/user/Doutorado/A2199/10748/analysis/acis_ccd2.fits
Found background at /home/user/anaconda3/envs/ciao-4.12/CALDB/data/chandra/acis/bkgrnd/acis2iD2009-09-21bkgrnd_ctiN0003.fits
pset: cannot convert parameter value : rval
/tmp/tmpamechstz.dmkeypar.par: cannot convert parameter value : rval
Error getting parameter file in CIAO. Please close ClusterPyXT and re-try the stage. If the problem persists, please file a bug report on https://github.com/bcalden/ClusterPyXT with the following error message:
Traceback (most recent call last):
File "clusterpyxt.py", line 496, in run_stage_1
ciao.run_stage_1(self._cluster_obj)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 1313, in run_stage_1
merge_observations(cluster)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 485, in merge_observations
ciao_back(cluster)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 188, in ciao_back
echo=True)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1810, in __call__
stackfiles = self._update_parfile(parfile)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1365, in _update_parfile
self._update_parfile_verify(parfile, stackfiles)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1294, in _update_parfile_verify
oval = _to_python(ptype, pio.pget(fp, oname))
ValueError: pget() Parameter not found
Abort (core dumped)
Do you have any idea what causes this error?
Dear sir,
Could I use it with CIAO version 4.14?
Brian, I should have mentioned my set of "unofficial" notebooks, which I occasionally update. You can find them at https://github.com/DougBurke/sherpa-standalone-notebooks, and I think the following may be helpful (or at least I hope it is): https://github.com/DougBurke/sherpa-standalone-notebooks/blob/master/Fitting%20a%20PHA%20dataset%20using%20the%20object%20API.ipynb
Hi.
I see that the best-fit parameters are saved in the acb/*spectral_fits.csv files. Is there any existing way to export the norm obtained from the spectral fitting as a FITS image, perhaps similar to the way the temperature map is generated?
In the code, the abundance and other parameters are kept fixed; only temperature and normalization are free. Is there any plan to allow spectral fitting with the abundance as a free parameter as well?
Thanks.
If the sources.reg file contains regions of zero area, a warning is thrown and passed as part of a filename string (near line 478 in ciao.py). That string is then used in the dmextract command at line 478, which crashes the application because the string contains warning messages dmextract does not expect.
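One possible fix is to drop zero-area regions before the region string ever reaches dmextract. This is a sketch under the assumption that regions are parsed line by line; real region files support more shapes than the circles handled here:

```python
import re

def drop_zero_area_regions(region_lines):
    # Keep only circle regions with a positive radius; a zero radius is
    # what triggers the warning. Other shapes pass through unchanged,
    # so this sketch handles circles only.
    kept = []
    for line in region_lines:
        m = re.match(r"\s*circle\(([^)]*)\)", line)
        if m:
            radius = float(m.group(1).split(",")[-1].rstrip('"'))
            if radius <= 0:
                continue
        kept.append(line)
    return kept
```

Filtering at parse time also makes the warning actionable (the dropped regions can be logged) instead of corrupting a downstream command string.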
We have just released CIAO 4.16 which comes with native macOS ARM support (which makes your mac laptops go brrrrr).
I note that for CIAO 4.15 we changed how the conda installation works (making use of conda-forge for various technical/legal issues) so the installation steps will likely need updating.
I would hope that the Sherpa code doesn't need much updating, but if you do use group_counts (or any of the other group_xxx calls) then the behavior may change (it depends on whether you group then filter or filter then group, as the latter is where the behavior changes). There may be other changes, including from CIAO 4.15 (the notice/ignore/group/set_analysis/... calls now report the selected filter range, for one).
Successfully installed pychips and tested it by importing it in Python.
But "ModuleNotFoundError: No module named 'pychips'" appears when trying to run cluster.py inside the ciao-4.13 environment from ClusterPyXT (I deleted the try/except block from cluster.py).
command: python3 cluster.py
Error:
Traceback (most recent call last):
File "cluster.py", line 5, in <module>
import config
File "/home/beerus/ClusterPyXT/config.py", line 2, in <module>
import cluster
File "/home/beerus/ClusterPyXT/cluster.py", line 12, in <module>
import pychips
ModuleNotFoundError: No module named 'pychips'
Even after executing every step without any error except one (WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available), my output temperature and pressure maps are blank. Maybe something is wrong with the spectral fitting, because it took less than a minute with 6478 region files and shows "no fit" for many regions.
The codebase needs refactoring to better follow the pipeline overview graphic. The code still has many vestiges of the bash-script program flow. Break it down by the stages in the graphic. While a further breakdown may be necessary, at a minimum the code naming should more closely mirror the stages.
Program crashes with --continue argument.
Traceback (most recent call last):
File "clusterpyxt.py", line 116, in <module>
args = process_commandline_arguments(cluster_obj)
File "clusterpyxt.py", line 45, in process_commandline_arguments
ciao.start_from_last(cluster_obj)
File "/home/jzuhone/Source/ClusterPyXT/ciao.py", line 1055, in start_from_last
success = function_steps[pypeline_progress_index](cluster)
TypeError: 'NoneType' object is not callable
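The TypeError means the entry at `pypeline_progress_index` is `None`, likely a placeholder for a stage that has no automated function (e.g. one that waits on user-supplied region files). A guard sketch with a simplified, hypothetical signature, not the actual `start_from_last`:

```python
def start_from_last(function_steps, progress_index, cluster):
    # function_steps is a list of per-stage callables; stages with no
    # automated step are assumed to be stored as None placeholders.
    if progress_index >= len(function_steps):
        raise IndexError(f"No stage {progress_index} in the pipeline.")
    step = function_steps[progress_index]
    if step is None:
        raise RuntimeError(
            f"Stage {progress_index} requires manual input and cannot "
            "be resumed with --continue.")
    return step(cluster)
```

An explicit message beats the cryptic "'NoneType' object is not callable" and tells the user which stage to finish by hand.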
I did all the installations as indicated on the site, using ciao-4.14, but when I enter the directory and run python clusterpyxt.py I get the following message:
gabriel@gabriel-Aspire-A515-51G:~/astrosoft/ClusterPyXT$ python3 clusterpyxt.py
/home/gabriel/astrosoft/ClusterPyXT/pypeline_config.ini
qt.qpa.plugin: Could not load the Qt platform plugin "offscreen" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
Allow in the GUI and the command line (likely through a configuration file as well as argument driven) changing the model parameters and which are fixed/fitted.
Hi @bcalden. What are the units of the pixels in the product maps (surface_brightness_nosrc_cropped.fits, _temperature_map.fits, _density.fits, _pressure.fits, broad_thresh.expmap, broad_flux.img, merged_evt.fits, etc.)? Would it be possible to add the units of the pixels to the headers? Maybe this question is obvious to X-ray experts, but I can't find the answers for some of the maps.
Add the capability to process XMM-Newton data in the pipeline and integrate w/CIAO data.
The current iteration of the pipeline only works with ACIS-I observations that are publicly accessible, but it does not check either of those caveats. If given an ACIS-S observation ID and/or non-public data, the program will crash or produce undesired results.
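A lightweight pre-flight check could reject ACIS-S data before any processing starts. This sketch keys off the DETNAM header value seen in the pipeline logs (e.g. `ACIS-01236`); treating the presence of chip 7 (the ACIS-S aimpoint chip) as the ACIS-S marker is a heuristic, not an official rule:

```python
def check_acis_i(detnam):
    # DETNAM lists the active chips after the "ACIS-" prefix; imaging
    # observations use chips 0-3 (plus spectroscopy chips sometimes on).
    # Heuristic: chip 7 present implies an ACIS-S aimpoint observation.
    chips = set(detnam.replace("ACIS-", ""))
    if "7" in chips:
        raise ValueError(
            f"{detnam}: looks like an ACIS-S observation, which the "
            "pipeline does not yet support.")
    return True
```

Run against each observation right after download, this would turn a mid-pipeline crash into an immediate, explainable rejection.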
Last Step Completed: 5
Creating temperature map.
Traceback (most recent call last):
File "acb.py", line 1132, in <module>
make_temperature_map(clstr, args.resolution)
File "acb.py", line 963, in make_temperature_map
_update_completed_things(i, len(regions), "regions")
UnboundLocalError: local variable 'i' referenced before assignment
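`i` is unbound because the region loop never executed, i.e. the regions list was empty. A minimal sketch of the failing pattern and a guard; this is a hypothetical simplification of `make_temperature_map`, not the real function:

```python
def make_temperature_map(regions):
    # Sketch of the failing pattern in acb.py: if `regions` is empty the
    # loop never runs and `i` is unbound at the final progress update.
    # Guarding up front turns the cryptic UnboundLocalError into a clear
    # message about what is actually missing.
    if not regions:
        raise ValueError(
            "No fitted regions found; run the spectral fitting stage "
            "before generating the temperature map.")
    for i, _region in enumerate(regions):
        pass  # per-region map filling would go here
    return i + 1  # number of regions processed
```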
There are vestiges of the string-based cluster class throughout the pipeline (mainly in the early stages). Rewrite/refactor to make use of the cluster/observation classes' properties.
Hi Brian, I'm a new ClusterPyXT user, and when I run the clusterpyxt.py command, Linux shows me the following message:
File "/home/alisson/ClusterPyXT-master/acb.py", line 9, in <module>
import ciao_contrib.runtool as rt
ModuleNotFoundError: No module named 'ciao_contrib'
Apparently I don't have the ciao-contrib package installed. However, I am unable to install it with
pip install ciao-contrib
or
sudo python ciao-contrib install
Could you help me with this issue?
I'm using ciao-4.16 and when I run ciaover -v the output is:
The current environment is configured for:
CIAO : CIAO 4.16.0 Tuesday, December 05, 2023
Contrib : Package release 0 Tuesday, November 28, 2023
bindir : /home/alisson/miniconda3/envs/ciao-4.16/bin
Python path : /home/alisson/miniconda3/envs/ciao-4.16/bin
CALDB : 4.11.0
System information:
Linux alisson-550XBE-350XBE 6.5.0-14-generic #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
I hope this information helps
Best,
Alisson
Allow for the editing of cluster config files from within the GUI. Further, allow for progress check on the clusters with the option of repeating stages from w/in the GUI.
Implement unit tests for each stage of the pipeline with a known good cluster dataset at that stage. This should be implemented so that the tests are done during commits (as well as can be run during local development.)
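A minimal shape for such a per-stage test, using only the standard library; the fixture path and product filename are placeholders, not real ClusterPyXT test data:

```python
import os
import unittest

# Path to a known-good stage-1 output (placeholder; point the
# environment variable at the checked-in fixture dataset).
KNOWN_GOOD = os.environ.get("CLUSTERPYXT_TEST_DATA", "tests/data/A85_stage1")

class Stage1Test(unittest.TestCase):
    def test_stage_1_products_exist(self):
        # Stage 1 should leave a merged event file behind; skip when the
        # fixture dataset is not checked out (e.g. in a bare CI clone).
        if not os.path.isdir(KNOWN_GOOD):
            self.skipTest("known-good cluster dataset not available")
        self.assertTrue(
            os.path.exists(os.path.join(KNOWN_GOOD, "merged_evt.fits")))
```

The skip-when-missing pattern lets the same test run both on commit (with the fixture cached) and in lightweight local checkouts.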
At a minimum, every function should have a comment describing its inputs, outputs, and what it does/is intended to do.
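For example, a docstring in that style for a hypothetical simplification of the ccd_sort step; the `acis_ccdN.fits` output naming mimics the filenames seen in the stage-1 logs:

```python
def ccd_sort(evt2_file, chip_ids):
    """Split a level-2 event file into one file per ACIS chip.

    Inputs:
        evt2_file: path to the reprocessed evt2 FITS file.
        chip_ids:  iterable of ACIS chip ids to extract (e.g. [0, 1, 2, 3, 6]).
    Outputs:
        list of per-chip output filenames (hypothetical naming scheme;
        this sketch only builds the names, it does not filter events).
    """
    return [f"acis_ccd{chip}.fits" for chip in chip_ids]
```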
I'm using ciao-4.14 to do an analysis, but when I run the pipeline I get the following output:
gabriel@gabriel-Aspire-A515-51G:~/astrosoft/ClusterPyXT$ python clusterpyxt.py
/home/gabriel/astrosoft/ClusterPyXT/pypeline_config.ini
Cluster data written to /home/gabriel/aglomerados/dados.clusterpyxt/A119/A119_pypeline_config.ini
Making directories
Cluster data written to /home/gabriel/aglomerados/dados.clusterpyxt/A119/A119_pypeline_config.ini
Downloading files for ObsId 4180, total size is 58 Mb.
vvref pdf 25 Mb #################### 8 s 3295.2 kb/s
evt1 fits 21 Mb #################### 9 s 2449.3 kb/s
asol fits 3 Mb #################### 2 s 1393.9 kb/s
evt2 fits 2 Mb #################### 4 s 694.4 kb/s
mtl fits 520 Kb #################### 2 s 254.8 kb/s
cntr_img jpg 517 Kb #################### 2 s 277.8 kb/s
bias fits 428 Kb #################### 2 s 216.3 kb/s
bias fits 426 Kb #################### 2 s 277.8 kb/s
bias fits 426 Kb #################### 2 s 242.0 kb/s
bias fits 426 Kb #################### 2 s 223.8 kb/s
bias fits 425 Kb #################### 2 s 230.8 kb/s
stat fits 372 Kb #################### 2 s 209.5 kb/s
osol fits 356 Kb #################### 2 s 201.1 kb/s
osol fits 355 Kb #################### 2 s 170.9 kb/s
osol fits 355 Kb #################### 2 s 182.6 kb/s
eph1 fits 282 Kb #################### 2 s 178.6 kb/s
eph1 fits 275 Kb #################### 2 s 153.4 kb/s
eph1 fits 259 Kb #################### 2 s 154.1 kb/s
aqual fits 200 Kb #################### 2 s 125.9 kb/s
full_img jpg 77 Kb #################### 1 s 57.5 kb/s
osol fits 64 Kb #################### 1 s 52.4 kb/s
cntr_img fits 56 Kb #################### 1 s 45.9 kb/s
vv pdf 51 Kb #################### 1 s 41.7 kb/s
full_img fits 44 Kb #################### 1 s 41.7 kb/s
bpix fits 21 Kb #################### < 1 s 21.9 kb/s
oif fits 20 Kb #################### < 1 s 21.2 kb/s
readme ascii 10 Kb #################### < 1 s 10.8 kb/s
eph1 fits 7 Kb #################### < 1 s 10.3 kb/s
fov fits 7 Kb #################### < 1 s 11.0 kb/s
flt fits 6 Kb #################### < 1 s 11.2 kb/s
msk fits 5 Kb #################### < 1 s 6.4 kb/s
pbk fits 4 Kb #################### < 1 s 6.5 kb/s
Total download size for ObsId 4180 = 58 Mb
Total download time for ObsId 4180 = 1 m 2 s
Reprocessing A119.
Running chandra_repro
version: 15 March 2022
Processing input directory '/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180'
No boresight correction update to asol file is needed.
Resetting afterglow status bits in evt1.fits file...
Running acis_build_badpix and acis_find_afterglow to create a new bad pixel file...
Running acis_process_events to reprocess the evt1.fits file...
Filtering the evt1.fits file by grade and status and time...
Applying the good time intervals from the flt1.fits file...
The new evt2.fits file is: /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/acisf04180_repro_evt2.fits
Updating the event file header with chandra_repro HISTORY record
Creating FOV file...
Cleaning up intermediate files
Any issues pertaining to data quality for this observation will be listed in the Comments section of the Validation and Verification report located in:
/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/axaff04180N004_VV001_vv2.pdf
The data have been reprocessed.
Start your analysis with the new products in
/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro
Running ccd_sort on A119.
Working on A119/4180
evt1 : ['/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/secondary/acisf04180_000N005_evt1.fits']
evt2 : /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/acisf04180_repro_evt2.fits
detname : ACIS-01236
A119/4180: Making level 2 event file for ACIS Chip id: 0
A119/4180: Making level 2 event file for ACIS Chip id: 1
A119/4180: Making level 2 event file for ACIS Chip id: 2
A119/4180: Making level 2 event file for ACIS Chip id: 3
A119/4180: Making level 2 event file for ACIS Chip id: 6
Running ciao_back on A119.
Finding background for /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/analysis/acis_ccd1.fits
Found background at /home/gabriel/miniconda3/envs/ciao-4.14/CALDB/data/chandra/acis/bkgrnd/acis1iD2000-12-01bkgrnd_ctiN0005.fits
Running dmkeypar /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/analysis/acis_ccd1.fits "GAINFILE" echo=True
pset: cannot convert parameter value : rval
/tmp/tmpiutbyeed.dmkeypar.par: cannot convert parameter value : rval
Error getting parameter file in CIAO. Please close ClusterPyXT and re-try the stage. If the problem persists, please file a bug report on https://github.com/bcalden/ClusterPyXT with the following error message:
Traceback (most recent call last):
File "/home/gabriel/astrosoft/ClusterPyXT/clusterpyxt.py", line 523, in run_stage_1
ciao.run_stage_1(self._cluster_obj)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 1290, in run_stage_1
merge_observations(cluster)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 435, in merge_observations
ciao_back(cluster)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 138, in ciao_back
acis_gain = rt.dmkeypar(infile=acis_file,
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1836, in __call__
stackfiles = self._update_parfile(parfile)
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1396, in _update_parfile
self._update_parfile_verify(parfile, stackfiles)
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1326, in _update_parfile_verify
oval = _to_python(ptype, pio.pget(fp, oname))
ValueError: pget() Parameter not found
Aborted (core dumped)
An AttributeError appears while doing the spectral fitting with the following command (where 'path/to/cluster_config_file' is replaced with my cluster config file path):
Note: running on a 2-core, 4-thread processor.
python spectral.py --parallel --num_cpus 4 --cluster_config_file 'path/to/cluster_config_file' --resolution 2
or
python spectral.py --cluster_config_file 'path/to/cluster_config_file' --resolution 2
Error:
Traceback (most recent call last):
File "/home/beerus/anaconda3/envs/ciao-4.12/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/beerus/anaconda3/envs/ciao-4.12/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/beerus/ClusterPyXT/cluster.py", line 1750, in fit_region_number
sherpa.set_analysis('energy')
AttributeError: 'NoneType' object has no attribute 'set_analysis'
4690: Loading data pulse invariant files (PI files)
4690: Loading background PI files
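The `'NoneType' object has no attribute 'set_analysis'` error is the symptom of a swallowed import failure: a try/except around the sherpa import leaves the name bound to `None`, and the worker only crashes much later. A sketch that fails loudly at the point of use instead (hypothetical helper, not ClusterPyXT's code):

```python
def get_sherpa():
    # Fail with an actionable message instead of carrying a None module
    # into the worker processes.
    try:
        import sherpa.astro.ui as sherpa
    except ImportError as err:
        raise RuntimeError(
            "sherpa is not importable in this environment; activate the "
            "CIAO conda environment before running the spectral fits."
        ) from err
    return sherpa
```

Calling `get_sherpa()` inside each worker also surfaces the case where the parent environment has sherpa but spawned workers do not.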
@bcalden
Solved the pychips import error by putting these git files in "/home/zareef/anaconda3/envs/ciao-4.13/lib/python3.8/site-packages" and replacing pychips with Pychips in the cluster.py file.
Solved the sherpa import error by reinstalling the Sherpa package using conda.
After that, I ran these commands for the spectral fitting (178 iterations) and the temperature and pressure maps:
python spectral.py --parallel --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini --resolution 3
(ciao-4.13) zareef@zareef:~/soft/ClusterPyXT$ python acb.py --temperature_map --resolution 3 --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini
(ciao-4.13) zareef@zareef:~/soft/ClusterPyXT$ python acb.py --make_pressure_map --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini
But when I opened a3444_temperature_map.fits and a3444_pressure.fits using ds9, both of them were blank.
Dear Sir,
I tried to run the code. I created the source and exclude regions, and I proceeded through both GUI mode and command-line mode (python clusterpyxt.py --continue).
When I clicked "Run Stage 2", I encountered the following error, and I am unable to resolve it.
It would be very useful if you could provide an example.
Traceback (most recent call last):
File "clusterpyxt.py", line 533, in run_stage_2
ciao.run_stage_2_parallel(self._cluster_obj, get_arguments())
File "/home/sk/ClusterPyXT-master/ciao.py", line 1357, in run_stage_2_parallel
remove_sources_in_parallel(cluster,args)
File "/home/sk/ClusterPyXT-master/ciao.py", line 671, in remove_sources_in_parallel
with mp.Pool(args.num_cpus) as pool:
File "/home/sk/CIAO/ciao-4.14/ots/lib/python3.8/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/sk/CIAO/ciao-4.14/ots/lib/python3.8/multiprocessing/pool.py", line 205, in __init__
raise ValueError("Number of processes must be at least 1")
ValueError: Number of processes must be at least 1
Aborted (core dumped)
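The crash means `args.num_cpus` reached `mp.Pool` as 0 (or `None`), and multiprocessing refuses pools with fewer than one process. A small guard sketch (hypothetical helper name) that falls back to the machine's CPU count:

```python
import multiprocessing as mp

def resolve_num_cpus(requested):
    # mp.Pool raises ValueError when processes < 1; fall back to the
    # machine's CPU count when the argument is missing, zero, or negative.
    if requested is None or requested < 1:
        return mp.cpu_count()
    return requested
```

Used as `mp.Pool(resolve_num_cpus(args.num_cpus))`, this keeps --num_cpus optional without letting an unset value crash stage 2.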
After printing "Cleaning the lightcurve for {OBSID}", the program pauses until the user presses enter. This is undesired and may be related to either the CIAO punlearn command or the deflare command (lines 484-492 in ciao.py).
A short-term fix may be to add a "press enter to continue" prompt, but this should only be considered a stopgap.
There should be no pause.