ptychography-4-0 / ptychography
Code repository for the Ptychography 4.0 project.
Home Page: https://ptychography-4-0.github.io/ptychography/
License: GNU General Public License v3.0
Ptychography 4.0 – An Information and Data Science Pilot Project: data infrastructure and applications
--> Poster: PtychoHIP_FZJ_HZB_HZDR-DiWe-0.2-web.pdf <--
We look forward to collaborating on new project ideas! Contact us to arrange a follow-up discussion for details.
Alexander Clausen [email protected] (FZ Jülich, LiberTEM, data logistics)
Simeon Ehrig [email protected] (HZDR, Alpaka, implementation)
Heide Meißner [email protected] (HZDR, Alpaka, application)
Knut Müller-Caspary [email protected] (FZ Jülich, electron microscopy)
Dieter Weber [email protected] (FZ Jülich, LiberTEM, application, presenting author)
Markus Wollgarten [email protected] (HZB, electron microscopy)
For EPIE (DESY version and Alpaka version) an interface between C++ and Python is required.
Start with Achim's EPIE, which is written in Python, to have an easy full workflow.
To the best of our knowledge, tomographic ptychography is always reconstructed by combining independent 2D reconstructions for the individual projection angles.
Wolfgang had the idea of trying a real 3D reconstruction by inverting all data together, i.e. using polar coordinate shifts instead of Cartesian ones.
From a mathematical point of view this seems feasible. The benefit would be fewer required data points and thus fewer artefacts. The drawback is that the amount of data needed per iteration increases by orders of magnitude, since every single iteration now runs through all data. This is the task for WP3 once the algorithm is adapted => WP1
Since the code for SSB and for stitching is pretty mature and we are about to publish a paper regarding live ptychography based on this SSB implementation, it is now time for the first release.
Release checklist:
- The deploy_docs job needs access to a GitHub deploy key (libertem_bot user?).
- Check the references in *.bib.
- Handle any DeprecationWarning items that are supposed to be removed in that release.
- Check links: sphinx-build -b linkcheck "docs/source" "docs/build/html"
- Bump the version from 0.3.0.dev0 to 0.3.0 when releasing version 0.3.0.
- Update docs/source/changelog.rst, merging snippets in docs/source/changelog/*/ as appropriate.
- Update the packaging/ folder with author and project information.
- Use scripts/release; see scripts/release --help for details.
- Test the release candidate: python -m pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple 'ptychography==0.1.0rc0'
Use a Qt GUI to show the progress of the reconstruction
Make LiberTEM ready for ptycho data (input and reconstruction output) using UDFs
Update tox envs, Azure pipelines etc. :-)
In #35 the coverage is poor because most of the work is now done within a Numba-compiled function, which only generates coverage if it is run with Numba compilation disabled.
For that reason we should run a separate CI pipeline with Numba compilation disabled on selected tests, as in LiberTEM, to get coverage.
Implement the stitching procedure from Nashed et al. 2014. This is required at the end of a reconstruction performed individually for more than one subset of the diffraction data: the global phase shift differs for each subset, and the geometrical positions have to be adjusted to assemble one complete object.
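As a sketch of the phase-alignment part of such stitching (illustrative names and constant toy tiles, not the implementation from the paper): each subset reconstruction is only defined up to a global phase, so before merging tiles we can estimate the relative phase from the overlap region and remove it.

```python
import numpy as np

# Hedged sketch: remove the arbitrary global phase of one reconstructed tile
# relative to a reference tile, using their overlap region. The function name
# and the toy data are assumptions for illustration only.

def align_phase(ref_overlap, sub_overlap):
    # The phase of <sub, ref> averaged over the overlap is the global phase
    # offset between the two tiles.
    offset = np.angle(np.vdot(sub_overlap, ref_overlap))
    return np.exp(1j * offset)

ref = np.exp(1j * 0.3) * np.ones(16)   # reference tile (overlap region)
sub = np.exp(1j * 1.1) * np.ones(16)   # same object, different global phase
sub_aligned = sub * align_phase(ref, sub)
```

After this step the tiles agree in the overlap up to noise, and the remaining stitching reduces to placing them at the adjusted geometrical positions.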
In particular #60 has a lot of Numba code. We should have a separate Numba coverage job like LiberTEM. :-)
Hi,
where should we store the ptycho data sets? Data from DESY is too big: GitHub has a limit of 25 MB, and we are in the GB range.
Cheers,
Heide
When starting the SSB example, upon creating the context:
ctx = lt.Context()
I get numerous copies of:
Task exception was never retrieved
future: <Task finished coro=<_wrap_awaitable() done, defined at /home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/asyncio/tasks.py:623> exception=RuntimeError('\n An attempt has been made to start a new process before the\n current process has finished its bootstrapping phase.\n\n This probably means that you are not using fork to start your\n child processes and you have forgotten to use the proper idiom\n in the main module:\n\n if __name__ == \'__main__\':\n freeze_support()\n ...\n\n The "freeze_support()" line can be omitted if the program\n is not going to be frozen to produce an executable.')>
Traceback (most recent call last):
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/asyncio/tasks.py", line 630, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/core.py", line 285, in _
await self.start()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/nanny.py", line 298, in start
response = await self.instantiate()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/nanny.py", line 381, in instantiate
result = await self.process.start()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/nanny.py", line 578, in start
await self.process.start()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/process.py", line 33, in _call_and_set_future
res = func(*args, **kwargs)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/site-packages/distributed/process.py", line 203, in _start
process.start()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/cri/Software/conda/miniconda/miniconda3/envs/ppp4_py37/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Software runs on a HPE ProLiant DL385 Gen10, 2x Epyc 7F72, 512 GB RAM under Debian Linux, testing distribution.
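The traceback is the standard symptom of launching worker processes at import time under the "spawn" start method: the main module is re-imported in every child, so any code that creates processes must sit behind the `__main__` guard. A minimal sketch of the idiom (the pool here is a stand-in for whatever creates the workers, e.g. the Context creation in the SSB example):

```python
import multiprocessing as mp

def work(x):
    return x * x

def main():
    # Stand-in for ctx = lt.Context(): worker processes are created here,
    # safely inside the guard, so re-imports of this module in child
    # processes do not recurse into process creation.
    with mp.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))

if __name__ == "__main__":
    main()
```

When running the SSB example as a plain script, moving the Context creation under such a guard is the usual fix; inside a notebook the problem typically does not occur in the first place.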
Distribute the diffraction images in such a way that the corresponding raster positions are neighbors. This will enable:
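A minimal sketch of what such a spatial grouping could look like (the function name and the tiling scheme are assumptions, not an existing API): patterns are bucketed by tiles of their scan positions, so each partition holds spatially neighboring positions.

```python
import numpy as np

# Hedged sketch: group diffraction pattern indices by tiles of their raster
# positions, so that each partition contains neighboring scan points.

def partition_by_tiles(positions, tile_size):
    # positions: (N, 2) array of (y, x) raster coordinates
    tiles = {}
    for idx, (y, x) in enumerate(positions):
        key = (int(y) // tile_size, int(x) // tile_size)
        tiles.setdefault(key, []).append(idx)
    return tiles

positions = np.array([(y, x) for y in range(4) for x in range(4)])
tiles = partition_by_tiles(positions, tile_size=2)
# a 4x4 raster with tile_size=2 yields four tiles of four neighbors each
```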
Test the following: given a subset (or the whole object) that is roughly reconstructed using a coarse distribution of raster points, what happens when this image is used as the starting value for a smaller subset with finer rastering? What do we lose in comparison with a reconstruction that uses all diffraction images?
Right now, the package name is ptychography40, but the folder is named ptychography and it is installed as import ptychography, right? Should we rename everything to ptychography40?
Since we don't push directly, but work with forks, our documentation should include that information for new contributors.
This is missing in https://github.com/LiberTEM/LiberTEM as well, should be fixed in both.
It should be easy to find our publication(s) related to this project, for example our live processing paper and maybe others. Could we add a list directly to the README?
In addition, it would also be nice to support CITATION.cff, see LiberTEM/LiberTEM#1083
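For reference, a minimal CITATION.cff could look like the following sketch; the author name and version number are placeholders, not real project metadata:

```yaml
# Hypothetical minimal CITATION.cff; author and version are placeholders.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Ptychography 4.0"
version: 0.1.0
authors:
  - family-names: "Doe"
    given-names: "Jane"
repository-code: "https://github.com/Ptychography-4-0/ptychography"
```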
...namely make_connection and make_acquisition
When following the SSB example in ssb.html, LiberTEM is of course imported.
However, it is not mentioned in the installation instructions.
I may take care of this, but I am not sure where/how to add it:
Best wishes
Markus
Solvers require calculating the next sample vector by evaluating the error and/or the local gradient of the forward model with respect to the measured data.
In LiberTEM, the data and computation can be distributed, parallelized and serialized as desired. Using this approach for solvers requires a forward model that can be distributed in the same fashion, so that parts of the sample can be evaluated separately and the new sample vector (or its delta) is merged from partial results.
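A toy sketch of that structure, with an artificial residual in place of the real forward model (none of these names are LiberTEM API): each partition of the scan contributes an additive gradient term, and a reduce step merges the partial contributions into one update. Additive merging is valid whenever the error metric is a sum over scan positions.

```python
import numpy as np

# Hedged sketch: per-partition gradient evaluation plus an additive merge.
# The "physics" here is a trivial residual; only the split/merge structure
# is the point.

def partial_gradient(model, measured_part, positions_part):
    # One partition compares the forward model at its scan positions with
    # its share of the measured data and returns a gradient contribution.
    grad = np.zeros_like(model)
    for pos, meas in zip(positions_part, measured_part):
        grad[pos] += model[pos] - meas  # toy residual
    return grad

def merge(partials):
    # Additive merge of the partial gradient contributions.
    return np.sum(partials, axis=0)

model = np.zeros(8)
measured = np.ones(8)
parts = [(measured[:4], range(0, 4)), (measured[4:], range(4, 8))]
grad = merge([partial_gradient(model, m, p) for m, p in parts])
```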
At the moment, LiberTEM is distributed via pip. For this project, pip might not be the right solution, because we have additional needs for the Alpaka backend:
[1] automatic detection at build time is not a good idea, because the usual workflow on HPC is to install the packages on the login node (which has no GPUs) and allocate GPUs afterwards
Develop a dummy alpaka backend function with python binding #10
The reconstruction algorithm RAAR (Relaxed Averaged Alternating Reflections), used in Jena for coherent diffraction imaging, exists in Python and in PyTorch. PyTorch is beneficial when using GPUs, since some optimization features are applied automatically. In Jena, differences between the results were observed even though the runs started from the same seed, but only for experimental data sets, and no obvious difference in the code was visible. That means the accuracy needs to be tested.
From #32: we should run tests with data access, and also run the notebook in CI
SSB should detect undersampling and throw a warning, and give help to find good parameters for the illumination for a given scan resolution.
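As a sketch of what such a check could look like (the criterion and the names are assumptions, not the package's API): SSB transfers spatial frequencies up to roughly twice the semi-convergence angle over the wavelength (the double-overlap limit), so the scan step must sample at least that finely in the Nyquist sense.

```python
import warnings

# Hedged sketch of an undersampling warning. The sampling criterion used here
# (scan Nyquist frequency vs. the 2*alpha/lambda transfer limit) and the
# function name are illustrative assumptions.

def check_scan_sampling(scan_step_m, semiconv_rad, wavelength_m):
    max_transferred = 2 * semiconv_rad / wavelength_m  # highest usable frequency, 1/m
    scan_nyquist = 1.0 / (2 * scan_step_m)             # frequency supported by the scan
    if scan_nyquist < max_transferred:
        warnings.warn(
            "Scan step of %.3g m undersamples the transferred frequencies; "
            "use a step below %.3g m or a smaller semi-convergence angle."
            % (scan_step_m, 1.0 / (2 * max_transferred))
        )

# Example numbers: 300 kV electrons (lambda ~ 1.97 pm), 20 mrad
# semi-convergence angle, 2 Angstrom scan step -> warns.
check_scan_sampling(scan_step_m=2e-10, semiconv_rad=20e-3, wavelength_m=1.97e-12)
```

Beyond warning, the same numbers could be inverted to suggest a maximum scan step for a given illumination, which is the "help to find good parameters" part.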
Please document that the SSB prototype needs LiberTEM 0.6~dev, because of the CUDA support.
Test this idea with DESY data
First step: add reconstruction methods from WP2 plus interfaces to libertem
Second step: Evaluate concerning:
[x] opened PR for fix in Pybind11: pybind/pybind11#2240
[x] new install method for Alpaka: alpaka-group/alpaka#1016
[x] new release candidate with many improvements for installation: https://github.com/alpaka-group/alpaka/tree/release-0.5.0
There's now another open source SSB implementation available here: https://gitlab.com/pyptychostem/pyptychostem
We could compare results with that implementation in our unit tests.
Improve code, test performance
Update the Alpaka version and get further backends running. Currently it runs at FZJ and DESY on CUDA and CPU.
Then do benchmark tests.