
hcipy's People

Contributors

braingram, ehpor, gillesorban, ivalaginja, jamienoss, jmilou, kstlaurent, npourre, remisoummer, spbos, syhaffert, vkooten


hcipy's Issues

New enhancement idea

Hello everyone,

I created my own AO simulations using the hcipy library. It is really important work, and I would like to thank you very much for it.

I have a problem with the simulation calculation speed. The simulation times are quite long (sometimes over 10 hours) depending on the parameters I use.

As a suggestion in this regard, I think we can reduce these times to very short times by using third-party compilers/libraries such as numba.
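To make the idea concrete, here is a small sketch (my own code, not part of HCIPy) of the kind of hot loop that numba can JIT-compile:

import numpy as np
from numba import njit

@njit(cache=True)
def combine_phase_screens(phase_screens, weights):
    # Weighted sum of per-layer phase screens, compiled to machine code on first call.
    total = np.zeros_like(phase_screens[0])
    for i in range(phase_screens.shape[0]):
        total += weights[i] * phase_screens[i]
    return total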

I will try it on my fork as soon as possible. I even want to have this simulation process calculated in real time if I succeed.

What are your thoughts on this subject?

evaluate_supersampled fails for some array sizes

Currently, the new implementation of evaluate_supersampled() splits up the grid into q parts along each dimension and evaluates the field generator on each subset. This reduces the memory usage to that of a single evaluation. However, in the current implementation the grid dimensions themselves also need to be divisible by q. So supersampling a field generator on a 256x256 grid with 2x oversampling works, but on a 257x257 grid it fails, as 257 is not divisible by 2.
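A minimal sketch of the behaviour described above (circular_aperture is used here only as an example field generator):

from hcipy import *

# Works: 256 is divisible by the supersampling factor 2.
aperture = evaluate_supersampled(circular_aperture(1), make_pupil_grid(256), 2)

# Fails in the current implementation: 257 is not divisible by 2.
aperture = evaluate_supersampled(circular_aperture(1), make_pupil_grid(257), 2)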

There is no workaround available, other than to only use supersampling factors that are allowed, potentially increasing the computation time by doing so.

The fix for the function requires a complete rewrite.

Instrument models

In #134 @vkooten discusses implementing a full instrument model for the Keck AO system as an example. This made me wonder.

Do we want full instrument models in the core code of hcipy? Do we make it a separate package? And if we do add it, what interface do we want? We probably want to add an additional folder in the hcipy structure for actual instruments.

I am very positive about the idea of adding actual instrument models to hcipy. That would make development of new control algorithms, for example, pretty quick. It would also make it relatively easy for new people to pick up full AO simulations.

Resolve compatibility with Scipy 1.2

Scipy deprecated imsave and removed it in version 1.2.0. As a result, an imsave import statement in HCIPy prevents it from running in environments with scipy 1.2.0 or higher.

Numpy AttributeError

Hello dear developers.
I am getting this error. I am guessing this error happens with the newest scipy and numpy versions.
I installed Python 3.9 (x64), then numpy (1.18.5; I also tried 1.19.4) and scipy (1.4.1; I also tried 1.5.4).
My code was running on older versions, but now some part (the atmosphere, I guess) is not working.

Maybe you should specify in the requirements file which versions you developed with. (Just an idea.)

Here is my code:

cn2 = [[9.57, 9, 649], [10.83, 9, 1299], [4.37, 15, 2598], [6.58, 25, 5196], [3.71, 40, 10392], [6.71, 21, 20785]]

cn_squared = Cn_squared_from_fried_parameter(r0, 500e-9)

atm_layers = []
for i in cn2:
    layer = InfiniteAtmosphericLayer(pupil_grid, i[0], L0, i[1], i[2])
    atm_layers.append(layer)

atmosphere = MultiLayerAtmosphere(layers=atm_layers)
atmosphere.Cn_squared = cn_squared
atmosphere.reset()
atmosphere.evolve_until(1)

Error:

Traceback (most recent call last):
File "..\PycharmProjects\AO\dag_ao.py", line 165, in
imshow_field(atmosphere.phase_for(wavelength), cmap='RdBu')
File "C:\Users\rbalb\AppData\Local\Programs\Python\Python39\lib\site-packages\hcipy\plotting\field.py", line 75, in imshow_field
grid = field.grid
AttributeError: 'numpy.ndarray' object has no attribute 'grid'

Parallelizing pyramid modulation

Hi all,

I'm using HCIPy to simulate an adaptive optics system for the TMT, and I'm finding that by far the longest step in my closed-loop simulation is the pyramid modulation (specifically the for loop in the forward function of the ModulatedPyramidWavefrontSensorOptics class). It should be possible to parallelize the operations being done in the for loop, since there's no reason why each modulation step needs to be computed sequentially rather than in parallel. Shortening this step would make a big difference in making HCIPy practical for high time resolution simulations!

I tried to implement a parallelized version of the for loop using the multiprocessing package, but I'm having trouble doing so because python can't pickle PyramidWavefrontSensorOptics (I also get the error that the 'TipTiltMirror' object has no attribute 'is_regular' but I don't know what that means). As a result, I'm finding that I need to re-create the PYWFS and TTM objects in each of the parallelized threads, which makes the speed improvement from parallelization much worse.

If you folks can think of a practical way to make these objects picklable or otherwise parallelize / speed up the modulation, I would be very eager to use those capabilities.

Cheers,

Becky

Add support for Python 3.8

Python 3.8 was released on October 14th, 2019. We will add support when Anaconda releases a version for 3.8, as the CI relies on a Miniconda installation. Currently, there is no reason for HCIPy not to work on Python 3.8; it's just not automatically tested by CI at the moment.

Atmospheric refraction

I am currently working on adding atmospheric refraction. I can do two things:

  1. use a python library that has various methods implemented and verified and I make an optical element interface to the functions.
  2. implement all the functions myself.

Option 2 is substantially more work, but option 1 will require an external library. Any thoughts?

Rotation of odd regular polygons

Rotation is currently not handled correctly by the regular_polygon_aperture() aperture generator. The function currently exploits a mirror symmetry in x for a speed improvement, but this symmetry is not rotated correctly. Even regular polygons, such as hexagons, are not affected.

Workaround is rotating the grid, instead of the aperture.

This has low priority, as nobody is actually using those odd regular polygons, and there is a workaround available, but this will need to be fixed at some point.

Standard asphere surface profiles

I am currently working on the finishing touches of my aspheric microlens arrays. This will be pushed later this week. For this I needed to define an aspheric surface. I have now followed the Zemax definition for an EvenAsphereSurface. But there are many other types of surface profiles, such as the uneven asphere, even + uneven asphere, and surfaces that only change in x or y.
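For reference, a sketch of the even asphere sag I am assuming (Zemax-style convention; the coefficient ordering here is an assumption, not HCIPy code):

import numpy as np

def even_asphere_sag(r, curvature, conic, coefficients):
    # Conic base surface plus even polynomial terms a1*r^2 + a2*r^4 + ...
    sag = curvature * r**2 / (1 + np.sqrt(1 - (1 + conic) * curvature**2 * r**2))
    for i, a in enumerate(coefficients):
        sag += a * r**(2 * (i + 1))
    return sag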

Does it make sense to make a separate file in the optics folder where all these standard surface shapes are defined?

vector Apodizing Phase Plate is broken

When generating a vAPP object, the code returns an error because the class tries to supply a wavelength to the GeometricPhaseElement, which is not required anymore.

The fix is:
GeometricPhaseElement.__init__(self, phase_pattern, leakage, retardance_offset, wavelength)
-->
GeometricPhaseElement.__init__(self, phase_pattern, leakage, retardance_offset)

Planar to spherical propagation

For several end-to-end instrument simulations it would be helpful to have a spherical-to-planar propagator based on Fresnel diffraction. I think the Zemax documentation has a pretty good description of the different transforms and regions. I wanted to open an issue to discuss the implementation.

I was thinking of adding a function like:
propagator = choose_propagator(wavefront_in, distance)

I have assumed that the wavefront_in has an internal Gaussian beam that keeps track of the divergence during all propagations. Then based on the divergence of the Gaussian beam it selects Fresnel/ASP or the SphericalFresnelPropagator.

For the propagator itself, we can just add a SphericalFresnelPropagator to differentiate it from the planar version. I think it is best not to rename the old propagator, to keep backwards compatibility.
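A hypothetical skeleton of that helper; SphericalFresnelPropagator, the gaussian_beam attribute and the divergence test are all assumptions belonging to this proposal, not existing HCIPy API:

def choose_propagator(wavefront_in, distance):
    # Hypothetical: pick a propagator based on the divergence of the wavefront's
    # internal Gaussian beam.
    beam = wavefront_in.gaussian_beam
    if abs(beam.divergence * distance) < beam.beam_waist:
        return FresnelPropagator(wavefront_in.grid, distance)
    else:
        return SphericalFresnelPropagator(wavefront_in.grid, distance)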

Any thoughts?

Remove some arguments in make_focal_grid()

Hi all,

Currently, when generating a grid with the make_focal_grid() function, the user can set the spatial resolution either with the 'spatial_resolution' argument or with the 'pupil_diameter' + 'focal_length' + 'reference_wavelength' arguments. The latter three arguments only save the user one line of code, namely 'spatial_resolution = lambda * f / D', while adding 9 lines of code to the function and making it harder to read and understand.

Therefore, the suggestion is to remove the arguments 'pupil_diameter', 'focal_length' and 'reference_wavelength'. To make sure that the user knows what is expected, the documentation should make clear how to calculate the spatial resolution (which is already the case).
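For comparison, the call after the change would look something like this (one extra line; the keyword-style signature is assumed from other examples in this tracker):

spatial_resolution = wavelength * focal_length / pupil_diameter
focal_grid = make_focal_grid(q=8, num_airy=16, spatial_resolution=spatial_resolution)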

What do you think?

GifWriter is failing for Pillow 7.1.0+

This is visible in the standard tests. The error does not occur for Pillow 7.0.0. It may have to do with the rewrite of PNG handling in Pillow. The error that occurred with Pillow 7.1.0 was python-pillow/Pillow#4518, for which a fix was released in Pillow 7.1.1. In the new version, the error message has changed to:

    def test_gif_writer():
        grid = make_pupil_grid(256)
    
        mw = GifWriter('test.gif')
    
        for i in range(25):
                field = Field(np.random.randn(grid.size), grid)
    
                plt.clf()
                imshow_field(field)
    
                mw.add_frame()
    
>       mw.close()

tests\test_plotting.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
hcipy\plotting\animation.py:139: in close
    self.convert()
hcipy\plotting\animation.py:132: in convert
    return self.convert_to_gif(self.filename, self.path_to_frames, self.framerate, num_files_to_convert=self.num_frames)
hcipy\plotting\animation.py:125: in convert_to_gif
    loop=0)
C:\ProgramData\Anaconda3\lib\site-packages\PIL\Image.py:2134: in save
    save_handler(self, fp, filename)
C:\ProgramData\Anaconda3\lib\site-packages\PIL\GifImagePlugin.py:499: in _save_all
    _save(im, fp, filename, save_all=True)
C:\ProgramData\Anaconda3\lib\site-packages\PIL\GifImagePlugin.py:510: in _save
    if not save_all or not _write_multiple_frames(im, fp, palette):
C:\ProgramData\Anaconda3\lib\site-packages\PIL\GifImagePlugin.py:430: in _write_multiple_frames
    for im_frame in ImageSequence.Iterator(imSequence):
C:\ProgramData\Anaconda3\lib\site-packages\PIL\ImageSequence.py:49: in __next__
    self.im.seek(self.position)
C:\ProgramData\Anaconda3\lib\site-packages\PIL\PngImagePlugin.py:748: in seek
    self._seek(f)
C:\ProgramData\Anaconda3\lib\site-packages\PIL\PngImagePlugin.py:784: in _seek
    ImageFile._safe_read(self.fp, self.__prepare_idat)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  

fp = <_io.BufferedReader name='test.gif_frames\\00000.png'>, size = 8192

    def _safe_read(fp, size):
        """
        Reads large blocks in a safe way.  Unlike fp.read(n), this function
        doesn't trust the user.  If the requested size is larger than
        SAFEBLOCK, the file is read block by block.
    
        :param fp: File handle.  Must implement a <b>read</b> method.
        :param size: Number of bytes to read.
        :returns: A string containing up to <i>size</i> bytes of data.
        """
        if size <= 0:
            return b""
        if size <= SAFEBLOCK:
>           return fp.read(size)
E           ValueError: read of closed file

C:\ProgramData\Anaconda3\lib\site-packages\PIL\ImageFile.py:549: ValueError

I'm not sure yet whether this is a bug in HCIPy or in Pillow.

Atmosphere evolution bug

Aloha,

First of all, thank you for your wonderful work! I have been using it for several years now for my research.

It seems to me that a bug may have crept into the code for evolving atmospheres. Very nonphysical-looking ridges (i.e. vertical and horizontal lines) appear from the edges of the phase screen as I evolve a single-layer atmosphere. I could provide some boilerplate code if that would help. It seems to happen independently of how granular my time steps are.

I first noticed this late last year, but at the time I suspected it had something to do with my switch in hardware (maybe the RNG in Python on M1 hardware was somehow to blame?). I have now confirmed it across a few platforms.

Is this something anyone else has noticed? Thanks for any help!

Wavefront creation fails on Magellan aperture...

>>> from hcipy import *
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> pupil_grid = make_pupil_grid(1024)
>>> aperture = make_magellan_aperture(True)
>>> wavefront = Wavefront(aperture, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/feldt/Python/HCIPY/venv/lib/python3.6/site-packages/hcipy/optics/wavefront.py", line 58, in __init__
    self._electric_field = electric_field.astype('complex')
AttributeError: 'function' object has no attribute 'astype'

I guess it's due to the wavefront object in wavefront.py containing both a variable and a property/function of the name "electric_field".

Curiously, it works on a circular aperture.

Python version is 3.6
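A possible workaround, assuming the underlying cause is that make_magellan_aperture() returns a field generator that still needs to be evaluated on the pupil grid:

aperture = make_magellan_aperture(True)(pupil_grid)
wavefront = Wavefront(aperture, 1)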

fourier_transform not working as expected?

Hi,

I've just been running some simple tests with the fourier_transform class, and it seems that the Fourier transform is not invertible, i.e. fourier_transform.forward(fourier_transform.backward(x)) != x. Is this by design? I would have thought that this would not be the desired behaviour.

Here is a small snippet of code that can be used to illustrate this issue. As you can see from the images attached, I would expect Figure 3 to appear like Figure 1, but instead it seems identical to Figure 2 just on a different-sized grid.

import numpy as np
import matplotlib.pyplot as plt
from hcipy import *

pupilsize = 128
focalsize = 128
diameter = 12
mas_pix = 10
rad_pix = np.radians(mas_pix / 1000 / 3600)
wavelength = 1630e-9

#makes grids
pupil_grid = make_pupil_grid(pupilsize, diameter)
focal_grid = make_uniform_grid([focalsize,focalsize], [focalsize*rad_pix, focalsize*rad_pix])

#makes ft
fouriertransform = make_fourier_transform(pupil_grid, focal_grid)

#makes object to FT
c_e = LyotCoronagraph(pupil_grid, 1 - evaluate_supersampled(circular_aperture(2*wavelength/diameter), focal_grid, 8))
ce = c_e.focal_plane_mask._apodization

#computes and displays FTs
plt.figure()
imshow_field(ce)
backce = fouriertransform.backward(ce)
plt.figure()
imshow_field(backce)
forwardce = fouriertransform.forward(backce)
plt.figure()
imshow_field(forwardce)
plt.show()

Outputs: three figures are attached (err_fig1, err_fig2, err_fig3).

Could you explain this behaviour? I would really appreciate some assistance; my confusion on this issue has really been putting a spanner in my research.

Thanks!

Broken OD WFSs

Most, if not all, are broken (looking at the code, since 0.2) due to using old FraunhoferPropagators and make_focal_grid signatures. Can you fix these @syhaffert?

Perfect coronagraph doesn't work with (partially) polarized light

Hi,

I tried to propagate a partially polarized wavefront through the perfect coronagraph, but it cannot calculate the correction because the shapes of the arrays are not correct. The error that is given back is:

File "test_perfect_coronagraph.py", line 15, in <module>
    pup_wf_post_coronagraph = coronagraph.forward(pup_wf)
  File "/Users/stevenbos/Documents/Code/hcipy/hcipy/coronagraphy/perfect_coronagraph.py", line 72, in forward
    correction = self.transformation.dot(self.coeffs * self.transformation_inverse.dot(wf.electric_field))
ValueError: shapes (1,16384) and (2,2,16384) not aligned: 16384 (dim 1) != 2 (dim 1)

Example code that reproduces the bug:

import numpy as np 
from hcipy import * 

Npix = 128

pupil_grid = make_pupil_grid(Npix)

aperture = circular_aperture(1)(pupil_grid)

coronagraph = PerfectCoronagraph(aperture)

pup_wf = Wavefront(aperture, input_stokes_vector=[1,0.1,0,0])

pup_wf_post_coronagraph = coronagraph.forward(pup_wf)

Building tutorials broken

This part of the codebase hasn't changed in a while. Also, the trace shows that the error is probably external to hcipy.

Trace:

Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/sphinx/config.py", line 319, in eval_config_file
    execfile_(filename, namespace)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/sphinx/util/pycompat.py", line 89, in execfile_
    exec(code, _globals)
  File "/home/travis/build/ehpor/hcipy/doc/conf.py", line 180, in <module>
    compile_tutorials.compile_all_tutorials()
  File "./compile_tutorials.py", line 215, in compile_all_tutorials
    tutorials[name] = compile_tutorial(name)
  File "./compile_tutorials.py", line 92, in compile_tutorial
    km, kc = ep.start_new_kernel(cwd=os.path.abspath(os.path.dirname(notebook_path)))
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/nbclient/util.py", line 74, in wrapped
    return just_run(coro(*args, **kwargs))
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/nbclient/util.py", line 53, in just_run
    return loop.run_until_complete(coro)
  File "/opt/python/3.7.1/lib/python3.7/asyncio/base_events.py", line 573, in run_until_complete
    return future.result()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/nbclient/client.py", line 376, in async_start_new_kernel
    assert self.km is not None
AssertionError

FFMpegWriter error on Mac

When generating movies on a Mac using the FFMpegWriter the code crashes. The codec is set to 'mpeg4', which should work on a Mac. But the following error occurs:

movie.add_frame(fig=fig)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/hcipy/plotting/animation.py", line 255, in add_frame
fig.savefig(self.p.stdin, format='png', transparent=False, dpi=dpi)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/figure.py", line 2180, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 2089, in print_figure
**kwargs)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py", line 532, in print_png
self.figure.dpi, metadata=metadata)
SystemError: <built-in method write of _io.BufferedWriter object at 0x11a34feb8> returned a result with an error set

Cannot import scipy.misc.imsave

After installing hcipy from pip, an import results in the error

ImportError: cannot import name 'imsave' from 'scipy.misc' (/usr/local/lib/python3.7/site-packages/scipy/misc/__init__.py)

imsave has been removed from scipy after version 1.2 (I'm running 1.3); could this dependency be removed from hcipy?
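One possible way to remove the dependency would be to swap in imageio; this is just an assumption about a fix, not a maintainer decision:

import imageio
imageio.imwrite('frame.png', image_array)  # instead of scipy.misc.imsave('frame.png', image_array)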

wavefront_control submodule

I'd like to add tip-tilt wavefront control strategies to HCIPy and saw that there's a wavefront_control module that's currently empty. I wanted to see if there were any plans to develop this functionality already, before I start adding to it?
(Also this should be labelled "enhancement" but I can't find where to set that, sorry!)

NoisyDetector enhancement: subsample field

Currently, when trying to integrate supersampled Wavefronts with the NoisyDetector, you first have to manually subsample the electric field to the correct grid size of the detector. I find this a bit cumbersome and unnecessary.

I think it would be more intuitive (and convenient) to add the subsampling to the integrate function of the NoisyDetector as well. The code would check whether the wavefront grid shape matches the grid shape of the detector and, if necessary, subsample it to the correct shape, along the lines of the sketch below.
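A rough sketch of the proposed check inside the detector's integrate step; the attribute names and the subsample_field call are assumptions, not final code:

power = wavefront.power
if power.grid.size != self.detector_grid.size:
    # Hypothetical: reduce the supersampled power onto the detector grid.
    power = subsample_field(power, subsampling_factor)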

@ehpor If you like this idea, I can code it up and create a new pull request this week.

ValueError on generating infinite atmospheric layer with rectangular grid

When trying to create an infinite atmospheric layer with a rectangular grid I get a ValueError. Full traceback:
Exception has occurred: ValueError
operands could not be broadcast together with shapes (257,257) (256,)
File "...\hcipy\hcipy\atmosphere\infinite_atmospheric_layer.py", line 112, in _make_AB_matrices
self.B_vertical = U * L
File "...\hcipy\hcipy\atmosphere\infinite_atmospheric_layer.py", line 34, in init
self._make_AB_matrices()
File "...\test.py", line 23, in
atmosphere = hp.InfiniteAtmosphericLayer(grid, Cn2, L0)

Switching the offset to the y-axis of the input grid also generates a ValueError. Full traceback:
Exception has occurred: ValueError
operands could not be broadcast together with shapes (257,257) (256,)
File "...\hcipy\hcipy\atmosphere\infinite_atmospheric_layer.py", line 131, in _make_AB_matrices
self.B_horizontal = U * L
File "...\hcipy\hcipy\atmosphere\infinite_atmospheric_layer.py", line 34, in init
self._make_AB_matrices()
File "...\test.py", line 23, in
atmosphere = hp.InfiniteAtmosphericLayer(grid, Cn2, L0)

Minimal example generating the error:
import hcipy as hp

N = 256
D = 1.
offset = 1

r0 = 0.2
wl = 656E-9
L0 = 50.

grid = hp.make_pupil_grid([N+offset, N], D)
Cn2 = hp.Cn_squared_from_fried_parameter(r0, wl)
atmosphere = hp.InfiniteAtmosphericLayer(grid, Cn2, L0)

Handling of sparse matrices in the inverse_* functions and SVDs

Reported by @VikramRadhakrishnan

With the possibility of a ModeBasis having sparse transformation matrices to reduce memory usage (for example for deformable mirrors with a large number of actuators) comes a problem when inverting matrices with the built-in inverse_* functions and when calculating SVDs. Currently this raises an exception. A workaround is available, but at the cost of having to store a copy of the densified matrix.

influence_functions = make_gaussian_influence_functions(pupil_grid, num_actuators_across, actuator_spacing)

# This fails:
projection_matrix = inverse_tikhonov(influence_functions.transformation_matrix, 1e-5)

# Workaround
projection_matrix = inverse_tikhonov(influence_functions.transformation_matrix.toarray(), 1e-5)

Intermittent failure of thin lens CI test

This test has been failing intermittently only on Linux with Python 3.9. It has been noted in #114 and #115 among others. This issue will further track this bug to reduce chatter on the mentioned PRs.

The thin lens CI test propagates light through a thin lens with a certain focal length. It then progressively propagates that light through focus using an AngularSpectrumPropagator object. It finds where the maximum peak intensity occurs as a function of distance from the lens. This distance should match the focal length of the thin lens. The test is performed twice: once at a 30 m focal length, and once at double that.

The intermittent nature makes this hard to debug. I've been unable to reproduce it on my Linux machine with identical package versions. The branch thin_lens_test_failure has been set up for debugging.

It turns out the CI failure is because the AngularSpectrumPropagator returns NaNs for the propagated wavefront. But it does this only for the first of the two focal length tests. Test output copied below:

z [ 0.  3.  6.  9. 12. 15. 18. 21. 24. 27. 30. 33. 36. 39. 42. 45. 48. 51.
 54. 57. 60.]
peak [Field(1.94038034e-05), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan), Field(nan)]
z2 [  0.   6.  12.  18.  24.  30.  36.  42.  48.  54.  60.  66.  72.  78.
  84.  90.  96. 102. 108. 114. 120.]
peak2 [Field(1.94038034e-05), Field(3.35065073e-05), Field(4.375403e-05), Field(6.10293918e-05), Field(8.11504359e-05), Field(0.00011799), Field(0.00047752), Field(0.00038099), Field(0.001278), Field(0.00728061), Field(0.02073922), Field(0.00769245), Field(0.00100904), Field(0.00039627), Field(0.00048316), Field(0.00016863), Field(0.00010752), Field(6.44379246e-05), Field(8.44530374e-05), Field(9.41318839e-05), Field(6.92255138e-05)]

Drop support for Python 2

As Python 2 support will be dropped soon (on the 1st of January 2020; https://pythonclock.org/), we will need to progress as well. As of now, all dependencies still support Python 2, but this will not always be the case. My aim is to drop support for Python 2 on the 1.0 release, after integration of AO control.

Generating phase screens from MultiLayerAtmosphere?

Hello,

I've been successfully generating phase screens with:

layer = InfiniteAtmosphericLayer(...)
phase_screen = layer.phase_for(wavelength)

but it would be very useful to be able to generate phase screens from a MultiLayerAtmosphere as well. Is it possible to do this with the current version of the code?
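For reference, a sketch of the desired usage, assuming MultiLayerAtmosphere exposes the same phase_for() interface as a single layer (make_standard_atmospheric_layers is used here just to build some layers):

layers = make_standard_atmospheric_layers(pupil_grid)
atmosphere = MultiLayerAtmosphere(layers)
phase_screen = atmosphere.phase_for(wavelength)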

Clarity on Atmosphere Adaptive Optics Example

Hi,
I am following example: https://github.com/ehpor/hcipy/blob/master/doc/getting_started/3_atmosphere_adaptive_optics.ipynb

I have the following confusion:

  1. focal_grid = make_focal_grid(pupil_grid, 8, 16, wavelength=wavelength)
    Is 16 the focal length of the converging lens, and 8 the number of Airy disks?

  2. When creating Cn2Square, why is the wavelength used different from the one specified earlier? Won't it create some sort of inconsistency? How is using a different wavelength for creating the atmospheric turbulence justified, compared to the one used to create the focal_grid?

Thanks.

Would you please add some examples of a laser propagating through a lens?

Thanks for this wonderful, purpose-built light propagation simulation code. After studying it carefully for a while, I want to do some simulations. However, I am not sure how to do some common simulations, such as laser propagation through a lens. I think many others would have this kind of confusion too. Would you please spare some time to add some examples? That would be very kind of you.
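For anyone looking for a starting point, a rough sketch (not from the official examples; the parameters are arbitrary) of a collimated Gaussian beam passing through an ideal thin lens, modelled as a quadratic phase, and propagated to the focal plane with Fresnel diffraction:

import numpy as np
from hcipy import *

wavelength = 633e-9
focal_length = 0.5   # meters
beam_waist = 2e-3    # meters

grid = make_pupil_grid(512, 0.02)
r = grid.as_('polar').r

# Collimated Gaussian input beam.
wf = Wavefront(Field(np.exp(-r**2 / beam_waist**2), grid), wavelength)

# Ideal thin lens: quadratic phase -k r^2 / (2 f).
k = 2 * np.pi / wavelength
wf.electric_field *= np.exp(-1j * k * r**2 / (2 * focal_length))

# Propagate to the focal plane and show the (log-scaled) intensity.
wf_focus = FresnelPropagator(grid, focal_length)(wf)
imshow_field(np.log10(wf_focus.power / wf_focus.power.max()), vmin=-6)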

Add unit keyword for image display in imshow_field()

Emiel and I talked about the option to scale the image display from imshow_field() by choice, without having to manually scale the display grid. This would require a keyword that sets the grid scaling of the input image.

Iva:

I realized that the HCIPy images are simply in units of radians [...] On the same note though, would it be of use to include a keyword in imshow_field that scales the images in lambda/D instead, optionally?

Emiel:

[...] although it would be useful to have a function for this, it shouldn't be automatic as you said (sometimes you want to look at an image on a physical scale). Also, most of the time I'm using normalized coordinates anyway (i.e. all my pupils have a diameter of 1). In this case all the axes are already in lambda/D. If I use physical units, and I want to see the image in physical coordinates, I always do something like:

imshow_field(image, image.grid.scaled(1 / plate_scale))

This will set the new grid (the second argument) before showing the Field. This is what I also use for on-sky data, letting me view the axes in arcsec, in mm on the detector, in pixels, or in lambda/D for some wavelength. (Essentially I have a separate Grid for each case, which are simply scaled versions of each other.) If you can find a good way of automating this, I would be happy to merge/include that. I just haven't found a way yet that is easier to remember than doing it manually as above.

Iva:

It seems easy to just display your image in different units by redefining the grid as you show in your email, and I think the only way in which to make this more intuitive is to make a more "user friendly" input, which can be done in so many forms. One way would probably be to define a keyword which is used to feed in a scalar for exactly the kind of unit transformation like you do it on your grid when you scale it. Since all of them would be of the form 1/platescale on the grid of the input image to the plotting function, it would just take away the need for the user to redefine the grid, and they just provide the scaling factor (i.e. the wished for units). It would make it a tad easier to use, especially in cases where people don't use normalized pupils/wavelengths, and the way of doing it that you show does exactly what I've been thinking of. So I guess it would just be the matter of an extra keyword?

Emiel:

I could live with a grid_scaling_factor parameter in imshow_field(), or something to that effect.
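Hypothetical usage of such a parameter (the name is not final), equivalent to the manual grid rescaling quoted above:

imshow_field(image, grid_scaling_factor=1 / plate_scale)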

Optical elements not pickleable again

Potential regression from #69

@chen-pin Can you put together a minimal working example that worked just after merging, but doesn't anymore? Or even simpler, a minimal working example that should work but doesn't at the moment?

Backward propagation on FraunhoferPropagator fails

Minimal reproducing code:

from hcipy import *

pupil_grid = make_pupil_grid(512, 1)
focal_grid = make_focal_grid(4, 16)
prop = FraunhoferPropagator(pupil_grid, focal_grid)

wf = Wavefront(circular_aperture(1)(focal_grid))
pup = prop.backward(wf)

Traceback:

Traceback (most recent call last):
  File ".\test.py", line 8, in <module>
    pup = prop.backward(wf)
  File "c:\users\emiel por\github\hcipy\hcipy\optics\optical_element.py", line 760, in res
    instance_data = self.get_instance_data(None, wavefront.grid, wavefront.wavelength)
  File "c:\users\emiel por\github\hcipy\hcipy\optics\optical_element.py", line 632, in get_instance_data
    self.make_instance(instance_data, input_grid, output_grid, wavelength)
  File "c:\users\emiel por\github\hcipy\hcipy\propagation\fraunhofer.py", line 38, in make_instance     
    instance_data.fourier_transform = make_fourier_transform(input_grid, instance_data.uv_grid)
  File "c:\users\emiel por\github\hcipy\hcipy\fourier\fourier_transform.py", line 162, in make_fourier_transform
    if input_grid.is_separated and input_grid.is_('cartesian') and output_grid.is_separated and output_grid.is_('cartesian') and input_grid.ndim in [1,2]:
AttributeError: 'function' object has no attribute 'is_separated'

Reported by @chen-pin

Compensation with tip/tilt mirror

I am trying to model an optical system that incorporates a tip/tilt mirror to compensate for atmospheric turbulence, but I am having trouble doing this elegantly. Do you have any example code implementing tip/tilt compensation that you would be willing to share?

Thanks!
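For reference, a minimal sketch of one way to do this (my own sketch, not official example code; it assumes pupil_grid, aperture, atmosphere and wavelength already exist and a pupil diameter of 1): least-squares fit tip and tilt Zernike modes to the phase screen and subtract them.

from hcipy import *

tt_basis = make_zernike_basis(2, 1, pupil_grid, starting_mode=2)   # tip and tilt only
tt_fit = inverse_tikhonov(tt_basis.transformation_matrix, 1e-9)

phase = atmosphere.phase_for(wavelength) * aperture
coeffs = tt_fit.dot(phase)
corrected_phase = phase - tt_basis.transformation_matrix.dot(coeffs)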

Missing telescope apertures

PR #133 adds two more telescope apertures. Now we are only missing a couple more:

Are there any more apertures that would be useful to include?

I propose to use this issue to keep track of the last couple of missing telescope apertures.

Knife-edge coronagraph on different sides

I just pushed a branch where I added different directions for the knife-edge coronagraph. It works and I have added tests. However, there is a slight asymmetry in the peak flux after the knife edge between the 'left' and 'right' versions. The same holds for the 'up' and 'down' directions.

Here is a minimal working example that demonstrates this. I am not sure why this happens. I checked both even and odd grids, so it does not seem to be a symmetry issue due to the inclusion of a zero in the Fourier transform. The shape of the coronagraphic residuals does look similar.

from hcipy import *
import numpy as np
from matplotlib import pyplot as plt

grid = make_pupil_grid([256, 512])
aperture = circular_aperture(1)(grid)

focal_grid = make_focal_grid(q=5, num_airy=15)
prop = FraunhoferPropagator(grid, focal_grid)

lyot_stop = circular_aperture(0.95)(grid)

wf = Wavefront(aperture)
norm = prop(wf).power.max()

directions = ['left', 'right', 'down', 'up']
for i, direction in enumerate(directions):
    if direction == 'left':
        pre_apodizer = np.exp(1j * 2 * np.pi * grid.x * -1)
    elif direction == 'right':
        pre_apodizer = np.exp(1j * 2 * np.pi * grid.x * 1)
    elif direction == 'up':
        pre_apodizer = np.exp(1j * 2 * np.pi * grid.y * 1)
    else:
        pre_apodizer = np.exp(1j * 2 * np.pi * grid.y * -1)

    knife = KnifeEdgeLyotCoronagraph(grid, knife_edge_side=direction, apodizer=pre_apodizer, lyot_stop=lyot_stop * np.conj(pre_apodizer))
    wf_foc = prop(knife(wf))
    print(wf_foc.power.max() / norm)

    plt.subplot(2, 2, i + 1)
    imshow_psf(wf_foc.power / norm, vmax=1e-2, vmin=1e-5)
plt.show()

Scaling of phase_dot_diameter for the ZernikeWavefrontSensor

On line 42 of zernike_wavefront_sensor.py, the phase_dot_diameter gets divided by lambda/D, but it should get multiplied by lambda/D, I think.

Edit: there might be something else going on, but at least in my personal simulation it seems to work with it flipped. To be investigated further.

@dsdoelman

TypeError: unsupported operand type(s) for /: 'float' and 'CartesianGrid'

Hello,

I installed the hcipy library to use on Windows 10 (x64). I wanted to run "II - Wavefronts and optical systems" by writing it from scratch. However, I get an error in the Python terminal.

"TypeError: unsupported operand type(s) for /: 'float' and 'CartesianGrid'"

The file that is the source of this error is "\hcipy\field\util.py".

My Python version:
Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46)

Here is full error:

Traceback (most recent call last):
File "C:/Users/RECEP/OneDrive/Python/PycharmProjects/WORKBENCH/dag_ao.py", line 10, in
focal_grid = make_focal_grid(pupil_grid, 8, 16)
File "C:\Users\RECEP\AppData\Local\Programs\Python\Python38\lib\site-packages\hcipy\field\util.py", line 187, in make_focal_grid
delta = float(spatial_resolution) / q * np.ones(2)
TypeError: unsupported operand type(s) for /: 'float' and 'CartesianGrid'

Fields are not pickle-able

Reported by @chen-pin.

Loading a Field object from a fits file is still a Field, but doesn't contain the Field.grid.

Minimal working example:

from hcipy import *
import numpy as np
import pickle

a = make_pupil_grid(128)
b = Field(np.random.randn(a.size), a)

with open('test_a.pkl', 'wb') as f:
    pickle.dump(a, f)

with open('test_a.pkl', 'rb') as f:
    aa = pickle.load(f)
    print(aa)

with open('test_b.pkl', 'wb') as f:
    pickle.dump(b, f)

with open('test_b.pkl', 'rb') as f:
    bb = pickle.load(f)
    print(bb.grid)

Output:

CartesianGrid(RegularCoords)
Traceback (most recent call last):
  File ".\hcipy_pickle.py", line 20, in <module>
    print(bb.grid)
AttributeError: 'Field' object has no attribute 'grid'

NoisyDetector with photon noise is all photon noise!

output_field = large_poisson(output_field, thresh=1e6)

I believe this line should be += rather than = . As it is, the NoisyDetector with photon noise enabled just returns photon noise!

Aloha and thanks,
~Ian Cunnyngham

Edit: I guess I wasn't understanding how large_poisson worked, so this line is fine. Still scratching my head as to why my attempt to use NoisyDetector ends up with just photon noise

Divide by zero in make_zernike_basis with grids that include 0 as grid point.

make_zernike_basis has a divide-by-zero error when used with a grid that includes 0 as a grid point.
The specific line causing the issue is line 139 in zernike.py, in the function zernike_radial.

h3 = -4 * (q - 2) * (q - 3) / float((p + q - 2) * (p - q + 4))
h2 = h3 * (p + q) * (p - q + 2) / float(4 * (q - 1)) + (q - 2)
h1 = q * (q - 1) / 2.0 - q * h2 + h3 * (p + q + 2) * (p - q) / 8.0

r2 = zernike_radial(2, 2, r, cache)
res = h1 * zernike_radial(p, q, r, cache) + (h2 + h3 / r2) * zernike_radial(n, q - 2, r, cache)

The line with h3 / r2 creates a NaN if zero is included as a grid point.
This does not happen analytically because the limit r -> 0 converges, just like for the sinc function.
We evaluate all polynomials numerically, so there are no analytical formulas for the zero point.
I have not yet read the paper that the documentation cites. Maybe they propose a solution for this.

A quick and dirty fix is to shift the grid by a small amount. I used a 1 um shift for a grid of 10m in diameter.
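A sketch of that workaround (the shift amount and basis size here are arbitrary):

# Shift the grid by 1 um so that r = 0 is never sampled exactly.
shifted_grid = pupil_grid.shifted([1e-6, 1e-6])
zernike_basis = make_zernike_basis(20, 10, shifted_grid)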
