
Comments (15)

bnorthan avatar bnorthan commented on August 16, 2024

Hi @dscho, @ctrueden

So the first steps for this effort are to create a JTransforms version of FFTMethods, then to create interfaces for the fft ops, and then ops that wrap FFTMethods. That work is in progress in this branch.

https://github.com/imagej/imagej-ops/compare/fftmethods_wrapper

I was thinking we could have fft and ifft (inverse FFT) interfaces, as in MATLAB. Please let me know if you have thoughts on how we should design this.

from imagej-ops.

ctrueden avatar ctrueden commented on August 16, 2024

I am naive when it comes to FFTs, but I like fft and ifft as op names. Is that sufficient to capture all of the public static methods provided by FFTMethods? Because the most straightforward and general thing would be to expose each of those methods as an op, no? (Even if some/many of those methods have the same op name...)


bnorthan avatar bnorthan commented on August 16, 2024

Hi Curtis

fft and ifft should capture all the variations for the image transforms. Hopefully each variation can be determined from the inputs.

There have been a couple of recent discussions on the listserv about 1D transforms. The typical scenario is that a user wants to transform an array of data; for example, maybe they performed a measurement using a line profile and have an array of intensities. An 'Array.Fourier' method was recently added to IJ1.

http://imagej.nih.gov/ij/macros/examples/TestArrayFourier.txt

So a variation that works on 1D measurement data is probably needed. I am thinking this will depend on the final version of the ROI interfaces.
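For illustration, this is what a 1D transform of measurement data computes, written as a naive O(N^2) DFT in plain Java. This is a sketch only; a real fft op would delegate to an FFT library (e.g. JTransforms) rather than evaluate the sum directly.

```java
// Naive O(N^2) DFT of a real-valued 1-D measurement array.
// Illustrative only -- not the FFTMethods implementation.
public class Dft1D {

    /** Returns the spectrum interleaved as [re0, im0, re1, im1, ...]. */
    public static double[] forward(final double[] x) {
        final int n = x.length;
        final double[] out = new double[2 * n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                final double angle = -2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im += x[t] * Math.sin(angle);
            }
            out[2 * k] = re;
            out[2 * k + 1] = im;
        }
        return out;
    }

    public static void main(final String[] args) {
        // A constant signal has all its energy in the DC bin (k = 0).
        final double[] spectrum = forward(new double[] { 1, 1, 1, 1 });
        System.out.println(Math.round(spectrum[0]) + " " + Math.round(spectrum[2]));
    }
}
```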


dscho avatar dscho commented on August 16, 2024

There was a 'Array.Fourier' method recently added to IJ1.

This is unfortunately a misnomer, but at least it is consistent with ImageJ 1.x's "Fourier" transformation: the Array.Fourier function actually performs a Fast Hartley Transform.
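For reference, here is a minimal sketch of the discrete Hartley transform that Array.Fourier computes, built on cas(t) = cos(t) + sin(t):

```java
// The ImageJ 1.x Array.Fourier macro function computes a discrete
// Hartley transform (DHT), not a complex DFT. Minimal sketch:
// H[k] = sum_n x[n] * cas(2*pi*n*k/N), where cas(t) = cos(t) + sin(t).
public class Dht {

    public static double[] forward(final double[] x) {
        final int n = x.length;
        final double[] h = new double[n];
        for (int k = 0; k < n; k++) {
            double sum = 0;
            for (int t = 0; t < n; t++) {
                final double angle = 2 * Math.PI * t * k / n;
                sum += x[t] * (Math.cos(angle) + Math.sin(angle)); // cas()
            }
            h[k] = sum;
        }
        return h;
    }

    public static void main(final String[] args) {
        // The DHT of a unit impulse is flat: every bin equals 1.
        final double[] h = forward(new double[] { 1, 0, 0, 0 });
        System.out.println(Math.round(h[0]) + " " + Math.round(h[3]));
    }
}
```

The DHT is real-valued; the complex DFT can be recovered from it via Re F[k] = (H[k] + H[N-k])/2 and Im F[k] = (H[N-k] - H[k])/2, which is why the two transforms are easy to confuse.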


bnorthan avatar bnorthan commented on August 16, 2024

@ctrueden, Below I've documented the steps used in @StephanPreibisch's convolution function (https://github.com/imglib/imglib2-algorithm-gpl/blob/master/src/main/java/net/imglib2/algorithm/fft2/FFTConvolution.java see run() )

The FFT ops (#76) were designed with this workflow in mind...

  1. Calculate the desired extended size of the image. The image has to be extended to avoid edge artifacts; often it is extended by kernelDimensions/2-1 in each dimension. However, the extension size should be an adjustable parameter, as in some cases we may want a smaller extension to increase speed.
  2. Compute the size of the complex-valued output and the required padding based on the extended input image. (The final "extended" image size is calculated so that it is a "fast" FFT size; FFTs are much faster for certain sizes.)
  3. Using the size calculated above, compute the new interval for the input image.
  4. Compute the new interval for the kernel.
  5. Now that we have calculated the sizes and offsets, use views to extend the kernel and image "virtually".
  6. Compute the FFTs.
  7. Compute the complex conjugate of the FFT of the kernel (if needed for convolution).
  8. Apply an algorithm in the frequency domain. In the case of convolution, multiply in the frequency domain; other operations (such as a Wiener filter) could be applied at this step.
  9. Perform the inverse FFT and unpad to retrieve an output that is the same size as the input.
        ////////////////////////////////////////////////////////////////////////
        // the first few lines of code here calculate the extended sizes and  
        // corresponding intervals of the input image and the kernel.  
        // Extension is needed to avoid edge artifacts
        // Also, since the FFT is faster at certain sizes, we want to extend to an
        // efficient FFT size
        ////////////////////////////////////////////////////////////////////////

        final int numDimensions = imgInterval.numDimensions();

        // 1. Calculate the desired extended size of the image.
        // The image has to be extended to avoid edge artifacts;
        // here it is extended by kernelDimensions/2-1 in each dimension.
        // However, the extension size could be a parameter: in some cases
        // we may want a smaller extension to increase speed.
        final long[] newDimensions = new long[numDimensions];

        for (int d = 0; d < numDimensions; ++d) {
            // no narrowing casts here: dimension() returns long, and
            // casting to int could overflow for very large images
            newDimensions[d] =
                imgInterval.dimension(d) + kernelInterval.dimension(d) - 1;
        }

        // 2. compute the size of the complex-valued output and the required
        // padding based on the prior extended input image
        // (The image size is recalculated again so that it is a "fast" fft size. 
        // FFTs are much faster for certain sizes.) 

        final long[] paddedDimensions = new long[numDimensions];
        final long[] fftDimensions = new long[numDimensions];

        FFTMethods.dimensionsRealToComplexFast(FinalDimensions.wrap(newDimensions),
            paddedDimensions, fftDimensions);

        // 3. Using the size calculated above, compute the new interval for the input image
        final Interval imgConvolutionInterval =
            FFTMethods.paddingIntervalCentered(imgInterval, FinalDimensions
                .wrap(paddedDimensions));

        // do we need to recalculate for a new kernel?
        if (fftImg != null) {
            for (int d = 0; d < imgConvolutionInterval.numDimensions(); d++) {
                if (imgConvolutionInterval.dimension(d) != fftImg.dimension(d)) {
                    fftImg = null;
                    break; 
                }
            }
        }

        // 4. Compute the new interval for the kernel image
        final Interval kernelConvolutionInterval =
            FFTMethods.paddingIntervalCentered(kernelInterval, FinalDimensions
                .wrap(paddedDimensions));

        // compute where to place the final Interval for the kernel so that
        // the coordinate at the center of the kernel is at position (0,0)
        final long[] min = new long[numDimensions];
        final long[] max = new long[numDimensions];

        for (int d = 0; d < numDimensions; ++d) {
            min[d] = kernelInterval.min(d) + kernelInterval.dimension(d) / 2;
            max[d] = min[d] + kernelConvolutionInterval.dimension(d) - 1;
        }

        ////////////////////////////////////////////////////////////////////////////
        // End of size and interval calculation
        ////////////////////////////////////////////////////////////////////////////

        // 5. now that we have calculated the sizes and offsets we use views to extend the 
        // kernel and image "virtually"

        // assemble the kernel (size of the input + extended periodic +
        // top left at center of input kernel)
        final RandomAccessibleInterval<K> kernelInput =
            Views.interval(Views.extendPeriodic(Views.interval(kernel,
                kernelConvolutionInterval)), new FinalInterval(min, max));

        // assemble the extended view of the image
        final RandomAccessibleInterval<T> imgInput =
            Views.interval(img, imgConvolutionInterval);

        // 6. Compute the FFTs if they do not exist yet
        if (fftImg == null) {
            fftImg = FFT.realToComplex(imgInput, fftFactory);
        }
        if (fftKernel == null) {
            fftKernel = FFT.realToComplex(kernelInput, fftFactory);

            // 7. compute the complex conjugate of the FFT of the kernel (same as
            // mirroring the input image)
            // otherwise it corresponds to correlation and not convolution
            if (complexConjugate) {
                FFTMethods.complexConjugate(fftKernel);
            }
        }

        final Img<ComplexFloatType> fftconvolved;

        if (keepImgFFT) {
            fftconvolved = fftImg.copy();
        }
        else {
            fftconvolved = fftImg;
        }

        ///////////////////////////////////////////////////////////////////////////
        //  8. Here we apply an algorithm in the frequency domain.  In this case
        // it is simply a convolution.  Alternative filters (such as Wiener Filter) could
        // be applied at this step. 
        ///////////////////////////////////////////////////////////////////////////

        // multiply in place (convolution corresponds to multiplication in
        // the frequency domain)
        multiplyComplex(fftconvolved, fftKernel);

        ///////////////////////////////////////////////////////////////////////////
        // End of algorithm.  
        ///////////////////////////////////////////////////////////////////////////

        // 9. inverse FFT in place, unpad to retrieve an output that is the
        // same size as the input
        FFT.complexToRealUnpad(fftconvolved, output);

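Two of the numeric details in the workflow above can be sketched in plain Java. This is illustrative only: it assumes that "fast" sizes are those with only the small prime factors 2, 3, 5 and 7, the kind mixed-radix FFT libraries such as JTransforms handle efficiently; the exact rule used by FFTMethods.dimensionsRealToComplexFast may differ. The complex multiply stands in for the frequency-domain step, with conjugation distinguishing convolution from correlation.

```java
// Sketch of two numeric details of FFT-based convolution:
// padding to a "fast" size, and pointwise complex multiplication.
public class FftHelpers {

    /** Smallest integer >= n whose only prime factors are 2, 3, 5 and 7. */
    public static int nextFastSize(final int n) {
        for (int candidate = n;; candidate++) {
            int r = candidate;
            for (final int p : new int[] { 2, 3, 5, 7 }) {
                while (r % p == 0) r /= p;
            }
            if (r == 1) return candidate;
        }
    }

    /**
     * In-place complex multiply of interleaved [re, im, ...] arrays:
     * pointwise multiplication in the frequency domain corresponds to
     * convolution in the spatial domain. Conjugating b first would
     * yield correlation instead.
     */
    public static void multiplyComplex(final double[] a, final double[] b) {
        for (int i = 0; i < a.length; i += 2) {
            final double re = a[i] * b[i] - a[i + 1] * b[i + 1];
            final double im = a[i] * b[i + 1] + a[i + 1] * b[i];
            a[i] = re;
            a[i + 1] = im;
        }
    }

    public static void main(final String[] args) {
        System.out.println(nextFastSize(257)); // 257 is prime, so pad up
        final double[] a = { 1, 2 }; // 1 + 2i
        final double[] b = { 3, 4 }; // 3 + 4i
        multiplyComplex(a, b);       // (1+2i)(3+4i) = -5 + 10i
        System.out.println(a[0] + " " + a[1]);
    }
}
```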

bnorthan avatar bnorthan commented on August 16, 2024

@ctrueden, @dscho, @dietzc, @StephanPreibisch

I updated the "milestones" above to reflect recent work and what needs to be done next. I can take a look at SymmetricSeparableConvolution next. I also want to update all these ops for ImgPlus once the ImgPlus enhancements are done.


ctrueden avatar ctrueden commented on August 16, 2024

Thanks @bnorthan for all your work on this!

On a general note: I want to keep these milestones achievable and granular. So e.g. for the GPU goals, those should be articulated separately in #60 and possibly other issues as appropriate, rather than blocking this issue from closing out once the initial work is complete.

Working on the SymmetricSeparableConvolution next makes sense. Hopefully it is a simple matter of wrapping the lower-level implementation. As always, speak up if you hit any roadblocks.


dietzc avatar dietzc commented on August 16, 2024

Some questions about the Convolver ops: some implementations use Img as input rather than RandomAccessibleInterval or RandomAccessible. I think it would be a good idea to avoid Img whenever possible; we should rather use the ops conversion from RandomAccessibleInterval-based ops to Img-based ops.

What do you think @bnorthan @ctrueden?


bnorthan avatar bnorthan commented on August 16, 2024

Hi Christian

The convolver ops generally have two implementations: a higher-level one using Img and a lower-level one that uses RandomAccessibleInterval.

The high-level op uses the ImgFactory of the input to create the output, and to create memory for the FFT (if it is an FFT-based convolver). The high-level op(s) also extend the boundaries of the input and kernel (if required). In the future the high-level op will look at the metadata to determine the type of each axis (this is needed so that XYZ data (a 3D convolution) can be handled differently from XYT data (a series of 2D convolutions)).

The low-level ones, which use RandomAccessibleInterval, do the actual work and can be called directly if everything is already set up.

Do you think it is possible to totally bypass Img? I'm not sure I understand how to get the ImgFactory in that case.

I know we could pass in the ImgFactory but that makes the signature more complicated. If you have thoughts regarding a more elegant design, let me know. I am happy to make changes.


dietzc avatar dietzc commented on August 16, 2024

Hi Brian,

I think this is going to be a very fundamental discussion now, maybe @ctrueden @hinerm can also comment on this. My opinion regarding Img is as follows:

We should always avoid using Img, as it basically acts like a RandomAccessibleInterval (you can get the IterableInterval via Views). The only problem, as you already mentioned, is the ImgFactory. A user can always set the output RandomAccessibleInterval, IterableInterval, etc. from outside and therefore control the underlying factory type of the output. If the user doesn't provide an output, we need a smart but very general mechanism which knows how to create appropriate outputs. For me this mechanism is implemented using the CreateImg interface. We have several input options here: if the incoming RandomAccessibleInterval actually is an Img (instanceof test ... even extendible for other types?!) we can use the same ImgFactory; if not, we have to use a smart heuristic which checks whether we can use an ArrayImgFactory or a CellImgFactory. But this should be encapsulated in the CreateImg op.

In an Op which can create an output given some RandomAccessibleInterval and maybe a Type, we would always just call CreateImg with the appropriate parameters. Of course we have to implement many Ops using CreateImg for all the permutations of Interval, Type, ImgFactory, etc. @danielseebacher and @tibuch will take care of this in the next couple of weeks (see #96). CreateLabeling, by the way, will act the same.

Does this make sense? Comments?
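The instanceof-plus-heuristic idea could look roughly like the sketch below. All type names here are minimal stand-ins, not the real imglib2 interfaces; an actual CreateImg op would work against the real Img/ImgFactory API and likely use a more refined size heuristic.

```java
// Hypothetical sketch of the factory-selection heuristic described above,
// with the imglib2 types reduced to tiny stubs for illustration.
public class CreateImgSketch {

    interface Interval { long[] dimensions(); }
    interface ImgFactory { String name(); }
    interface Img extends Interval { ImgFactory factory(); }

    static final ImgFactory ARRAY = () -> "ArrayImgFactory";
    static final ImgFactory CELL = () -> "CellImgFactory";

    /** Reuse the input's factory when it is an Img; otherwise pick by size. */
    public static ImgFactory chooseFactory(final Interval input) {
        if (input instanceof Img) return ((Img) input).factory();
        long numElements = 1;
        for (final long d : input.dimensions()) numElements *= d;
        // An array-backed image is limited to ~2^31 elements (one Java
        // array); fall back to a cell-based layout beyond that.
        return numElements <= Integer.MAX_VALUE ? ARRAY : CELL;
    }

    public static void main(final String[] args) {
        final Interval small = () -> new long[] { 512, 512 };
        final Interval huge = () -> new long[] { 100_000, 100_000 };
        System.out.println(
            chooseFactory(small).name() + " " + chooseFactory(huge).name());
    }
}
```

Encapsulating this choice in one place is what makes the behaviour uniform across all ops that create outputs.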


bnorthan avatar bnorthan commented on August 16, 2024

Hi Christian

We have several input options here: If the incoming RandomAccessibleInterval actually is an Img (instanceof test ... even extendible for other types?!) we can use the same ImgFactory

'Ideally' the ops I've written that use Img will be minimal (I still need to clean them up a bit to make sure they are as minimal as possible). Essentially they use the ops framework to determine what type we have: if it's an Img, the framework calls the Img version, which gets the ImgFactory from the Img, and then the RandomAccessibleInterval version is called.

Is this OK? Or do you think it would be better to just have one Op and do the instanceof test? Either way is fine with me, as long as we can handle all scenarios smoothly. For example, what if we need metadata, like axis types or calibrations? Long term, are we looking at using ImgPlus for that anyway?

Please feel free to review the convolution ops with a critical eye. Even though the pull request was merged, more review wouldn't be a bad thing.


dietzc avatar dietzc commented on August 16, 2024

Hi Brian,

you are absolutely right, ImgPlus must also be considered. I think this discussion is similar to one @ctrueden and I had a while ago: if we determine a pattern (for instance, that for every Op based on a RandomAccessibleInterval input we have another Op for Img or ImgPlus), can we implicitly convert the RandomAccessibleInterval-based operation?

The advantage of having all "heuristics" in a central CreateImg or CreateLabeling (or ...) is that the behaviour is guaranteed to be the same for all Ops which use the above-mentioned ones.

I don't want to be too pedantic, but I'm running into similar problems with other Ops at the moment, and I hope we can find a unified solution.


ctrueden avatar ctrueden commented on August 16, 2024

It's hard for me to find time to think about the "high-level" vs. "low-level" issue, but I'll try to do a quick brain dump here, since it's a hot topic.

  • The Img vs. RandomAccessibleInterval case isn't the only one. We also have ImgPlus vs. Img and similar.
  • The issue comes down to whether the richer data object has any additional information that is useful to the op in question. The common pattern seems to be that there is extra useful-but-not-strictly-necessary information. E.g., an ImgPlus gives an Axis list that is handy, but default assumptions can be made in the case of Img or other EuclideanSpace.
  • Or in some cases, the extra info is necessary, but can be provided separately. E.g., an ImgFactory in the case of a lower level data object than Img.
  • So, it seems we could simply write the op to the lower level spec (e.g., with RandomAccessibleInterval and ImgFactory args) then have an op reducer (see #39) that converts calls with an Img to the other signature by grabbing the factory from the Img.
  • The trouble with op reducers will be type-safe method signatures. So another approach is to follow a type-safe pattern where we do in fact have two ops that extend a common abstract type that keeps things DRY, i.e., takes care of the repeated work of extracting the ImgFactory from the Img, and similar code for other common cases.

I guess I don't feel too strongly about exactly how it is done, but do want to see the code avoid repetition whenever possible. DRY, DRY, DRY...
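The type-safe two-op pattern from the last bullet might look like the sketch below. The class and method names are hypothetical stand-ins (real ops would use the actual imglib2 types); the point is only that the Img-based op adds no logic beyond extracting the factory.

```java
// Hypothetical sketch of the DRY two-op pattern: a low-level op holds
// the shared logic, and a thin Img-based subclass only pulls the factory.
public class DryOpsSketch {

    interface ImgFactory { String name(); }
    interface RandomAccessibleInterval { }
    interface Img extends RandomAccessibleInterval { ImgFactory factory(); }

    /** Low-level op: explicit RAI + factory, does all the real work. */
    static class ConvolveRAI {
        String run(final RandomAccessibleInterval in, final ImgFactory f) {
            return "convolved via " + f.name(); // stand-in for the algorithm
        }
    }

    /** High-level op: same algorithm, factory extracted from the Img. */
    static class ConvolveImg extends ConvolveRAI {
        String run(final Img in) {
            return run(in, in.factory()); // delegate, no duplicated logic
        }
    }

    public static void main(final String[] args) {
        final Img img = () -> () -> "ArrayImgFactory";
        System.out.println(new ConvolveImg().run(img));
    }
}
```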


bnorthan avatar bnorthan commented on August 16, 2024

@kbellve has a huge amount of deconvolution-related code that would be very useful to ops: state-of-the-art stuff, including GPU code. Karl, this issue is where we are keeping track of the progress of ops deconvolution. Please feel free to chime in.


bnorthan avatar bnorthan commented on August 16, 2024

@chalkie666 has written a script

"Showing a simulation of the systematic error effect of the objective lens OTF (similar to a Gaussian blur) on the contrast of different spatial frequencies in a light microscopy image. Showing why optical microscope images should be deconvolved, to correct the systematic error of contrast vs. feature size: contrast of smaller and smaller features is attenuated more and more, up to the band limit or noise floor, where deconvolution can no longer recover information."

https://github.com/chalkie666/imagejMacros/blob/master/exponentialChirpImag.ijm

Much of this code could be converted to ops tests and script examples.

