
Comments (7)

meshula commented on July 25, 2024

Hi @huskier, your problem has two parts: one is high-speed capture, and the other is compression. I assume you already have hardware capable of sustained writes to disk at the speed you need. At 500 fps, to record to a final file format at real-time rates, you've got only 2 ms per frame to write the data. OpenEXRCore should be able to manage the load in the particular case of storing frames as parts, using uncompressed scanline format.

I would recommend a multi-processor approach, as follows:

  1. Use hardware with enough bandwidth to do sustained recording to a mapped drive. You may want two dedicated interfaces, one for stream capture and another for EXR writing, to ensure enough dedicated bandwidth.

  2. Memory-map a raw capture area as a ring buffer to stream your captures to. Use a lock-free ring buffer so that you can keep writing without being blocked unless the ring is full. If the ring fills, you need more RAM, more optimization, or more cores :) (A minimal ring-buffer sketch follows this list.)

  3. Preallocate your OpenEXR output files; they should be in scanline format, not tiled. If you are going to record ten seconds, you might create ten files, one OpenEXR file per second, each with 500 parts, each part created with space for an uncompressed image.

  4. The C API in OpenEXRCore has been designed for sustained multipart writing from multiple cores. Spin up as many cores as you can to pull from the ring buffer and write as scanline data. One strategy might be to make a work queue where one job is to pull one frame from the ring buffer and output it to the appropriate part of the appropriate EXR file. (A rough OpenEXRCore write sketch also follows this list.)

  5. Be sure to dashboard your software so that you can monitor bandwidth and throughput as well as core utilization; this is to help you understand where you need more RAM, disk space, CPU count, or bandwidth, and where software may need to be optimized.

  6. When you have the above working, introduce compression, and monitor the dashboard. I'm sure uncompressed data can keep up with the load assuming adequate hardware, but I don't know how much load compression introduces, and the ability to handle the load will be a balance between core count and compression settings. I'd recommend adding controls for compression to the dashboard so that you can see in real time the impact of changing the settings.

  7. When steps 1-6 are working, add a non-realtime component that transcodes the "one second" files to a more optimal form. In step 3, the one-second files were preallocated to uncompressed size. The non-realtime jobs should ingest completed one-second files and output compressed ones where the parts are allocated to exact size, not maximal size.

  8. The one-second strategy means that if errors occur due to faults or interruptions, you will only lose the seconds during the fault. The final step at the end of the recording session should be to archive the files of step 7, either as one-second chunks, or in a final concatenation into a single archive. I'm not aware that we have unit tests for large numbers of parts. I am confident that 500 parts is not a problem, but each time you increase the part count by a power of two, there's a possibility of a latent code error. So please be aware of those boundaries (512, 1024, 2048, etc.), and do let us know if there is a problem.
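
To make steps 2 and 4 concrete, here's roughly what I have in mind. First, a minimal single-producer / multi-consumer frame ring using C++11 atomics. This is a sketch, not production code: all the names are illustrative, a real version would memory-map the storage as in step 2, and it would track a separate "completed" counter so the capture thread can never lap a worker that is still reading a slot.

    #include <atomic>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct FrameRing
    {
        static const size_t   kSlots = 256;    // power of two, sized to your RAM
        size_t                frameBytes;
        std::vector<uint8_t>  storage;         // a real version would memmap this
        std::atomic<uint64_t> writeIndex {0};  // frames published by the capture thread
        std::atomic<uint64_t> readIndex  {0};  // frames claimed by worker threads

        explicit FrameRing (size_t fb) : frameBytes (fb), storage (kSlots * fb) {}

        uint8_t* slot (uint64_t i) { return &storage[(i & (kSlots - 1)) * frameBytes]; }

        // Capture thread: never blocks; returns false if the ring is full
        // (meaning you need more RAM, more optimization, or more cores).
        bool push (const uint8_t* frame)
        {
            uint64_t w = writeIndex.load (std::memory_order_relaxed);
            if (w - readIndex.load (std::memory_order_acquire) >= kSlots)
                return false;
            std::memcpy (slot (w), frame, frameBytes);
            writeIndex.store (w + 1, std::memory_order_release);
            return true;
        }

        // Worker threads: claim the next unprocessed frame index, or return false.
        bool claim (uint64_t& outIndex)
        {
            uint64_t r = readIndex.load (std::memory_order_relaxed);
            while (r < writeIndex.load (std::memory_order_acquire))
            {
                if (readIndex.compare_exchange_weak (r, r + 1,
                        std::memory_order_acq_rel, std::memory_order_relaxed))
                {
                    outIndex = r;
                    return true;
                }
            }
            return false;
        }
    };

Second, a rough sketch of the OpenEXRCore write path for one of the "one second" files (steps 3 and 4). A caveat: I'm writing this from memory of the 3.x C headers, so treat every signature here as something to verify against openexr.h, and a single half-float "Y" channel stands in for whatever your sensor actually produces. Because every part is declared as uncompressed scanline data before the header is written, the chunk layout is deterministic, which is what makes the preallocation idea in step 3 work.

    // NOTE: sketch from memory of the OpenEXRCore 3.x C API -- verify
    // every name and signature against openexr.h before relying on it.
    #include <openexr.h>
    #include <cstdio>

    // Create one "one second" file: nframes parts, uncompressed scanlines.
    exr_result_t create_second_file (exr_context_t* f, const char* path,
                                     int width, int height, int nframes)
    {
        exr_context_initializer_t init = EXR_DEFAULT_CONTEXT_INITIALIZER;
        exr_result_t rv = exr_start_write (f, path, EXR_WRITE_FILE_DIRECTLY, &init);
        if (rv != EXR_ERR_SUCCESS) return rv;

        for (int p = 0; p < nframes; ++p)
        {
            char name[32];
            std::snprintf (name, sizeof (name), "frame.%03d", p);
            int part;
            exr_add_part (*f, name, EXR_STORAGE_SCANLINE, &part);
            exr_initialize_required_attr_simple (*f, part, width, height,
                                                 EXR_COMPRESSION_NONE);
            exr_add_channel (*f, part, "Y", EXR_PIXEL_HALF,
                             EXR_PERCEPTUALLY_LINEAR, 1, 1);
        }
        return exr_write_header (*f);  // after this, the chunk layout is fixed
    }

    // Worker job: write one frame (packed half "Y" pixels) into its part.
    exr_result_t write_frame (exr_context_t f, int part, int width, int height,
                              const uint8_t* pixels)
    {
        int32_t chunkLines = 0;
        exr_get_scanlines_per_chunk (f, part, &chunkLines);  // 1 when uncompressed
        for (int y = 0; y < height; y += chunkLines)
        {
            exr_chunk_info_t      cinfo;
            exr_encode_pipeline_t enc;
            exr_write_scanline_chunk_info (f, part, y, &cinfo);
            exr_encoding_initialize (f, part, &cinfo, &enc);
            enc.channels[0].encode_from_ptr   = pixels + (size_t) y * width * 2;
            enc.channels[0].user_pixel_stride = 2;           // one half per pixel
            enc.channels[0].user_line_stride  = width * 2;
            // (you may also need user_bytes_per_element / user_data_type here)
            exr_encoding_choose_default_routines (f, part, &enc);
            exr_encoding_run (f, part, &enc);
            exr_encoding_destroy (f, &enc);
        }
        return EXR_ERR_SUCCESS;
    }

Close the file with exr_finish (&f) when the second is complete. Whether parts can be written fully out of order from many threads is exactly the kind of thing the dashboard of step 5 will tell you: the context is designed for it, but measure.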

@kdt3rd @peterhillman ~ Have you got use cases that exercise this extreme regime, or any comments on the above? I have built high-speed stream recorders for EXR using the pattern above, using the C++ API, but with lower resolution, and much lower frame rate, where I had adequate time on my hardware to do the work. I think this performance regime requires the use of the C core directly, and that the C core should hold up to this requirement.


peterhillman commented on July 25, 2024

I've never experimented with writing EXRs in real-time applications. We did write multiple OpenEXR images to the same file, simply appending them together, which you can do by using your own OStream object. That approach is slightly more efficient than writing a multi-part file with each frame as a separate part. However, it's not a recognized standard.
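
As a rough illustration of the idea (untested, and assuming the OpenEXR 3.x Imf::OStream interface with uint64_t positions), the appending stream just translates positions so that each image is written as if it started at offset zero; reading the Nth image back requires a matching Imf::IStream that applies the same base offset:

    #include <ImfIO.h>
    #include <fstream>

    class AppendingOStream : public Imf::OStream
    {
      public:
        AppendingOStream (const char fileName[])
            : Imf::OStream (fileName)
            , _ofs (fileName, std::ios::binary | std::ios::trunc |
                              std::ios::in | std::ios::out)
            , _base (0)
        {}

        // Call before writing each image; every position the library then
        // records (e.g. line offset tables) is relative to this image's start.
        void startNewImage ()
        {
            _ofs.seekp (0, std::ios::end);
            _base = (uint64_t) _ofs.tellp ();
        }

        void     write (const char c[], int n) override { _ofs.write (c, n); }
        uint64_t tellp () override { return (uint64_t) _ofs.tellp () - _base; }
        void     seekp (uint64_t pos) override { _ofs.seekp (_base + pos); }

      private:
        std::fstream _ofs;
        uint64_t     _base;
    };

Then for each frame: stream.startNewImage(); construct an Imf::OutputFile over the stream; setFrameBuffer and writePixels as usual. The frames end up back to back in one file.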

One question is whether it would be better to write a temporary file in real time in some custom file format, then post-process and convert it to something more standard afterwards. You could have a fixed-size header at the beginning of the file, and give each frame a fixed-size header (say 1024 bytes) to store frame-specific metadata, then the raw pixel data. Predictable header and frame sizes should give you random access to frames and also error resilience in case frames drop or get partially written. For color images, a raw non-debayered uncompressed image is likely to be smaller on disk than any RGB image format with lossless compression.
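
For instance (purely hypothetical field names, just to make the layout concrete), something along these lines:

    #include <cstdint>

    #pragma pack(push, 1)
    struct CaptureFileHeader          // fixed size, once at the start of the file
    {
        char     magic[8];            // identifies the format and version
        uint32_t width, height;
        uint32_t bytesPerPixel;       // raw, non-debayered sensor data
        uint32_t frameHeaderSize;     // e.g. 1024
        uint64_t frameCount;          // patched at end; 0 if recording was cut short
    };

    struct CaptureFrameHeader         // fixed size, padded to 1024 bytes
    {
        uint64_t frameIndex;
        uint64_t timestampNs;
        uint32_t valid;               // lets a reader skip partially written frames
        uint8_t  pad[1024 - 20];
    };
    #pragma pack(pop)

    // Frame i sits at a predictable offset, so you get random access for free:
    // offset(i) = sizeof(CaptureFileHeader)
    //           + i * (sizeof(CaptureFrameHeader) + width * height * bytesPerPixel)
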
OpenEXR images aren't really designed to be optimal for writing at high framerates; they are more optimized for reading, since most files are read more often than they are written.


meshula commented on July 25, 2024

Thanks Peter :)

To be clear, my step (2), where I talk about creating a memmapped ring buffer, is exactly what you propose: a custom layout, basically a struct on disk with raw data following, each frame of the same size.

All the rest is about trying to get things out to OpenEXR as close to real time as possible, hence step 7, which introduces compression but drops out of the real-time regime, as a final conform step.

I personally like the potential of OpenEXR in this domain ~ image fidelity I imagine to be an important characteristic, and by design EXR gives you that; other formats meant for video are typically highly compromised in favor of compression and make perceptual allowances that aren't suitable for science or engineering. Also to Peter's point about EXR being optimized for reading, this is a win for science! It's conceivable that your 500fps data is going to be run through a lot of read cycles, whether to feed a machine learning system or a recognition path. This type of data tends to be write once, read very, very many, so well within the design scope of EXR :)


huskier commented on July 25, 2024

Thank you to both of you, especially for meshula's comprehensive comment. @meshula @peterhillman

> I personally like the potential of OpenEXR in this domain ~ image fidelity I imagine to be an important characteristic, and by design EXR gives you that; other formats meant for video are typically highly compromised in favor of compression and make perceptual allowances that aren't suitable for science or engineering.

I like this idea. There are few suitable file formats for storing uncompromised, high-quality image sequences in scientific and engineering scenarios. We think OpenEXR may fill the gap.

> If you are going to record ten seconds, you might create ten files, one OpenEXR file per second, each with 500 parts, each part created with space for an uncompressed image.

@meshula Do you have example OpenEXR files that store multiple parts? How do we view multi-part images with existing software? Are there applications that can do this besides viewing them programmatically? It would be better to have open-source demos for reading and writing multi-part OpenEXR files.

> One question is whether it would be better to write a temporary file in real time in some custom file format, then post-process and convert it to something more standard afterwards. You could have a fixed-size header at the beginning of the file, and give each frame a fixed-size header (say 1024 bytes) to store frame-specific metadata, then the raw pixel data. Predictable header and frame sizes should give you random access to frames and also error resilience in case frames drop or get partially written. For color images, a raw non-debayered uncompressed image is likely to be smaller on disk than any RGB image format with lossless compression.

There is a Sequence File Format (NorPix StreamPix, a proprietary format) for the raw data, and I've just put my NorPix_sequence class for reading and writing sequence files on GitHub. However, the sequence file format needs proprietary or customized applications to read and write. We prefer open-source file formats and open-source applications for our scenarios. The ecosystem matters!


lgritz commented on July 25, 2024

> Do you have example OpenEXR files that store multiple parts?

There are some simple example multi-part files here.

Also, if you have OpenImageIO, its oiiotool utility can easily take separate images and turn them into a multipart image. The TLDR is:

oiiotool file1 file2 file3... --siappendall -o multipart.exr

> How do we view multi-part images with existing software?

The aforementioned OpenImageIO (OIIO) package has a viewer (iv) which under ordinary circumstances wouldn't be much to brag about, but one thing it can do (last time I checked) is display different parts and cycle among them with < and > hotkeys.
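
If you'd rather get at the parts programmatically than in a viewer, each EXR part shows up in OIIO as a "subimage". A quick sketch with the current API (assuming OIIO 2.x; the filename is a placeholder):

    #include <OpenImageIO/imageio.h>
    #include <iostream>

    int main ()
    {
        auto in = OIIO::ImageInput::open ("multipart.exr");
        if (!in) {
            std::cerr << OIIO::geterror () << "\n";
            return 1;
        }
        // Each EXR part is a subimage; walk them in order.
        for (int s = 0; in->seek_subimage (s, 0); ++s) {
            const OIIO::ImageSpec& spec = in->spec ();
            std::cout << "part " << s << ": "
                      << spec.width << "x" << spec.height << ", "
                      << spec.nchannels << " channels\n";
        }
        in->close ();
        return 0;
    }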


cary-ilm commented on July 25, 2024

The Beachball images from openexr-images are multipart as well: https://openexr.com/en/latest/test_images/index.html#beachball-example-image-sequence


huskier commented on July 25, 2024

Thank you, @lgritz and @cary-ilm.
We think that the OpenEXR format has the potential to store image sequences.

We couldn't find the OpenImageIO (OIIO) viewer for viewing multiple parts in EXR files. However, we've found that there are some OpenEXR tools in the OpenEXR repo, such as exrinfo, exrheader, and exrmultipart. Besides, we've found that MATLAB (since R2022b) could read and analyze OpenEXR images. That's very cool!
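
In case it helps others: if we read the usage message correctly, exrmultipart can combine single-part files into one multi-part file and split them back out, e.g. (filenames illustrative, flags worth double-checking against the tool):

    exrmultipart -combine -i frame1.exr frame2.exr -o combined.exr
    exrmultipart -separate -i combined.exr -o frame_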
