
Comments (11)

kfjahnke commented on August 28, 2024

I'd prefer not to touch your code, but I'll keep you updated on what I come up with.

meshula commented on August 28, 2024

If you could upload your inputs to exrenvmap, the result, and your expected result, that would be helpful. Because the solid angle per texel inherently varies across a cube face (texels near the edges cover less of it than texels near the centre), I can't really tell whether your result is as expected or not.

kfjahnke commented on August 28, 2024

Okay. I scaled it down to 100x100 squares to save space. My upload environment.zip contains three images:

  • cubemap.exr has the initial cubemap
  • latlon.exr is the output from exrenvmap
  • expected.exr is what I expected as output

You can see how two of the vertical edges of the latlon rendition made by exrenvmap come out one pixel wide and two others come out two pixels wide. I would expect them all to be two pixels wide, with one colour to the left and another to the right.

My expected output also shows that the horizontal edges are rendered thinner in latlon.exr, which gives me another hint at what might cause the differences. I use 'reflect' boundary conditions, which treat pixels as small squares and put the point of reflection at the pixel's edge. The thinner rendition in latlon.exr looks as if the cube faces were 'looked at' with mirror boundary conditions, mirroring on the pixel centre. That is common, but it cuts off half of each marginal pixel (so to say), whereas reflect boundary conditions accommodate all pixels equally. With these two different approaches to boundary conditions for the square cube faces, you need to know beforehand which mode is used, because the 90-degree fov has to be mapped accordingly. I think mapping the 90 degrees to (-0.5, w-0.5) is more 'natural' than mapping it to (0, w-1). Can you say which is used in exrenvmap?
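
For illustration, a minimal sketch of the two mappings (my own reading of the convention question, not code from exrenvmap; names are illustrative):

```cpp
// Two common ways of mapping the 90-degree fov of a cube face onto a face
// image of width w. 't' is the in-face coordinate in (-1, 1), pixel centres
// sit at integer positions 0 .. w-1.
//
//  - 'reflect' / pixel-edge convention: the fov spans the outer edges of the
//    marginal pixels, so t in [-1, 1] maps to [-0.5, w-0.5]
//  - 'mirror' / pixel-centre convention: the fov spans the centres of the
//    marginal pixels, so t maps to [0, w-1], cutting off half of each
//    marginal pixel
double edge_mapping   (double t, int w) { return (t + 1.0) * 0.5 * w - 0.5; }
double centre_mapping (double t, int w) { return (t + 1.0) * 0.5 * (w - 1); }
```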

meshula commented on August 28, 2024

Florian & I wrote this nearly twenty years ago, so I'm trying to remember what we did and cross-referencing it with the code :] Yes, we are looking at pixel centers. When we resample, we take samples in a window, and the window does not appear to be compensated for solid angle when resampling a cube. So my intuition is that the sampling code needs correction to bias or unbias samples by projecting onto the cube. I don't think there's an issue with reflection, because the sampling simply sends out rays spherically, then fetches them from the appropriate faces.
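
For reference, the solid angle covered by a cube face texel has a simple closed form; here is a minimal sketch (my own notation, not code from exrenvmap) of the per-sample weight one could use when resampling a cube:

```cpp
#include <cmath>

// Approximate solid angle (in steradians) of the texel at (i, j) on a c-by-c
// cube face, modelling the face as the plane z = 1 with u, v in [-1, 1].
double texel_solid_angle (int i, int j, int c)
{
    double du = 2.0 / c;                    // texel size in face coordinates
    double u = -1.0 + (i + 0.5) * du;       // texel centre
    double v = -1.0 + (j + 0.5) * du;
    double r2 = u * u + v * v + 1.0;
    return du * du / (r2 * std::sqrt (r2)); // du*dv / (u^2+v^2+1)^(3/2)
}

// Summed over all texels of all six faces this approaches 4*pi; a corner
// texel covers only about 0.19 of the solid angle of a centre texel.
```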

kfjahnke commented on August 28, 2024

Florian & I wrote this nearly twenty years ago

Looks like a skeleton in the cupboard coming back to haunt you ;-)

So it looks like you agree that there is an issue. Maybe my description of it wasn't as clear as it could be; I've thought about it some more, and now I'd express it like this: the three squares to the front and sides all appear in the output with visible vertical edges, but the back square is missing its vertical edges.

Geometrically, the output (like the flaw) is symmetric around the vertical, which indicates a problem with the horizontal sampling of the sphere. To sample the sphere for the purpose at hand, you'd iterate over lat/lon coordinates. With target image width w, your step width d is 2*pi/w; the first sample is at d/2 and the last at 2*pi - d/2. Vertically, you start at d/2 and go to pi - d/2 (measuring from the pole at zero degrees; subtract pi/2 if you're working from the equator).

Given the sampling of the sphere, the next step is to convert to 3D rays, which is textbook stuff. Next you figure out the axis with the numerically largest coordinate value; this axis, together with the sign of that coordinate, yields the cube face. You divide the 3D ray by the absolute value of that maximal coordinate, which gives you 2D x/y coordinates on the cube face (the third component becoming 1.0), relative to the cube face's center. Your cube-face-relative coordinates are now in (-1,1). Scale to cube-face image coordinates and interpolate at that position to yield the pixel value. The precise scaling depends on how you interpret the cube face, but with cube-face width c, I'd recommend mapping the interval to (0,c). If you follow that logic, there is no way you can miss a one-pixel-wide part of the cube face, because you 'land' right in the middle of it.
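
To make that recipe concrete, here is a minimal scalar sketch (my own code; the face indexing and in-face axis orientation are placeholders and would have to follow the target cubemap's actual layout):

```cpp
#include <cmath>

struct CubeSample { int face; double x, y; }; // face index, in-face pixel coords

// theta: longitude in [0, 2*pi), phi: angle from the pole in [0, pi],
// c: cube face width in pixels
CubeSample latlon_to_cube (double theta, double phi, int c)
{
    // spherical angles -> 3D unit ray (textbook stuff)
    double rx = std::sin (phi) * std::cos (theta);
    double ry = std::sin (phi) * std::sin (theta);
    double rz = std::cos (phi);

    // the numerically largest component, plus its sign, picks the cube face
    double ax = std::fabs (rx), ay = std::fabs (ry), az = std::fabs (rz);

    int face; double u, v;
    if (ax >= ay && ax >= az) { face = rx > 0 ? 0 : 1; u = ry / ax; v = rz / ax; }
    else if (ay >= az)        { face = ry > 0 ? 2 : 3; u = rx / ay; v = rz / ay; }
    else                      { face = rz > 0 ? 4 : 5; u = rx / az; v = ry / az; }

    // u, v are in (-1, 1) relative to the face centre; mapping to (0, c) means
    // a sample can never miss the marginal pixels
    return { face, (u + 1.0) * 0.5 * c, (v + 1.0) * 0.5 * c };
}

// sampling the target: with d = 2*pi / w, use theta = (j + 0.5) * d and
// phi = (i + 0.5) * d, so the first sample sits at d/2 and the last at
// 2*pi - d/2 horizontally and pi - d/2 vertically
```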

I do it like this in lux, but currently I work from six separate cube face images. I'm switching to OIIO, and in the process I discovered that OpenEXR has dedicated environment map support, so I thought I might support the 1:6 stripe format as well. With several different ways to deal with environment maps (panotools, lux, OIIO and OpenEXR), I thought it would be interesting to compare the results with respect to sharpness, since the approaches differ in which interpolators and filters they apply. But of course the results must agree in geometry before you can look at that aspect, and when I compared the output generated by exrenvmap, I noticed that the geometry was off. Hence this issue.

meshula commented on August 28, 2024

For the record, in OpenEXR the vertical strip format originated from a very old DirectX convention and the need to bring HDR imagery into real time. To this day, I think everyone still uses lat/long, even though lat/long spends half or more of its texels on the least interesting and most distorted part of the environment map!

OpenEXR cube maps are still a good place to store HDR environment data and IBL convolutions, though I feel that application didn't catch on. Exrenvmap is very old and needs a rewrite with better math. I would consider the existing code a reasonable reference for how to exercise the API to construct such an image, but the projection math is not exemplary, and the structure of the code is very much how we did C++ twenty years ago; it reflects neither modern practices nor high-performance practices.

kfjahnke commented on August 28, 2024

I think there is one fundamental flaw in the cube map format as it is used in OpenEXR. The individual cube faces are simply cut off at precisely ninety degrees, whereas proper interpolation near the edges requires a certain amount of support. This support can be built up artificially by generating it from adjoining cube faces; alternatively, the artifacts arising from simply reflecting the content for interpolation purposes are not very pronounced, but all of this is a bother.

In lux, I use images for the six cube faces which can have more than ninety degrees field of view. Even with half a degree, you get plenty of 'headroom' even for interpolators with large support, and the flaws near the edges resulting from reflecting or mirroring content are no longer an issue. If you pick the 'frame' around the actual ninety-degree square large enough, you can even use filters with very large support. I work with b-splines, which theoretically have infinite support in the prefilter, but you can usually neglect neighbours more than a few samples away because their effect vanishes to next to nothing. Given a lat/lon - or, as we say in panorama photography, a 'full spherical' - generating cube faces with slightly more fov is simple enough, and the resulting views are 'clean' around the edges.

The only - slight - problem with the lux code is that it uses fixed mip levels rather than the anisotropic filter OIIO uses to cater for pixels in different positions in the cube faces. lux does it for speed, so it can churn out 60fps on a garden-variety four-core, while the OIIO code is quite a mouthful and takes much longer to execute - but it should be ideal for a conversion program with high fidelity standards.
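
As a rough sketch of how little extra fov is needed (my own arithmetic, not lux code): with a gnomonic cube face, the half-width of the ninety-degree square corresponds to tan(45 degrees) = 1, so the frame width for a given fov follows directly:

```cpp
#include <cmath>

// extra pixels of 'frame' per edge needed for a cube face of width c to cover
// fov_deg degrees instead of exactly 90
int support_pixels (int c, double fov_deg)
{
    const double pi = 3.14159265358979323846;
    double half_tangent = std::tan (fov_deg * pi / 360.0); // tan (fov / 2)
    return (int) std::ceil ((half_tangent - 1.0) * c / 2.0);
}

// e.g. support_pixels (1000, 90.5) is about 5 pixels of headroom per edge
```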

So we do have this legacy format, and it should be supported. You propose rewriting exrenvmap, which I think is a good idea. You may be interested in work I am currently doing along these lines: I have recently covered the generation of cubemaps from lat/lon environments with what you'd call 'better math'. Here is what I did:

  • To speed up the process, I am using multithreaded SIMD code provided by my own library, zimt
  • The texel data are generated using OIIO's texture system code
  • The code is available (MIT-licensed) from the examples section of the zimt repo

I am currently mulling over the reverse transformation - from a cubemap to a lat/lon environment. AFAICT OIIO does not support cubemaps as texture sources in its texture system code, so I have to do this 'manually', and it will take me a while to figure out how best to deal with the missing support (I'll probably generate it, then use it to generate a better version, and repeat a few times - call it 'polishing' - just an idea). I'd also use OIIO here and just do a planar texture pickup, for which OIIO also provides code. Calculating the derivatives to properly steer the anisotropic antialiasing filter is a bit of extra work, but from what I see using the OIIO code for the lat/lon environment lookup, the results are very nice indeed.
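
To sketch what such a planar pickup might look like with OIIO's texture system (a hedged example only: 'face.exr' and the derivative values are placeholders, and I'm assuming OIIO 2.x, where TextureSystem::create() returns a raw pointer):

```cpp
#include <OpenImageIO/texture.h>

using namespace OIIO;

int main ()
{
    TextureSystem * ts = TextureSystem::create();
    TextureOpt opt;                  // defaults to anisotropic filtering

    float s = 0.5f, t = 0.5f;        // texture coordinates in [0, 1]

    // derivatives of (s, t) with respect to the target image's x and y:
    // these steer the anisotropic filter and have to be computed from the
    // cubemap-to-lat/lon mapping at each target pixel
    float dsdx = 1.0f / 512, dtdx = 0.0f;
    float dsdy = 0.0f,       dtdy = 1.0f / 512;

    float rgb[3];
    ts->texture (ustring ("face.exr"), opt, s, t,
                 dsdx, dtdx, dsdy, dtdy, 3, rgb);

    TextureSystem::destroy (ts);
}
```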

Using two libraries - zimt for the 'stripmining', multithreading and SIMDization, and OIIO for texel generation and I/O - the amount of code needed for the process is surprisingly small, and it relieves you of reinventing the wheel for both of these tasks. Have a look if you like and tell me what you think. All my code for this program is MIT-licensed, and OIIO is 'from your own stable'.

cary-ilm commented on August 28, 2024

We'd happily accept a contribution. Realistically, none of the core OpenEXR maintainers are likely to look into this any time soon. While your investigation and analysis are fresh, if you'd like to submit a PR with improvements to exrenvmap, we'd very much appreciate it.

kfjahnke commented on August 28, 2024

Slow-ish progress, but now I have two programs to show:

https://github.com/kfjahnke/zimt/blob/main/examples/cubemap.cc
https://github.com/kfjahnke/zimt/blob/main/examples/latlon.cc

The first one converts a lat/lon environment map into a cubemap, and the second one does the inverse conversion. The 'better mathematics' consists of a multi-threaded SIMD implementation and the use of OIIO's environment and texture lookup code. The problems with cube face images being cut off at precisely ninety degrees fov are avoided by regenerating some support through interpolation from adjoining cube faces, so the internal representation can be filtered and even mip-mapped correctly. AFAICT, the results are geometrically correct and look appealing. Cubemap lookup is fast: I've worked out an access mode which avoids having to treat the cube faces as separate entities and can instead issue lookups to a single texture. Have a look! Comments welcome.
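
The single-texture idea, roughly (illustrative only; the actual internal layout adds the support frame around each face, so the real offsets differ):

```cpp
// map a face index and in-face pixel coordinates into one vertical 1:6
// concatenation of the six faces of width c, so a single texture lookup
// serves all faces
void to_stripe_coords (int face, double x, double y, int c,
                       double & s, double & t)
{
    s = x / c;                       // horizontal coordinate is unchanged
    t = (face * c + y) / (6.0 * c);  // vertical: offset by the face's slot
}
```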

kfjahnke commented on August 28, 2024

Slow-ish progress, but now I have two programs to show

I have now put together both conversions in a single program, which now lives in a separate repository of its own. I called the program envutil. As it stands, it can do the conversions using OIIO's quite elaborate filtering, fast bilinear interpolation, or an oversampled variant of bilinear pickup, which is quite fast and still produces proper-looking output. The program will use highway, Vc or std::simd if present. It might be interesting to compare its output with exrenvmap's, to see if it has similar scope and does what's needed - now with modern multithreaded SIMD code, which comes from my library zimt, included in source. The program builds with CMake and has no external dependencies apart from OpenImageIO and, optionally, the SIMD back-end libraries, and the code is MIT-licensed.

meshula commented on August 28, 2024

Good thought to split it out; I'll give it a whirl as a replacement for what I use (which, ironically, isn't exrenvmap). I'd say your program has a different scope than exrenvmap, in the sense that exrenvmap doesn't offer control over filtering and conflates downsampling with luminance convolution, using a kernel that is no longer popular. I don't see that envutil also supports convolutions to create an irradiance cascade for IBL, which exrenvmap was an early (premature) attempt at, so that might be another scope difference. Today, I think of exrenvmap as reference documentation for how to use OpenEXR's cube map interfaces, not as a canonical production tool. If you are hinting at whether envutil could replace exrenvmap, that's more a question for OpenImageIO, although it would be nice to point to envutil from OpenEXR's documentation as a tool supporting EXR environment maps.
