
Comments (16)

peteihis commented on June 5, 2024

And some results... This is editor dependent.

In mesh editors both GL and SW drawers seem to work but Raytracer does not do anything that would make sense. It's a killer to leave the object editor to rendered view mode and then open a large mesh... CPU shoots to 97% and nothing happens.

EDIT:
And more: To test something I doubled the max depth in GL drawer for perspective. Now it draws everything but the rendering depths in perspective drawing are wrong. Objects that should be behind something else may be drawn on the front. This happens with Show / Entire Scene in object viewers too. Seems that this might be the cause in the GL-case.

from artofillusion.

peteihis commented on June 5, 2024

In SWD the problem seems to be overflow of int. Triangles disappear at the distance Integer.MAX_VALUE / 65535. In GLD the triangles start to disappear (when they do) at the exact same distance from the camera.

EDIT: That was the case with GL after I doubled the max depth. Originally the "far clipping" happens at half of that distance.
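The suspected overflow can be sketched in a few lines. This is hypothetical code, not the actual SWCD source; it only mirrors the fixed-point scheme described above (depth multiplied by 65535 and stored in an int):

```java
// Hypothetical sketch of the suspected overflow -- not the actual
// artofillusion SWCD code. Depth is stored fixed-point as z * 65535
// in a 32-bit int, so it wraps once z exceeds Integer.MAX_VALUE / 65535.
public class ZOverflow {
    static final int SCALE = 65535;

    // Fixed-point conversion done in 32-bit int arithmetic.
    static int toFixed(int z) {
        return z * SCALE; // wraps negative once z > Integer.MAX_VALUE / SCALE
    }

    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE / SCALE); // 32768 -- the observed cut-off
        System.out.println(toFixed(30000));            // 1966050000, still fits
        System.out.println(toFixed(40000));            // negative: wrapped around
    }
}
```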


peteihis commented on June 5, 2024

More testing:

  • Changing the type of zbuffer of SWCD to float or double solves the overflow problem. (Actually I have never understood why it would have to be int in the first place.) Float does not change memory consumption, but personally I would prefer to go double, which gives wildly more capacity, and I don't think it would consume too much memory either. It is just one (though rather essential) layer of information in the entire view-drawing process. ... But for future purposes it should be checked whether the format can be made to match the depth buffer of GL.

  • To some extent, using a floating-point type also helps with the inaccuracy in rendering lines in 3D, but straight lines still tend to have their z-values wrong. The direction in which the line lies in relation to the camera orientation is also a factor.

  • The GLCD case seems more complex than this. The perspective version basically gets the exact same information to process as the parallel version, but things start to slip already at sizes of tens of units. Distant parts start to be clipped off and depths start to go wrong. The larger the scale, the worse it gets.

  • It has also bothered me for a long time that quite a few things are a little bit off in what the GL drawer does. For one thing, 3D-rendered and 2D-drawn lines don't match. I thought I'd bring all those up in a separate issue some day...


peteihis commented on June 5, 2024

And continuing the monologue....

Just thinking about the proportions: GL crams all depth between 0f and 1f, which gives about seven digits of accuracy relative to the distance.

The int approach of SW, stretched over a kilometer, would have a separation of about 0.5 micrometers, but unfortunately the scaling is fixed, and in millimeter scale the world ends at about 32 meters. Something should be done about the 65535 multiplier if the scene is too large -- or then maybe go the GL way?
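Those two figures can be checked with plain arithmetic. A quick sketch (the 1 unit = 1 mm convention is the one assumed in this discussion):

```java
// Back-of-envelope check of the resolution figures quoted above.
public class DepthResolution {
    public static void main(String[] args) {
        // Full int range stretched over 1 km (= 1e9 micrometers):
        // each depth step is roughly half a micrometer.
        double stepMicrometers = 1e9 / Integer.MAX_VALUE;
        System.out.println(stepMicrometers); // ~0.47

        // With the fixed 65535 multiplier and 1 unit = 1 mm, the far
        // limit is Integer.MAX_VALUE / 65535 = 32768 mm, i.e. ~32.8 m.
        double farLimitMeters = (Integer.MAX_VALUE / 65535) / 1000.0;
        System.out.println(farLimitMeters); // 32.768
    }
}
```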


Sailsman63 commented on June 5, 2024

Okay, got a few questions. Some of them may be a bit of a tangent:

I imported an STL model about 10 × 15 × 50 meters in size, that was exported in millimeter scale,

Why, in a format that natively supports Floating-point vertex positions, would anyone make such an export?
How detailed is this model? (IE, how many vertices are we talking about?)

It's a killer to leave the object editor to rendered view mode and then open a large mesh... CPU shoots to 97% and nothing happens.

Probably means that the huge mesh is being broken down into too many rendering triangles. (Surface Error is too small) You're either running out of memory (Did you check the error stream for OOMs?) or it's just taking a long time to do all of the math. (This might get exponentially worse for really large meshes, as parts of them will end up being paged in and out of CPU cache many, many times)

I doubled the max depth in GL drawer for perspective. Now it draws everything but the rendering depths in perspective drawing are wrong. Objects that should be behind something else may be drawn on the front.

This sounds like "Z-Fighting," which happens when you start to get rounding errors in z-depth calculations.

Changing the type of zbuffer of SWCD to float or double solves the overflow problem. (Actually I have never understood why it would have to be int in the first place.)

Is this a private field, or part of the API? I wonder if, originally, integer math was enough faster to matter...


peteihis commented on June 5, 2024

Why, in a format that natively supports Floating-point vertex positions, would anyone make such an export?

Why not? The client's model happens to be 50 meters long, and the standard dimensional unit the metric world uses in engineering is 1 mm (except for circuit-board designers, whose base unit is 1 µm).
And as we have floating-point numbers, I would not expect a reasonably small number like 50000.0 to be a problem. The actual float type might start to slip at the next digit... but you still have a virtually limitless count of possible values there.

So this is a real world thing. I can do a job that I don't have an engineering tool for by rigging an AoI script for that purpose.

How detailed is this model? (IE, how many vertices are we talking about?)

Let's just say that the vertex count was certainly sufficient, but that particular computer had no problems handling it, neither in the original form nor as the AoI triangle mesh. -- For the actual job I'll of course use only the parts of the 'model universe' that are relevant to the job.

But the vertex count had nothing to do with it. The same thing happens with simple spheres and cubes, and it happens with the surface error set to 100 or 500, whatever is about 0.5% - 1% of the model size.

CPU shoots to 97% and nothing happens.

Probably means that the huge mesh is being broken down into too many rendering triangles.

I'll have to take another look at this. I'm not sure any more what the exact settings were at that moment; I only remember that the problem occurred when I would not have expected it. I think it had done something much tougher just a moment before...

This sounds like "Z-Fighting," which happens when you start to get rounding errors in z-depth calculations.

This is something much worse than rounding errors.

Using millimeters for the unit, the error at around 20 m would already be several meters. If everything was (z-)scaled down by the same factor, say 30000, you'd have 0.66666... for 20 m and 0.666333... for 19.99 m. The error I'm seeing would probably be in the range of 0.1 or 0.2 or even higher. I'd find it hard to believe that the bug is in GL; it has to be in what the canvas drawer sends to it...

I have a suspect, but I'll need to do some check runs. I find it kind of strange that the parallel mode works (as it seems) correctly but the perspective mode fails with the exact same data.

SWCD zbuffer

Is this a private field, or part of the API? I wonder if, originally, integer math was enough faster to matter...

  protected float zbuffer[];

Of course, in the current code it is int.

Processing speed has very obviously been a driving factor there. I don't know what significance the magic number 65535 has for speed, but one way of handling larger scales would be to make it a variable and reduce it in powers of two or something... But I doubt that calculating in raw floats would be any slower. "Raw" meaning using the z-values as they are, but chopped to float instead of scaled to [0.0 - 1.0].
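The two alternatives mentioned here (a scene-dependent multiplier, or raw float z) might look roughly like this; all names are made up for illustration and this is not the actual SWCD implementation:

```java
// Hypothetical sketch of the two alternatives discussed above; not the
// actual artofillusion SWCD code.
public class ZBufferAlternatives {
    // (a) Keep int, but choose the scale from the scene depth so that
    //     maxDepth * scale stays inside the int range.
    static int chooseScale(double maxDepth) {
        int scale = 65535;
        while (scale > 1 && maxDepth * scale > Integer.MAX_VALUE)
            scale >>= 1; // reduce in powers of two, as suggested above
        return scale;
    }

    // (b) Store the "raw" z as float, with no [0.0 - 1.0] normalisation.
    static float toRawFloat(double z) {
        return (float) z;
    }

    public static void main(String[] args) {
        System.out.println(chooseScale(30000.0)); // 65535 -- still fits
        System.out.println(chooseScale(3.0e6));   // 511 -- reduced for a deep scene
        System.out.println(toRawFloat(50000.0));  // 50000.0
    }
}
```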


Sailsman63 commented on June 5, 2024

I'd have thought that specifying to the nearest mm would be too precise for something that large... I wouldn't specify any nearer than the nearest cm for something on that scale, but never mind...

vertex count was certainly sufficient, but that particular computer had no problems handling it,

vertex count had nothing to do with it.

That question was more pointed at finding out how intricate the surface is. If it is highly detailed, or has curves, the subdivided rendering mesh might end up being huge, unless the surface error is allowed to increase.

much worse than rounding errors

you'd have 0.66666... for 20 m and 0.666333... for 19.99 m.

Z-fighting is a known issue in many rendering situations. Most OpenGL z-buffers are not actually floating-point, but are mapped onto an unsigned-integer space; we only treat them as floating point for certain types of transformations. Older implementations (which AoI probably uses, as it would have been standard at the time) use a 16-bit buffer, which has a value range of 0 - 65535 (look familiar?).

If two of your faces are closer in z-depth than 1/65535th of the distance between the near and far clipping planes, you will get z-fighting.
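A quick worked example of that threshold, assuming a linear (parallel-projection) depth mapping; real perspective depth buffers are non-linear and spend most of their precision near the camera:

```java
// Smallest separable depth for a 16-bit buffer, linear approximation.
public class ZFightThreshold {
    public static void main(String[] args) {
        double near = 0.1, far = 50000.0;   // e.g. a 50 m scene in mm units
        double step = (far - near) / 65535; // faces closer than this can z-fight
        System.out.println(step);           // ~0.76 units
    }
}
```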

Which makes sense of something that you said in your first comments:

Triangles disappear at the distance Integer.MAX_VALUE / 65535. In GLD the triangles start to disappear (when they do) at the exact same distance from the camera.

So, as wonky as it may seem in code, the SWD and GLD are using the same far-plane.


peteihis commented on June 5, 2024

Triangles disappear at the distance Integer.MAX_VALUE / 65535. In GLD the triangles start to disappear (when they do) at the exact same distance from the camera.

So, as wonky as it may seem in code, the SWD and GLD are using the same far-plane.

As I already said, this was a mistaken observation.

The case with GL is very different from SW.

  • SW simply does not draw the triangles that go over, or even touch, the maximum range. There is no z-order confusion, and the cut-off happens in exactly the same way in both parallel and perspective modes.
  • GL works pretty well in parallel mode. If you have a look at the attached file, there are some z-issues with the smallest pawn when the largest one is present, but the rest look OK. That fits the 1/65535 theory. In perspective mode the situation turns into something else, and (assuming 1 unit = 1 mm) the z-map with the largest ones is quite randomly several meters off... so thousands of units at least, or in the 2nd digit of 65535. It does not seem to follow any specific distance rule that I could see. It starts already at the 10-unit pawn if you zoom back a bit. The smallest pawn may still look OK.

The 1/65535 separation would be good enough for interactive rendering (< 1 mm for 50000 mm), but something about GL perspective mode does not seem to fit...

Pawns scaled by pow10.zip


peteihis commented on June 5, 2024

I had a look at the depth values: When the z-problem is at its worst, the depth range that the image uses may be something like [0.9999695 - 1.0] (as float). This was in perspective mode. When I dropped the view to parallel the range was [0.5909514 - 1.0]. Zooming to a smaller object (again in perspective) may give something like [0.9863279 - 1.0] and no z-problems detectable any more.

Got to check a bit deeper what values end up being given to GL in those cases.


peteihis commented on June 5, 2024

Ok. I found it!

There is a mathematical booby trap in the glFrustum mode of GL2. IMO just a criminal level of stupidity on their part...

I have a few things in mind that should help to handle it -- some of them need a bit of testing. I may post a fix proposal over the weekend. And I'll probably continue to test some other small fixes too...


peteihis commented on June 5, 2024

I experimented a bit.

It is easily possible for GL to handle scenes with very large dimensions. I went up to a 1E7-size object, which would make the max depth something in the 1E8 range. Of course the cut-off distance has to be moved forward accordingly, and objects in the 1 - 1000 size range pretty much disappear. Strangely, in some viewing angles some mid-size objects (in about the 1E5 size range) show z-confusion, though the cut-off distance should take care of those. When an object of a larger scale appears in the back, the z-flickering stops again...

There also seem to be some unwritten rules (well, I have not dug up all the information in the Khronos documentation), like that the near cut-off distance will always be ≥ 1.0 no matter what I set as minDepth in the code. → I still can not zoom close to very tiny objects.

There are things in the handling of perspective viewing in GL that seem to defy logic to me, and I can not help thinking that the GPUs we have nowadays could do a LOT better than OpenGL allows so far... But there certainly are things that could be improved with just the tools that are available. In some cases I might suggest doing some pre-scaling of what is sent to GL. There are so many calculations in the rendering path already that one more should not hurt too much...

But let's see...


Sailsman63 commented on June 5, 2024

Keep in mind that the OpenGL code that we use was written against a very old version of OpenGL and was never updated. It might be worth re-writing the entire GLCanvasDrawer from scratch, but we should figure out how we want to re-arrange layering first.


peteihis commented on June 5, 2024

OK. I agree, looking at what I keep finding, that some level of rewrite may not be a bad idea, and it would seem that it would reflect on a few other things as well... definitely something for a slightly bigger plan.

I'm definitely in uncharted waters when it comes to GL. For now I'll just focus on taking a few obvious shortcomings under control, both in GLCD and SWCD, including the estimateDepthRange()s that play a role in the inefficiencies.


peteihis commented on June 5, 2024

Just a minor update: SWCD, calculating z as floats, handles a scene with objects in sizes from 1 mm to 10 km quite effortlessly and without any problems with the depth map. The file did not require subdivision of RenderingMeshes. Unless somebody absolutely opposes, I'd like to have it that way.

I haven't checked the int/float effect on performance, though, but assuming 1 unit = 1 mm (and assuming my calculation is correct), floats could handle dimensions larger (quite a bit larger) than the known Universe...

GL then has its limits. It can be made to adapt to the 1 mm - 10 km ratio, but not without some defects at either end of the scale. Currently I have a bug there that affects the perspective-change animation, and a few other things need checking... but it looks like the main problem can be tackled...

It seems to me like it had been the intention to keep the canvas drawers as similar as they possibly could be, but I'm beginning to see different use cases where one of them would be a more optimal tool than the other.


Sailsman63 commented on June 5, 2024

SWCD, calculating z as floats, handles a scene with objects in sizes from 1 mm to 10 km quite effortlessly and without any problems with the depth map.

If you've got a model that is scaled to 1 unit = 1 mm, then at 10 km (10_000_000 units) locations will be rounded to the nearest unit. Tiny rounding errors (single-bit differences) in calculating positions (which happen, for instance, when figuring implicit surfaces and when subdividing curved surfaces) will almost certainly cause z-depth issues. The rounding error goes up from there.

What's worse, the rounding errors will be partly dependent on where you put your viewpoint, so they will be somewhat chaotic. Classic renderers use the integer variant to allow for some predictability - an integer splits the space into uniform, linear chunks. If an integer does not have fine enough resolution for the space being rendered, they start using zones. (More complicated - don't worry about it for now)

It's good to be aware of the limits here, but this is something that needs to be approached carefully.
If we want a little more precision, perhaps we can use the unsigned variants to give us a full 2^32 steps in the 0 - 1 range? (New in Java 8, which will be the minimum for the next release anyway.)
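The float-precision concern above is easy to make concrete with Math.ulp, which gives the spacing between adjacent float values at a given magnitude:

```java
// The spacing of adjacent floats ("ulp") grows with magnitude: at
// 10,000,000 units it is a whole unit, so with 1 unit = 1 mm, positions
// 10 km out resolve to the nearest millimetre at best -- and intermediate
// math can easily lose more.
public class FloatUlp {
    public static void main(String[] args) {
        System.out.println(Math.ulp(10_000_000f));       // 1.0
        System.out.println(Math.ulp(1000f));             // ~6.1e-5
        System.out.println((float) (10_000_000f + 0.4)); // 1.0E7 -- the 0.4 is gone
    }
}
```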


peteihis commented on June 5, 2024

Well, it is actually not nearly as bad as you might think: if I look at the 10M-tall object, the camera will be placed some 50 - 100M units away (say 100M for simplicity) -- then the z-separation is 10 units at best, which is perfectly fine, as nothing smaller than 10k units even shows up. If I zoom to the 1-unit piece, the distance is 10 units and the separation becomes 1e-6 (one nanometer for a 1 mm object), which is more than sufficient.

The camera is always placed at a distance that is relative to the magnification, so it works pretty well in any case, but in perspective mode it is a perfect solution, as the separation gets coarser the farther away the objects are. A separation of distance × 1E-7 is certainly good enough.

Though it is possible to create a situation where you place, for example, 1 mm tall objects 1 or 10 km away from each other and look at them in parallel mode. Then the z-map is correct only around the object that you set your working distance at. Still, that situation does not easily happen by itself; you have to create it deliberately.

But what exactly is wrong here is that the same calculation for the depth separation is used for both parallel and perspective modes (GL does that too). This does not make much sense to me. In perspective mode you'd expect the accuracy to be lower (or, better said, adapted for larger objects) when distances are greater. In parallel mode the separation should stay the same, independent of the distance.

I gave this a bit of thought, and basically, as we don't want to put a slow if (perspective) into every depth conversion (there are quite a few of them), I'd write an inner class that creates the conversion calculator that is needed in each case, and just z[pixel] = zCalc.toDepth(z); -- or something like that. Then the z-map can not be...
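That converter idea could be sketched like this. zCalc / toDepth are hypothetical names taken from the comment above, not an existing artofillusion API, and the perspective mapping shown is the standard 1/z-style formula:

```java
// Sketch only: one converter per projection mode, chosen once per view,
// so there is no per-pixel if (perspective) branch.
interface DepthConverter {
    float toDepth(double z);
}

class Projections {
    // Parallel: uniform separation, independent of distance.
    static DepthConverter parallel(double minZ, double maxZ) {
        double range = maxZ - minZ;
        return z -> (float) ((z - minZ) / range);
    }

    // Perspective: a 1/z-style mapping, so separation gets coarser with
    // distance and nearby objects receive most of the precision.
    static DepthConverter perspective(double near, double far) {
        return z -> (float) ((far * (z - near)) / (z * (far - near)));
    }

    public static void main(String[] args) {
        DepthConverter zCalc = Projections.perspective(1.0, 100000.0);
        System.out.println(zCalc.toDepth(1.0));      // 0.0 at the near plane
        System.out.println(zCalc.toDepth(100000.0)); // 1.0 at the far plane
    }
}
```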

Scaling the depth to the full integer range would, with a 100 km deep range, give a separation of about 1/42 mm. That is not so bad either.

That was with the SWDrawer. The thinking that GL is based on seems pretty damaged to begin with. I could think of ways to compensate for some of the issues there, but with the built-in limitations it'd be more like fighting windmills... I don't know what any newer GL might have to offer, though, but that's another study.

Then yet another finding, worth a thread of its own really: Raytracer rendering starts to fail already after 100-unit distances... That looks like a systematic bug that has something to do with the world y-direction and the surfaces being illuminated by a light. Anti-aliasing masks the problem. I have not studied it much yet, but I'll start a separate issue about that once I have something to show.

