
cdbv2-2023-summer-workshop's Introduction

Welcome to OGC's Github page.

cdbv2-2023-summer-workshop's People

Contributors

davidflor, jerstlouis, ryanfranz


Forkers

jerstlouis

cdbv2-2023-summer-workshop's Issues

CDB 2 determinism concerns

  1. As I understand it, vector feature attributes are stored in one table per GeoPackage, without any other spatial sorting or organization. This could lead to random table I/O performance concerns when processing large numbers of features, where the only mitigation would be to structure the number of LODs per GeoPackage differently to limit the size of the attribute table. For certain use-case profiles, it may be desirable to at least impose a spatially coherent sort criterion on the attribute table, to best trade off the efficiency of normalization against that of localization. It would be helpful to know that this mitigation is feasible.
  2. The approach to "batch optimization" of 3D cultural content should also consider that certain profile use cases need to bound the amount of data that must be processed, for deterministic memory usage and latency. (I support the approach in concept, but not in name: batch optimization is an engine problem, and while spatially grouping content into a tile might force a simplistic engine to batch it as a side effect, that shouldn't be the reason for doing so. The reason should rather be efficient I/O and processing of spatially coherent content.) The current CDB has analogous file-size limitations that push some LOD data to finer levels in support of this; the same should apply to the glTF tile-blob approach.
  3. The glTF tile-blob approach should be able to handle existing cultural model content that uses additive LODs, and not be exclusively tile exchange. Is such support in the current design?
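On point 1, a minimal sketch of one possible spatially coherent sort criterion: assign each feature a Morton-style quadkey and order attribute rows by it before writing the table, so features that are close in space land in nearby database pages. This is an illustration only, not the CDB 2 design; the quadkey function and the plate carrée tiling it assumes are my own placeholders.

```python
import math

def quadkey(lon, lat, level):
    """Morton-style tile key: interleaves x/y tile indices so that
    features close in space get lexicographically close keys."""
    n = 2 ** level
    x = int((lon + 180.0) / 360.0 * n)
    y = int((90.0 - lat) / 180.0 * n)  # simple plate carrée row index
    key = ""
    for i in range(level, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key += str(digit)
    return key

# Sorting feature rows by quadkey before writing them into the
# GeoPackage attribute table keeps spatially adjacent features in
# adjacent rows, turning random I/O into mostly sequential reads.
features = [(-117.16, 32.71, "f1"), (2.35, 48.85, "f2"), (-117.15, 32.72, "f3")]
features.sort(key=lambda f: quadkey(f[0], f[1], 12))
```

After the sort, the two nearby San Diego features end up adjacent in the table while the Paris feature sorts far away from them.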

3D Model format(s)?

Ryan and Jerome: which 3D model format(s) do you intend to use for creating / exchanging / rendering each other's creations in this summer's workshop?

Significant Size discussions

The equation in CDB Volume 1, Clause 8.3.6 (Organizing Models into Levels of Detail):

[equation image not captured in this archive]

which is exactly the same as the one Ryan shared in https://github.com/opengeospatial/CDBV2-2023-Summer-Workshop/blob/main/VectorData.md#cdb-1x-significant-size

and corresponds to Table 3-1: CDB LOD vs. Model Resolution.

And this also corresponds to what I think it should be, if the Significant Size is what I call the smallestFeatureSizeInMeters, or, explained more clearly:

details (smaller than the model as a whole) that should become visible before showing the next LOD, because they are larger than the model-size criteria for including them at all at the next LOD

And this definition of the Significant Size also corresponds to how Volume 6 (OpenFlight) starts defining it:

When assigning a Significant Size to a model LOD, the modeler needs to answer the following question: When I created a new model LOD, I did so to create additional detail in my model. What is the largest dimensional change in geometry for this new model LOD? In other words, what is the largest dimensional difference of a surface between this LOD and the next coarser LOD? In effect, the value of Significant Size corresponds to the “modeling difference” between the LOD and the next coarser LOD.

but I believe this is inconsistent with what it then formally states as the definition of the Significant Size:

Definition of Significant Size

The Significant Size is defined as the “size” of the model, expressed in meters. By extension, it applies equally well to a submodel represented by an Additive LOD. In the case of an Exchange LOD, the Significant Size is the difference between two representations of the model or submodel.
Estimating the Size of the Model

Many models have shapes that resemble a cube (with roughly equal length, width, and height), and thus their significant size can be simply estimated by the length of the diagonal of their bounding box. As the shape of a model departs from that of a simple cube, either with respect to aspect ratio, or with respect to the amount of negative space within its bounding box, the model’s significant size should be decreased proportional to the amount of departure.

Defining the Significant Size as both the "size of the model" (which I will call ModelSize) and the "dimensional difference of a surface between this LOD and the next coarser LOD" (which I will call ModelResolution) is really inconsistent with a refinement / replacement approach to model LODs. With a refinement approach, a model has the same ModelSize throughout all of its LODs, whereas its ModelResolution improves.
If a model is first shown at level A, with ModelSizeA and ModelResolutionA, then the next level B (A+1) will also include a version of that model, presumably with a ModelResolutionB roughly twice as fine as ModelResolutionA (twice the amount of detail, with features half as big now distinguishable), but with ModelSizeB roughly the same as ModelSizeA.

So it seems that this definition of the Significant Size is heavily biased towards, and only (somewhat) consistent with, an additive approach, where the size of the submodel being added is also the smallest new feature that was not included in the coarser model LOD.
Unless all we have is boxes being added at every level, the "size of the model" cannot be the same thing as the small details being added to that same model.

But basically, I think an improved definition that separately considers model size (what I also called sizeOfTheLargestFeatureOfTheModelInMeters) and model resolution (what I also called smallestFeatureSizeInMeters) would be mostly consistent with the original CDB 1.x intent (though not necessarily with any or all of the tools / produced content -- something to verify in this sprint, at least for our own tools and the San Diego CDB), and consistent with the current values in Table 3-1. We would be clarifying that the lower bound of the Significant Sizes in that table refers to the model resolution, and NOT to the overall model size.
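A rough sketch of how the separated definition could drive LOD selection. Only the halving-of-resolution-per-LOD relation is taken from the discussion above; BASE_RESOLUTION_M and the helper names are assumed placeholders, not the normative Table 3-1 values:

```python
import math

# Assumed placeholder: resolution at LOD 0, halving at each finer LOD.
# Not the normative Table 3-1 value.
BASE_RESOLUTION_M = 100.0

def coarsest_lod(model_size_m):
    """First LOD at which the model as a whole is significant enough
    to appear (its overall size reaches the LOD's resolution)."""
    return max(0, math.ceil(math.log2(BASE_RESOLUTION_M / model_size_m)))

def finest_useful_lod(smallest_feature_m):
    """LOD at which the model's smallest detail is resolved; finer
    LODs add nothing for this model under a refinement approach."""
    return max(0, math.ceil(math.log2(BASE_RESOLUTION_M / smallest_feature_m)))
```

For example, a 25 m building with 0.8 m details would first appear at LOD 2 but keep refining until LOD 7 -- two different numbers, which is exactly why conflating ModelSize and ModelResolution into one Significant Size is problematic.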

Also, with modern 3D content production pipelines, everything is typically laser-scanned at the highest resolution and then simplified to the lower LODs with mesh-simplification algorithms (e.g., as done in Epic's Nanite pipeline, and I think also in the One World Terrain pipeline). This contrasts with how Volume 6 currently begins describing Significant Size:

When a finer model LOD is created, the modeler typically adds additional geometric detail, additional features (such as markings), or refines the shape of curved surfaces (such as engines, wheels), etc.

Missing Tiles Problem

We have been considering the case where you have a heterogeneous tiled imagery layer which was sourced from various datasets and may have different resolutions (maximum zoom levels) over different geospatial extents.

Our visualization client currently struggles with this -- we usually expect a consistent maximum zoom level per layer, and would use separate layers to identify the areas with different resolutions. In the past we had to use workarounds for tile servers that return no data for some areas at a higher zoom level but valid data (for the same area) at a lower resolution; those workarounds, as I had implemented them, caused their own problems.

@ryanfranz

So I have thought a bit more about this missing-tile problem, and I still think there is an issue with automatically falling back to a lower resolution when looking up a tile in a GeoPackage finds no record, or when requesting a high-resolution tile returns a 404 or 204.

It may in fact be that the low-resolution tile exists because it encompasses both an area that has data and an area that does not (e.g., water), which in that low-resolution tile might be 0-alpha (e.g., in GeoTIFF, PNG, JPEG XL).
This would also be a common scenario with vector tiles. In this case, you really do not want to use the lower-resolution tile, since it includes no additional information (and if you do display it, it needs to be rendered in addition to, and underneath, the overlapping higher-resolution tile that does exist).
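To make the concern concrete, here is a minimal sketch of the naive fall-back-to-parent lookup against a GeoPackage-style tile table. The column layout follows the GeoPackage tiles schema; the `imagery` table name is a hypothetical example. The docstring caveat is exactly the 0-alpha scenario above: a hit on an ancestor tile does not mean it carries data for the requested sub-area.

```python
import sqlite3

def get_tile_with_fallback(db, zoom, col, row):
    """Return (zoom, tile_data) for the requested tile, or for the
    nearest coarser ancestor that has a record; None if none exists.
    Caveat: a hit on an ancestor does NOT guarantee it has data for
    the requested sub-area -- the overlapping quadrant may be empty
    (e.g. 0-alpha water), so this fallback can be actively wrong."""
    while zoom >= 0:
        rec = db.execute(
            "SELECT tile_data FROM imagery "
            "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (zoom, col, row)).fetchone()
        if rec is not None:
            return zoom, rec[0]
        zoom, col, row = zoom - 1, col // 2, row // 2  # parent tile
    return None

# Demo: only a zoom-1 tile exists; a zoom-3 request falls back to it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE imagery (zoom_level INT, tile_column INT, "
           "tile_row INT, tile_data BLOB)")
db.execute("INSERT INTO imagery VALUES (1, 0, 0, x'00')")
found = get_tile_with_fallback(db, 3, 1, 1)
```

The demo lookup at zoom 3 misses, then misses the zoom-2 parent, and finally returns the zoom-1 ancestor -- without any way to know whether that ancestor's overlapping quadrant actually contains data.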

@jyutzler @joanma747

What would be the current established or best practice for handling such scenarios in GeoPackage and WMTS / OGC API - Tiles?

Very much related to this is the concept of scenes, which allows describing such distinct individual components of an overall data layer -- I just proposed a Scenes requirements class for OGC API - Coverages (which could easily be extended to Maps, Tiles, and even Features).

Thanks!
