This is a note I sent to Tony Calavano @ SUL:
Tony,
I have a funny feeling that you may be able to help me more than I can help you at the moment, and that the problems Stu alluded to yesterday may be related. I'm no JP2 expert by any stretch, as you'll soon see.
A couple of weeks ago Ben was having a problem with OpenSeadragon, and I think what we realized is that JP2s with precincts often have a tile size equal to the full image (is this always true, or could one also specify Stiles={w,h}?), and when I parse the JP2 header in Loris I'm only looking at the tile size[1].
I think I can adjust Loris to read the precinct sizes without much difficulty (if you have a copy of the JP2 spec, see table A.15 on pg. 24). But is there a precinct size for each decomposition level? Or are the multiple arguments I see passed to Cprecincts in various recipes related to the quality layers? Or something else?
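For what it's worth, the precinct sizes live in the codestream's COD marker segment: when precincts are in use, SPcod carries one byte per resolution level, packing the width and height exponents into its low and high nibbles. A minimal sketch of decoding one of those bytes (function name is mine, not Loris's):

```python
def decode_precinct_byte(b):
    """Decode one SPcod precinct-size byte from a JP2 COD marker segment.

    Per the spec's precinct-size table, the low nibble is the precinct
    width exponent (PPx) and the high nibble is the height exponent
    (PPy); the precinct is 2^PPx by 2^PPy samples.
    """
    ppx = b & 0x0F
    ppy = (b >> 4) & 0x0F
    return (1 << ppx, 1 << ppy)

# A Kakadu recipe of Cprecincts={256,256} encodes as 0x88 (exponents 8, 8):
decode_precinct_byte(0x88)  # -> (256, 256)
```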
It comes down to this: if you have a JP2 with precincts, what do you need the server to report? The IIIF info.json data structure looks like this[2], and my concern is that we actually need to be reporting different tile sizes for each scale factor, which would be impossible right now. Or is the structure OK, and would you just need the first precincts parameter?
Advice? Thanks,
Jon
Any progress on this? It would be a big change that we should make ASAP if necessary.
Here is Tony's response:
I just double checked our code, and we do not explicitly define a tile size. It looks like Kakadu defines the tile size as the size of the image in the jp2 when using precincts. You are able to have a jp2 with both tiles and precincts of different sizes, but I have not spent too much time with this beyond creating massive images.
There is a precinct size per resolution level. The size depends on how the jp2 was created (at least with Kakadu). We use Cprecincts={256,256},{256,256},{128,128}. This gives the two highest resolution levels a precinct size of 256, and the rest a size of 128. I'd have to dig through our documentation to find the reasoning, I can't recall it off the top of my head.
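Kakadu applies a Cprecincts list from the highest resolution downward and repeats the last value for any remaining levels, which is why the recipe above gives 256 to the top two resolutions and 128 to the rest. A sketch of that expansion (my function name, assuming that repeat-last rule):

```python
def expand_cprecincts(cprecincts, num_resolutions):
    """Expand a Kakadu-style Cprecincts list to one (w, h) per resolution.

    Values are listed from the highest resolution downward; when the list
    is shorter than the number of resolutions, the last entry repeats.
    """
    return [cprecincts[min(r, len(cprecincts) - 1)]
            for r in range(num_resolutions)]

# Cprecincts={256,256},{256,256},{128,128} across 6 resolution levels:
expand_cprecincts([(256, 256), (256, 256), (128, 128)], 6)
# -> [(256,256), (256,256), (128,128), (128,128), (128,128), (128,128)]
```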
I believe that we would potentially only need the tile size in info.json to report the highest resolution precinct size.
I think he's correct...if we're saying you need to get the region, then scale, then rotate...you only really care about the tile size at the highest resolution, right? Or am I thinking too much like an implementer?
Well, the use case is to get the best results from the server. If the server defines 256x256 at one resolution and 128x128 at a lower resolution (as above), we can't specify that in info.json and the server will have to retrieve 4 128x128 tiles to build the 256x256 one requested.
My feeling is that with the scale_factor -> h/w math, we're going to end up in a LOT of pain trying to actually get this right. However an implementation note saying that tile sizes at different resolutions should be the same for optimal performance of this API might not go amiss?
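The cost in the 256-vs-128 example above can be made concrete: if info.json advertises one tile size but a lower resolution actually uses smaller precincts, the server must decode several precincts per requested tile. A back-of-the-envelope sketch (assuming the tile and precinct grids are aligned):

```python
import math

def precincts_per_tile(tile_w, tile_h, prec_w, prec_h):
    """Number of precincts a server must decode to fill one advertised
    tile, assuming the tile and precinct grids are aligned."""
    return math.ceil(tile_w / prec_w) * math.ceil(tile_h / prec_h)

# 256x256 advertised tile over 128x128 precincts, as in the example above:
precincts_per_tile(256, 256, 128, 128)  # -> 4
```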
Too low down the stack to fix.