Hi Kerim,
I've pushed an attempted fix for the compile problem under MinGW to branch iss16. Please pull that and see if you can compile successfully.
Regarding access time, MappedTensor does the best it can to read data in contiguous chunks and in the fewest possible disk accesses. Is the data located on a network drive? That can of course slow down access.
In general, accessing contiguous regions of a file is fast, while accessing bits and pieces is slow. So accessing mtVar(:, 1) will often be much faster than accessing mtVar(1, :), even if mtVar is square. So if there's a way you can store your data transposed, then accessing the elements you need will be much faster.
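As a rough illustration (assuming a large two-dimensional mtVar already mapped with MappedTensor), a column slice touches one contiguous run of the file, while a row slice touches many scattered locations:

```matlab
% Assumes mtVar is an existing, large MappedTensor.
% Matlab is column-major, so a column is one contiguous run on disk.
tic;
col = mtVar(:, 1);    % one contiguous read
tCol = toc;

tic;
row = mtVar(1, :);    % one small read per column: many seeks
tRow = toc;

fprintf('column: %.3f s, row: %.3f s\n', tCol, tRow);
```

On most systems tRow will be noticeably larger than tCol, especially when the data sits on a slow external or network drive.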
Regarding the second run being much faster than the first: this is a disk-access caching issue. On the first run the data is actually read from the drive / network. The OS then caches this data in memory, so the second run reads from memory rather than from disk. This all happens behind the scenes as far as MappedTensor is concerned, so there's no way for me to affect it.
Thank you for the reply, Dylan.
You are right, I can now compile all three mex functions without any problem, and MappedTensor no longer has any errors highlighted in red. Thank you for fixing these.
I forgot to tell you that my data is stored on an external hard drive. A few minutes ago I launched the same code in a loop, just to see whether access to the data would be faster:
N = 10^4;
a = zeros(1, 4000*N, 'uint8');   % preallocate for all 4000 chunks
tic
for n = 1:4000
    a(1, (n-1)*N+1 : n*N) = mtVar(1, (n-1)*N+1 : n*N);
end
toc
I tried to transpose the data with mtVar = mtVar'; but that didn't help. I think you meant that I should transpose the data and write it to my disk, and only then use mtVar(1, :) instead of mtVar(:, 1)? If so, I could try that (on the external disk) after the loop completes.
What do you think: if the data were stored on a local hard drive, would access to the same data take only a few seconds or minutes? I can't check that now because I don't have enough space on my local hard drive.
Is there a way to store data with a complicated format? My data is recorded as 8 bytes of Int16, then 4 bytes of Int32, then 1024 bytes of Single, and that record is repeated N times, starting from an offset of 5000 bytes. If I used memmapfile, I would write something like:
memVar = memmapfile([r_path r_file], 'Offset', 5000, 'Format', ...
    {'Int16',  4,   'a'; ...
     'Int32',  1,   'b'; ...
     'Single', 256, 'c'}, 'Repeat', N, 'Writable', true);
Is it possible to do something similar using MappedTensor?
I think the access will definitely not be faster in a loop. MappedTensor does the best it can to read the data efficiently, and looping in Matlab definitely won't be faster than that.
Re transposing, yes, you're right: I meant transposing the data when you write it to disk, not transposing from Matlab. If you need to read the data many times, then it might be worthwhile to use MappedTensor to transpose the data and write it back to disk; then you can use the transposed data file from then on. But if you only need to read the data once to process it, this approach won't help you.
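A minimal sketch of this one-off conversion (the block size, output filename, and chunked fwrite loop are illustrative; the MappedTensor constructor form at the end is an assumption and should be checked against the class documentation):

```matlab
% One-off conversion: write a transposed copy of the data to disk.
% Reading row blocks is the slow, scattered access pattern, but it is
% paid only once; afterwards the former rows are contiguous columns.
[nRows, nCols] = size(mtVar);
blockSize = 64;                           % rows per block (illustrative)
fid = fopen('data_transposed.bin', 'w');
for r = 1:blockSize:nRows
    rEnd = min(r + blockSize - 1, nRows);
    block = mtVar(r:rEnd, :);             % scattered read, done once
    fwrite(fid, block', class(block));    % each old row becomes a new column
end
fclose(fid);

% Map the transposed file from now on
% (constructor form assumed; check the MappedTensor docs):
mtT = MappedTensor('data_transposed.bin', nCols, nRows, 'Class', 'uint8');
```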
No, there's no way to store complicated data formats using MappedTensor, since the mapped data must appear as a single Matlab variable. You can, however, use several binary files to store the different fields, and access them together.
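For example (a hedged sketch: the record layout follows the format described above, the output filenames are illustrative, and the MappedTensor constructor arguments are assumptions to be checked against the documentation), the interleaved file could be split once into one flat binary file per field, and each file mapped separately:

```matlab
% One-off split of the interleaved file into one flat file per field.
% Record layout (from above): 4 x int16, 1 x int32, 256 x single,
% repeated N times after a 5000-byte header.
fIn = fopen([r_path r_file], 'r');
fseek(fIn, 5000, 'bof');
fA = fopen('field_a.bin', 'w');
fB = fopen('field_b.bin', 'w');
fC = fopen('field_c.bin', 'w');
for k = 1:N
    fwrite(fA, fread(fIn, 4,   'int16=>int16'),   'int16');
    fwrite(fB, fread(fIn, 1,   'int32=>int32'),   'int32');
    fwrite(fC, fread(fIn, 256, 'single=>single'), 'single');
end
fclose(fIn); fclose(fA); fclose(fB); fclose(fC);

% Each field can now be mapped as its own tensor
% (constructor form assumed; check the MappedTensor docs):
mtA = MappedTensor('field_a.bin', 4,   N, 'Class', 'int16');
mtB = MappedTensor('field_b.bin', 1,   N, 'Class', 'int32');
mtC = MappedTensor('field_c.bin', 256, N, 'Class', 'single');
```

After the split, mtA(:, k), mtB(:, k) and mtC(:, k) together give the k-th record, and each per-field file is contiguous on disk.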