Comments (13)
@elizarov just pointed me in the direction of `Memory`, which seems to do exactly what I need. Now I need to understand whether there are ways to allocate it easily.
from kotlinx-io.
It is not supposed to be allocated easily. It is a resource that has to be carefully managed. Right now we are thinking that a scoped primitive that gives you memory for a while should be OK. Tell us more about your use case, though.
from kotlinx-io.
I use manual placement of objects in a JVM `ByteBuffer` to avoid boxing. Currently it is solved by specialized readers and writers like these ones that emulate value types. Current tests show that this almost completely eliminates boxing overhead on non-primitive buffers (tested for complex numbers).
Currently I use a JVM `ByteBuffer`, but obviously I can't move it to multiplatform, since `IoBuffer` works quite differently. `Memory` seems to do the trick (read and write primitives, create non-copying view slices, etc.), and it seems to be backed by `ByteBuffer`, but of course I will need some way to allocate it and keep it allocated while the `Buffer` that holds it is alive.
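A minimal sketch of the technique described above, assuming complex numbers stored flat in a `ByteBuffer` as (re, im) pairs of doubles; the accessor names are illustrative, not the actual project code:

```kotlin
import java.nio.ByteBuffer

// Each complex number occupies 16 bytes: two 8-byte doubles laid out flat.
const val COMPLEX_SIZE = 16

// Specialized accessors read and write components directly in the buffer,
// so no Complex objects are ever allocated or boxed.
fun ByteBuffer.writeComplex(index: Int, re: Double, im: Double) {
    putDouble(index * COMPLEX_SIZE, re)
    putDouble(index * COMPLEX_SIZE + 8, im)
}

fun ByteBuffer.complexRe(index: Int): Double = getDouble(index * COMPLEX_SIZE)
fun ByteBuffer.complexIm(index: Int): Double = getDouble(index * COMPLEX_SIZE + 8)

fun main() {
    val buffer = ByteBuffer.allocateDirect(COMPLEX_SIZE * 2)
    buffer.writeComplex(0, 1.0, 2.0)
    buffer.writeComplex(1, 3.0, 4.0)
    // Sum the two numbers without constructing any objects:
    val re = buffer.complexRe(0) + buffer.complexRe(1)
    val im = buffer.complexIm(0) + buffer.complexIm(1)
    println("$re + ${im}i") // 4.0 + 6.0i
}
```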
from kotlinx-io.
In the future, I will probably want to connect to something like Apache Arrow for cross-language data transport and use its memory model, but that is currently out of scope.
from kotlinx-io.
The problem is that different platforms have different memory management, so it is unclear how we can define an MPP common allocator for `Memory` such that it is fully functional and relatively safe. This is why the only planned function is something like this (`IoBuffer` will always have `Memory` inside):
```kotlin
inline fun <R> withBuffer(size: Int, block: IoBuffer.() -> R): R
```
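To illustrate the scoped-allocation idea, here is a self-contained sketch over a plain JVM `ByteBuffer` (not the real `IoBuffer` API): the memory exists only for the duration of the block and becomes unreachable afterwards.

```kotlin
import java.nio.ByteBuffer

// Sketch: allocate memory, hand it to the block, let it go out of scope after.
// On other platforms the same shape could free native memory explicitly.
inline fun <R> withBuffer(size: Int, block: ByteBuffer.() -> R): R {
    val buffer = ByteBuffer.allocateDirect(size)
    return buffer.block()
}

fun main() {
    val value = withBuffer(16) {
        putInt(0, 21)       // write at absolute offset 0
        getInt(0) * 2       // read it back inside the scope
    }
    println(value) // 42
}
```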
from kotlinx-io.
I think that for most cases a simple wrapper on top of `ByteArray` will do. Why not make an interface like `RandomAccessBuffer` and make `Memory` implement it? Operations like primitive get/set could be added on top of it as extensions of the interface instead of being extensions of `Memory` (`Memory` could have its own set of extensions overriding those of the interface). Then we can add other implementations, like one wrapping a `ByteArray`, or even one backed by Arrow storage.
from kotlinx-io.
Those extensions couldn't be on top of `RandomAccessBuffer` because they can't be implemented efficiently: all primitive get/set operations would be significantly slower (compared to `ByteBuffer.getShort/Int/Long...`). The idea is that on the JVM, `Memory` is an inline class represented as a `ByteBuffer` at runtime, and all functions are inline, so any code written against `Memory` compiles to bytecode that works directly with `ByteBuffer`, keeping all HotSpot optimizations enabled. Any kind of wrapping or hand-made primitive reading implementation would reduce performance.
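A hedged sketch of the inline-class idea (not the actual kotlinx-io source; names are illustrative): the wrapper erases to a bare `ByteBuffer` at runtime, and inline extensions compile to direct `ByteBuffer` calls at the use site.

```kotlin
import java.nio.ByteBuffer

// Zero-cost wrapper: at runtime this is just the underlying ByteBuffer.
@JvmInline
value class Memory(val buffer: ByteBuffer)

// Inline extensions: the call sites compile to plain ByteBuffer.getInt/putInt.
inline fun Memory.loadIntAt(offset: Int): Int = buffer.getInt(offset)
inline fun Memory.storeIntAt(offset: Int, value: Int) {
    buffer.putInt(offset, value)
}

fun main() {
    val memory = Memory(ByteBuffer.allocate(8))
    memory.storeIntAt(0, 42)
    println(memory.loadIntAt(0)) // 42
}
```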
from kotlinx-io.
Indeed, but we still need some kind of multiplatform implementation for this. There are several ways to solve that. One is to make primitive read/write operations members instead of extensions. That would allow "slow" access for "slow" memory (`ByteArray`) and optimized access for `ByteBuffer`. It would probably work, but it is not very Kotlin-ish. Another way (the one I usually use in Kotlin) is to separate storage and access: you have a storage class like `Memory` with minimal functionality, and then an accessor class like `MemoryReader` or `MemoryWriter` that takes the actual `Memory` as a parameter and could be created via a factory function like `Memory.read()`. This factory function could find out (at runtime) which exact `Memory` implementation is used and then use optimized access methods if they exist. It brings only minimal runtime overhead and looks quite simple from the user side. We can also automatically free the memory once it is initialized and no accessor holds it anymore. I can write a prototype later if you are interested.
from kotlinx-io.
Here is the prototype: https://github.com/mipt-npm/kmath/tree/dev/kmath-memory/src/commonMain/kotlin/scientifik/memory
It ended up very similar to the current IO implementation (I've stolen most of the JS part). The difference is that `Memory` is an interface and can have multiple implementations on the same platform, which could allow better flexibility in the future. For example, it is possible that we will need some kind of special representation for shared memory, when it becomes available.
Another feature (not really used yet) is a `release` mechanism. The idea is that `Memory` is initialized when the first `reader` or `writer` is taken from it (initialization could be made lazy), and released when all readers and writers are released. This way one can control the memory release process on Native or in other cases.
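A sketch of that reference-counted release mechanism (the `CountedMemory` name and the `onRelease` hook are illustrative, not the prototype's API): each accessor bumps a counter, and the last one to close triggers the release.

```kotlin
// Memory that frees itself once the last reader is closed.
class CountedMemory(private val onRelease: () -> Unit) {
    private var accessors = 0

    inner class Reader : AutoCloseable {
        override fun close() {
            // When the last accessor is released, free the underlying memory.
            if (--accessors == 0) onRelease()
        }
    }

    fun reader(): Reader {
        accessors++ // initialization could also happen lazily here
        return Reader()
    }
}

fun main() {
    var released = false
    val memory = CountedMemory(onRelease = { released = true })
    val a = memory.reader()
    val b = memory.reader()
    a.close()
    println(released) // false: one reader still holds the memory
    b.close()
    println(released) // true: all accessors released
}
```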
I did not implement array reads yet, since I am not sure I understand the use case for them. They could be done via `MemorySpec`. A `MemorySpec` could be optimized for a specific memory type: it could check the concrete memory type via `MemoryReader::memory` and use optimized access operations if the type matches. A user could also supply a `MemorySpec` optimized for a specific memory type.
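Roughly, a `MemorySpec` is a specification that knows how to place one object of a given type into raw memory. A sketch over a plain `ByteBuffer` (the `Complex` type and the spec's layout are assumptions for illustration):

```kotlin
import java.nio.ByteBuffer

data class Complex(val re: Double, val im: Double)

// Specification of how a T is laid out in memory.
interface MemorySpec<T> {
    val objectSize: Int
    fun read(buffer: ByteBuffer, offset: Int): T
    fun write(buffer: ByteBuffer, offset: Int, value: T)
}

object ComplexSpec : MemorySpec<Complex> {
    override val objectSize = 16 // two 8-byte doubles
    override fun read(buffer: ByteBuffer, offset: Int) =
        Complex(buffer.getDouble(offset), buffer.getDouble(offset + 8))
    override fun write(buffer: ByteBuffer, offset: Int, value: Complex) {
        buffer.putDouble(offset, value.re)
        buffer.putDouble(offset + 8, value.im)
    }
}

fun main() {
    // An "array read" is then just repeated spec reads at stride objectSize.
    val buffer = ByteBuffer.allocate(ComplexSpec.objectSize * 2)
    ComplexSpec.write(buffer, 0, Complex(1.0, 2.0))
    ComplexSpec.write(buffer, 16, Complex(3.0, 4.0))
    println(ComplexSpec.read(buffer, 16)) // Complex(re=3.0, im=4.0)
}
```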
from kotlinx-io.
IMO, something like that can and should be implemented on top of the current memory class, but not necessarily in `kotlinx.io`.
It introduces unnecessary runtime indirection into what is supposed to be a thin abstraction over the platform-specific raw memory implementation. If in the future there's another implementation (like Project Panama), it can be added as another `actual` module for the same target, similar to how ktor-client-* has multiple implementations for the same core client.
Although it would be nice if one of the `actual`s could be your prototype, which would make everyone happy. Not sure if `expect`/`actual` would ever allow this use case.
I'm not sure if this is currently possible, as I haven't gotten to this stage in my project yet, but the `MemorySpec` bit might be achievable with `kotlinx.serialization`.
from kotlinx-io.
I can agree that this is not fundamentally an IO problem. But it seems to me that one `Memory` per platform does not solve all possible use cases: it is possible to have different memory variants on the same platform. Split actuals are not always a good solution, because you have to take a different module and recompile everything to make the change.
Of course, I can build everything on top of the existing `Memory` implementation and then add my own interface on top of it, but the problem of being unable to allocate memory in `common` code still exists.
I do not see any memory indirection here. Maybe you are talking about virtual calls? The API adds a single additional virtual call, and I do not see how that could affect anything.
A compiler plugin to determine the `MemorySpec` could be done the same way it is done in `kotlinx.serialization`; I mentioned it before. Maybe even the current plugin could be tricked into doing it, but I am not sure. For mathematical tasks it is probably not needed (we work with a limited number of simple objects, and it is quite easy to implement a specification for each of them), but if Kotlin tries to implement a value-type surrogate through that, it is possible.
from kotlinx-io.
Will the new `Memory` class have methods to get/set in native byte order, as opposed to the current big-endian-only getters and setters?
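For context, a small illustration of the difference in question, using a plain JVM `ByteBuffer` (not the `Memory` API): the same `putInt` produces different byte layouts depending on the configured order.

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun main() {
    // Big-endian (ByteBuffer's default): most significant byte first.
    val big = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN)
    big.putInt(0, 1)
    println(big.get(0)) // 0
    println(big.get(3)) // 1

    // Native order: matches the host CPU; on little-endian hardware
    // (x86, most ARM) the least significant byte comes first instead.
    val native = ByteBuffer.allocate(4).order(ByteOrder.nativeOrder())
    native.putInt(0, 1)
    println(native.get(0))
}
```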
from kotlinx-io.
We're rebooting the kotlinx-io development (see #131), all issues related to the previous versions will be closed. Consider reopening it if the issue remains (or the feature is still missing) in a new version.
from kotlinx-io.