
Comments (65)

vellamike commented on August 16, 2024

Could we have an example of one of the 2.2MB files which contain a frame in the big PCISPH scene? I'd like to see where different compression techniques get us.

from org.geppetto.

tarelli commented on August 16, 2024

Here it is:
https://gist.github.com/tarelli/6229869/raw/aa9f0d9089021a0ec4160ee36cdb18ac3115f5af/gistfile1.txt

vellamike commented on August 16, 2024

gzip compresses the 2.2MB file to 122KB, roughly a 6% compression ratio (compressed size over original).
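For reference, this kind of measurement is easy to reproduce in a few lines of Python. This is a sketch on synthetic particle data, not the actual gist file, so the exact ratio will differ; the "type"/"position" keys are illustrative.

```python
# Sketch: measure the gzip compression ratio of JSON shaped like a
# particle frame. Synthetic data; the ratio will differ from the real file.
import gzip
import json
import random

random.seed(0)
particles = [{"type": 1, "position": [random.uniform(0, 1) for _ in range(3)]}
             for _ in range(1000)]
raw = json.dumps(particles).encode("utf-8")

compressed = gzip.compress(raw)
print(f"{len(raw)} -> {len(compressed)} bytes "
      f"({len(compressed) / len(raw):.1%} of original)")
```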

msasinski commented on August 16, 2024

A few questions:
1. Do we really need to transfer numbers with such precision?
2. We could change the JSON to use abbreviated names, e.g. t for type, p for position. Using abbreviated names alone saves about 500KB on an uncompressed file.

But the above, plus compression, is nothing compared to what we can save using binary.

vellamike commented on August 16, 2024
  1. You read my mind, I was literally thinking that this second; most of the unique data is stored there.
  2. I doubt this would make a difference once compressed.
  3. Really? Why do you think binary will save so much compared to compression? I estimate that by reducing the precision to 2 significant figures and using LZW compression we can get to about a 1% compression ratio.


tarelli commented on August 16, 2024
  1. Precision is definitely one of the low hanging fruits we can grab. For visualization purposes the current precision is not necessary.
  2. Shouldn't make a difference once it's compressed.
  3. Binary might save us in terms of performance for the unpacking I think but I would be surprised if it would give us completely different compression rates.

msasinski commented on August 16, 2024

While I agree that #2 will not make a huge difference, on larger sets even a 1% difference will be significant. In addition, please remember that we're dealing with strings here, and every char removed from the string will help save memory and processing power. Sending 500KB less to gzip/LZW to compress may help quite a bit if we're trying to generate->compress->send->uncompress->process every 20ms or so.

As for binary, I'm sure it would help with memory, CPU, and transfer, and it can also be compressed (even 1% may help).

gidili commented on August 16, 2024

Nice Mike - I would've guessed compression would shrink the file by around 60%, but it looks like I was dead wrong!

If we remove precision digits, steady-state fluctuations are going to disappear from the visualization, and the scene will look still while the simulation is still going on. Not that it matters much, but if we can avoid that it would be better in my opinion.

Given that compression will bring size down that much, I vote for implementing compression for now and as we work with bigger scenes see what other improvements are needed. Before it comes to removing precision digits I would like to see how binary representation actually fares vs compression.

vellamike commented on August 16, 2024

Giovanni - I see your point about the scene appearing static, but surely if we go to 3 significant figures rather than the current ~17 we will still see the same effect?


msasinski commented on August 16, 2024

@gidili are you sure that using floats with such high precision gives us the right results anyway?

tarelli commented on August 16, 2024

We need to measure how many digits are needed for fluctuations to not disappear. I would be surprised if we need more than 6.

gidili commented on August 16, 2024

@vellamike @msasinski @tarelli you guys are right - I didn't look at the file, I wasn't expecting 15 decimal digits. In my experience working with those numbers 7 digits should be more than enough.

vellamike commented on August 16, 2024

Some more information on compression: http://buildnewgames.com/optimizing-websockets-bandwidth/

gidili commented on August 16, 2024

Some benchmarks on JSON compression: http://web-resource-optimization.blogspot.ie/2011/06/json-compression-algorithms.html

tarelli commented on August 16, 2024

By leaving only two decimal digits (during visualisation, not computation, so fluctuations still happen) and shortening some names I managed to go from 2.2MB to 1.3MB for the big scene. Not enough, but it's a start. I will now be looking at compression, but we might have to wait on some further development related to WebSocket compression, see this.
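The two reductions described (two decimal digits for visualisation, shortened names) can be sketched as a simple JSON transform. The t/p key names here are assumptions for illustration, not Geppetto's actual schema:

```python
# Sketch of the two size reductions: round coordinates to two decimals and
# abbreviate key names. Key names are illustrative, not the real schema.
import json

def shrink(particles, decimals=2):
    return [{"t": p["type"],
             "p": [round(x, decimals) for x in p["position"]]}
            for p in particles]

particles = [{"type": 1, "position": [0.123456789012345, 1.0, 2.5]}]
before = len(json.dumps(particles))
after = len(json.dumps(shrink(particles)))
print(before, "->", after)  # the shrunk frame is substantially smaller
```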

vellamike commented on August 16, 2024

Does it look the same with only two decimal digits in visualisation?

tarelli commented on August 16, 2024

Pretty much yes, I tried also with just one but then quantisation starts being visible.

vellamike commented on August 16, 2024

That's great. If you have a moment could you try and see what it looks like with fewer particles in the visualisation? e.g. drop every 2nd particle. We might be able to get by with very few particles (surface particles, for instance).

That link seems encouraging.

tarelli commented on August 16, 2024

You can still figure out what the shape is (the higher the density of points, the less noticeable it becomes), but in general I'm not a big fan of removing points "randomly". Also, we can't get too far with it: sampling one point in every three still gives circa 400KB per update with that big scene (which isn't really that big given future perspectives), and compression would still be needed. On the other hand, calculating which particles are on the surface would be interesting to explore, and I just sent an email which also covers that. Do you know of any algorithm we could use in realtime to do this?
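One cheap, realtime-friendly candidate (an assumption on my part, not something the project adopted): bin particles into a uniform grid and call a particle "surface" when its cell has an empty face-neighbour cell. A minimal sketch:

```python
# Grid-occupancy surface heuristic (hypothetical approach): a particle is
# "surface" if the cell it falls in has at least one empty face-neighbour.
from collections import defaultdict

def surface_particles(points, cell=0.1):
    occupied = defaultdict(int)
    for x, y, z in points:
        occupied[(int(x // cell), int(y // cell), int(z // cell))] += 1
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surface = []
    for p in points:
        cx, cy, cz = (int(c // cell) for c in p)
        if any((cx + dx, cy + dy, cz + dz) not in occupied
               for dx, dy, dz in neighbours):
            surface.append(p)
    return surface

# A 3x3x3 block of cells, one particle per cell: only the single interior
# cell has all six neighbours occupied, so 26 of 27 particles are "surface".
pts = [(x * 0.1 + 0.05, y * 0.1 + 0.05, z * 0.1 + 0.05)
       for x in range(3) for y in range(3) for z in range(3)]
print(len(surface_particles(pts)))  # 26
```

This is linear in the number of particles, so it could plausibly run per frame, at the cost of missing thin features smaller than a cell.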

vellamike commented on August 16, 2024

It appears there are a number of ways to compute a 3D convex hull (http://stackoverflow.com/questions/18416861/how-to-find-convex-hull-in-a-3-dimensional-space). However, valuable information is then being lost (although, to be fair, it just depends on exactly what you want to do).

I answered your email (https://groups.google.com/forum/#!topic/openworm-discuss/9GXYssTzyBw) too, so let's continue the discussion there.


msasinski commented on August 16, 2024

Does Geppetto currently bundle the data into bigger chunks before sending it? Is gzip being used?
If the answer to both questions is no, how hard would it be to bundle the data and send it every 500ms, and to enable gzip?

tarelli commented on August 16, 2024

I'm still figuring out how to "enable gzip". The proper way to do this is to have the server compress and the browser decompress based on a flag negotiated in a handshake at the start. But we use WebSockets and it looks like support for this is not available yet, see this.
The remaining alternative is to manually compress the message inside the WebSocket servlet and then use JavaScript to decompress it, but I don't even know whether this is worth attempting, since I'm afraid the cost of writing/reading the data would cancel out the benefit of sending smaller updates. Any previous experience with this?

borismarin commented on August 16, 2024

I have some experience with compression for data acquisition/streaming over Ethernet, and LZO seemed to offer the best throughput (though compression rates are smaller than LZ77/gzip). We've also experimented with static Huffman coding, but evidently it is only decent for data with similar statistics (we had two trees, one for control and one for data streams, and that was the best cost/benefit solution).
btw, https://github.com/ning/jvm-compressor-benchmark/wiki
edit: LZ4 seems very promising as well. It is kind of new; I haven't experimented with it yet...

vellamike commented on August 16, 2024

I've been thinking some more about this issue. Imagine we had one million particles (quite realistic for a worm swimming in deep water) and we wanted to stream at 20fps:

particles = 1e6
fps = 20
bytes_per_particle = 24
bytes_per_second = particles*bytes_per_particle*fps
=> B/s = 480000000.0 = ~0.5GB/s

I think transmitting this is hugely challenging, even if we did achieve the necessary 1% compression ratio (5MB/s), I can't imagine how the client machine is going to be decompressing such a huge quantity of data.
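The arithmetic above, restated as runnable code (24 bytes per particle corresponds to three 8-byte doubles for position):

```python
# Back-of-envelope bandwidth for streaming raw particle positions.
particles = 1_000_000        # plausible for a worm swimming in deep water
fps = 20
bytes_per_particle = 24      # three 8-byte doubles (x, y, z)
bytes_per_second = particles * bytes_per_particle * fps
print(bytes_per_second / 1e9, "GB/s")  # 0.48 GB/s
```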

msasinski commented on August 16, 2024

@vellamike There is some additional overhead for TCP transport and the websocket packet. Also, in calculating bytes_per_particle, have you considered that websockets use UTF-8 for transport? If we consider network collisions, latency and the other things that affect throughput, then our situation looks very bleak :(

vellamike commented on August 16, 2024

This is an "optimistic" calculation so I haven't looked at second order
effects like that, but I agree that transmitting all the SPH data as it is
generated will not be possible with today's technology.

gidili commented on August 16, 2024

@vellamike thanks for raising this. As you have in the issue description and as we discussed before, one of the options to reduce the amount of data sent is to map particle set to 3D geometries and only send vertices. This would reduce the amount of data sent enormously (let's say a muscle cell is in the 100s of particles, to represent it we'd only send a dozen vertices or so). Sending particle by particle was never meant to be a long term solution. As we work with bigger and bigger scenes we are going to have to move away from sending all the particle coordinates, one way or the other.

vellamike commented on August 16, 2024

@gidili Agreed, but this still has issues:

  1. It only solves a subset of the problem - visualisation of the fluid surface. If we need the particles themselves for calculation of things like viscosity, density, turbulence etc., I assume this will have to be done server-side.
  2. Calculation of 3D geometries using convex hulls is computationally intensive; in general, for d dimensions and N particles, convex hull algorithms scale as O(N^((d/2)+1)) (ref), so in our case the scaling will be O(N^(5/2)). It could easily turn out that computing the convex hull is more intensive than the SPH algorithm itself.

gidili commented on August 16, 2024

@vellamike 1) yes you have the particles server-side and no need to send them / stream them anywhere to calculate that stuff. 2) There are many ways of "cheating" in terms of visualisation, we don't necessarily need the visual representation of what's going on to be point by point accurate (for example the fluid) as long as the simulation itself is.

vellamike commented on August 16, 2024

How would 1) work for someone who wanted to do mathematical analysis of the fluid flows? Would they need to write their own OSGI bundles or is there another way?

gidili commented on August 16, 2024

Well for starters I'd imagine that if you wanna just do fluids you don't need the worm or any of the other stuff, but this may be true or not. Regardless, for the direction things are going, you'd just select which particles you want to "watch" (a population of hundreds, thousands or millions) and only those values (position, velocity, etc.) will be sent to the client. There's gonna be a given amount of data that could be defined as "too much data", in those extreme scenarios it could be that you can only do that kind of stuff on a local network (or even localhost). We're gonna have to play the balancing game there and I am sure there's tons of other issues we are not considering, but it's good to be thinking about these problems now.

msasinski commented on August 16, 2024

One additional hurdle to overcome: when PCISPH and the other currently underperforming parts of the system are optimized, we will have to deal with larger amounts of data and have less time to do it. If we improve the Sibernetic code to run at 24fps instead of the current 6fps, rather than dealing with 2MB every 20ms we will have to deal with 8MB.

msasinski commented on August 16, 2024

@gidili @vellamike Our situation may not be as bleak as I've previously thought. We can improve our situation by implementing something similar to a proxy server.
If we can have Geppetto and Sibernetic produce the data and send it to a tiny proxy server using UNIX Sockets or UDP, we should be able to handle larger amounts of data.

The proxy would take the data, store it in a small local buffer, do simple processing (compress, decimate) and send the dataset to a processing server.

The mini-proxy would not require large amounts of memory (even 100 steps would be just ~200MB) nor much processing power, so it would only slightly affect Geppetto/Sibernetic.

The processing server, on the other hand, would have all the time in the world to process the data further, save it or serve it. It could support plain TCP, WebSockets, smoke signals, etc. It could convert the data to convex hulls, save it to a database or an HDF5 file, or act as a WebSocket server. It could support multiple clients viewing the same data.

This way we would decouple the code even further, and allow both Geppetto and Sibernetic to take advantage of the same functionality (processing, storing, serving).


*With this amount of data produced/processed nothing is simple.
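The proxy's core loop (buffer a frame, compress it, forward it) can be sketched in-process. The socketpair below stands in for the real UNIX-socket/UDP transports and the downstream processing server; all names are hypothetical:

```python
# In-process sketch of the proposed proxy step: take one frame, compress
# it, forward it with a length prefix. Transports are stubbed out with a
# socketpair rather than real UNIX sockets or UDP.
import socket
import zlib

def proxy_step(frame: bytes, downstream: socket.socket) -> None:
    payload = zlib.compress(frame, level=1)  # level 1 favours throughput
    downstream.sendall(len(payload).to_bytes(4, "big") + payload)

sim_side, server_side = socket.socketpair()
frame = b"0.12 0.34 0.56 " * 1000            # stand-in for one scene update
proxy_step(frame, sim_side)

size = int.from_bytes(server_side.recv(4), "big")
received = server_side.recv(size)
assert zlib.decompress(received) == frame    # round-trips intact
```

A real implementation would loop over frames, bound the buffer, and handle partial reads; this only shows the data path.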

vellamike commented on August 16, 2024

@msasinski would UNIX Sockets or UDP be able to handle 0.5GB/s? That's my conservative estimate for a scene we are likely to want to utilize.

msasinski commented on August 16, 2024

@vellamike I don't know, but it should be easy to find out.
So let's do it:

Let's just test Unix Sockets.

in one xterm do this : nc -lU /tmp/owSocket > /dev/null

and in the other do this: sudo pv /dev/zero | nc -U /tmp/owSocket

My system results: 11.9GB 0:00:13 [985MB/s] - that's about 8Gb/s. It looks like it's possible :)
However it does use a lot of CPU, but at this point I think that's the only solution we have. A few of those 64 cores may have to be used for preprocessing :)

Another test with gzip
sudo pv /dev/zero | gzip -c -1 | nc -U /tmp/socketTest
clocks at 210MB/s

This is much worse than I expected, but it's just another hoop to jump through and not something impossible to overcome.

To summarize: if we can get the GPU to do most of the heavy lifting and use the CPU to preprocess the data, it's possible that with a few CPU cores and a 1Gbit connection we can deal with the current limitations of our system and make it scalable.
Of course it's not going to be easy and it may take a while, but it should be possible.

msasinski commented on August 16, 2024

Now let's do some TCP
We will use iperf for that

start iperf server

 iperf -s

and let's check our bandwidth (TCP)
in another window run

 iperf -c localhost

On my system, with a TCP window size of 85.3 KByte, I got 59.5 GBytes transferred at 51.1 Gbits/sec.

Problem solved?

vellamike commented on August 16, 2024

Those numbers look pretty encouraging @msasinski ! Very impressive analysis.

One question - is /dev/zero a realistic way to test gzip? Perhaps I'm missing something, but isn't that just asking gzip to compress null characters, which should be computationally less intensive?

msasinski commented on August 16, 2024

@vellamike No, unfortunately these numbers are not realistic, but I was just checking whether we can even dream about using this solution. There will be a lot of different things that impact the final numbers, and at this point it's impossible to account for all of them. It's not going to be easy, but from my perspective we don't have any other choice but to try to make it work.

charles-cooper commented on August 16, 2024

http://www.bitmover.com/lmbench/

There are also multithreaded compression algorithms, check out pigz.

tarelli commented on August 16, 2024

@richstoner can BinaryJS be used without a nodejs server? We use websocket streamed from Java so we would need Java libraries server side and pure Javascript libraries client side.

Neurophile commented on August 16, 2024

Maybe I'm missing something, but why not render on the server side and use one of the many stock methods for sending compressed image data?

If you want to stream anything over the internet, 5 Mb/s (bits, not bytes) is going to be a realistic upper limit for all but the very highest tiers of internet service.

tarelli commented on August 16, 2024

If you render server side you remove the possibility to interact with the scene (besides camera controls, in the future it will also be possible to select different entities and explore components at different scales).

The current progress on this is that we have two branches, one for org.geppetto.core and one for org.geppetto.frontend. We are currently experimenting with LZ4. The problem right now is that the Java library we are using to compress the data server side doesn't yet support the latest LZ4 stream specification, see this bug I opened and this other related one. The library is open source and on GitHub; we could fork it and update it ourselves if anybody had the bandwidth to do it.

mlolson commented on August 16, 2024

I came across the "Snappy" algorithm yesterday, and I'm wondering if it might work for this. Apparently it is fast, although not quite as fast as lz4. Here are implementations in java and node.js:

https://github.com/xerial/snappy-java
https://github.com/kesla/node-snappy

To get the node module to work on the browser side I've been trying to use browserify to create a standalone module, but so far this isn't quite working.

tarelli commented on August 16, 2024

Interesting! Keep me posted :)

perqa commented on August 16, 2024

Have you tried UDT?

From the home page @ http://udt.sourceforge.net/index.html:

"UDT is a reliable UDP based application level data transport protocol for distributed data intensive applications over wide area high-speed networks. UDT uses UDP to transfer bulk data with its own reliability control and congestion control mechanisms. The new protocol can transfer data at a much higher speed than TCP does. UDT is also a highly configurable framework that can accommodate various congestion control algorithms. (Presentation: PPT 450KB / Poster: PDF 435KB )"

domenic commented on August 16, 2024

As a first pass, I would suggest using gzip on the server side before sending the data, then unzipping on the client side with https://github.com/imaya/zlib.js (possibly in a web worker to avoid janking the user experience while it decompresses; remember your frame budget is ~16ms if the app is to feel responsive to user input).

It'd be interesting to see how much switching the file format to a binary one (possibly based on protocol buffers?) would gain in size and in decompression time, but the maintenance overhead would be an order of magnitude higher than compression alone.

tarelli commented on August 16, 2024

@domenic do you still think you would have some time to help with this? :) the next Geppetto meeting will be on the 20th of January at 4pm GMT

frenkield commented on August 16, 2024

I did some experimentation with a compressed binary protocol. I also moved message compression and transmission into separate threads. On an EC2 GPU instance (g2.2xlarge) I'm getting about 8 fps in the browser for sphModel_Elastic.xml. This model has 16974 liquid particles and 1575 elastic particles.

Model: sphModel_Elastic.xml
Liquid particles: 16974
Elastic particles: 1575
Browser fps: 8
Binary scene size (doubles): 600KB
Binary compressed scene size: 260KB
Binary compression time: 55ms
SPH computation step: 26ms

For comparison:
JSON scene size: 5MB
JSON compressed scene size: 700KB

The binary representation is just 4 doubles per particle: one double for particle id and type, and 3 doubles for position. I haven't tried this with floats yet but I assume the compressed size would be around 200KB.

The g2.2xlarge is doing the whole SPH computation step in 26ms but the transport thread is only sending at about 8 fps. This is partly due to the 55ms required for message compression. But it seems like there are some other bottlenecks somewhere. Maybe I can get this up to 10 or 12 fps with a bit of refactoring.

The latest version of Virgo (3.6.3) supports the JSR-356 Java WebSocket 1.0 implementation. This has per-message deflate so that would eliminate the need for explicit compression. Chrome supports per-message deflate and maybe Firefox does as well but I'm not quite sure. So upgrading to Virgo 3.6.3 and refactoring GeppettoServlet to use JSR-356 might provide a good performance boost for the current implementation without any other code changes. I'm planning to give this a try next.
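For illustration, the size gap between a 4-doubles-per-particle layout and a JSON equivalent can be sketched with struct packing. The data here is synthetic and the JSON field names are made up, so the sizes won't match the figures quoted above:

```python
# Sketch comparing the binary layout described above (4 doubles per
# particle: id/type plus x, y, z) with a JSON equivalent, raw and deflated.
import json
import struct
import zlib

particles = [(float(i), 0.1 * i, 0.2 * i, 0.3 * i) for i in range(1000)]

binary = b"".join(struct.pack("<4d", *p) for p in particles)  # 32 bytes each
as_json = json.dumps(
    [{"id": p[0], "pos": list(p[1:])} for p in particles]).encode("utf-8")

print(len(binary), len(as_json))                               # raw sizes
print(len(zlib.compress(binary)), len(zlib.compress(as_json)))  # compressed
```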

gidili commented on August 16, 2024

@frenkield that sounds quite promising, for comparison how many fps do you get in browser for the small elastic and small liquid scenes (default Geppetto samples) with the same setup?

Also I think that some of the contributors are on Virgo 3.6.3 already so upgrading shouldn't be a problem.

Thanks for looking into this, sorting it would be a major improvement!

frenkield commented on August 16, 2024

@gidili, I'm seeing up to 200 fps (scene frames processed per second really) with the small elastic scene. The small liquid scene is broken at the moment. I'll take a look later tonight and get that resolved.

I can leave it running for a few hours tomorrow if you want to take a look.

gidili commented on August 16, 2024

@frenkield yes that would be interesting if you get the chance, thanks. Are you doing this in your fork of org.geppetto.frontend?

What you are doing sounds like exactly what we have been wanting to try in order to test the limits of this approach: a smaller data transfer, at the cost of additional compress/decompress time. Even more interesting in the case of the deflate option. Regardless, this should greatly improve most simulations and enable us to stream simulations that were previously only possible locally on the server or over a local network.

After we test the limits of this approach in terms of scene size, we can start working on other strategies to further reduce the amount of data sent by real-time simulators (various configurable downsampling). At the same time we are introducing in Geppetto the capability to trigger simulations asynchronously for very big / computationally intensive scenarios using external simulators. Simulations will be queued and, when completed, a recording file with the simulation data will be saved. The recording file will then be "played" at a later time and the simulation streamed to the client, so these improvements will still be relevant to optimising all sorts of scenarios where streaming is involved.

frenkield commented on August 16, 2024

@gidili, that sounds great. You guys have a lot of very interesting problems to solve. I'm glad I can help out.

You can check out the compressed protocol demo here:

A few notes:

  • I included the URL for the small liquid scene because the one that runs from the dropdown doesn't start. Everything is fine for me locally but on the EC2 instance it's broken. This happens on the clean Geppetto 0.2.2 release as well. I assume it's because it starts immediately after load but I haven't really looked into it. I might install the Oracle JDK to see if that changes anything.
  • I set updateCycle in the simulation bundle to 0. The default value is 20 so there's normally a minimum 20ms between frames.
  • It's logging fps in the console. I put some crude timing in the websocket onMessage callback.
  • The scene still loads with the uncompressed JSON so the initial load is slow for the larger scenes.

frenkield commented on August 16, 2024

Sorry, forgot to answer your question. Yes, I forked org.geppetto.frontend - and a few others as well. The bulk of the changes are in org.geppetto.frontend and org.geppetto.simulation.

gidili commented on August 16, 2024

@frenkield just tried your samples; looks like major improvements on both the small scene (goes much faster than the one on live.geppetto.org) and the big ones! Some of the gains (on the small scenes) may come from the fact that you are using a more powerful instance compared to live, but the big scenes in particular are rendering at a speed comparable to running on localhost in my experience, if not slightly faster, which is a great result considering we are streaming over the internet. I am noticing a few glitches with the simulation controls (unresponsive pause/start/stop buttons, which could just be triggered by scene size), but other than that this is an impressive improvement already!

I will have a look at the code as soon as I can, but I am eager to see what happens with your other experiments with deflate and to talk about how we can plan for some version of this to make it into the dev branch of Geppetto. You should join us at the next Geppetto dev meeting to discuss this :)

cc: @tarelli

frenkield commented on August 16, 2024

I think the unresponsiveness is due to the queuing of messages for transmission. For the large scenes the transmission rates don't quite keep up with the SPH solver so messages get queued up. The queue size is 20 so when you click stop, for example, the browser still has 20 frames it needs to process.

If the queue is full the transmission thread tosses out the oldest scene update when adding a new one. So you'll see skipped frames in the browser. This was the simplest thing for now but I imagine it's often preferable to preserve all frames and have the solver wait.
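The drop-oldest behaviour described here can be sketched with a bounded deque (queue size 20, as above; "frames" are just integers for illustration):

```python
# Drop-oldest frame queue: when full, each append evicts the oldest frame,
# so the browser sees skipped frames rather than ever-growing latency.
from collections import deque

frames = deque(maxlen=20)
for frame in range(30):   # the solver produces 30 updates
    frames.append(frame)  # once full, appending discards the oldest frame

print(list(frames)[0], list(frames)[-1])  # 10 29 -- frames 0-9 were dropped
```

The alternative mentioned (preserving all frames and making the solver wait) corresponds to a blocking bounded queue instead.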

In any case the code will definitely need to be refactored. I jammed it in where it seemed easiest for now.

And thank you, I'd definitely like to attend the next dev meeting. When is it?

gidili commented on August 16, 2024

@frenkield understood! The next meeting is Tuesday April-14 4-6pm GMT. It's every second Tuesday at the same time. If you can't make that one we can organize another chat just to discuss these improvements you're prototyping. If you send me an email at giovanni-at-openworm.org I will invite you to the Geppetto standing meeting and/or schedule something else if that doesn't work for you.

tarelli commented on August 16, 2024

@frenkield thanks a lot for this, as you could probably see from the history this is an old one.
We can update the script and what gets released to Virgo 3.6.3 no problem.

I was looking at the code; in particular I wonder whether we need something which seems ad hoc for scenes with particles, or whether we can find a more generic, catch-all kind of solution, as we'll also be streaming loads of numbers in other simulations that have no particles at all.

I'm looking forward to meeting you at the Geppetto meeting next Tuesday, thanks again for helping with this!

frenkield commented on August 16, 2024

@tarelli, my pleasure. It's all super interesting.

I tried Virgo 3.6.3 and sadly it uses a version of Tomcat (7.0.53) that doesn't support the deflate option. The deflate option was added in Tomcat 7.0.56 so I assume the next Virgo release will include it. So that feature will probably have to wait. But I'm gonna try to get the 7.0.56 deflate functionality going in Virgo just to see if it actually works.

And yeah, my solution is far from optimal. It's gotta be completely refactored. I think maybe it should be divided into 4 separate features/changes:

  1. Move JSON serialization and message transmission into separate threads. This is pretty minor since it doesn't affect the protocol at all. And it doesn't involve any changes on the client. Currently the solver waits around a bit for message transmission to complete so this change alone helps performance.
  2. Add compression to the JSON protocol. This requires changes on both server and client but it keeps the protocol intact so it's not super invasive.
  3. Add the binary protocol. My current solution is really basic. Someone above in the thread suggested using protocol buffers so that might make it pretty straightforward.
  4. When Virgo support it, remove the manual compression in favor of websocket deflate.

Thanks!

tarelli commented on August 16, 2024

At the Geppetto meeting we agreed with @frenkield to go ahead with the following two points. We'll capture the remaining two in separate cards.

  • Move JSON serialization and message transmission into separate threads. This is pretty minor since it doesn't affect the protocol at all. And it doesn't involve any changes on the client. Currently the solver waits around a bit for message transmission to complete so this change alone helps performance.
  • Add compression to the JSON protocol. This requires changes on both server and client but it keeps the protocol intact so it's not super invasive.

tarelli commented on August 16, 2024

@frenkield how is this going? let me know if I can help in any way!

frenkield commented on August 16, 2024

@tarelli, it's going ok. I have it all working and reasonably well organized.

At the moment I'm trying to add some facilities to make the UI more responsive. Because of the message queuing the UI lags when viewing large scenes. Hopefully I'll have that all going tonight.

vellamike commented on August 16, 2024

Really happy to see this resolved :)

parthipanr commented on August 16, 2024

Is there any method to decompress the gzip-compressed data on the client side?

tarelli commented on August 16, 2024

@parthipanr we are using a library called pako

vellamike commented on August 16, 2024

you have a typo - pako
