ewjmulder / immerse
Immerse yourself in the waves of dynamic surround sound!
License: Other
The performance is currently good enough to run on a Pine64, but not by much. So it might be useful to do some actual profiling to find out what the current bottleneck is. If a large percentage of the time goes to a small function, we can fix that to improve performance.
Currently, when calculating a dynamic volume, only the angle to the speaker is taken into account. But it makes sense that the same angle at a much larger distance should also have its effect on the volume. This might be tricky in the current setup, because normalizing will remove this effect. Maybe it should be some kind of extra feature built into the system that the algorithms should be aware of. Needs some further thinking...
See also notes about this in #24.
Currently a lot of new threads are started for asynchronous tasks. We should use a thread pool for better use of resources.
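A minimal sketch of what that could look like, assuming a single shared fixed-size pool (the class name and pool size are illustrative, not existing Immerse code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncTasks {
    // One shared pool instead of 'new Thread(...)' per task; the size is an assumption.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public static void submit(Runnable task) {
        POOL.submit(task);
    }

    // Orderly shutdown: stop accepting tasks, then wait for running ones to finish.
    public static void shutdown() throws InterruptedException {
        POOL.shutdown();
        POOL.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```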
If all issues for alpha are done, the last thing to do is some extensive real-life testing to see if it all works as intended. A durability test might also be good: running it for about an hour.
Some readme docs are already in place, but it needs:
NOTE: what is the relation between the READMEs and the wiki?
Maybe general description in README and forward to wiki for more details?
First only for Linux, use the /dev/snd/by-path mapping
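A hedged sketch of reading that mapping from Java (Linux-only; the class name is hypothetical, and on systems without /dev/snd/by-path the list is simply empty):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SoundCardDiscovery {
    // /dev/snd/by-path contains stable symlinks named after the physical port,
    // so the same USB port always maps to the same entry across reboots.
    public static List<String> listByPath() {
        List<String> names = new ArrayList<>();
        File byPath = new File("/dev/snd/by-path");
        File[] entries = byPath.listFiles();
        if (entries != null) {
            for (File entry : entries) {
                names.add(entry.getName());
            }
        }
        return names;
    }
}
```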
Consider using the extra USB information from LibUSB (these classes are from the usb4java bindings). Note: the original snippet threw with the wrong result variable and passed null instead of the initialized context; both are fixed below:

import org.usb4java.*;

Context context = new Context();
int result = LibUsb.init(context);
if (result != LibUsb.SUCCESS) {
    throw new LibUsbException("Unable to initialize libusb.", result);
}
// Read the USB device list
DeviceList list = new DeviceList();
result = LibUsb.getDeviceList(context, list);
if (result < 0) {
    throw new LibUsbException("Unable to get device list", result);
}
try {
    // Iterate over all devices and scan for the right one
    for (Device device : list) {
        DeviceDescriptor descriptor = new DeviceDescriptor();
        result = LibUsb.getDeviceDescriptor(device, descriptor);
        if (result != LibUsb.SUCCESS) {
            throw new LibUsbException("Unable to read device descriptor", result);
        }
        System.out.println("Bus number: " + LibUsb.getBusNumber(device));
        System.out.println("Port number: " + LibUsb.getPortNumber(device));
        System.out.println(descriptor.dump());
        System.out.println();
    }
} finally {
    // Ensure the allocated device list is freed
    LibUsb.freeDeviceList(list, true);
}
Spending a bit more CPU continuously in the background is preferred over a stop-the-world pause from time to time.
https://github.com/ewjmulder/tinylog/blob/v1.3/tinylog/src/org/pmw/tinylog/Tokenizer.java
new inner class MaxSizeToken that just cuts off first or last part that is too long
new inner class FixedSizeToken that combines behavior of min-size and max-size
Previous testing showed that all usable sound cards will have plughw:x,0 in their name, with x being a number from 0 to n. One of these will likely be the default. The testing also seemed to suggest that the card in use as the default is not accessible by its plughw:x,0 index. But when testing without any USB sound card, that was not the case. So this should be tested further and the solution should be put into code here:
ScenarioPlayer:initializeStreams
This is the same idea as fixed speaker ratios, but can be implemented even easier when taken into account explicitly.
To minimize the impact of stop-the-world GC, we should produce as little garbage as possible. The main idea is to reuse as many of the data structures used in a step calculation as possible. This can be achieved in various ways, some ideas:
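One such idea, sketched with hypothetical names: preallocate the step buffers once and hand out the same instance every step, so the calculation itself allocates nothing:

```java
public class StepBuffers {
    // Allocated once and reused every step to avoid per-step garbage.
    // The buffer size is an assumption; real code would size it from the audio format.
    private final byte[] frameBuffer;

    public StepBuffers(int bufferSize) {
        this.frameBuffer = new byte[bufferSize];
    }

    // Returns the same array every call, zeroed out; callers must not
    // hold on to the reference across steps.
    public byte[] frameBuffer() {
        java.util.Arrays.fill(this.frameBuffer, (byte) 0);
        return this.frameBuffer;
    }
}
```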
To keep in mind:
Even more dynamic would be to use audio input streams that change based on what is happening. Dynamic sound waves based on movement, etc. But the downside is that you must keep the read buffer very low for that.
Not a feature for the short term!
INFO: Main mixer changed from state NEW to WARMUP
Exception in thread "Warmup Mixer Executor" java.util.ConcurrentModificationException
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at one.util.streamex.AbstractStreamEx.rawCollect(AbstractStreamEx.java:68)
at one.util.streamex.AbstractStreamEx.toSet(AbstractStreamEx.java:1219)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.getScenariosInPlayback(ImmerseMixer.java:156)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.hasScenariosInPlayback(ImmerseMixer.java:147)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.lambda$warmup$2(ImmerseMixer.java:246)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.waitFor(ImmerseMixer.java:472)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.warmup(ImmerseMixer.java:246)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.lambda$initialize$0(ImmerseMixer.java:189)
at java.lang.Thread.run(Thread.java:748)
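The trace suggests the collection behind getScenariosInPlayback is mutated by another thread while a stream is iterating it. A hedged sketch of one common fix, synchronizing access and handing out snapshot copies (class and method names are hypothetical, not the actual ImmerseMixer internals):

```java
import java.util.HashSet;
import java.util.Set;

public class ScenarioRegistry {
    private final Set<String> scenariosInPlayback = new HashSet<>();

    // Guard every access with the same lock, so iteration never overlaps a mutation.
    public synchronized void add(String scenarioId) {
        this.scenariosInPlayback.add(scenarioId);
    }

    public synchronized void remove(String scenarioId) {
        this.scenariosInPlayback.remove(scenarioId);
    }

    // Hand out a snapshot copy: callers can stream/iterate it freely
    // while the mixer keeps mutating the live set.
    public synchronized Set<String> snapshot() {
        return new HashSet<>(this.scenariosInPlayback);
    }
}
```

The cost is one small allocation per snapshot, which slightly conflicts with the garbage-minimization goal elsewhere in these notes; a ConcurrentHashMap-backed set would avoid the copy.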
Can be quite bumpy (even 20ms steps) depending on the hardware.
The bigger the steps, the more risk of hiccups in playback.
So smooth this out by always writing some frames, even when the current frame position has not changed in the sound card streams. We can do this by, for example:
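One possible smoothing step, sketched with assumed names and an assumed minimum frame count (this is a simplistic illustration, not the actual Immerse strategy):

```java
public class FrameSmoother {
    // Always hand at least this many frames to the sound card per step (assumption).
    private static final int MIN_FRAMES_PER_STEP = 64;
    private byte lastSampleByte;

    // Pads a short step so the sound card buffer never runs dry between steps.
    // The remainder is filled with the last known sample byte, a crude way
    // to avoid clicks; a real implementation would repeat whole frames.
    public byte[] pad(byte[] stepBytes, int bytesPerFrame) {
        int minBytes = MIN_FRAMES_PER_STEP * bytesPerFrame;
        if (stepBytes.length >= minBytes) {
            this.lastSampleByte = stepBytes[stepBytes.length - 1];
            return stepBytes;
        }
        byte[] padded = new byte[minBytes];
        System.arraycopy(stepBytes, 0, padded, 0, stepBytes.length);
        java.util.Arrays.fill(padded, stepBytes.length, minBytes, this.lastSampleByte);
        return padded;
    }
}
```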
// TODO: support for secondsPerFullCircle
// TODO: support for circling around any of the axis
Create some kind of mechanism (maybe just plain .properties file) to configure some important parameters from the outside, like:
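A minimal sketch of such a .properties mechanism; the property names and defaults below are invented for illustration, not existing Immerse settings:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ImmerseConfig {
    // Loads configuration from a .properties file, falling back to built-in defaults
    // for any key the file does not override.
    public static Properties load(String path) throws IOException {
        Properties defaults = new Properties();
        defaults.setProperty("mixer.step.min.frames", "64");
        defaults.setProperty("mixer.warmup.seconds", "2");
        Properties config = new Properties(defaults);
        try (InputStream in = new FileInputStream(path)) {
            config.load(in);
        }
        return config;
    }
}
```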
We have 2 measurements:
System.out.println("Frames needed diff: " + (maxFramesNeeded - minFramesNeeded));
We can also try to use the current microsecond position to see the difference between streams.
It will fluctuate but should be close to 0; the diff between the two measurements should also be close to 0. This is not tested yet and it is unknown how reliable it is; maybe it is just based on the frame position.
NB: Actually, nothing has been audibly out of sync so far, so maybe not high priority.
// General remark: domain data does not have much validation itself, you can create whatever you like
// When starting a stream / mixer in audio-streaming, the validation takes place on the domain objects.
Does Immerse work on Windows?
Since we want this to be a general purpose library, better remove the Spring dependency and any big lib that is just used for a small piece. Optionally include a few classes from other open source projects directly to lower the footprint.
The audio-streaming module can run inside a JVM and be controlled directly from there. It would also be nice to be able to run Immerse as a network service that can be controlled over the network.
Send stats to stats gatherer, that will async send stats to a server / process and display for instance avg step time last sec/min/since start.
Use existing tools for gathering/saving and visualization, but maybe some internal class as a proxy that can have different backends: sysout, GUI, network service
Because of the new optimization of the merge method, warmup should have overlapping scenarios to cover the full code base again. That is also good for reduced warmup runtime. Maybe calculate the scenario runtime in advance and start the next one after 30% of the runtime, so at most 3 or 4 scenarios at the same time?
First go low profile for the alpha release, but when reaching a beta release, use Github Pages to get some more attention and a flashy website to point to.
Only for internal code quality: upgrade the Frame concept to a first-class citizen by having some kind of FrameStream in and out as wrappers for the underlying byte streams. That will greatly simplify the code in SoundCardStream. Also for Format we can have a better abstraction than the current Java AudioFormat. See for instance the PyhAudioFormat from the Program Your Home project.
Given the nature of Immerse, a stereo input stream has no added value, but it would be silly to fail on it. Possible handling: give the channel as an extra parameter or merge the channels 'manually' before volume processing. Merging probably means either taking the highest amplitude or the average. Research this somewhat and maybe find a library that can do it. But in general, providing a stereo input stream with quite different sound in the channels is not useful. Also add a warning in the logs about providing a stereo input stream, because it should be prevented if possible.
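A hedged sketch of the 'merge channels manually by averaging' option, for interleaved 16-bit samples (hypothetical helper, not existing Immerse code):

```java
public class StereoToMono {
    // Merges interleaved stereo samples (L, R, L, R, ...) into mono by averaging
    // the two channels. Averaging is one of the two options mentioned above;
    // taking the highest amplitude per frame would be the alternative.
    public static short[] merge(short[] interleaved) {
        short[] mono = new short[interleaved.length / 2];
        for (int i = 0; i < mono.length; i++) {
            mono[i] = (short) ((interleaved[2 * i] + interleaved[2 * i + 1]) / 2);
        }
        return mono;
    }
}
```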
For example:
Implement some kind of StepStatistics that will be filled during the execution of a step and saved/cached/logged/sent somewhere. NOTE: if cached locally, this will influence memory management!
Possible data to include:
Using this info: http://soundfile.sapp.org/doc/WaveFormat/
It should be pretty simple to generate your own wave streams and use those for testing purposes.
Both for unit tests that need an input stream and for a generator that creates a certain sound, although there are better libs for that, since sine wave generation is not that trivial.
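As a minimal test-only illustration (a real generator library would handle aliasing and fades better), 16-bit signed little-endian PCM bytes for a pure sine tone could be produced like this:

```java
public class SineWaveGenerator {
    // Generates mono 16-bit signed little-endian PCM for a sine tone,
    // matching the sample layout described in the WaveFormat reference above.
    public static byte[] generate(double frequencyHz, int sampleRate, int frameCount) {
        byte[] bytes = new byte[frameCount * 2];
        for (int i = 0; i < frameCount; i++) {
            double angle = 2 * Math.PI * frequencyHz * i / sampleRate;
            short sample = (short) (Math.sin(angle) * Short.MAX_VALUE);
            // Little-endian: low byte first, then high byte.
            bytes[2 * i] = (byte) (sample & 0xFF);
            bytes[2 * i + 1] = (byte) ((sample >> 8) & 0xFF);
        }
        return bytes;
    }
}
```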
Everything already seems dynamically calculated, but it makes sense to have separate control over the volume of the audio resource. This can simulate things getting closer or further away, someone turning the volume up or down, talking softer or louder, etc.
Related to this is issue #25, that deals with dynamic volume based on distance. The same problems apply here, namely that in the current setup the normalizing process might be in the way. Or both this and #25 should be realized by some post-processing. That could work for this one, but for #25 we need the distance which might be lost by the time of post processing. Although recalculating is not that hard of course.
It seems like an audio stream generated by SineWaveAudioInputStreamGenerator is not restarting properly.
Scenario:
Scenario scenario = scenario(room, settings(fixed(generate(format, 500, 10_000)), fixed(5, 10, 10), fixed(5, 5, 5),
fixed(fixedSpeakerVolumeRatios), fractional(), forever()));
Output:
2018-04-23 14:41:47 [main] ImmerseMixer.initializeSoundCardStreams() - INFO: Exception for mixer info: 'PCH [plughw:0,0]'. Known Java Sound API issue, falling back to default audio device.
2018-04-23 14:41:47 [main] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state NEW to WARMUP
2018-04-23 14:41:49 [Warmup Mixer Executor] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state WARMUP to INITIALIZED
2018-04-23 14:41:49 [Warmup Mixer Executor] ImmerseMixer.warmup() - INFO: Warmup completed in 1.471 seconds
2018-04-23 14:41:49 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state INITIALIZED to STARTED
2018-04-23 14:41:49 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
Starting the first scenario will give hiccups because of JVM warmup. Currently that is not solved by letting the AudioMixer start early, because there is no scenario and that code is not hit. So create a fake warmup scenario that has an audio input stream of silence or random noise, but with fixed speaker volumes of 0 or so. At least let the scenario code run without producing sound for a few seconds, so it is ready to start the next one at full JVM speed.
Note: also let this warmup scenario restart a few times, because that code part also needs some warmup!
Does Immerse work on OSX?
Test Immerse to see where a possible limitation may be in the amount of sound cards supported / smooth playback, etc.
When there are no active scenarios, do not actually provide all zeros to the hardware.
So do not 'keep the engine running' when there is no music to play.
Need to refill buffers first after new scenario pops up.
Is this a useful feature? What does it solve?
Since the input can be any odd audio file, it would be nice to support as many formats as possible. Currently only PCM signed is supported. But it should be fairly easy to include PCM unsigned and float, since it takes just a slightly different calculation to process the amplitude (volume) alteration.
See: SoundCardStream.calculateFrameBytes
PS: Output will always use PCM signed for simplicity.
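A hedged sketch of the 'slightly different calculation': signed samples scale around zero, while 16-bit unsigned samples are centered on 32768 and need a shift before and after scaling (the helper names are illustrative, not the actual calculateFrameBytes code):

```java
public class AmplitudeScaler {
    // 16-bit signed PCM: silence is 0, so scaling is a plain multiplication.
    public static short scaleSigned(short sample, double volume) {
        return (short) Math.round(sample * volume);
    }

    // 16-bit unsigned PCM: silence is 32768, so shift to a signed range,
    // scale, and shift back. Scaling the raw value directly would pull
    // everything towards 0 instead of towards silence.
    public static int scaleUnsigned(int sample, double volume) {
        return (int) Math.round((sample - 32768) * volume) + 32768;
    }
}
```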
Store TBD: Postgres / MongoDB / ??
What to persist:
For Windows and OSX support, we might consider using PortAudio:
http://portaudio.com/
millisSinceStart: 1507392342686
currentAngleInDegrees: 1.0767088153042857E11
radius: 5.0
x: 9.91740658163002
y: -5.905048347295192
Listener: (5.0, 5.0, 5.0)
Source: (9.91740658163002, -5.905048347295192, 10.0)
Angles: {1=60.70663513433179, 2=118.8112457397187, 3=108.78694786182166, 4=92.52067265399417, 5=21.99335813672193}
{1=0.0, 2=0.0, 3=0.0, 4=0.0, 5=1.0}
millisSinceStart: 2
currentAngleInDegrees: -89.85714285714286
See AudioSystem.isConversionSupported
See also: https://www.javaworld.com/article/2076227/java-se/add-mp3-capabilities-to-java-sound-with-spi.html for using the SPI implementation system. But actually no default conversion seems to be present, so it may be better to make your own choice here.
Actually, isConversionSupported returns true for a lot of possibilities, so do use it if it indeed actually works.
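A minimal example of that check with the standard Java Sound API (an identity conversion between equal formats should always be reported as supported):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;

public class ConversionCheck {
    // Asks the Java Sound API whether it can convert audio data
    // from the source format to the target format.
    public static boolean canConvert(AudioFormat from, AudioFormat to) {
        // Note the argument order: target format first, source format second.
        return AudioSystem.isConversionSupported(to, from);
    }
}
```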
Continuous looping:
Use thread (pool) to read from AudioInputStream to BufferedInputStream or so and control the buffer size explicitly. Initially this can be set high to always have bytes available quickly when needing to read.
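A hedged sketch of that read-ahead idea using a background reader thread and a pipe (names and sizes are assumptions; requires Java 9+ for the try-with-resources form):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReadAheadStream {
    private static final ExecutorService READER_POOL = Executors.newCachedThreadPool();

    // Copies 'source' into a pipe from a background thread; the returned stream
    // serves bytes from the pipe's buffer, so the mixer loop rarely blocks on
    // the underlying source. 'bufferSize' controls how far ahead it can read.
    public static InputStream readAhead(InputStream source, int bufferSize) throws IOException {
        PipedOutputStream sink = new PipedOutputStream();
        PipedInputStream buffered = new PipedInputStream(sink, bufferSize);
        READER_POOL.submit(() -> {
            try (source; sink) {
                byte[] chunk = new byte[4096];
                int read;
                while ((read = source.read(chunk)) != -1) {
                    sink.write(chunk, 0, read);
                }
            } catch (IOException e) {
                // Closing the sink signals EOF (or the broken pipe) to the reader side.
            }
        });
        return buffered;
    }
}
```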
// TODO: add different AudioResources, file and maybe URL if that can be done with local resource (resolves to local file, no network needed)
// TODO: vary the recording mode, sample size, singed-ness (currently not supported by the generator)