
Immerse

Immerse yourself in the waves of dynamic surround sound!

Immerse is used by the Adventure Room project

Use Cases

Immerse is a perfect fit for:

  • An escape / adventure room with realistic audio effects
  • An audio enhanced display at an event, conference or museum
  • An (interactive) audio(visual) art installation or exhibition
  • Any room with some speakers and a demand for a cool audio experience

Immerse is no fit for:

  • Big commercial projects that require support agreements

Features

  • Play any number of audio scenarios over a collection of speakers distributed across a room
  • Stream audio to an unlimited number* of speakers simultaneously
  • Scenarios can respond to events dynamically and in real time
  • Works well on cheap computer, sound card and speaker hardware

(*) Tested with up to 12 speakers; (a lot) more should be no problem

Getting Started

Read the Getting Started section of the documentation.

Modules

For implementation details on the different modules, see:

Releases

  • 0.5.0-ALPHA - 2018-05-01 - Contains all the functionality needed to start using Immerse, but with limited guarantees on stability and robustness.

Road Map

  • 0.8.0-BETA will contain some more features, bug fixes and a more stable system in general. It is planned for autumn 2018.
  • 1.0.0 will be feature complete for general usage of the library and should be stable enough to use in real life situations. It does not have a planned release date yet.

Contributing

Feature requests are welcome! Please use the GitHub issues for that. Pull requests are also welcome, but please get in touch about your plans first if they involve more than a simple bug fix or small correction.

Licence

Immerse is released under the Apache License 2.0.

Issues

Perform extensive alpha testing before release

If all issues for alpha are done, the last thing to do is some extensive real life testing to see if it all works as intended. A durability test might also be good: running it for about an hour.

Minimize the JVM garbage production per step by reusing arrays etc

To minimize the impact of stop-the-world GC, we should produce as little garbage as possible. The main idea is to reuse as much as possible of the data structures used in a step calculation. This can be achieved in various ways, some ideas (see the pooling sketch after this list):

  • Use a StepData object that is part of an ActiveScenario and holds all data structures needed (mainly arrays, maybe also maps?) that can be reused every step
  • These StepData objects then need to be object pooled to prevent littering old gen with these. Apache Commons Pool is a good library for this
  • Another option is to pool the data structures directly, but that introduces more complexity in acquiring and releasing.
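
A minimal pooling sketch, assuming Apache Commons Pool 2; the StepData fields and sizes below are made up for illustration and are not the actual Immerse data structures:

    import org.apache.commons.pool2.BasePooledObjectFactory;
    import org.apache.commons.pool2.PooledObject;
    import org.apache.commons.pool2.impl.DefaultPooledObject;
    import org.apache.commons.pool2.impl.GenericObjectPool;

    // Hypothetical reusable per-step buffers, sized for the maximum amount of frames per buffer.
    class StepData {
        final byte[] frameBytes;
        final double[] speakerVolumes;

        StepData(int maxFrames, int speakerCount) {
            this.frameBytes = new byte[maxFrames * 2]; // 16-bit mono samples
            this.speakerVolumes = new double[speakerCount];
        }
    }

    class StepDataFactory extends BasePooledObjectFactory<StepData> {
        @Override
        public StepData create() {
            return new StepData(4410, 12); // max frames for the full buffer millis, 12 speakers
        }

        @Override
        public PooledObject<StepData> wrap(StepData stepData) {
            return new DefaultPooledObject<>(stepData);
        }
    }

    public class StepDataPoolExample {
        public static void main(String[] args) throws Exception {
            GenericObjectPool<StepData> pool = new GenericObjectPool<>(new StepDataFactory());
            StepData stepData = pool.borrowObject();
            try {
                // ... perform one step calculation, writing into the reused arrays ...
            } finally {
                pool.returnObject(stepData); // back to the pool: no new garbage next step
            }
        }
    }

Pooling the whole StepData object keeps acquiring and releasing in one place, which is the simpler of the two options above.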

To keep in mind:

  • All arrays should be the maximum possible size, i.e. sized for when the amount of frames needed covers the full buffer millis
  • You need the actual size as an extra parameter all over the code handling arrays, because the real array size will be that maximum
  • calculateAmountOfFramesNeeded should never return a higher value than needed for the full buffer millis (the implementation should be checked on this)
  • byte[]'s for the sound card streams should be pooled separately, because they are the only ones 'escaping' from the step and are written to the stream in a separate thread.
  • Measuring is knowing!

Add 'warmup' silenced scenario to get JVM ready to rock

Starting the first scenario will give hiccups because of JVM warmup. Currently that is not solved by letting the AudioMixer start early, because there is no scenario yet and that code is not hit. So create a fake warmup scenario that has an audio input stream of silence or random data, but with fixed speaker volumes of 0 or so. At least let the scenario code run without producing sound for a few seconds, so it is ready to start the next one at full JVM speed.
Note: also let this warmup scenario restart a few times, because that code part also needs some warmup!
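
A minimal sketch of such a silent input stream using only the Java Sound API (the format and duration are just example values; this is not the actual Immerse warmup code):

    import java.io.ByteArrayInputStream;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;

    public class SilenceStream {

        // Build an AudioInputStream of all-zero frames: silence for signed PCM.
        public static AudioInputStream silence(AudioFormat format, int seconds) {
            int frameCount = (int) (format.getFrameRate() * seconds);
            byte[] silentFrames = new byte[frameCount * format.getFrameSize()];
            return new AudioInputStream(new ByteArrayInputStream(silentFrames), format, frameCount);
        }

        public static void main(String[] args) {
            AudioFormat format = new AudioFormat(44100, 16, 1, true, false); // 44.1 kHz, 16-bit, mono, signed, little endian
            AudioInputStream warmupInput = silence(format, 3);
            System.out.println("Silent warmup frames: " + warmupInput.getFrameLength());
        }
    }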

Optimization: stop the streams when there is only silence

When there are no active scenarios, do not actually provide all 0's to the hardware.
So do not 'keep the engine running' when there is no music to play.
The buffers will need to be refilled first after a new scenario pops up.

Is this a useful feature? What does it solve?

Change volume based on distance between listener and source

Currently, when calculating a dynamic volume, only the angle to the speaker is taken into account. But it makes sense that the same angle at a much bigger distance should also have its effect on the volume. This might be tricky in the current setup, because normalizing will remove this effect. Maybe it should be some kind of extra feature built into the system that the algorithms should be aware of. Needs some further thinking...
See also notes about this in #24.
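
One possible shape of such a distance factor, sketched under assumptions (inverse-distance falloff, a reference distance of 1 meter, applied after normalization so it is not cancelled again); this is not an existing part of Immerse:

    public class DistanceVolume {

        // Inverse-distance falloff relative to a reference distance, clamped to a maximum of 1.0.
        // Closer than the reference distance plays at full volume; twice the distance halves the volume.
        public static double distanceFactor(double distance, double referenceDistance) {
            if (distance <= referenceDistance) {
                return 1.0;
            }
            return referenceDistance / distance;
        }

        public static void main(String[] args) {
            double angleBasedVolume = 0.8; // result of the existing angle-based calculation (after normalizing)
            double volume = angleBasedVolume * distanceFactor(4.0, 1.0); // source 4 meters from the listener
            System.out.println(volume); // 0.2
        }
    }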

Set up GitHub Pages

First go low profile for the alpha release, but when reaching a beta release, use GitHub Pages to get some more attention and a flashy website to point to.

Persist configuration data

Store TBD: Postgres / MongoDB / ??

What to persist:

  • Speaker Matrix
  • Sound card data
  • Sound card to speaker mapping
  • Room and Scenario data

Add support for dynamic audio streams - low prio!

Even more dynamic would be to use audio input streams that change based on what is happening. Dynamic sound waves based on movement, etc. But the downside is that you must keep the read buffer very low for that.

Not a feature for the short term!

Detect physical mapping of sound cards

For now only for Linux: use the /dev/snd/by-path mapping.
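
A small sketch of reading that mapping (assumes a Linux system where ALSA created the /dev/snd/by-path symlinks):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class SoundCardByPath {
        public static void main(String[] args) throws IOException {
            // Each symlink name encodes the physical (PCI/USB) location, its target points to the ALSA card.
            try (DirectoryStream<Path> links = Files.newDirectoryStream(Paths.get("/dev/snd/by-path"))) {
                for (Path link : links) {
                    System.out.println(link.getFileName() + " -> " + Files.readSymbolicLink(link));
                }
            }
        }
    }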

Consider using the extra USB information from LibUSB:

    // Requires the usb4java library on the classpath.
    import org.usb4java.Context;
    import org.usb4java.Device;
    import org.usb4java.DeviceDescriptor;
    import org.usb4java.DeviceList;
    import org.usb4java.LibUsb;
    import org.usb4java.LibUsbException;

    public class UsbDeviceScanner {
        public static void main(String[] args) {
            Context context = new Context();
            int result = LibUsb.init(context);
            if (result != LibUsb.SUCCESS)
                throw new LibUsbException("Unable to initialize libusb.", result);
            // Read the USB device list, using the initialized context
            DeviceList list = new DeviceList();
            int result2 = LibUsb.getDeviceList(context, list);
            if (result2 < 0)
                throw new LibUsbException("Unable to get device list", result2);

            try {
                // Iterate over all devices and scan for the right one
                for (Device device : list) {
                    DeviceDescriptor descriptor = new DeviceDescriptor();
                    result = LibUsb.getDeviceDescriptor(device, descriptor);
                    if (result != LibUsb.SUCCESS)
                        throw new LibUsbException("Unable to read device descriptor", result);

                    // Bus and port number together identify the physical USB location
                    System.out.println("Bus number: " + LibUsb.getBusNumber(device));
                    System.out.println("Port number: " + LibUsb.getPortNumber(device));
                    System.out.println(descriptor.dump());
                    System.out.println();
                }
            } finally {
                // Ensure the allocated device list is freed and libusb is shut down
                LibUsb.freeDeviceList(list, true);
                LibUsb.exit(context);
            }
        }
    }

Add support for stereo input streams

Given the nature of Immerse, a stereo input stream has no added value, but it would be silly to fail on it. Possible handling: pass the channel as an extra parameter, or merge the channels 'manually' before volume processing. Merging probably means taking either the highest amplitude or the average. Research this somewhat and maybe find a library that can do it. But in general, providing a stereo input stream with quite different sound in the two channels is not useful. Also add a warning in the logs when a stereo input stream is provided, because it should be prevented if possible.
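
A hedged sketch of the 'merge manually' option, averaging the two channels of 16-bit signed little-endian PCM (the sample layout is an assumption; Immerse does not contain this yet):

    public class StereoToMono {

        // Average left and right samples of 16-bit signed little-endian stereo PCM into mono.
        public static byte[] mergeToMono(byte[] stereo) {
            byte[] mono = new byte[stereo.length / 2];
            for (int frame = 0; frame < stereo.length / 4; frame++) {
                int left = (short) ((stereo[frame * 4 + 1] << 8) | (stereo[frame * 4] & 0xFF));
                int right = (short) ((stereo[frame * 4 + 3] << 8) | (stereo[frame * 4 + 2] & 0xFF));
                int merged = (left + right) / 2;
                mono[frame * 2] = (byte) (merged & 0xFF);
                mono[frame * 2 + 1] = (byte) ((merged >> 8) & 0xFF);
            }
            return mono;
        }
    }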

Fix bug: millisSinceStart can be 'underflown'

millisSinceStart: 1507392342686
currentAngleInDegrees: 1.0767088153042857E11
radius: 5.0
x: 9.91740658163002
y: -5.905048347295192
Listener: (5.0, 5.0, 5.0)
Source: (9.91740658163002, -5.905048347295192, 10.0)
Angles: {1=60.70663513433179, 2=118.8112457397187, 3=108.78694786182166, 4=92.52067265399417, 5=21.99335813672193}
{1=0.0, 2=0.0, 3=0.0, 4=0.0, 5=1.0}
millisSinceStart: 2
currentAngleInDegrees: -89.85714285714286
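
The first millisSinceStart value above equals a raw epoch timestamp, which suggests the calculation ran before the scenario start time was recorded (so it was still 0). A defensive sketch, with made-up field names, of how the calculation could be guarded:

    public class ElapsedMillis {

        private long startMillis = -1; // -1 means: not started yet

        public void markStarted() {
            this.startMillis = System.currentTimeMillis();
        }

        // Never report elapsed time before the start has actually been recorded, and never a negative value.
        public long millisSinceStart() {
            if (this.startMillis < 0) {
                return 0;
            }
            return Math.max(0, System.currentTimeMillis() - this.startMillis);
        }
    }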

SineWaveAudioInputStreamGenerator restart issue

Seems like an audio stream generated by SineWaveAudioInputStreamGenerator is not restarting properly.

Scenario:
    Scenario scenario = scenario(room, settings(fixed(generate(format, 500, 10_000)), fixed(5, 10, 10), fixed(5, 5, 5),
            fixed(fixedSpeakerVolumeRatios), fractional(), forever()));

Output:
2018-04-23 14:41:47 [main] ImmerseMixer.initializeSoundCardStreams() - INFO: Exception for mixer info: 'PCH [plughw:0,0]'. Known Java Sound API issue, falling back to default audio device.
2018-04-23 14:41:47 [main] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state NEW to WARMUP
2018-04-23 14:41:49 [Warmup Mixer Executor] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state WARMUP to INITIALIZED
2018-04-23 14:41:49 [Warmup Mixer Executor] ImmerseMixer.warmup() - INFO: Warmup completed in 1.471 seconds
2018-04-23 14:41:49 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer changed from state INITIALIZED to STARTED
2018-04-23 14:41:49 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer restarted scenario Scenario
2018-04-23 14:41:59 [Main Mixer Worker] ImmerseMixer.lambda$32() - INFO: Main mixer started scenario Scenario

Warmup should have overlapping scenarios

Because of the new optimization of the merge method, warmup should have overlapping scenarios to cover the full code base again. That is also good for a reduced warmup runtime. Maybe calculate the scenario runtime in advance and start the next one after 30% of that runtime, so a maximum of 3 to 4 scenarios at the same time?

StepStatistics gathering for more insight

Implement some kind of StepStatistics that is filled during the execution of a step and then saved/cached/logged/sent somewhere (a rough sketch of such an object follows the list below). NOTE: if cached locally, this will influence memory management!

Possible data to include:

  • amount of frames needed
  • step millis
  • number of scenarios (re)started/stopped
  • ???
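
A rough sketch of such a value object (the fields mirror the list above; the names are illustrative only):

    public class StepStatistics {

        private final int amountOfFramesNeeded;
        private final long stepMillis;
        private final int scenariosStarted;
        private final int scenariosRestarted;
        private final int scenariosStopped;

        public StepStatistics(int amountOfFramesNeeded, long stepMillis,
                int scenariosStarted, int scenariosRestarted, int scenariosStopped) {
            this.amountOfFramesNeeded = amountOfFramesNeeded;
            this.stepMillis = stepMillis;
            this.scenariosStarted = scenariosStarted;
            this.scenariosRestarted = scenariosRestarted;
            this.scenariosStopped = scenariosStopped;
        }

        @Override
        public String toString() {
            // Loggable one-liner; if instances are cached locally instead, keep the cache bounded.
            return "StepStatistics [framesNeeded=" + this.amountOfFramesNeeded + ", stepMillis=" + this.stepMillis
                    + ", started=" + this.scenariosStarted + ", restarted=" + this.scenariosRestarted
                    + ", stopped=" + this.scenariosStopped + "]";
        }
    }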

Smoothen amount of frames to write

The amount of frames to write can be quite bumpy (even in 20 ms steps), depending on the hardware.
The bigger the steps, the more risk of hiccups in playback.
So smoothen by always writing some frames, even when the current frame position has not changed in the sound card streams. We can do this by, for example (see the sketch after this list):

  • Taking some kind of linear interpolation for the frame position based on past measurements
  • Using a real time clock to calculate the approximate frame position based on the previous (calculated) frame position
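
A hedged sketch of the second idea, estimating the frame position from a real-time clock when the reported position has not moved (the frame rate and names are assumptions, not existing Immerse code):

    public class FramePositionEstimator {

        private final double frameRate; // frames per second, e.g. 44100
        private long lastReportedPosition;
        private long lastReportedAtNanos;

        public FramePositionEstimator(double frameRate) {
            this.frameRate = frameRate;
        }

        // Call whenever the sound card stream reports a new frame position.
        public void update(long reportedPosition) {
            this.lastReportedPosition = reportedPosition;
            this.lastReportedAtNanos = System.nanoTime();
        }

        // Estimate the current position from the wall clock, so some frames can always be written
        // even when the reported position is 'bumpy' and has not changed yet.
        public long estimateCurrentPosition() {
            double elapsedSeconds = (System.nanoTime() - this.lastReportedAtNanos) / 1_000_000_000.0;
            return this.lastReportedPosition + (long) (elapsedSeconds * this.frameRate);
        }
    }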

Write README documentation for 0.5.0-ALPHA

Some readme docs are already in place, but it needs:

  • more structure
  • a good general intro
  • a getting started section
  • some explanation about the audio-streaming internals

NOTE: what is the relation between the READMEs and the wiki?
Maybe a general description in the README and forward to the wiki for more details?

To incorporate in documentation - domain remarks

// General remark: domain data does not have much validation itself, you can create whatever you like
// When starting a stream / mixer in audio-streaming, the validation takes place on the domain objects.

Investigate and activate Github plugins

For example:

  • build server (what's its name)
  • quality check (10 points company)
  • public kind of QA server thingy (PMD, Checkstyle, FindBugs, etc)
  • security report?
  • auto deploy to maven central upon release

Manual syncing / keeping track of sync status of soundcard streams

We have 2 measurements:

  1. System.out.println("Frames needed diff: " + (maxFramesNeeded - minFramesNeeded));

  2. We can also try to use the current micro position to see the difference between streams.

The diff for 1 will fluctuate but should stay close to 0. The diff for 2 should also be close to 0, but this is not tested yet and it is unknown how reliable it is, maybe it is just based on the frame position.

NB: So far nothing is audibly out of sync, so maybe this is not high priority.
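
A small sketch of the second measurement, comparing the microsecond positions of the open lines (assumes access to the SourceDataLine per sound card; not existing Immerse code):

    import java.util.List;
    import javax.sound.sampled.SourceDataLine;

    public class SyncCheck {

        // Spread between the most and least advanced stream, in microseconds.
        // Should stay close to 0 if the sound card streams are in sync.
        public static long microPositionSpread(List<SourceDataLine> lines) {
            if (lines.isEmpty()) {
                return 0;
            }
            long min = Long.MAX_VALUE;
            long max = Long.MIN_VALUE;
            for (SourceDataLine line : lines) {
                long position = line.getMicrosecondPosition();
                min = Math.min(min, position);
                max = Math.max(max, position);
            }
            return max - min;
        }
    }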

Immerse library should not be dependent on Spring

Since we want this to be a general purpose library, it is better to remove the Spring dependency and any big lib that is just used for a small piece. Optionally include a few classes from other open source projects directly, to lower the footprint.

Introduce extra audio abstractions like Frame and Format to make code more readable

Only for internal code quality: upgrade the Frame concept to a first class citizen by having some kind of FrameStream in and out as wrappers for the underlying byte streams. That will greatly simplify code in SoundCardStream. Also for Format we can have a better abstraction than the current Java AudioFormat. See for instance the PyhAudioFormat from the Program Your Home project.

Test and document behavior of plughw:x,0

Previous testing showed that all usable sound cards will have plughw:x,0 in their name, with x being a number from 0 to n. One of these will likely be the default. But it also seemed to suggest that the one in use as default is not accessible by its plughw:x,0 index. But when testing without any USB sound card, that was not the case. So this should be further tested and the solution should be put into code here:
ScenarioPlayer:initializeStreams
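
A small sketch of how the available plughw mixers could be listed for such a test, using only the standard Java Sound API:

    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Mixer;

    public class ListPlughwMixers {
        public static void main(String[] args) {
            // Print all mixers with a plughw index in their name, to check which ones are actually accessible.
            for (Mixer.Info info : AudioSystem.getMixerInfo()) {
                if (info.getName().contains("plughw")) {
                    System.out.println(info.getName() + " - " + info.getDescription());
                }
            }
        }
    }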

Add support for PCM unsigned & PCM float

Since the input can be any odd audio file, it would be nice to support as many formats as possible. Currently only PCM signed is supported. But it should be fairly easy to include PCM unsigned and float, since it takes just a slightly different calculation to process the amplitude (volume) alteration.

See: SoundCardStream.calculateFrameBytes

PS: Output will always use PCM signed for simplicity.
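
A hedged illustration of the 'slightly different calculation', using 8-bit samples for brevity: unsigned PCM is centered around 128 instead of 0, so the volume scaling has to happen around that midpoint (generic PCM math, not the actual SoundCardStream.calculateFrameBytes code):

    public class PcmVolume {

        // 8-bit signed PCM: samples are centered around 0, so scale directly.
        public static int scaleSigned8(int sample, double volume) {
            return (int) Math.round(sample * volume);
        }

        // 8-bit unsigned PCM: samples are centered around 128, so scale the offset from that midpoint.
        public static int scaleUnsigned8(int sample, double volume) {
            return (int) Math.round(128 + (sample - 128) * volume);
        }

        public static void main(String[] args) {
            System.out.println(scaleSigned8(100, 0.5));   // 50
            System.out.println(scaleUnsigned8(228, 0.5)); // 178: the same amplitude in unsigned representation
        }
    }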

Monitoring Immerse with graphical stats tool

Send stats to a stats gatherer that will asynchronously send them to a server / process and display, for instance, the average step time over the last second / minute / since start.
Use existing tools for gathering/saving and visualization, but maybe add some internal class as a proxy that can have different backends: sysout, GUI, network service.

Fix concurrent modification exception during warmup

INFO: Main mixer changed from state NEW to WARMUP
Exception in thread "Warmup Mixer Executor" java.util.ConcurrentModificationException
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1558)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at one.util.streamex.AbstractStreamEx.rawCollect(AbstractStreamEx.java:68)
at one.util.streamex.AbstractStreamEx.toSet(AbstractStreamEx.java:1219)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.getScenariosInPlayback(ImmerseMixer.java:156)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.hasScenariosInPlayback(ImmerseMixer.java:147)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.lambda$warmup$2(ImmerseMixer.java:246)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.waitFor(ImmerseMixer.java:472)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.warmup(ImmerseMixer.java:246)
at com.programyourhome.immerse.audiostreaming.mixer.ImmerseMixer.lambda$initialize$0(ImmerseMixer.java:189)
at java.lang.Thread.run(Thread.java:748)
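
The trace shows a plain HashMap being streamed on the warmup thread while the mixer worker modifies it. A hedged sketch of one possible fix, swapping in a concurrent map (the field and key/value types are placeholders, not the actual ImmerseMixer code):

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.stream.Collectors;

    public class ScenarioRegistry {

        // A ConcurrentHashMap can be iterated while another thread adds or removes entries:
        // iteration is weakly consistent and never throws ConcurrentModificationException.
        private final Map<String, Object> scenariosInPlayback = new ConcurrentHashMap<>();

        public Set<String> getScenariosInPlayback() {
            return this.scenariosInPlayback.keySet().stream().collect(Collectors.toSet());
        }

        public boolean hasScenariosInPlayback() {
            return !this.getScenariosInPlayback().isEmpty();
        }
    }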

Make various properties configurable

Create some kind of mechanism (maybe just a plain .properties file) to configure some important parameters from the outside, like (see the sketch after this list):

  • buffer millis
  • step sleep time
  • warmup config
  • etc
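
A minimal sketch of the plain .properties option; the file name, keys and defaults are made up for illustration:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class ImmerseSettings {
        public static void main(String[] args) throws IOException {
            Properties properties = new Properties();
            try (InputStream in = new FileInputStream("immerse.properties")) {
                properties.load(in);
            }
            // Fall back to a default when a key is missing.
            int bufferMillis = Integer.parseInt(properties.getProperty("buffer.millis", "30"));
            int stepSleepMillis = Integer.parseInt(properties.getProperty("step.sleep.millis", "5"));
            boolean warmupEnabled = Boolean.parseBoolean(properties.getProperty("warmup.enabled", "true"));
            System.out.println(bufferMillis + " / " + stepSleepMillis + " / " + warmupEnabled);
        }
    }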

Add option for dynamic volume, independent of other features

Everything already seems to be dynamically calculated, but it makes sense to have separate control over the volume of the audio resource. This can simulate things getting closer or further away, or something turning the volume up / down, talking softer / louder, etc.

Related to this is issue #25, which deals with dynamic volume based on distance. The same problems apply here, namely that in the current setup the normalizing process might be in the way. Or both this and #25 should be realized by some post-processing. That could work for this one, but for #25 we need the distance, which might be lost by the time of post-processing. Although recalculating it is not that hard, of course.
