
bucket-brigade's People

Contributors

dchudz, dependabot[bot], dspeyer, eedwardsa, gwillen, jeffkaufman, jsoref, raemon, taymonbeal


bucket-brigade's Issues

volume normalization thoughts

  • should be able to easily control volume of (1) backing track, (2) lead singers/instrumentalists, (3) audience as a group, to ensure that everybody can hear all three, but (1) and (2) get a configurable boost based on event runner's (or lead singers') taste.
  • "ahead" of that, should automatically normalize everybody's volume to around the same level before we apply those adjustments (or whatever other adjustments we like, e.g. turning down people who are off-key, if desired.) That can be done automatically by the browser, but I've seen several problems with that, so we should probably manage it ourselves. (One issue I've had, when testing this app, is long held notes getting detected as background noise and turned down to silence. Another issue, which I've had elsewhere when using Jitsi, is background noise, like computer fans, getting detected as quiet input and turned up to a roar.)
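To make the failure modes above concrete, here is an illustrative sketch (in Python; the real client is JavaScript, and all names and thresholds here are made-up placeholders): derive a per-user gain from measured RMS toward a target level, but refuse to adapt on chunks that look like background noise, and clamp the gain so fans can't be boosted to a roar and held notes can't be ducked to silence.

```python
# Illustrative sketch, not the app's actual code: compute a per-user
# normalization gain with guards against the two failure modes above.
# All thresholds are made-up placeholders that would need tuning.
NOISE_FLOOR = 0.01   # below this RMS, assume background noise: don't adapt
TARGET_RMS = 0.1     # level we normalize everyone toward
MAX_GAIN = 4.0       # never boost more than this (avoids fan-roar)
MIN_GAIN = 0.25      # never duck more than this (avoids silencing held notes)

def normalization_gain(measured_rms, previous_gain=1.0):
    if measured_rms < NOISE_FLOOR:
        # Probably not singing: keep the old gain rather than adapting
        return previous_gain
    return min(MAX_GAIN, max(MIN_GAIN, TARGET_RMS / measured_rms))
```

The clamps are what distinguish this from the browser's built-in auto-gain: a quiet fan can be boosted at most 4x, and a long held note can be attenuated at most 4x, rather than driven to silence.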

TypeError: Cannot read property 'server_sample_rate' of null

TypeError: Cannot read property 'server_sample_rate' of null
    at query_server_clock (https://echo.jefftk.com/net.js:242:45)
    at async ServerConnection.start (https://echo.jefftk.com/net.js:51:31)
    at async try_increase_batch_size_and_reload (https://echo.jefftk.com/app.js:1449:5)
    at async MessagePort.handle_message (https://echo.jefftk.com/app.js:1196:7)

When testing in chrome on a mobile device it says I am not using chrome

Device: iPhone XR
Chrome version: 86.0.4240.93

After tapping the dismiss button, it showed the following error:

TypeError: undefined is not an object (evaluating 'navigator.permissions.query')
https://echo.jefftk.com/app.js:162:48
asyncFunctionResume@[native code]
https://echo.jefftk.com/app.js:1632:33
asyncFunctionResume@[native code]
module code@https://echo.jefftk.com/app.js:1698:11
evaluate@[native code]
moduleEvaluation@[native code]
[native code]
promiseReactionJob@[native code]

Cannot decode non-contiguous chunks!

Error: Cannot decode non-contiguous chunks!
    at AudioDecoder.decode_chunk (https://echo.jefftk.com/app.js:721:13)
    at MessagePort.handle_message (https://echo.jefftk.com/app.js:1222:36)

should automatically reconnect if this happens

Add samples

Add a concept of samples, where the system automatically keeps a sample for each user of their most recent singing. Exclude times when they're not actually singing, by not replacing the previous sample if the volume is too low. The person running sound should be able to ask for a sample of any person, to allow tweaking their volume or otherwise figuring out what to do with them.
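A minimal sketch of the idea (hypothetical names; the real client is JavaScript): only replace a user's stored sample when the new audio is loud enough to plausibly be singing.

```python
# Hypothetical sketch of "keep a recent sample per user": replace the
# stored sample only when the new chunk clears a volume threshold.
import math

SINGING_RMS_THRESHOLD = 0.02  # placeholder; would need tuning

def rms(chunk):
    return math.sqrt(sum(x * x for x in chunk) / len(chunk))

class SampleKeeper:
    def __init__(self):
        self.samples = {}  # user_id -> most recent loud-enough chunk

    def offer(self, user_id, chunk):
        # Quiet chunks never overwrite a previous (singing) sample.
        if rms(chunk) >= SINGING_RMS_THRESHOLD:
            self.samples[user_id] = chunk

    def get(self, user_id):
        return self.samples.get(user_id)
```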

Build a mixing console

We should provide an easy way for someone to adjust people's audio levels, since setting them fully automatically is unlikely to work well.

TypeError: Cannot read property 'sample_rate' of undefined

net.js:12 Uncaught TypeError: Cannot read property 'sample_rate' of undefined
    at ServerConnection.get client_window_time [as client_window_time] (net.js:12)
    at app.js:1257
get client_window_time @ net.js:12
(anonymous) @ app.js:1257
requestAnimationFrame (async)
handle_message @ app.js:1232
async function (async)
handle_message @ app.js:1149

stress testing

We should figure out the maximum number of clients the server can handle, and figure out how to scale it if that's not good enough.

Add buses

Add a concept of buses or groups, where the person running sound can group some singers together and bring their volume up and down together.
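One simple way to model this (illustrative sketch, hypothetical names): a user's effective level is their individual gain multiplied by the gain of the bus they're assigned to, so moving one bus fader scales the whole group.

```python
# Illustrative bus model: effective level = user gain * bus gain.
class Mixer:
    def __init__(self):
        self.user_gain = {}  # user_id -> individual gain
        self.user_bus = {}   # user_id -> bus name
        self.bus_gain = {}   # bus name -> group gain

    def assign(self, user_id, bus):
        self.user_bus[user_id] = bus
        self.bus_gain.setdefault(bus, 1.0)

    def effective_gain(self, user_id):
        g = self.user_gain.get(user_id, 1.0)
        bus = self.user_bus.get(user_id)
        return g * self.bus_gain.get(bus, 1.0) if bus else g
```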

TypeError: Cannot read property 'interval' of null

TypeError: Cannot read property 'interval' of null
    at MessagePort.handle_message (https://echo.jefftk.com/app.js:1208:64)

On a couple occasions now, I've gotten this immediately after completing volume calibration.

Clarify ordering of singers

The text currently says "choose a number between 6 and 116", but when you click on "Lead Song" it gives you a number below 6. Also, it's currently important not to pick an order before someone clicks the "Lead" button, which could also be clarified in the text.

Even better would be a more intuitive user interface: e.g. the order dialog could pop up only once someone clicks "Lead".

Computing the user summary is expensive

When testing with 1600 simulated users, I get about twice the throughput if I don't include the user summary.

We are computing it for every user in response to every request, but it seems like we could cache it and only update it when its inputs change.

On the other hand, it's possible that this summary is just not how we want to handle things at all? It doesn't really make sense once you get to a very large number of users, and I'm already truncating it to just the first 50. With very large numbers of users, I think this should really only go to whoever is running the soundboard.
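The caching idea could look something like this (illustrative sketch, hypothetical names): recompute the summary lazily, only after something has invalidated it, instead of once per request.

```python
# Sketch of the caching idea: rebuild the (truncated) summary only when
# a user's state has changed since the last rebuild.
class SummaryCache:
    def __init__(self, build_summary):
        self.build_summary = build_summary  # the expensive computation
        self.dirty = True
        self.cached = None

    def invalidate(self):
        # Call whenever any summary input (a user's state) changes.
        self.dirty = True

    def get(self):
        if self.dirty:
            self.cached = self.build_summary()
            self.dirty = False
        return self.cached
```

With 1600 users mostly just polling for audio, the summary inputs change far less often than requests arrive, so this should recover most of the lost throughput.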

mic calibration should use A-weighting

Currently, we just use the acoustic energy of the input signal and equalize that, which does not match human perception. The main problem is that low notes have a lot of energy but humans don't perceive them as particularly loud, so people with low voices end up much quieter in the mix than people with high voices.

If we used A-weighting when evaluating the volume of the input signal, people would sound more like they were all singing at the same volume.
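The A-weighting curve is standardized (IEC 61672) and easy to compute directly from its analog prescription. A sketch in Python (the client is JavaScript, so this is illustrative); the weight would be applied per frequency band before summing energy:

```python
# Standard A-weighting curve (IEC 61672): relative response in dB at
# frequency f in Hz, approximately 0 dB at 1 kHz.
import math

def a_weighting_db(f):
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    # The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz.
    return 20 * math.log10(ra) + 2.00
```

At 100 Hz the weighting is roughly -19 dB, which is exactly the effect we want: a bass singer's low fundamentals count much less toward their measured loudness.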

App crashed

Hi--I pressed the start button to start the audio test, and I got the following message:

This app has crashed. We're really sorry :-(
Please file a bug with the following information; it will help us fix it.

TypeError: Failed to construct 'MediaStreamAudioSourceNode': required member mediaStream is undefined.
TypeError: Failed to construct 'MediaStreamAudioSourceNode': required member mediaStream is undefined.
    at EventTarget.start_bucket (https://echo.jefftk.com/app.js:562:20)
    at start (https://echo.jefftk.com/demo.js:940:20)
    at async HTMLButtonElement.start_stop (https://echo.jefftk.com/demo.js:483:5)

I'll try again.

(I'm on a Mac (macOS 10.15.7), using Chrome (Version 87.0.4280.67 (Official Build) (x86_64)).)

support rounds

I think we could support singing rounds by writing not only to where the user currently is but also a fixed number of beats earlier. There would need to be a way for users to set the number of beats, and we would need metronome support (#48, including a way to set the tempo)
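Converting "a fixed number of beats earlier" into a write offset is simple arithmetic once the tempo is known (illustrative sketch; the real code would hook into the metronome):

```python
# Sketch: convert a round's delay in beats to a sample offset, so a
# client can write the same audio that many beats earlier in the buffer.
def round_offset_samples(beats, bpm, sample_rate):
    seconds_per_beat = 60.0 / bpm
    return round(beats * seconds_per_beat * sample_rate)
```

For example, a round entering 4 beats later at 120 BPM and 48 kHz corresponds to an offset of 96000 samples.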

Best practices

We should make sure we have all the best practices for using the app together in one place, and show them during the 'tutorial'. For example:

  • No bluetooth headphones / bluetooth audio devices of any kind
  • Wired network (vs WiFi), if possible
  • Close all other tabs, or as many as possible
  • Close any unnecessary apps

(Obviously video chat is effectively necessary for using the app, and most people won't be able to move that to another machine, but it might be worth mentioning as a last resort.)

Server can freeze

When I hit the server really hard during stress testing, the Python process sometimes becomes unresponsive.

First-singer timing note (never assign anybody to time "0")

This isn't exactly an "issue", but something I realized we aren't currently enforcing and maybe should be: The first client should NEVER connect at 0. I think right now we start them at 3, but this isn't enforced anywhere other than the randomization algorithm never picking 0.

The reason is that, now that we've introduced backing tracks, there is effectively always a singer at 0 whenever the backing track is enabled. While the backing track takes up zero time (its 'write pointer' is right at 0), any request for audio before 0, i.e. "in the future", will cause the backing track to skip for the first singer (the same effect as getting too close behind another singer). And if you set your offset to 0, your actual read time can drift slightly negative, depending on various factors.
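Enforcing the invariant rather than relying on the randomizer could be as simple as clamping (illustrative sketch; the constant matches the "start them at 3" mentioned above but is otherwise a placeholder):

```python
# Sketch: clamp any assigned or user-chosen offset to a positive
# minimum, so no client can ever end up reading at (or drifting before)
# time 0, where the backing track writes.
MIN_OFFSET = 3  # earliest slot the first singer may occupy

def clamp_offset(requested_offset):
    return max(MIN_OFFSET, requested_offset)
```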

TypeError: Cannot read property 'postMessage' of undefined

TypeError: Cannot read property 'postMessage' of undefined
    at ClockedRingBuffer.read_into (http://localhost:8000/audio-worklet.js:110:17)
    at AudioWorkletProcessor.process_normal (http://localhost:8000/audio-worklet.js:517:41)
    at http://localhost:8000/audio-worklet.js:641:14
    at AudioWorkletProcessor.try_do (http://localhost:8000/audio-worklet.js:393:7)
    at AudioWorkletProcessor.process (http://localhost:8000/audio-worklet.js:604:10)

Cannot read property 'sampleRate' of undefined

On pressing start, before any calibration started

TypeError: Cannot read property 'sampleRate' of undefined
    at start (https://echo.jefftk.com/app.js:847:33)
    at async HTMLButtonElement.start_stop (https://echo.jefftk.com/app.js:492:5)

Calendar for coordinating shared usage

Keeping track of when people have booked the system is complicated, and I don't want to forget people's times. I think I should make a calendar to track shared usage.

Track user volumes, and send to sound person

The only way the sound person can currently set levels is by listening to someone and seeing how their levels should change. On the other hand, sometimes someone will sing very quietly during volume calibration and then much louder in the real thing. For each user, send their most recent average acoustic energy (RMS of most recent packet) to the person running sound.

This means levels will update every 600 milliseconds and be about 2 seconds out of date, but it's a good place to start.
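The measurement itself is just the RMS of the most recent packet's samples (sketch in Python for illustration; the real measurement would happen in the JavaScript client):

```python
# Sketch of the per-packet level measurement: root-mean-square of the
# packet's samples, sent along with each client update.
import math

def packet_rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))
```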

Enable some kind of "play existing audio backtrack"

It's likely that we'll need to use a backing track for most (all?) songs at Solstice, to prevent a variety of logistical screwups.

Eventually we'll need something fairly complex to handle an entire program of backup tracks, but for now just being able to play an existing link should be fine. I'm not sure about the implementation details.

Computation of time offset when initially connecting to server

There is a long rant in a comment in net.js (around line 240) about the way we compute stuff around the timestamps when we make our initial request to the server. I believe the current code is wrong for subtle reasons, and could be significantly simplified to just compute a client-server time offset, and use that later. (This would also eliminate the problem of storing a server timestamp, which is that we must start processing immediately after doing so, or it will fall behind.)

The current implementation attempts to subtract half the round-trip delay from the number we get from the server, so that we compute a time which is as close to "right now" on the server as possible. However, I believe we should actually subtract the full round-trip delay, which will result in a time which is earlier than server time, by exactly the amount of time it takes packets to get from us to the server, such that packets we send should reach the server at exactly that time.

This would dramatically simplify the calculations, I think it would make them more correct, and it would eliminate the bogus assumption we currently make that round-trip time is symmetrical (which is usually approximately true, but not actually true.)

Improve concurrency

Today our architecture looks like:

  1. Requests come in to Nginx, which handles HTTPS

  2. Nginx reverse proxies to a long-running http.server / BaseHTTPRequestHandler

This has two main problems:

  • We can't take advantage of any concurrency that the server might be able to provide

  • We're getting freezes (#42)

Instead, I think we should be doing something more like:

  1. Requests come in to Nginx, which handles HTTPS

  2. Nginx uses the uwsgi protocol to delegate to a uWSGI server running all the components of the app that do not need global state, primarily Opus decoding and encoding.

  3. For handling global state we run a singleton process, communicating over a UNIX socket or something similar.
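Step 3 could look roughly like this (minimal sketch under stated assumptions: all names, the JSON protocol, and the single-request server loop are illustrative, not the app's actual design). A singleton process owns the only copy of mutable state, and stateless workers delegate mutations to it over a UNIX domain socket:

```python
# Minimal sketch: singleton state owner + stateless worker, talking
# over a UNIX domain socket. The thread stands in for a separate
# process to keep the example self-contained.
import json, os, socket, tempfile, threading

SOCK_PATH = os.path.join(tempfile.mkdtemp(), "state.sock")

def run_state_singleton(ready, requests_to_serve=1):
    """Owns the only copy of mutable global state."""
    state = {"clients": 0}
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(4)
    ready.set()
    for _ in range(requests_to_serve):
        conn, _ = srv.accept()
        with conn:
            delta = json.loads(conn.recv(4096))["delta"]
            state["clients"] += delta
            conn.sendall(json.dumps(state).encode())
    srv.close()

def worker_update(delta):
    """A stateless worker delegating a state mutation to the singleton."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)
    cli.sendall(json.dumps({"delta": delta}).encode())
    reply = json.loads(cli.recv(4096))
    cli.close()
    return reply

ready = threading.Event()
threading.Thread(target=run_state_singleton, args=(ready,), daemon=True).start()
ready.wait()
result = worker_update(1)
print(result)  # {'clients': 1}
```

Because only one process ever touches the state, the uWSGI workers can scale out freely for the CPU-heavy Opus work without any locking.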
