jeffkaufman / bucket-brigade
License: MIT License
TypeError: Cannot read property 'server_sample_rate' of null
at query_server_clock (https://echo.jefftk.com/net.js:242:45)
at async ServerConnection.start (https://echo.jefftk.com/net.js:51:31)
at async try_increase_batch_size_and_reload (https://echo.jefftk.com/app.js:1449:5)
at async MessagePort.handle_message (https://echo.jefftk.com/app.js:1196:7)
Forcing a refresh loses our calibration. Instead, in this case we should show a message saying we're having trouble communicating with the server, and keep retrying in the background.
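One way to implement this (a Python sketch; the names and parameters are invented, and the real client is JavaScript in net.js/app.js) is a retry loop with exponential backoff that surfaces a status message instead of reloading:

```python
import time

def with_retries(op, initial_delay=1.0, max_delay=30.0, on_status=print):
    # Keep retrying `op` with exponential backoff instead of forcing a
    # refresh, so calibration state is preserved.  Sketch only: the
    # function and parameter names are illustrative, not the app's API.
    delay = initial_delay
    while True:
        try:
            return op()
        except Exception:
            on_status("Having trouble communicating with the server; retrying...")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
```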
When testing in Chrome on a mobile device, it says I am not using Chrome.
Device: iPhone XR
Chrome version: 86.0.4240.93
After tapping the dismiss button, it showed the following error:
TypeError: undefined is not an object (evaluating 'navigator.permissions.query')
https://echo.jefftk.com/app.js:162:48
asyncFunctionResume@[native code]
https://echo.jefftk.com/app.js:1632:33
asyncFunctionResume@[native code]
module code@https://echo.jefftk.com/app.js:1698:11
evaluate@[native code]
moduleEvaluation@[native code]
[native code]
promiseReactionJob@[native code]
Error: Cannot decode non-contiguous chunks!
at AudioDecoder.decode_chunk (https://echo.jefftk.com/app.js:721:13)
at MessagePort.handle_message (https://echo.jefftk.com/app.js:1222:36)
should automatically reconnect if this happens
Add a concept of samples, where the system automatically keeps a sample for each user of their most recent singing. Exclude times when they're not actually singing, by not replacing the previous sample if the volume is too low. The person running sound should be able to ask for a sample of any person, to allow tweaking their volume or otherwise figuring out what to do with them.
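A sketch of the sample-keeping rule (names and the threshold value are invented): only replace a user's stored sample when the new packet is loud enough to plausibly be singing.

```python
import math

SING_THRESHOLD = 0.01  # RMS below this is treated as "not singing"; tunable guess

def maybe_update_sample(samples, user_id, packet):
    # Keep each user's most recent packet of actual singing.  Quiet
    # packets (likely silence between songs) do not replace the previous
    # sample, so the sound operator can always pull up a representative
    # clip.  Sketch only; names are illustrative.
    rms = math.sqrt(sum(x * x for x in packet) / len(packet))
    if rms >= SING_THRESHOLD:
        samples[user_id] = packet
```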
We should provide an easy way for someone to adjust people's audio levels, since setting them fully automatically is not likely to work well.
app.js:600 Uncaught Error: sample rate clock slippage excessive; what happened?
at AudioEncoder.encode_chunk (app.js:600)
at async MessagePort.handle_message (app.js:1149)
encode_chunk @ app.js:600
net.js:12 Uncaught TypeError: Cannot read property 'sample_rate' of undefined
at ServerConnection.get client_window_time [as client_window_time] (net.js:12)
at app.js:1257
get client_window_time @ net.js:12
(anonymous) @ app.js:1257
requestAnimationFrame (async)
handle_message @ app.js:1232
async function (async)
handle_message @ app.js:1149
We should figure out the maximum number of clients the server can handle, and how to scale it if that's not good enough.
TypeError: Failed to construct 'MediaStreamAudioSourceNode': required member mediaStream is undefined.
at EventTarget.start_bucket (https://echo.jefftk.com/app.js:562:20)
at start (https://echo.jefftk.com/demo.js:940:20)
at async HTMLButtonElement.start_stop (https://echo.jefftk.com/demo.js:483:5)
I'm not 100% sure what was going on here, but it seemed like if the lead singer picked a backing track, and then lost their wifi, the audio track would keep playing forever.
Add a concept of buses or groups, where the person running sound can group some singers together and bring their volume up and down together.
TypeError: Cannot read property 'interval' of null
at MessagePort.handle_message (https://echo.jefftk.com/app.js:1208:64)
On a couple occasions now, I've gotten this immediately after completing volume calibration.
The text currently says "choose a number between 6 and 116", but when you click "Lead Song" it gives you a number below 6. Also, it's currently important not to pick an order before someone clicks the "Lead" button, which could also be clarified in the text.
Even more ideal would be a more intuitive user interface -- e.g. the order dialog could pop up only once someone clicks "Lead".
When testing with 1600 simulated users, I get about twice the throughput if I don't include the user summary.
We are computing it for every user in response to every request, but it seems to me like we could cache it, and only update it when its inputs change.
On the other hand, it's possible that this summary just isn't how we want to handle things at all. It doesn't really make sense once you get to a very large number of users, and I'm already truncating it to just the first 50. With very large numbers of users, I think this should really only go to whoever is running the soundboard.
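If we do keep the summary, a sketch of the caching idea (names invented): bump a version counter whenever a summary input changes, and rebuild only when the cached copy is stale, instead of recomputing for every request.

```python
class SummaryCache:
    # Recompute the user summary only when its inputs change.  Sketch of
    # the caching approach; names are illustrative, not the server's API.
    def __init__(self, build_summary):
        self.build_summary = build_summary
        self.version = 0
        self._cached_version = -1
        self._cached = None

    def invalidate(self):
        # Call from any state-mutating operation that feeds the summary.
        self.version += 1

    def get(self):
        if self._cached_version != self.version:
            self._cached = self.build_summary()
            self._cached_version = self.version
        return self._cached
```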
Currently, we are just using the acoustic energy of the input signal and equalizing across users, which does not match human perception. The main problem is that low notes have a lot of energy but humans don't perceive them as particularly loud, so people with low voices end up much quieter in the mix than people with high voices.
If we used a-weighting in evaluating the volume of the input signal, people would sound more like they were all singing at the same volume.
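The A-weighting curve has a standard closed analytic form; a quick Python sketch of the per-frequency gain (this is the textbook formula, not code from this repo) that we'd fold into the volume estimate:

```python
import math

def a_weighting_db(f):
    # A-weighting gain in dB at frequency f (Hz), using the standard
    # analytic form; it is ~0 dB at 1 kHz by construction, and strongly
    # attenuates low frequencies (about -19 dB at 100 Hz).
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0
```

Weighting each frequency band's energy by this gain before summing would make low and high voices read as comparably loud.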
Hi--I pressed the start button to start the audio test, and I got the following message:
This app has crashed. We're really sorry :-(
Please file a bug with the following information; it will help us fix it.
TypeError: Failed to construct 'MediaStreamAudioSourceNode': required member mediaStream is undefined.
at EventTarget.start_bucket (https://echo.jefftk.com/app.js:562:20)
at start (https://echo.jefftk.com/demo.js:940:20)
at async HTMLButtonElement.start_stop (https://echo.jefftk.com/demo.js:483:5)
I'll try again.
(I'm on a Mac (macOS 10.15.7), using Chrome (Version 87.0.4280.67 (Official Build) (x86_64)).
I think we could support singing rounds by writing not only to where the user currently is but also a fixed number of beats earlier. There would need to be a way for users to set the number of beats, and we would need metronome support (#48, including a way to set the tempo)
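The extra write position is a simple function of the tempo and the configured beat count; a sketch (names invented, assuming the sample rate is known):

```python
def round_delay_samples(beats, tempo_bpm, sample_rate=48000):
    # How many samples earlier to also write the user's audio so a
    # second group hears them `beats` beats behind.  Requires metronome
    # support (#48) so tempo_bpm is known.  Sketch; names illustrative.
    seconds_per_beat = 60.0 / tempo_bpm
    return round(beats * seconds_per_beat * sample_rate)
```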
I had left myself connected, thinking I would sing a song, do some work, then sing another song. Maybe I can't do that? When I came back to the page, I saw the following message:
Error: Cannot decode non-contiguous chunks!
at AudioDecoder.decode_chunk (https://echo.jefftk.com/app.js:887:13)
at MessagePort.handle_message (https://echo.jefftk.com/app.js:1377:36)
We should make sure we have all the best practices for using the app together in one place, and show them during the 'tutorial'. For example:
(Obviously video chat is effectively necessary for using the app, and most people won't be able to move that to another machine, but it might be worth mentioning as a last resort.)
When I hit the server really hard while stress testing, sometimes the Python program becomes unresponsive.
This isn't exactly an "issue", but something I realized we aren't currently enforcing and maybe should be: The first client should NEVER connect at 0. I think right now we start them at 3, but this isn't enforced anywhere other than the randomization algorithm never picking 0.
The reason for this is that, now that we've introduced backing tracks, there is effectively always a singer at 0 when the backing track is enabled; and while the backing track takes up zero time (its 'write pointer' is right at 0), this still means that any request for audio before 0, i.e. "in the future", will cause the backing track to skip for the first singer (just as getting too close behind another singer does). And if you set your offset to 0, your actual read time can drift slightly negative, depending on various factors.
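A sketch of enforcing this server-side instead of relying on the randomization never picking 0 (the 3-second floor comes from the description above; the function name is invented):

```python
MIN_CLIENT_OFFSET_S = 3  # current randomization floor; proposed hard minimum

def validated_offset(requested_offset_s):
    # Reject offsets at or below the floor so no client can ever read
    # "in the future" relative to the backing track's write pointer at 0.
    # Sketch of the proposed server-side check.
    if requested_offset_s < MIN_CLIENT_OFFSET_S:
        raise ValueError(
            f"offset {requested_offset_s}s is below the minimum "
            f"{MIN_CLIENT_OFFSET_S}s; clients must never connect at 0")
    return requested_offset_s
```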
Per Facebook conversation. These can only be heuristics but that might be good enough.
Something like, "your audio is not being sent to the server"
The microphone button should also be greyed out
If someone just wants to listen they shouldn't have to go through calibration
TypeError: Cannot read property 'postMessage' of undefined
at ClockedRingBuffer.read_into (http://localhost:8000/audio-worklet.js:110:17)
at AudioWorkletProcessor.process_normal (http://localhost:8000/audio-worklet.js:517:41)
at http://localhost:8000/audio-worklet.js:641:14
at AudioWorkletProcessor.try_do (http://localhost:8000/audio-worklet.js:393:7)
at AudioWorkletProcessor.process (http://localhost:8000/audio-worklet.js:604:10)
Uncaught TypeError: Cannot read property 'metadata' of null
at ServerConnection.send (net.js:119)
at async MessagePort.handle_message (app.js:1187)
On pressing start, before any calibration started
TypeError: Cannot read property 'sampleRate' of undefined
at start (https://echo.jefftk.com/app.js:847:33)
at async HTMLButtonElement.start_stop (https://echo.jefftk.com/app.js:492:5)
There will likely be circumstances wherein a user needs to refresh, and it would be good if they could do so and then quickly resume participating.
Keeping track of when people have booked to use the system is complicated, and I don't want to forget people's times. I think I should make a calendar to track shared usage.
I was leading Seasons of Love and there was one other friend.
The website said it crashed, but it was still playing Seasons of Love.
Here's the error code:
Error: Cannot decode non-contiguous chunks!
at AudioDecoder.decode_chunk (https://echo.jefftk.com/app.js:803:13)
at MessagePort.handle_message (https://echo.jefftk.com/app.js:1288:36)
The only way the sound person can currently set levels is by listening to someone and seeing how their levels should change. On the other hand, sometimes someone will sing very quietly during volume calibration and then much louder in the real thing. For each user, send their most recent average acoustic energy (RMS of most recent packet) to the person running sound.
This means levels will update every 600 milliseconds and be about 2 seconds out of date, but it's a good place to start.
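Computing the per-packet figure is cheap; a sketch (the function name is invented):

```python
import math

def packet_rms(samples):
    # Average acoustic energy (RMS) of one packet of float samples,
    # to be forwarded to the sound operator's client.  Sketch only.
    return math.sqrt(sum(x * x for x in samples) / len(samples))
```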
It's likely that we'll need to use a backing track for most (all?) songs at Solstice, to prevent a variety of logistical screwups.
Eventually we'll need something fairly complex to handle an entire program of backup tracks, but for now just being able to play an existing link should be fine. I'm not sure about the implementation details.
There is a long rant in a comment in net.js (around line 240) about the way we compute stuff around the timestamps when we make our initial request to the server. I believe the current code is wrong for subtle reasons, and could be significantly simplified to just compute a client-server time offset, and use that later. (This would also eliminate the problem of storing a server timestamp, which is that we must start processing immediately after doing so, or it will fall behind.)
The current implementation attempts to subtract half the round-trip delay from the number we get from the server, so that we compute a time which is as close to "right now" on the server as possible. However, I believe we should actually subtract the full round-trip delay, which will result in a time which is earlier than server time, by exactly the amount of time it takes packets to get from us to the server, such that packets we send should reach the server at exactly that time.
This would dramatically simplify the calculations, I think it would make them more correct, and it would eliminate the bogus assumption we currently make that round-trip time is symmetrical (which is usually approximately true, but not actually true.)
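One concrete way to realize this (a Python sketch with invented names; the real code lives in net.js) is to fold the full measured delay into a single offset computed once at connection time. Since the server's timestamp marks when our request arrived, pairing it with our local send time yields an offset such that a local send time plus the offset predicts the server clock at arrival, with no symmetry assumption:

```python
def server_time_offset(t_send, server_timestamp):
    # `server_timestamp` is the server clock when our request arrived;
    # `t_send` is our clock when we sent it.  The full uplink delay is
    # baked into the offset, so no half-RTT guess is needed.  Sketch of
    # the simplification proposed above; names are illustrative.
    return server_timestamp - t_send

def predicted_arrival(client_send_time, offset):
    # Server clock at which a packet sent at `client_send_time` should
    # reach the server, assuming the uplink delay stays roughly constant.
    return client_send_time + offset
```

Storing only this offset also removes the need to start processing immediately after storing a raw server timestamp.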
Otherwise when people come to use it they often have the metronome still running
Today our architecture looks like:
Requests come in to Nginx, which handles HTTPS
Nginx reverse proxies to a long-running http.server / BaseHTTPRequestHandler
This has two main problems:
We can't take advantage of any concurrency that the server might be able to provide
We're getting freezes (#42)
Instead, I think we should be doing something more like:
Requests come in to Nginx, which handles HTTPS
Nginx uses the uwsgi protocol to delegate to a uWSGI server running all the components of the app that do not need global state, primarily Opus decoding and encoding.
For handling global state, we run a singleton process, with communication over a UNIX socket or something similar.
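A minimal sketch of the stateful half (the protocol and names are invented): the stateless workers would send small JSON messages over the UNIX socket to a singleton that owns all global state. The pure request handler, separated from the socket plumbing, might look like:

```python
import json

def handle_state_request(state, raw_request):
    # Process one JSON request against the singleton's global state dict.
    # In the real design this would sit behind a UNIX-socket server
    # (e.g. socketserver.UnixStreamServer) that the uwsgi workers
    # connect to.  Sketch only; the protocol here is invented.
    request = json.loads(raw_request)
    if request["op"] == "set":
        state[request["key"]] = request["value"]
        return json.dumps({"ok": True})
    if request["op"] == "get":
        return json.dumps({"ok": True, "value": state.get(request["key"])})
    return json.dumps({"ok": False, "error": "unknown op"})
```

Keeping the handler pure makes the concurrency story simple: only the singleton touches the state, so the scalable workers need no locks.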