to-the-sun / amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. See the README for instructions and feel free to message me at soundcloud.com/to_the_sun.

License: GNU Affero General Public License v3.0

Max 99.91% Python 0.02% JavaScript 0.08%
songwriting recording automation midi songs instrument audio music rhythm video-game

amanuensis's People

Contributors: ajmaln, to-the-sun

amanuensis's Issues

Random initialization of pitches within an octave for singingstream's instruments

Repository

https://github.com/to-the-sun/amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.
If you want to try it out, please get a hold of me! Playtesters wanted!

Details

A little while back, the "octave" parameter saved for each device by singingstream.maxpat was removed, to allow easier access to the hotkeys in octave -2. The downside is that when a new controller is initialized, the pitches randomly assigned to its buttons lie all over the spectrum and may be fairly incongruous to play together. Ideally, for the random initialization only, the original functionality would be reinstated, assigning pitches within a single octave and thereby giving the player a specific modality or scale to operate in.

Components

Pitch is chosen in the initialize_pitch subpatcher of singingstream.maxpat.

Basically, the functionality to initialize an octave at random for each device will need to be integrated back into it: the random and + objects replaced with a quick check of the device's octave and a random 12 (12 pitches in an octave). The old code for initializing the octave is the right-hand portion of the now-defunct p initialize_devices:

It must be noted that octaves are chosen per device whereas pitches are chosen per control on each device, so you will need to use a sprintf symout %s==|>device to get the device name from dict ---specs before checking for the octave with the above code.

Also, there are 11 octaves to choose from (hence the random 11), but the lowest one should actually be excluded, because that is where the hotkeys lie. We don't want to go in circles making the same mistake here, so the random 11 will need to be altered as well.
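The intended logic, sketched in Python rather than Max (the 11-octave count and the random 12 come from the description above; treating the excluded hotkey octave as octave 0 is an assumption):

```python
import random

PITCHES_PER_OCTAVE = 12   # a random 12 picks the pitch class
TOTAL_OCTAVES = 11        # the old random 11 picked the octave
# The lowest octave holds the hotkeys, so it is excluded here.

def initialize_device(num_controls):
    """Pick one random octave per device, then one random pitch class
    per control, so every control on the device lands in the same
    octave -- a single modality/scale for the player to operate in."""
    octave = 1 + random.randrange(TOTAL_OCTAVES - 1)  # skip octave 0
    return [octave * PITCHES_PER_OCTAVE + random.randrange(PITCHES_PER_OCTAVE)
            for _ in range(num_controls)]
```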

Deadline

This request should take no more than five days to complete.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Track recording LEDs get stuck on at start-up

Repository

https://github.com/to-the-sun/amanuensis

Details

I thought this bug had been fixed, but apparently not completely. Looking for someone to investigate the causes and potentially fix it.

  • Expected behavior
    Each track has a circular LED that lights up when recording is on and turns off when not recording. Obviously, when the program first starts up this LED should be off.

  • Actual behavior
    For some reason, something that happens while using the program causes this LED to be on (on a per-track basis) from the beginning the next time you start it up.

  • How to reproduce
    As you can see in the following video, the LED is on from the very moment the program starts up. My first hunch was that it was somehow being kept on by the pattr system and sure enough, if I open up the list of client objects there is an "led" under each track and the one in question is set to 1.

    If this is the case, then the way to reproduce it would be to close the program while recording is still going on (and therefore the LED is also on).

Windows 10 / Max 7 / Python 3.5

  • Recording Of The Bug

https://youtu.be/EQ6psQdoGRg

Components

However, the LED really should not be exposed to the pattr system, and I'm not sure how it is. So the first place to begin the investigation would be to open up track.maxpat

and try to figure out how it's being exposed, or if there really is some other reason for it to get stuck on like this.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

I will like to write about us for your blog

Hello, I have read and understood the concept of this work, so I would like to write about it for your blog if you give me the opportunity. I promise you will like it. Thanks and regards.

Auto-count VST presets

Repository

https://github.com/to-the-sun/amanuensis

Details

VSTs loaded into The Amanuensis automatically load a random preset when selected. For this preset to be chosen, the total number of presets that VST has must be known. As it is, the program looks this up in [coll synthTaste.txt] but the information must initially be entered manually. It shouldn't be too hard to automate this initial count.

Each VSTi in the chosen folder could be temporarily loaded into [vst~], the list of all its presets queried and counted, and the count stored in [coll synthTaste.txt]. The expected format is an index consisting of the plug-in's file name (including extension but excluding the path) as a symbol, referring to a simple integer that specifies the total number of presets. For example:

"AAS Player.dll", 152;

One could actually forgo storing this information in a [coll] by loading up each plug-in to query it every time information is needed, but that seems unnecessarily taxing resource-wise. So it would probably be best to only do this initially, and also to check first to see if the [coll] already has the information stored from a previous load up.
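The check-then-count flow could look like this sketch (Python stand-in for the Max patch; `query_presets` is a hypothetical callback representing loading the plug-in into [vst~] and querying its preset list):

```python
import os

def preset_count(plugin_path, cache, query_presets):
    """Return the number of presets for a plug-in, loading and counting
    only on a cache miss. `cache` mimics [coll synthTaste.txt]: keyed by
    file name with extension but without the path."""
    name = os.path.basename(plugin_path)
    if name not in cache:                       # only count if not stored
        cache[name] = len(query_presets(plugin_path))
    return cache[name]
```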

Here is a mockup of the patch that should be created:

(mockup screenshot)

Components

The required messages will come through the objects at the top of the mockup. Because they can be [receive]d from anywhere, there is no specific component that will be modified by this change. A standalone patch can be created that achieves the desired actions, and it can be placed anywhere in the program as a whole.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Replace [seq~] with Gen

Repository

https://github.com/to-the-sun/amanuensis

Details

Currently all playback cues are stored in a seq~, but I have found discrepancies in this object's timing. Basically, it cannot be known where within the current signal vector its read or write heads are, relative to other objects relying on that same signal, causing things to occur out of order on rare occasions. For example, it's possible for a cue to be added and then immediately played as the seq~ reaches that point a split second later.

The most ideal way for the program to operate would actually be to handle all of this in Gen instead, allowing for sample-accurate control over recording and playing. It could potentially even speed things up.

All of the relevant information for an audio cue could be stored in a buffer, with a channel for each element of the cues. This buffer could be referenced and manipulated from outside of Gen as well as from within. The functions of seq~ could be replicated in this manner, with a sample of the buffer for each audio cue stored. The most efficient strategy for cleaning up empty samples once audio cues are deleted (and consolidating space in the buffer) can be discussed.
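A toy model of the buffer-of-cues idea (Python, purely illustrative; the real implementation would live in a gen~ codebox with a multi-channel buffer~, and the slot recycling here stands in for the space-consolidation question raised above):

```python
class CueBuffer:
    """One slot ("sample") per cue; one field per cue element
    (standing in for one buffer channel per element)."""
    def __init__(self):
        self.slots = []          # each slot: dict of cue fields, or None
        self.free = []           # recycled slot indices

    def add(self, start, length, track):
        cue = {"start": start, "length": length, "track": track}
        if self.free:                      # consolidate: reuse freed space
            slot = self.free.pop()
            self.slots[slot] = cue
        else:
            slot = len(self.slots)
            self.slots.append(cue)
        return slot

    def delete(self, slot):
        self.slots[slot] = None
        self.free.append(slot)

    def due(self, playhead):
        """Sample-accurate lookup: cues whose start equals the playhead."""
        return [i for i, c in enumerate(self.slots)
                if c is not None and c["start"] == playhead]
```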

Components

The main area of revamping would be p midiPallete in organism.maxpat, the location of the seq~. Actually this subpatcher would probably be removed entirely. Importing and exporting functionality would need to be replicated for a buffer, as well as the seq~ itself, but once done these components could probably be moved anywhere, most likely near or directly connected to the gen~ progression in p brain.

Other than that, one would need to look for the places where audio cues are added or deleted. All audio cues are currently added in p add_cues

and are currently deleted in cueCleanup of machine.maxpat, as well as in various places in theCrucible of organism.maxpat. However, with a buffer, all deletions will need to be handled via the method in cueCleanup, because searching for specific cues will not be so easy. In other words, cues will need to be deleted as the song comes upon them for playback, based on whether or not they still lie within an active span (of coll A_tracks).

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Add Mac compatibility

I think the most pertinent change to be made now that this is an open-source project would be to include the half of the Max population using Macs.

Repository

https://github.com/to-the-sun/amanuensis

Components

The only thing I can think of that is Windows-only is the use of [DOShack] in 2 places in the code. The first is the call to start the Python script that does the actual rhythmic analysis (located in the [bestowConsciousness] subpatcher of organism.maxpat) and the second creates the folder for the project files to be stored in (located in the [createProjectFolder] subpatcher of machine.maxpat).

(screenshots)

Proposal

It's my understanding that the Mac equivalent of [DOShack] is [shell]. So basically, the section of code in each of those patchers containing [DOShack], along with the formatting objects preceding it, should be duplicated using [shell] instead. I say duplicated because I think it would be best to keep both portions of code side by side; that way the program as a whole will work on either platform, as whichever object is not meant for that platform will simply fail to load, while the other still will. Obviously the formatting and commands necessary to create a folder or launch a file with [shell] are different, and that will need to be part of the change as well.

Actually, if I remember correctly, [py] is a Mac external that can run a Python script internally. This would definitely be more ideal than launching the script externally and sending messages to and from it via UDP, as is currently the case. It would require more modifications, circumventing the sending and receiving of UDP that takes place in the [brain] subpatcher of organism.maxpat, but would be an option as well.
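The platform-split idea can be sketched in Python (the command strings below are assumptions, not the actual [DOShack]/[shell] message formatting):

```python
import platform

def launch_command(script="consciousness.py", system=None):
    """Pick the command for launching the analysis script, mirroring the
    side-by-side [DOShack]/[shell] duplication: both branches exist and
    the one matching the current platform is used."""
    system = system or platform.system()
    if system == "Windows":
        return ["cmd", "/c", "python", script]   # [DOShack]-style path
    return ["python3", script]                   # [shell]-style path (Mac)
```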

Creating option for opening VST GUIs

Repository

https://github.com/to-the-sun/amanuensis


Details

Currently, when playing VST instruments the user must rely on presets, because there is no way to actually open the VSTs' GUIs. Personally I have never had a problem with this, but many if not most others would feel stifled without the ability, so this task request is for enabling that option. It'd be great if someone could handle this while I work on other things!

VSTs are loaded into [vst~] and the command to open up its GUI is a simple open message sent to the object.

The other half of the request would be placing an "open VST GUI" option of some kind on The Amanuensis's UI as well.

Components

The backend portion of this request would involve synth.maxpat and the abstraction found within it, vsti.maxpat.

Two things will need to happen here: first, a message designating the track/channel will need to be sent to that [gate 16] and immediately after, the open message sent through it.

That message will then have to navigate its way down to the actual [vst~] object. These objects are created dynamically with scripting, so they're not present in the above screenshot, but messages are delivered to them through the right inlet of [join @triggers 1]. Really it's not as complicated as it looks. A simple [routepass open] at the very beginning of vsti.maxpat may be all it takes.

On the front-end (sound.maxpat), I'd like to keep things as uncluttered as possible, so perhaps an option can be appended to the synth umenu to the effect of "[open the selected VST's name here UI]" and when chosen, the explained functionality is executed. No extra buttons would then be required, but the menu would need to have its options updated dynamically.

Deadline

I'm hoping for the right person this would be an easy task. Maybe something that takes no more than 10 days.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Implement automatic sidechain compression on playback tracks

Repository

https://github.com/to-the-sun/amanuensis

Details

One of the lofty goals of this system is to automate the mixing process as fully as possible as well. So far some compressors have been utilized, but that's about the extent of it. If you have any ideas on what an "auto-mixer" would look (and sound) like, I would love to discuss the notion with you. I have a few ideas.

One is an automatic ducking system that detects transients (i.e., more staccato moments in tracks) and momentarily lowers the amplitude of less transient tracks at those times. It seems like a good idea in theory. Again, any comments are welcome.

I do already have a pretty robust transient detection patch that wouldn't require many modifications to utilize. If you're interested, get a hold of me.

Components

I imagine a fairly modular subpatcher could be created that encompasses this functionality. It could be a pretty fun project on its own. In this project the subpatcher would be placed in polyplaybars~.maxpat, which plays each recorded note individually in a separate voice of a poly~. It could be the last thing the signal flows through before leaving the patcher, first calculating the transient signal and sending it off to other voices, and then applying the transient signal from other voices in a subtractive sort of way.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Final recording cut off song beginning

Repository

https://github.com/to-the-sun/amanuensis

Details

I caught a bug on video. Looking for someone to investigate the causes and potentially fix it.

  • Expected behavior
    On the last loop through (with no user input during that time), the program records the entire extent of the song, and this is exported as a WAV file. As you can see in the video, this "song" is 18 seconds long, so the exported audio file should be that duration as well.

  • Actual behavior
    The captured WAV file
    https://soundcloud.com/to_the_sun/2018-5-18-21-38-23
    wound up being only 11 seconds long. The first seven seconds were cut off.

  • How to reproduce
    This is the only time I've noticed this happening, so it may be tricky to reproduce. Try just using The Amanuensis as normal and see if, with a decent amount going on and multiple tracks involved, it happens again. Simply catching it happening again and sending me a link to the project folder (which includes a detailed log file) would be extremely helpful!

Windows 10 / Max 7 / Python 3.5

Components

This final recording takes place in p producer in machine.maxpat, specifically record~ ---product, ---product being a buffer with ample room for any length of song. Cues to start or stop recording (1 or 0) come through r ---record~ and when finished, the buffer is cropped to the total length of the song in p crop.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Disable recording source menu when recording

Repository

https://github.com/to-the-sun/amanuensis

Details

For each track, The Amanuensis preps a large buffer, referred to throughout the program as a "palette", into which you record everything necessary for loops to eventually play back. These buffers are pretty massive right now, to avoid ever running out of space; I believe they take up close to 1 GB of memory each. For this reason they are loaded only as necessary, according to the selections made in each track.

If the recording source is set to anything other than "nothing", a "palette" is loaded for that track. This can take a few seconds to happen and is without a doubt noticeable each time you change the recording source settings. If you try to change the recording source once the palette has already begun to be used, you will obviously screw things up. For this reason, the possibility of doing this should be disabled on the UI; currently it is not.

This lockout should take effect on the recording source menu while the program is "locked" (i.e., it is looping and the progression ramp is running). This state is conveyed through s A_phasorLock as a 1 or 0. On a 1, the "nothing" option in the menu should become disabled if something other than "nothing" is already selected, and the rest of the options (or perhaps just the menu itself) should become disabled if "nothing" is already selected. They should then be re-enabled on a 0.
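The enable/disable logic amounts to a small truth table; a sketch (Python; the option labels are placeholders):

```python
def disabled_options(phasor_lock, current_source):
    """Given the A_phasorLock state (1/0) and the currently selected
    recording source, return which menu options should be disabled."""
    if not phasor_lock:
        return set()                       # everything re-enabled on 0
    if current_source == "nothing":
        return {"<all other options>"}     # or disable the whole menu
    return {"nothing"}                     # palette in use: can't unload
```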

Components

The umenu in question is located in track.maxpat, which should be the only file involved.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Bypass analysis latency

Repository

https://github.com/to-the-sun/amanuensis

Details

Quite a while back the issue arose of the rhythmic analysis being too much computationally for the program to handle at the same time it was dealing with all the necessary DSP. Lagging and choppiness would occur with every note played. My solution was to move the analysis to an external Python script and communicate back and forth with it via UDP. This successfully offloaded the processing burden, but created a new issue, namely the latency required to make that round-trip. All sorts of bugs and admittedly somewhat-sloppy fixes have occurred in the meantime, calculating and compensating for this delay-gap.

It would therefore be an immense alleviation to find a way around this latency. It should be possible; the assumption has always been that the analysis needs to take place before each note triggers the decision of whether to start or stop recording, but if the analysis comes through at its own pace and simply updates a table with its results, notes can immediately look up their recording commands in it without delay.

This would mean the analysis is based on every prior note but not the note itself; this shouldn't actually be a problem. If anything, one would expect that in determining whether a beat falls into a rhythm, that beat should not yet be included in the rhythm.
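The table-lookup decoupling might be sketched like this (Python; the interval keys and 1/0 encoding follow the description above, everything else is an assumption):

```python
class RecordLookup:
    """The analysis fills a table at its own pace; each note reads its
    record/stop command from the table immediately, with no wait for
    the UDP round trip."""
    def __init__(self):
        self.table = {}                 # interval -> 1 (record) / 0 (stop)

    def analysis_done(self, changed):
        self.table.update(changed)      # only changed values are sent back

    def note_arrived(self, interval):
        return self.table.get(interval, 0)   # instant lookup, no latency
```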

Components

If this strategy had been implemented sooner, far less of the program would be affected, but as it is, changes may need to be made in a wide range of components. The primary one is the Python script, consciousness.py. Rather than calculating a simple likelihood variable (a one or zero), it will need to generate every value that will or will not return a likelihood of one, so they can be used to fill a table back in the main program. This will result in less traffic than it might seem at first, as only a few dozen values are likely to change upon each analysis.


Basically, instead of simply deciding whether the incoming interval is at the top of a plateau, each aggregate that is incremented will need to be analyzed to see which portions of that range are or are not a plateau. The values that change can be sent back via UDP. Certain other variables will still also need to be sent in the usual manner, such as lock and tempo.

Additionally, the interval value will need to be calculated in the main program rather than the Python script, so it can be used for table-lookup. This change and whatever other slight modifications would need to be made in the [p collectPre-Cog] subpatcher of organism.maxpat, the place where messages are assembled before being sent off via UDP.


Initially, and until these changes can be tested as functional, most of the variable points back in the main program can just be sewn together directly. For example, [v pre-Gen~Moment] can simply be set with the frame value (from the 0th index of the stats buffer) at the same time the interval is sent off to the Python script for analysis. The Python script's return messages arrive in [p receiveStats] (of organism.maxpat), so this would be the primary place to look for making these modifications.


Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

help

hey! i saw your post on the of forums. do you still need help? just sent me a mail, let's chat there if you like: [email protected]

greetings!

singing streams sometimes don't recalibrate

In singingStream.maxpat:

the "recalibrate" button doesn't always seem to work. As a useful enhancement, it would be better if some text were displayed to the user signaling when the recalibration period is about to begin and when it ends.

Separate read/write heads

Repository

https://github.com/to-the-sun/amanuensis

Details

Currently recording and playback follow the same loop; where the playback is being heard at any given time is the same place where new recording will occur. However there could be some significant advantages in utilizing separate read and write heads.
This would mean the song would not be extended until after a new recording has been captured, rather than having the playback follow along in silence with the write head as it records beyond the bounds of the song, extending it in real time. The read head could loop over what's already been recorded while the write head records off into space, giving the user a continual backing track without big gaps of silence.
Not having to extend the total length of the song in real time would simplify a lot of code as well. It could potentially even make things run faster, considering certain things would only update at broad intervals rather than in a continual fashion.
It would also alleviate a major complexity stemming from the fact that every note has a variable delay as it waits for the UDP round trip to the Python script and back before its analysis can be used. There are still-unresolved bugs caused by that lag spanning the loop point between the end and the beginning of the song. But if the playback is on its own loop, it doesn't matter when the analysis (and therefore the command to start or stop recording) comes back.
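The decoupled heads, per sample (a Python sketch of the ramp logic that would live in the gen~ codebox):

```python
def step(read_pos, write_pos, song_length):
    """Advance the decoupled heads by one sample: the read head loops
    over the committed song, while the write head runs freely past the
    end, so the song only grows once a new recording is committed."""
    read_pos = (read_pos + 1) % song_length if song_length else 0
    write_pos += 1
    return read_pos, write_pos
```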

Components

Most if not all of the changes would take place in the untitled [gen~] object located in the [p brain] subpatcher of organism.maxpat. The codebox within it is responsible for generating the ramp which specifies the current position in the song. The "ramp" variable there as well as [send~ ---phasor] and the continually updated fifth index of the "stats" buffer that it feeds would be the affected points. It remains to be seen whether anything beyond that scope would need to be altered.


Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Create a loading message while MIDIguitar loads

Repository

https://github.com/to-the-sun/amanuensis


Details

The newly implemented integration of MIDIguitar with Singing Stream allows the plug-in to be loaded dynamically on a per-input basis. This means the user could cause this to happen at any time via UI controls, not just at start-up. When it does, it causes a bit of lag, which the user may not be expecting and which is otherwise unexplained.

Therefore, this task request is for creating a loading "screen" for Singing Stream while this is happening. What I really mean by "loading screen" is a semi-transparent panel laid over the window with a comment explaining what's happening, in exactly the same manner as when inputs themselves are being loaded (see following screenshot). Because this functionality already exists in Singing Stream, it shouldn't be too difficult to augment it in this way.

Components

On the main UI of singingstream.maxpat there is a panel with a scripting name of message and a comment with a scripting name of message_text. These will be the ones in question and everything can be executed using scripting. If you open up p UI in p target, you can see how this is already being done.

I would create a new subpatcher using p UI as a template. Everything can be sent through s ---scripting, connected to a thispatcher in the main patch. It may look complicated at first, but all you'll need to account for are

  • sending script send message_text set <insert text here>, script send message_text presentation_rect 1. 45. 230. 100., script send message_text hidden 0, script send message hidden 0 to initialize the message
  • sending script hide message, script hide message_text to hide the message
  • sending script send Sources ignoreclick $1, script send Activate/Deactivate ignoreclick $1, script send Recalibrate ignoreclick $1, script send Sustain ignoreclick $1, script send Channel ignoreclick $1, script send Octave ignoreclick $1, script send pitch ignoreclick $1, script send tonal ignoreclick $1 to prevent user interaction while the loading screen is up, where $1 is 0 or 1 as appropriate.

Deadline

I would expect that this should take no more than three days.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Disable UI elements while the program is busy

Details

If you know a little Max/MSP, I could use your help:

When the program starts up there is a window that pops up labeled "Preloading drum samples". This is a relic from before there was a UI, but once it closes it signals the moment the user can begin. Before that time, the program is busy loading things and the user should really not be attempting to change UI elements, etc.

Therefore, the first task I would like to request would be to send ignoreclick scripting messages to all of the UI elements in the program, preventing clicks while things are loading and then re-allowing them when loading is done. These moments are denoted by 1s and 0s coming through r ---conscious?: 0 when clicks should be ignored and 1 when they should be allowed. Scripting messages can be sent to thispatcher objects, placed as needed, and each UI object will need to be given a scripting name.

Components

The UI objects all exist in Amanuensis.maxpat and the bpatchers visible on its presentation view, track.maxpat and sound.maxpat, so this is where all the coding would need to be done.

Deadline

This task should take no more than three days to complete.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Samples menu not changing to reflect current sample

Repository

https://github.com/to-the-sun/amanuensis

Details

  • Project Name: The Amanuensis

  • Expected behavior
    In a track set up to play samples, there is a umenu that displays the current sample (either just played or selected). It should change to correspond to the sample heard with each incoming note.

  • Actual behavior
    This umenu is not changing either when notes come in or even when you manually select a different sample using the menu itself.

  • How to reproduce
    Start up The Amanuensis and make sure you have a drum samples folder chosen (in "settings"). Using the menu, set a track to record and play "samples" and play some MIDI through that track (if you don't have any other MIDI controller, you can always use the keyboard letter keys, in the formation of a piano, exactly like in Ableton Live if you're familiar). You should now be able to observe the aberrant behavior.

Windows 10 / Max 7 / Python 3.5

  • Recording Of The Bug
    This video demonstrates exactly what I've described. [Forgive the use of voice recognition; my hands are useless stumps at the moment, but I'll spare you that sob story]
    https://www.youtube.com/watch?v=f5AB-PH4Rkk

Components

The umenu in question is located in sound.maxpat. It remains to be seen where else from there the issue may lead.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Create a Python script that periodically appends log entries to a text file

Repository

https://github.com/to-the-sun/amanuensis


Details

I'm looking for someone who knows a little Python and could write a little script for this project.

For the sake of the diagnostic process, a log file is continually compiled documenting the behavior of the program as it moves along. I have been running into issues with this log getting too large. The object it is stored in, a [dict], eventually runs up to hundreds of thousands of lines and starts taking up a lot of processing power just to store new entries. Therefore I am going to need to start clearing it periodically, writing a file to disk each time. Rather than having to write a new log file every time, it would be ideal to be able to append to the same text file. As far as I'm aware, this is not possible in Max, but it is with Python.

Components

All of the entries to be stored in the log are sent via UDP to a standalone version of log.maxpat.

Rather than have to get information after it's already been stored here, it may be ideal to just send it to the Python script immediately as it runs in the background. I already have a UDP receiver written in the Python script that handles the rhythmic analysis, consciousness.py, so that part would be taken care of.
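As a rough illustration, a minimal version of such a script could look like the sketch below. The port number and file name here are placeholders, not values the project actually uses, and in practice the receiving end would likely be folded into the UDP receiver already in consciousness.py:

```python
import socket

LOG_FILE = "amanuensis_log.txt"  # placeholder file name (assumption)
UDP_PORT = 7400                  # placeholder port (assumption)

def append_entry(path, text):
    # Opening in append mode ("a") keeps adding to the same file across
    # runs -- the behavior that Max itself can't easily provide.
    with open(path, "a", encoding="utf-8") as log:
        log.write(text + "\n")

def run_logger(port=UDP_PORT, path=LOG_FILE):
    # Listen for log entries arriving as UDP datagrams and append each
    # one to the log file as a line of text.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, _addr = sock.recvfrom(4096)
        append_entry(path, data.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    run_logger()
```

Because appending is a single `open(..., "a")` call, the file never has to be rewritten as it grows, no matter how large it gets.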

Deadline

I'm hoping that for the right person this would be an easy task, maybe something that takes no more than 10 days.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Create a UI element denoting the presence of user input

Repository

https://github.com/to-the-sun/amanuensis

Details

One piece of information that would be useful to have, and is not currently represented on the UI, is whether any given track has user input or not. This was previously visible on the presentation view of the bpatcher for organism.maxpat, which you can still see if you go into the patching view of the main Amanuensis.maxpat: a toggle labeled "user input". It was not, however, track-specific.

The perfect place to integrate this element would be in the top left corner of each track. When recording is taking place in any given track, an LED lights up there, emulating the ubiquitous circular "record" symbol (see screenshot).

I figure when there is user input on that track, but it is not recording, the LED could be lit up but not filled in, in other words just a circle. I think the circle is the appropriate symbol in that it is basically "a loop", which is what the track will be doing if there is user input on it.

After your last played note, the program gives you the length of its "memory span" (the default is eight seconds, set in "settings") to play another before it considers user input to have stopped. This would be the most useful moment to know whether or not there is still "user input"; if the song reaches its end without any, it will not loop but will instead trigger export.
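For clarity, the timeout logic described above can be sketched in Python. The class and method names are purely illustrative; the real logic lives in the Max patches (polyinput.maxpat):

```python
import time

MEMORY_SPAN = 8.0  # seconds; the default "memory span" from "settings"

class InputTracker:
    """Tracks whether a track still counts as having user input."""

    def __init__(self, memory_span=MEMORY_SPAN):
        self.memory_span = memory_span
        self.last_note_time = None

    def note_played(self, now=None):
        # Record the time of the most recent note on this track.
        self.last_note_time = time.monotonic() if now is None else now

    def has_user_input(self, now=None):
        # User input "stops" once a full memory span passes with no note.
        if self.last_note_time is None:
            return False
        now = time.monotonic() if now is None else now
        return (now - self.last_note_time) < self.memory_span
```

The proposed hollow-circle LED would simply mirror `has_user_input()` for its track whenever the track is not recording.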

Components

The UI component of this request would alter track.maxpat. The information needed about user input could be gleaned easily from polyinput.maxpat, which is loaded into a poly~ with a voice for each track (channel).

The output of that change is exactly where you would want to start working.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

First few notes are replayed after end of final loop

Repository

https://github.com/to-the-sun/amanuensis

Details

This bug has been happening for a while. Looking for someone to investigate the causes and potentially fix it.

  • Expected behavior
    When the song exits its final loop and gets ready to export, it should not play any more notes but simply wait for whatever sounds were happening to fade off, before doing so.

  • Actual behavior
It seems that the beginning few notes of the entire song get looped back to and played at this point. The program must then wait for these notes to fade away before exporting. However, it doesn't seem like they actually wind up on the final recording. I have noticed while looking through log files that the ramp that drives the progression of the song does not seem to be locking cleanly at the end of everything; it should read 1.0 when the song is over, but it's always something like 1.02….

  • How to reproduce
Not sure exactly when it started, but it happens every time you finish a song, so it won't be hard to reproduce. Just get a song going and notice that when it finishes, you hear a little something extraneous. An example can be seen in the following video.

Windows 10 / Max 7 / Python 3.5

Components

The first place to begin the investigation is in progression.genesp located in p brain of organism.maxpat, the place where the ramp that controls the progression of the song is generated.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Replace [coll synthTaste.txt] with a [dict]

Repository

https://github.com/to-the-sun/amanuensis

Details

This task request is related to issue #5 and should be relatively straightforward. At this point [coll synthTaste.txt] is a bit of a derelict object, and all it is currently used for is to look up the total number of presets for any VST. However, this requires saving an additional text file. The cleaner solution would be to save these numbers in a [dict] and expose that to the pattr system, so that the information will automatically be saved with Amanuensis.json.

Components

As you can see in the screenshot, there are currently only three places where the coll is used. Swapping these out for a dict should be relatively easy. Obviously the commands to store and retrieve data will need to be changed accordingly. The information can be stored in essentially the same format (see issue #5).

image

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Enable multiple settings presets

Repository

https://github.com/to-the-sun/amanuensis

Details

The Amanuensis saves a file called Amanuensis.json which stores all of the general settings for the program, including audio driver, folders, the tolerance and memory span, compressor settings, gain levels, as well as all of the last used sounds for each track. This information is stored and recalled through a single preset (number 1) via the pattrstorage object. However, it could be useful to store multiple presets so that if the system is taken to a different context (recording studio versus home, for instance) alternate settings could be recalled quickly.

Components

pattrstorage is designed to accommodate multiple presets very easily. One would only need to send the pattrstorage in the bottom left of Amanuensis.maxpat (selected in the screenshot) "store x" and "x" messages to store and recall presets as appropriate, where x is the new preset number.

Some sort of quick-and-dirty UI scheme could be thrown together at first, with simple number boxes labeled with something like "store current settings as preset number" and "recall settings preset number". Eventually this functionality would probably be best placed at the top of the settings menu pop-up window, but it could just be "under the hood" in the patching view of Amanuensis.maxpat for now.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Create a new menu for cueing up old projects

Repository

https://github.com/to-the-sun/amanuensis

Details

This task request is to create a new pop-up window on the main UI, similar to the "settings" menu that already exists, but filled with options for loading old projects. It would take the place of the "cue up a random project" button, which would then appear in that window.

image

The Amanuensis is designed to make the songwriting process move as quickly as possible. However, if you don't finish a song all in one sitting, the ability to load it up another day and pick up where you left off is desirable. This is the purpose of the "cue up a random project" button. This task request is largely to implement an option to cue up a specific project, instead of just a random one. I figure a menu could be populated based on entries in coll catalog.txt (the same place the random projects are chosen from) and selections could simply follow the same code, minus the random choosing of course.

Along with cueing random projects and specific ones, there is already a function to load a backing track, which allows you to cue up any audio file to play in the background. This option should already be functional and would just need to be added to the new UI. It fits alongside the other options in that it too can be used to start up an old project; the portion already recorded into the backing track would simply be set in stone, which is not the case when loading the actual project files, but sometimes this is desirable.

Components

You can see the old button for a "backing track" on the presentation view of machine.maxpat:
image

Simply hook that up to the new window on the new UI. When a song name is deliberately chosen from the menu populated by catalog.txt, it can be fed directly into the marked point in this screenshot (in [p chooseSong] of machine.maxpat)

image

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Refresh catalog before each query of it

Repository

https://github.com/to-the-sun/amanuensis

Details

When the "cue up random project" button is clicked, coll catalog.txt is referenced for the list of available projects. Catalog.txt is checked each time the program starts up, and any projects whose folders no longer exist are deleted from it. However, if a project folder is deleted while the program is running, it will have no knowledge of this, so ideally the check would be made each time catalog.txt is referenced, not just at startup.
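The check itself amounts to filtering the catalog entries by whether their folders still exist on disk. Purely as an illustration (the actual check is done in Max, and the function name here is hypothetical):

```python
import os

def prune_catalog(project_folders):
    # Keep only the projects whose folders still exist on disk; running
    # this before every query, not just at startup, keeps the list
    # accurate even if folders are deleted while the program is running.
    return [folder for folder in project_folders if os.path.isdir(folder)]
```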

Components

The subpatcher that handles this check and any deletions is called [p checkfordeletedfiles] and is located in machine.maxpat. It would be a simple change to have a receive trigger the loadbang again whenever necessary. Unfortunately, that buddy stands in the way. It could be replaced with my abstraction specialbuddy, which is more of a combination of buddy and bondo.

The place in the code where this check would need to be called is located in [p startfeed] just before anything reaches [p choosesong].

Really, this is some of the more ancient code in the program and could stand to be revamped. For one, [coll ---catalog] would ideally be replaced with [coll catalog.txt], which would load the external file implicitly, eliminating the need for buddy. If this were done, the replacement should be made in every place coll ---catalog appears, which I believe is about half a dozen places (you can search from the topmost Amanuensis.maxpat with control+F).

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Disable the MIDIguitar UI element when not applicable

Repository

https://github.com/to-the-sun/amanuensis

Details

MIDIguitar is a plug-in/program that is designed to translate the raw signal from a guitar into MIDI notes. I have found it very useful in combination with The Amanuensis in providing the accompanying MIDI required to run it (aside from the fun of playing cool synth sounds with your guitar). It also works with vocals, or really any signal that is tonal.

For that reason I have just integrated it with The Amanuensis. This takes all the setup out of the question; all you have to do is choose the input channel for it to analyze. However, I realize most people do not own the software; therefore this task request is simply to disable the UI elements referring to it when it is not present.

I have found the best way to do this sort of thing is to use scripting, sending ignoreclick messages as appropriate to the numbox and the comment that reads "MIDIguitar on input". To make it more obvious visually, I usually put a semitransparent panel on top of them and hide and show it with scripting (if you open up singingstream.maxpat, you can see an example of this).

To determine if the user does not have the plug-in, my first thought would be to utilize the error object to detect the error this generates.

Components

The only component involved will be the main UI on the patcher Amanuensis.maxpat. This screenshot shows the new UI elements I am referring to in the center of the top.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Convert the "preloading drum samples" pop-up window to an on-screen message

Repository

https://github.com/to-the-sun/amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.

If you want to try it out, please get a hold of me! Playtesters wanted!

Details

There is a window that pops up when The Amanuensis first loads, entitled Preloading Drum Samples. This task request would be to convert that window into a message on the main Amanuensis.maxpat UI, displaying the same information, but indicating more intuitively that the program is busy and not yet ready for user interaction.

This "message" would be in the exact same style singingstream.maxpat uses, with a semi transparent panel overlaid above the UI (but under the message comment itself) dynamically scripted to show and hide on command.

Components

If you open up singingstream.maxpat you can see how these UI elements are arranged on the presentation view:

They would need to be emulated in Amanuensis.maxpat as well. For an example of how the scripting works, look to p UI (in p target):

The subpatcher responsible for the pop-up window to be replaced is preloadsamples, in p sampler of organism.maxpat. Because of the way it's set up, opening that subpatcher triggers the loading of samples and the subsequent closing of the window, so you actually have to cause a quick stack overflow first, by clicking one of the buttons connected to each other right next to it, which disables the closing functionality so you can edit it (sloppy, right? Well, this task would alleviate that):

The first thing to do would be to delete the two message boxes leading to the thispatcher that are responsible for popping up and closing the window.

At this point you'll be left with an ordinary subpatcher, which is just what you want, and really the only thing you have to do is co-opt the messages that right now are updating the comment on the subpatcher's presentation view.

Deadline

This request should take no more than seven days to complete.

Communication

Reply to this post or contact me through Github for more details. I can also be found on discord @to_the_sun.

Proof of Work Done

https://github.com/to-the-sun

Automatically close singingstream when the amanuensis closes

Repository

https://github.com/to-the-sun/amanuensis

Details

As you can see in the following screenshot, singingstream.maxpat is loaded automatically when The Amanuensis starts up by sending the loadunique message to pcontrol. However, there is currently no functionality to close it automatically when The Amanuensis closes. This task request is for adding that functionality.

As far as I'm aware, pcontrol does not offer a way to close a window opened with loadunique. If there is one, that would be the ideal way to do it; otherwise a message will need to be sent to singingstream.maxpat when Amanuensis.maxpat freebangs, triggering a dispose message (not wclose) to thispatcher.

Components

Amanuensis.maxpat and singingstream.maxpat are the two components involved in this request.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Import singingstream.maxpat's saved presets on startup

Repository

https://github.com/to-the-sun/amanuensis

Details

As of right now, singingstream.maxpat saves a file called singingstream.json with all of the specs for each stream, including pitches, channel, calibration, etc., but it does not actually load this information when it starts up. For this file to be useful, this will need to be implemented, the idea being that it will save your settings so you don't have to reset them every time.

The first thing that comes to mind is simply sending import singingStream.json to dict ---specs on load, as you can see selected in the screenshot,

but then a lot of ordering issues begin to rear their ugly heads. Other things are happening on load as well, such as information put forth about audio drivers and human interface devices. It needs to be made sure that dict ---specs is updated before those events try to access it.

The cleanest way to do it would be to instead simply load the .json file implicitly using the functionality of the object itself, i.e. naming it dict singingStream.json instead. This however would mean renaming it everywhere it appears in the program, which is dozens of places. Ideally though, this is what needs to be done.

Components

As you can see from the search (control+F) in the above screenshot, it occurs in singingstream.maxpat 27 times. If you open Amanuensis.maxpat and search there as well, you will come across all the rest of its uses (I think it was 38 times?). A tedious task, but not such a difficult one.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

i really want to contribute to this project

I want to contribute to this project. I'm good at writing privacy policies, About Us pages, FAQs, and any other form of writing. If you need anything in the way of copywriting, I will be glad to help and contribute. Thanks and regards.

Create a new pop-up window for shuffling sounds

Repository

https://github.com/to-the-sun/amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.

If you want to try it out, please get a hold of me! Playtesters wanted!

Details

If you open up midiports.maxpat you will see a button labeled "shuffle instrument", followed by "[toggle] on new song [toggle] every [number] s". Shuffling your instrument (or more accurately, using the current terminology that has evolved, your "sound") causes the program to move the instrument you're playing to a new track, as well as give you a new sound to work with.

I won't get into the details, because this task request is simply to move these controls into their own menu on the current UI and give them a makeover so they are in line with the current style. In the following screenshot you can see the Settings and Projects pop-up windows and the buttons that open them. What I'm asking would be to create a similar window and button labeled "shuffle".

Components

Just like the other pop-up menus, this one would launch from Amanuensis.maxpat.

All of the actual functionality that would be put into the shuffle subpatcher could be copied and pasted from midiports.maxpat. It is the highlighted section in the following screenshot:

There is one connection that would need to be maintained between that portion of code and the rest of midiports.maxpat, and that is the integer being sent to the rightmost inlet of p change_channel. That patch cord would need to be replaced with a send/receive pair.

Deadline

This request should take no more than 10 days to complete.

Communication

Reply to this post or contact me through Github for more details. I can also be found on discord @to_the_sun.

Proof of Work Done

https://github.com/to-the-sun

Create a mode where all notes are considered in rhythm

Repository

https://github.com/to-the-sun/amanuensis

Details

It probably would not be all that difficult to bypass the expensive rhythmic analysis done externally in consciousness.py and give every note played a likelihood of 1, meaning it was in rhythm. This would be a sort of "god mode" which players brand-new to the system might appreciate, but it would also enable a sort of classic looper functionality for The Amanuensis. Hit a button to turn it on and everything you play is captured; hit it again to turn it off and it starts to loop.

Although it would start to defeat the purpose of The Amanuensis, this would be a more deliberate way to construct a song. It would still be more useful and novel than an ordinary looper in that you could record many tracks of any varying length, i.e. you'd still be constructing a whole song and not just loops. Perhaps it would be a good way to set down a backing track and then let the program work in the ordinary fashion from that point forward.

Components

It should be relatively easy to stitch in a bypass. A gate would need to be placed after p collectpre-cog in p brain of organism.maxpat

that diverts directly to just before p recallpre-cog in p receivedstats.

Between these two places the notes should be converted to messages in the format expected to be coming out of the UDP receiver: [frame] [pitch] [velocity] [channel] 1 1 na na na, with 1s for recording and likelihood and "na"s for the other fields, which should not be updated.
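As a purely illustrative sketch of that conversion (the actual gate would of course be built from Max objects, and the function name here is hypothetical):

```python
def bypass_message(frame, pitch, velocity, channel):
    # Mimic the UDP receiver's output format with recording and
    # likelihood forced to 1 ("god mode") and "na" for the fields
    # that should not be updated:
    #   [frame] [pitch] [velocity] [channel] 1 1 na na na
    return f"{frame} {pitch} {velocity} {channel} 1 1 na na na"
```

Every note routed through the bypass would produce one such message, so downstream code sees it exactly as if consciousness.py had judged the note to be in rhythm.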

We can speak about how best to implement an update to the UI for control of that gate, but this would be the first task at hand.

Deadline

There is no deadline, but we can discuss how long it might take to execute.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun

Monitoring for sample and synth tracks

Repository

https://github.com/to-the-sun/amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.

If you want to try it out, please get a hold of me! Playtesters wanted!

Details

Tracks set to record from audio inputs can have their monitoring gain adjusted. This is imperative for them, since you may or may not want to hear the sound you're producing as you produce it (i.e. when playing an electric guitar with headphones on, versus loud acoustic drums); however, it would still be useful in the case of sample and synth tracks as well. Sometimes, when you cue up a new synth for example, it can be difficult to hear it among all the other sounds, or it may be overpowering, and there's nothing that can be done about it currently.

Therefore this task request is to duplicate the monitoring gain control that appears on tracks recording from audio inputs (in red in the following screenshot) so that the same control also appears on those recording samples and synths.

Components

Most of the changes will occur in sound.maxpat.

As you can see in the presentation view of the patcher (above), there are three rows, each representing a different selection in the recording sources menu of track.maxpat. Sound.maxpat is loaded into a bpatcher in track.maxpat, and offset is used to move which area is displayed. So all you would have to do, as far as the UI goes, is copy the controls for monitoring gain and arrange them in the other two rows as well.

In the patching view, these are the controls that would need to be copied

Different naming would have to be used for the copies of course, but whereas the audio input monitoring signal comes directly from adc~, the other two sources will need to have it routed from elsewhere.

Playing synths already have their audio grouped by track in the numbered vsti.maxpat abstractions (of synth.maxpat).

You would just need to route the audio to the appropriate sound.maxpat instance (utilizing #1 on both the sending and receiving ends) for monitoring, before it is sent to ---speakerL and ---speakerR.

Samples are played in polyplay~.maxpat

which is loaded into a poly~ which has voices called up as needed. This means the signals can be conveyed to sound.maxpat in the same manner, but instead of getting the current track from #1, it must be gotten from v midiChannel and used to set a pair of send~s dynamically. Once the monitoring has been accomplished in sound.maxpat, the signals can be sent to ---speakerL and ---speakerR as well, instead of relying on the out~s of polyplay~.maxpat.

Deadline

This request should take no more than 14 days to complete.

Communication

Reply to this post or contact me through Github for more details. I can also be found on discord @to_the_sun.

Proof of Work Done

https://github.com/to-the-sun

Create octave controls for the MIDI keyboard

Repository

https://github.com/to-the-sun/amanuensis

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.

If you want to try it out, please get a hold of me! Playtesters wanted!

Details

If you don't have an instrument with you, there is one instrument that can always be used by default: your computer's keyboard. There are 15 keys that will play MIDI, giving you a little over an octave to work with. These keys and the pitches they generate are arranged in emulation of a piano keyboard (exactly the same as in Ableton Live) as you can see in the following image.


The one difference in the actual functionality of the keyboard for The Amanuensis is in the Z and X keys. They currently don't do anything, but in Ableton Live they change the octave you are dealing with up and down by one. Not that Ableton Live is some golden grail to emulate, but this is useful and many people are already familiar with the setup.

Components

The MIDI keyboard is handled in p midikeys of midiports.maxpat.

The required changes should be fairly self-explanatory, looking at that screenshot. The numbers in the message boxes will need to be changed to represent notes within an octave rather than absolute numbers, and then they will need to be added to a variable (a multiple of 12) denoting the current octave.

The same key object can be used to identify when Z and X are pressed. If I'm not mistaken it doesn't matter in the case of the letter keys, but it might not be a bad idea, while you're at it, to switch this patch over to using the "platform-independent" right outlets of key and keyup. Just a thought.
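The pitch math itself is simple. Here it is sketched in Python: the key-to-offset mapping follows the Ableton-style layout described above, while the starting octave value and the clamping range are assumptions for illustration:

```python
# The 15 playable letter keys mapped to semitone offsets within one
# octave, piano-style: A S D F G H J K L are white keys, W E T Y U O
# are black keys (Ableton-style layout).
KEY_OFFSETS = {
    "a": 0, "w": 1, "s": 2, "e": 3, "d": 4, "f": 5, "t": 6,
    "g": 7, "y": 8, "h": 9, "u": 10, "j": 11, "k": 12, "o": 13, "l": 14,
}

def key_to_pitch(key, octave):
    # Absolute MIDI pitch = offset within the octave + 12 per octave.
    return KEY_OFFSETS[key] + 12 * octave

def handle_key(key, octave):
    # Z shifts the octave down and X shifts it up (as in Ableton Live);
    # any other mapped key produces a note at the current octave.
    # The 0..9 clamp is an assumed sensible MIDI range.
    if key == "z":
        return max(octave - 1, 0), None
    if key == "x":
        return min(octave + 1, 9), None
    return octave, key_to_pitch(key, octave)
```

In the Max patch this corresponds to the message boxes holding the 0-14 offsets and a single stored octave number that Z and X decrement and increment.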

Deadline

This request should take no more than three days to complete.

Communication

Reply to this post or contact me through Github for more details.

Proof of Work Done

https://github.com/to-the-sun
