
Comments (31)

asigalov61 commented on August 21, 2024

@xuboot I am sorry, but I do not understand your question.

xuboot commented on August 21, 2024

@asigalov61 What about this project? How good is the AI-generated music? How can I test this project of yours, or where can I get the data myself so I can run it and hear the final result?

asigalov61 commented on August 21, 2024

@xuboot You can listen to output samples from all projects here: https://soundcloud.com/aleksandr-sigalov-61/albums

You can start with Los Angeles Music Composer: https://github.com/asigalov61/Los-Angeles-Music-Composer

Demo Google Colab with the model is located here: https://colab.research.google.com/github/asigalov61/Los-Angeles-Music-Composer/blob/main/Los_Angeles_Music_Composer.ipynb

Data/Dataset maker is here: https://github.com/asigalov61/Los-Angeles-Music-Composer/blob/main/Training-Data/Los_Angeles_Music_Composer_Training_Dataset_Maker.ipynb

And training code is here: https://github.com/asigalov61/Los-Angeles-Music-Composer/blob/main/Training-Code/Los_Angeles_Music_Composer_Maker.ipynb

Hope this helps and answers your questions.

Thank you.

Alex

xuboot commented on August 21, 2024

@asigalov61 Okay, thanks. I would like to ask which paper and techniques this is based on. Also, I see that the code does not generate MIDI files. In fact, I just want to generate songs with AI and then be able to edit the resulting MIDI file. Which open-source model is currently the most widely used for this, and is there a ready-made GitHub repo?

xuboot commented on August 21, 2024

@asigalov61 I listened to it and it sounds good, but I could not tell which model was used. Generally I want to focus on the input and output: the input is a MIDI file, and the output should also be a MIDI file.

xuboot commented on August 21, 2024

@asigalov61 Also, this is geared toward AI music generation in general, right? Not just piano music?

asigalov61 commented on August 21, 2024

@xuboot Los Angeles Music Composer takes MIDI as input and generates MIDI as output. It is a multi-instrumental model (12 instruments) so you can select any available instrument.

With Los Angeles Music Composer you can generate music, continue music, inpaint music, and harmonize melodies. It's all in MIDI format.

If you want audio, then you can try Riffusion or Mubert.

I hope this helps and answers your question.

You can try writing to me in your language too and I will try to translate it myself to better understand you (if you want).

Alex

xuboot commented on August 21, 2024

@asigalov61 The music you generated and posted at https://soundcloud.com/aleksandr-sigalov-61/albums sounds good. I am now ready to test this project: run the code, train, test, and get the dataset. What exactly are the input and output of this model? For example, if I input "rock", will it return a rock MIDI to me? Or if I input a MIDI file, will it generate a similar MIDI from it? That is what I want to know.

Translated with www.DeepL.com/Translator (free version)

xuboot commented on August 21, 2024

@asigalov61 For your AI song generation, is the model input text or MIDI? For example, if the input is a text genre such as "rock", is the output a rock MIDI? Or do you upload a MIDI and the model outputs similar MIDI?

asigalov61 commented on August 21, 2024

@xuboot If you want text-to-music (text to MIDI), Los Angeles Music Composer will have that function soon. I will post the colab shortly and I will let you know.

If you want MIDI to MIDI, then use Los Angeles Music Composer demo colab. It supports custom MIDI continuation.

Alex.

asigalov61 commented on August 21, 2024

@xuboot Try this: https://colab.research.google.com/github/asigalov61/Los-Angeles-Music-Composer/blob/main/Los_Angeles_Music_Composer_TTM_Edition.ipynb

xuboot commented on August 21, 2024

@asigalov61 Hello, I do not know how to test and run the code without Jupyter. I want to step through the entire workflow to help me learn, and then wrap the model calls into an API so the model can be loaded online and serve requests.

asigalov61 commented on August 21, 2024

@xuboot If you do not want to use Jupyter, then you will have to assemble the code yourself. Sorry about that.

Python versions are provided in the repo.

E.g. https://github.com/asigalov61/Los-Angeles-Music-Composer/blob/main/los_angeles_music_composer_ttm_edition.py

xuboot commented on August 21, 2024

@asigalov61 Thank you. I have looked at the demo, but the generated music sounds poor to me, probably because the model was trained on classical music. (This demo has the same issue: https://github.com/netpi/compound-word-transformer-tensorflow.) You can take a look at it for reference.

xuboot commented on August 21, 2024

@asigalov61 Hello, when you wrote this project, did you take Magenta, MuseGAN, or DeepJ as a reference, or is it your own design?

xuboot commented on August 21, 2024

@asigalov61 Hello, is your implementation based on this paper?
https://archives.ismir.net/ismir2021/paper/000017.pdf

xuboot commented on August 21, 2024

Hello, may I ask what technical framework and paper this project is based on? At the same time, I found many parameters that I do not quite understand:
[screenshot of code parameters]

asigalov61 commented on August 21, 2024

@xuboot This is a control sequence generator. It tells the model the desired parameters for the generation.

These are guidance parameters/tokens that help the model generate the desired music.

import statistics

# melody_chords entries appear to be [delta_time, duration, channel, pitch, velocity];
# channel 9 (MIDI drums) is excluded from the statistics
intro_mode_time = statistics.mode([0] + [y[0] for y in melody_chords if y[2] != 9 and y[0] != 0])

intro_mode_dur = statistics.mode([y[1] for y in melody_chords if y[2] != 9])

intro_mode_pitch = statistics.mode([y[3] for y in melody_chords if y[2] != 9])

intro_mode_velocity = statistics.mode([y[4] for y in melody_chords if y[2] != 9])

xuboot commented on August 21, 2024

@asigalov61 Hello, is this project (https://github.com/asigalov61/Los-Angeles-Music-Composer) derived from MuseNet, or does it implement that paper? I want to understand the concrete principles and architecture of the implementation so I can learn from and debug the code.

xuboot commented on August 21, 2024

2023131180590.zip
I tested it: MIDI in, MIDI out. The generated music is short. How do I control the length of the generated music? The attached file is a sample from my test; you can try it. I think the music you posted at soundcloud.com/aleksandr-sigalov-61/ is good.

xuboot commented on August 21, 2024

How do I reproduce the songs you generated on soundcloud.com/aleksandr-sigalov-61/? I find that the same MIDI file always generates the same output. I don't know whether it is because I did not specify a type (such as guitar, rock, violin, etc.) that the same MIDI input repeatedly produces the same MIDI output. Is there a parameter somewhere that controls this? Also, there is no detailed documentation.

Translated with www.DeepL.com/Translator (free version)

asigalov61 commented on August 21, 2024

@xuboot You can control the length of the generated music by increasing the "number of tokens to generate" parameter.

If you want to specify instruments, you will have to prime the model with the tokens of the instruments you want. There is no built-in option for that, so you will have to write the code yourself.
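In the demo code below, that parameter is number_of_tokens_tp_generate; a minimal sketch of raising it (2048 is only an example value):

number_of_tokens_tp_generate = 2048 #@param {type:"slider", min:32, max:4064, step:32}
# Each generated token extends the composition, so larger values (up to 4064) yield longer MIDI output.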

xuboot commented on August 21, 2024

@asigalov61 Actually, I was wondering how to make the same input MIDI generate different kinds of output MIDI. I saw in the code that there is:

patch_map = [
[0, 1, 2, 3, 4, 5, 6, 7], # Piano
[24, 25, 26, 27, 28, 29, 30], # Guitar
[32, 33, 34, 35, 36, 37, 38, 39], # Bass
[40, 41], # Violin
[42, 43], # Cello
[46], # Harp
[56, 57, 58, 59, 60], # Trumpet
[64, 65, 66, 67, 68, 69, 70, 71], # Sax
[72, 73, 74, 75, 76, 77, 78], # Flute
[-1], # Drums
[52, 53], # Choir
[16, 17, 18, 19, 20] # Organ
]

Does this control the generation of the different MIDI instruments? Also, I do not know where to modify the generation-length parameter so that the generated music is as long as the original. Where in the code should I change this?
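For context, patch_map groups General MIDI patch numbers under the model's 12 instrument slots (-1 stands for the drum channel). A minimal sketch of reading it, assuming patch_map as defined above; instrument_index_for_patch is a hypothetical helper, not part of the repo:

def instrument_index_for_patch(patch, patch_map):
    # Return which of the 12 model instrument slots a General MIDI patch belongs to,
    # or None if the model does not cover that patch.
    for i, patches in enumerate(patch_map):
        if patch in patches:
            return i  # 0=Piano, 1=Guitar, 2=Bass, ..., 9=Drums, 10=Choir, 11=Organ
    return None

print(instrument_index_for_patch(25, patch_map))  # 25 (Acoustic Guitar, steel) -> 1 (Guitar)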

xuboot commented on August 21, 2024

@asigalov61 I don't quite understand this sentence ("You can control the length of the generated music by increasing the 'number of tokens to generate' parameter"). Which line of code specifically controls the length of the generated music? Also, I don't quite understand this sentence ("If you want to specify instruments, you will have to prime the model with the tokens of the instruments you want. There is no built-in option for that, so you will have to write the code yourself."). Which specific lines of code should I modify? The code does not have enough comments, so it is difficult to understand. I hope you can help me.

asigalov61 commented on August 21, 2024

@xuboot Sorry about the difficulty you are experiencing with my code....

Priming the model with instruments is done like this:

#@title Improv Generator

#@markdown Select desired instruments (any combination is fine)

Piano = True #@param {type:"boolean"}
Guitar = True #@param {type:"boolean"}
Bass = True #@param {type:"boolean"}
Violin = False #@param {type:"boolean"}
Cello = False #@param {type:"boolean"}
Harp = False #@param {type:"boolean"}
Trumpet = False #@param {type:"boolean"}
Clarinet = False #@param {type:"boolean"}
Flute = False #@param {type:"boolean"}
Drums = True #@param {type:"boolean"}
Choir = False #@param {type:"boolean"}
Organ = False #@param {type:"boolean"}

#@markdown Improv Timings and Velocity
desired_prime_time = 10 #@param {type:"slider", min:0, max:127, step:1}
desired_prime_duration = 12 #@param {type:"slider", min:1, max:126, step:1}
desired_velocity = 6 #@param {type:"slider", min:1, max:8, step:1}

#@markdown Model settings

number_of_tokens_tp_generate = 512 #@param {type:"slider", min:32, max:4064, step:32}
number_of_batches_to_generate = 4 #@param {type:"slider", min:1, max:16, step:1}
temperature = 1 #@param {type:"slider", min:0.1, max:1, step:0.1}

instruments = []

if Piano:
  instruments += [
                  0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((0 * 128) + 60)+1152, # instrument number(0) + start pitch(60)
                 ]

if Guitar:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((1 * 128) + 60)+1152, # instrument number(1) + start pitch(60)
                  ]

if Bass:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((2 * 128) + 48)+1152, # instrument number(2) + start pitch(48)
                  ]
if Violin:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((3 * 128) + 72)+1152, # instrument number(3) + start pitch(72)
                  ]

if Cello:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((4 * 128) + 48)+1152, # instrument number(4) + start pitch(48)
                  ]

if Harp:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((5 * 128) + 72)+1152, # instrument number(5) + start pitch(72)
                  ]

if Trumpet:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((6 * 128) + 72)+1152, # instrument number(6) + start pitch(72)
                  ]

if Clarinet:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((7 * 128) + 72)+1152, # instrument number(7) + start pitch(72)
                  ]

if Flute:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((8 * 128) + 72)+1152, # instrument number(8) + start pitch(72)
                  ]

if Drums:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((9 * 128) + 35)+1152, # instrument number(9) + start pitch(35)
                  ]

if Choir:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((10 * 128) + 72)+1152, # instrument number(10) + start pitch(72)
                  ]

if Organ:
  instruments += [0, # delta start time (0)
                  ((desired_prime_duration * 8) + (desired_velocity-1))+128,
                  ((11 * 128) + 60)+1152, # instrument number(11) + start pitch(60)
                  ]

instruments[0] = desired_prime_time # delta start time of the first prime note
instruments[3::3] = [0] * len(instruments[3::3]) # zero the delta times of the remaining prime notes so all selected instruments start together

outy = instruments

#===================================================================

print('=' * 70)
print('Los Angeles Music Composer Model Improvisation Generator')
print('=' * 70)

print('Generation settings:')
print('=' * 70)
print('Model temperature:', temperature)

print('=' * 70)
print('Selected Improv sequence:')
print(outy)
print('=' * 70)

inp = [outy] * number_of_batches_to_generate

inp = torch.LongTensor(inp).cuda()

#start_time = time()

out = model.module.generate(inp, 
                      number_of_tokens_tp_generate, 
                      temperature=temperature, 
                      return_prime=True, 
                      min_stop_token=0, 
                      verbose=True)

out0 = out.tolist()

print('=' * 70)
print('Done!')
print('=' * 70)
#print('Generation took', time() - start_time, "seconds")
print('=' * 70)

#======================================================================

print('Rendering results...')
print('=' * 70)

for i in range(number_of_batches_to_generate):

  print('=' * 70)
  print('Batch #', i)
  print('=' * 70)

  out1 = out0[i]

  print('Sample INTs', out1[:12])
  print('=' * 70)

  if len(out) != 0:
    
      song = out1
      song_f = []
      tim = 0
      dur = 0
      vel = 0
      pitch = 0
      channel = 0

      son = []
      song1 = []

      for s in song:
        if s >= 128 and s < (12*128)+1152: # duration/velocity or pitch/instrument token
          son.append(s)
        else: # a delta-time (or control) token closes the current note triple
          if len(son) == 3:
            song1.append(son)
          son = []
          son.append(s)
                      
      for ss in song1:

        tim += ss[0] * 10 # accumulate delta start time (each step = 10 time units)

        dur = (((ss[1]-128) // 8)+1) * 20 # duration, decoded from the combined duration/velocity token
        vel = (((ss[1]-128) % 8)+1) * 15 # velocity, one of 8 bins scaled to MIDI range

        channel = (ss[2]-1152) // 128 # instrument/channel index (0-11)
        pitch = (ss[2]-1152) % 128 # MIDI pitch (0-127)

        song_f.append(['note', tim, dur, channel, pitch, vel ])

      detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(song_f,
                                                          output_signature = 'Los Angeles Music Composer',  
                                                          output_file_name = '/content/Los-Angeles-Music-Composer-Music-Composition_'+str(i), 
                                                          track_name='Project Los Angeles',
                                                          list_of_MIDI_patches=[0, 24, 32, 40, 42, 46, 56, 71, 73, 0, 53, 19, 0, 0, 0, 0],
                                                          number_of_ticks_per_quarter=500)


      print('=' * 70)
      print('Displaying resulting composition...')
      print('=' * 70)

      fname = '/content/Los-Angeles-Music-Composer-Music-Composition_'+str(i)

      x = []
      y =[]
      c = []

      colors = ['red', 'yellow', 'green', 'cyan', 'blue', 'pink', 'orange', 'purple', 'gray', 'white', 'gold', 'silver']

      for s in song_f:
        x.append(s[1] / 1000)
        y.append(s[4])
        c.append(colors[s[3]])

      FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname + '.mid'), str(fname + '.wav'))
      display(Audio(str(fname + '.wav'), rate=16000))

      plt.figure(figsize=(14,5))
      ax=plt.axes(title=fname)
      ax.set_facecolor('black')

      plt.scatter(x,y, c=c)
      plt.xlabel("Time")
      plt.ylabel("Pitch")
      plt.show() 
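To summarize the token scheme used above: tokens below 128 are delta start times, tokens 128-1151 jointly encode duration and velocity, and tokens 1152-2687 jointly encode instrument and pitch. A minimal sketch of matching helpers, inferred from the code above (encode_note and decode_note are hypothetical, not part of the repo):

def encode_note(delta_time, duration, velocity, instrument, pitch):
    # delta_time: 0-127, duration: 0-127, velocity: 1-8, instrument: 0-11, pitch: 0-127
    return [delta_time,
            (duration * 8 + (velocity - 1)) + 128,
            (instrument * 128 + pitch) + 1152]

def decode_note(triple):
    t, dv, ip = triple
    duration = (dv - 128) // 8       # the colab then scales this as (duration+1)*20
    velocity = ((dv - 128) % 8) + 1  # the colab then scales this as velocity*15
    instrument = (ip - 1152) // 128
    pitch = (ip - 1152) % 128
    return t, duration, velocity, instrument, pitch

print(decode_note(encode_note(0, 12, 6, 0, 60)))  # (0, 12, 6, 0, 60)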

xuboot commented on August 21, 2024

@asigalov61 Please upload this code to GitHub in a directly runnable form, thank you!

[The Improv Generator code from the previous comment, quoted verbatim.]

xuboot commented on August 21, 2024

@asigalov61
[screenshot of the error]
I added your code, but when I ran it I got an error that the model could not be found. I hope you will commit this code to GitHub; my other code runs fine.

asigalov61 commented on August 21, 2024

@xuboot Sorry about your difficulties.

You need to load the model in the colab:

https://colab.research.google.com/github/asigalov61/Los-Angeles-Music-Composer/blob/main/Los_Angeles_Music_Composer.ipynb

asigalov61 commented on August 21, 2024

@xuboot I will add it to the colab soon. Please give me some time.

asigalov61 commented on August 21, 2024

@xuboot I added the code to the colab. Hope this will help you. :)
