
osumapper's Introduction

osumapper

An automatic beatmap generator using Tensorflow / Deep Learning.

Demo map 1 (low BPM): https://osu.ppy.sh/beatmapsets/1290030

Demo map 2 (high BPM): https://osu.ppy.sh/beatmapsets/1290026

Colaboratory

https://colab.research.google.com/github/kotritrona/osumapper/blob/master/v7.0/Colab.ipynb

For mania mode: mania_Colab.ipynb

Complete guide for a newcomer in osu! mapping

https://github.com/kotritrona/osumapper/wiki/Complete-guide:-creating-beatmap-using-osumapper

Installation & Model Running

Important tip for model training

Don't train with every single map in your osu! Songs folder. That's not how machine learning works!

I would suggest selecting only maps you think are well made, for instance all 5.0 ~ 6.5☆ maps mapped by (insert mapper name).

Maplist.txt creation:

  • I have made a maplist generator under the v7.0/ folder. Run node gen_maplist.js in that directory to start.
  • The other way to create a maplist.txt file for training is the maplist creator.py script (found in the v6.2 folder). Running it overwrites the maplist.txt in that folder with a new one built from the maps in the collection folder you specify.
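
For reference, maplist.txt appears to be a plain-text list of .osu file paths, one per line (that is what the loader iterates over in its error messages). Below is a minimal, hand-rolled Python sketch for building one yourself; the SONGS_DIR path and the difficulty-keyword filter are placeholder assumptions, not part of the repository.

# Sketch: collect selected .osu files into maplist.txt (one absolute path per line).
# SONGS_DIR and KEYWORDS are assumptions -- point them at your own songs folder
# and the difficulty names you consider well made.
import os

SONGS_DIR = r"C:\Users\you\AppData\Local\osu!\Songs"
KEYWORDS = ["Insane", "Extra", "Expert"]

paths = []
for root, _dirs, files in os.walk(SONGS_DIR):
    for name in files:
        if name.endswith(".osu") and any(k in name for k in KEYWORDS):
            paths.append(os.path.join(root, name))

with open("maplist.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(paths))

print("Wrote {} map paths to maplist.txt".format(len(paths)))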

Model Specification

Structure diagram

  • Rhythm model (a shape-level sketch follows this list)
    • CNN/LSTM + dense layers
    • input: music FFTs (7 time_windows x 32 fft_size x 2 (magnitude, phase))
    • additional input: timing (is_1/1, is_1/4, is_1/2, is_the_other_1/4, BPM, tick_length, slider_length)
    • output: (is_note, is_circle, is_slider, is_spinner, is_sliding, is_spinning) as 1/-1 classification
  • Momentum model
    • same structure as above
    • output: (momentum, angular_momentum) as regression
    • momentum is distance over time; it should be proportional to circle size, which I may implement later
    • angular_momentum is angle over time; currently unused
    • only used in v6.2
  • Slider model
    • was designed to classify slider lengths and shapes
    • currently unused
  • Flow model (a sketch of the generator and discriminator follows this list)
    • uses a GAN to generate the flow
    • takes 10 notes as a group and trains on each group in turn
    • Generator: some dense layers; input (randomness x 50), output (cos_list x 20, sin_list x 20)
    • this output is fed into a map constructor that builds a map corresponding to the angular values
    • map constructor output: (x_start, y_start, vector_out_x, vector_out_y, x_end, y_end) x 10
    • Discriminator: simpleRNN + some dense layers; input is the map constructor output above, output (1,) ranging from 0 to 1
    • each big epoch trains the generator for 7 epochs and then the discriminator for 3 epochs
    • trains 6 ~ 25 big epochs per group; mostly 6 epochs unless the generated map falls outside the mapping region (0:512, 0:384)
  • Beatmap Converter
    • uses node.js to convert map data between JSON and .osu formats
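
Below is a minimal, shape-level Keras sketch of the rhythm model described above, for orientation only. It is not the repository's code: the layer sizes and exact wiring are assumptions, and the 16-tick sequence length is taken from the input shape reported in one of the issues below; only the per-tick feature counts follow the list above.

# Hypothetical sketch of the rhythm model: CNN over per-tick FFT windows,
# LSTM over the tick sequence, dense output per tick. Sizes are illustrative guesses.
import tensorflow as tf
from tensorflow.keras import layers, Model

TICKS = 16           # ticks per training sample (assumed from reported shapes)
TIME_WINDOWS = 7     # FFT time windows per tick
FFT_SIZE = 32        # FFT bins
TIMING_FEATURES = 7  # is_1/1, is_1/2, is_1/4, is_the_other_1/4, BPM, tick_length, slider_length

fft_in = layers.Input(shape=(TICKS, TIME_WINDOWS, FFT_SIZE, 2), name="music_fft")
timing_in = layers.Input(shape=(TICKS, TIMING_FEATURES), name="timing")

# Per-tick CNN over the (time_window, fft_bin, magnitude/phase) block.
x = layers.TimeDistributed(layers.Conv2D(16, (2, 2), activation="relu"))(fft_in)
x = layers.TimeDistributed(layers.MaxPooling2D((1, 2)))(x)
x = layers.TimeDistributed(layers.Flatten())(x)

# Append the timing features to each tick, then model the tick sequence.
x = layers.Concatenate()([x, timing_in])
x = layers.LSTM(64, return_sequences=True)(x)
x = layers.TimeDistributed(layers.Dense(64, activation="relu"))(x)

# Six flags per tick; tanh matches the 1/-1 classification targets.
out = layers.TimeDistributed(layers.Dense(6, activation="tanh"), name="note_flags")(x)

model = Model([fft_in, timing_in], out)
model.compile(optimizer="adam", loss="mse")
model.summary()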
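
Similarly, here is a rough sketch of the flow model's GAN halves, with the map constructor (angles to on-screen positions) omitted. Only the input/output widths follow the description above; the layer sizes, activations, and losses are illustrative assumptions.

# Hypothetical sketch of the flow model's generator and discriminator.
import tensorflow as tf
from tensorflow.keras import layers, Model

NOTE_GROUP = 10  # notes generated per group
NOISE_DIM = 50   # "randomness x 50"

def build_generator():
    # input (randomness x 50) -> output (cos_list x 20, sin_list x 20) = 40 values
    z = layers.Input(shape=(NOISE_DIM,))
    x = layers.Dense(128, activation="relu")(z)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(NOTE_GROUP * 4, activation="tanh")(x)
    return Model(z, out, name="generator")

def build_discriminator():
    # one row per note: (x_start, y_start, vector_out_x, vector_out_y, x_end, y_end)
    m = layers.Input(shape=(NOTE_GROUP, 6))
    x = layers.SimpleRNN(64)(m)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    d = Model(m, out, name="discriminator")
    d.compile(optimizer="adam", loss="binary_crossentropy")
    return d

# Per the schedule above, each "big epoch" would train the generator for 7 epochs
# (through the map constructor and a frozen discriminator) and then the discriminator
# for 3 epochs, repeating for 6 ~ 25 big epochs per 10-note group.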

Citing

If you want to cite osumapper in a scholarly work, please cite the github page. I'm not going to write a paper for it.

osumapper's People

Contributors

jvyden, kotritrona, zoogies


osumapper's Issues

Step 02 is broken.

When running, the line:
plot_history(history)
breaks and causes errors.

Error on step 4

The files are from an .osz archive exported from osu!lazer; I unpacked it and used the .osu and .mp3 files.

TypeError: Cannot read property '1' of null
    at parseDiffdata (/content/osumapper/v7.0/load_map.js:79:81)
    at load_map (/content/osumapper/v7.0/load_map.js:676:15)
    at main (/content/osumapper/v7.0/load_map.js:1396:24)
    at Object.<anonymous> (/content/osumapper/v7.0/load_map.js:1484:1)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-23-994af1dafd16> in <cell line: 3>()
      1 from act_newmap_prep import *
      2 
----> 3 step4_read_new_map(uploaded_osu_name);

2 frames
/content/osumapper/v7.0/act_newmap_prep.py in step4_read_new_map(file_path, divisor)
     26 
     27     start = time.time()
---> 28     read_and_save_osu_tester_file(file_path.strip(), filename="mapthis", divisor=divisor);
     29     end = time.time()

/content/osumapper/v7.0/audio_tools.py in read_and_save_osu_tester_file(path, filename, json_name, divisor)
    201 
    202 def read_and_save_osu_tester_file(path, filename = "saved", json_name="mapthis.json", divisor=4):
--> 203     osu_dict, wav_file = read_osu_file(path, convert = True, json_name=json_name);
    204     sig, samplerate = librosa.load(wav_file, sr=None, mono=True);
    205     file_len = (sig.shape[0] / samplerate * 1000 - 3000);

/content/osumapper/v7.0/audio_tools.py in read_osu_file(path, convert, wav_name, json_name)
     36     if(len(result) > 1):
     37         print(result.decode("utf-8"));
---> 38         raise Exception("Map Convert Failure");
     39 
     40     with open(json_name, encoding="utf-8") as map_json:

Exception: Map Convert Failure

error on step 4


TypeError                                 Traceback (most recent call last)
in ()
      1 from act_newmap_prep import *
      2 
----> 3 step4_read_new_map(uploaded_osu_name);

4 frames

/content/osumapper/v7.0/act_newmap_prep.py in step4_read_new_map(file_path, divisor)
     26 
     27     start = time.time()
---> 28     read_and_save_osu_tester_file(file_path.strip(), filename="mapthis", divisor=divisor);
     29     end = time.time()

/content/osumapper/v7.0/audio_tools.py in read_and_save_osu_tester_file(path, filename, json_name, divisor)
    206 
    207     # ticks = ticks from each uninherited timing section
--> 208     ticks, timestamps, tick_lengths, slider_lengths = get_all_ticks_and_lengths_from_ts(osu_dict["timing"]["uts"], osu_dict["timing"]["ts"], file_len, divisor=divisor);
    209 
    210     # old version to determine ticks (all from start)

/content/osumapper/v7.0/map_analyze.py in get_all_ticks_and_lengths_from_ts(uts_array, ts_array, end_time, divisor)
     78     tick_len = [[uts["tickLength"]] * len(np.arange(uts["beginTime"], endtimes[i], uts["tickLength"] / divisor)) for i, uts in enumerate(uts_array)];
     79     # slider_len = [[ts["sliderLength"]] * len(np.arange(ts["beginTime"], endtimes[i], ts["tickLength"] / divisor)) for i, ts in enumerate(ts_array)];
---> 80     slider_len = [get_slider_len_ts(ts_array, timestamp) for timestamp in np.concatenate(timestamps)];
     81     return np.concatenate(ticks_from_uts), np.round(np.concatenate(timestamps)).astype(int), np.concatenate(tick_len), np.array(slider_len);
     82 

/content/osumapper/v7.0/map_analyze.py in <listcomp>(.0)
     78     tick_len = [[uts["tickLength"]] * len(np.arange(uts["beginTime"], endtimes[i], uts["tickLength"] / divisor)) for i, uts in enumerate(uts_array)];
     79     # slider_len = [[ts["sliderLength"]] * len(np.arange(ts["beginTime"], endtimes[i], ts["tickLength"] / divisor)) for i, ts in enumerate(ts_array)];
---> 80     slider_len = [get_slider_len_ts(ts_array, timestamp) for timestamp in np.concatenate(timestamps)];
     81     return np.concatenate(ticks_from_uts), np.round(np.concatenate(timestamps)).astype(int), np.concatenate(tick_len), np.array(slider_len);
     82 

/content/osumapper/v7.0/map_analyze.py in get_slider_len_ts(ts_a, tick)
     50 
     51 def get_slider_len_ts(ts_a, tick):
---> 52     if tick < ts_a[0]["beginTime"]:
     53         return ts_a[0]["sliderLength"];
     54     _out = 100;

TypeError: '<' not supported between instances of 'float' and 'NoneType'

map conversion error (affected all maps)

Number of filtered maps: 11
Error: ENOENT: no such file or directory, open '#C:\Users\saber\AppData\Local\osu!\Songs\1232750 Kagetora - Crazy banger\Kagetora. - Crazy banger (waywern2012) [PENGUINS].osu'
    at Object.openSync (node:fs:495:3)
    at Object.readFileSync (node:fs:396:35)
    at main (c:\Users\saber\Desktop\osumapper-master\osumapper-master\v7.0\load_map.js:1395:26)
    at Object.<anonymous> (c:\Users\saber\Desktop\osumapper-master\osumapper-master\v7.0\load_map.js:1484:1)
    at Module._compile (node:internal/modules/cjs/loader:1108:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1137:10)
    at Module.load (node:internal/modules/cjs/loader:973:32)
    at Function.Module._load (node:internal/modules/cjs/loader:813:14)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)
    at node:internal/main/run_main_module:17:47 {
  errno: -4058,
  syscall: 'open',
  code: 'ENOENT',
  path: '#C:\Users\saber\AppData\Local\osu!\Songs\1232750 Kagetora - Crazy banger\Kagetora. - Crazy banger (waywern2012) [PENGUINS].osu'
}

Error on #0, path = #C:\Users\saber\AppData\Local\osu!\Songs\1232750 Kagetora - Crazy banger\Kagetora. - Crazy banger (waywern2012) [PENGUINS].osu, error = Map Convert Failure

Not a big problem.

Developers, please make working versions for Android.

I certainly do not insist, but if you want, then please do :)

Thanks for noticing. Good luck to you! 👍 ;)

01_osumap_loader

Number of filtered maps: 6
Error on #0, path = D:\Main Personal\Songs\979887 Teminite & MDK - Space Invaders\Teminite & MDK - Space Invaders (Ciyus Miapah) [Dimensional Virtual Arcade].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'
Error on #1, path = D:\Main Personal\Songs\185250 ALiCE'S EMOTiON - Dark Flight Dreamer\ALiCE'S EMOTiON - Dark Flight Dreamer (Sakaue Nachi) [CRN's Extra].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'
Error on #2, path = D:\Main Personal\Songs\740672 Ni-Sokkususu - Shukusai no Elementalia\Ni-Sokkususu - Shukusai no Elementalia (SnowNiNo_) [KNeeSocKs].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'
Error on #3, path = D:\Main Personal\Songs\693750 Shinonome Natsuhi (CV_ Hinami Yuri) - Moratorium no Oto\Shinonome Natsuhi (CV Hinami Yuri) - Moratorium no Oto (newton-) [Standstill].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'
Error on #4, path = D:\Main Personal\Songs\88180 t+pazolite - cheatreal\t+pazolite - cheatreal (caren_sk) [RLC's Extra].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'
Error on #5, path = D:\Main Personal\Songs\205425 Nujabes - Lady Brown (feat Cise Starr)\Nujabes - Lady Brown (feat. Cise Starr) (Mawkawa) [Extra].osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'

Issue with step 1

Yo, I dunno if you're still working on this, but I figured I'd try some stuff with it and see some results.

I tried running step 1; however, I was getting this error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-5-7e5bd69e56b2> in <module>
     45 #     try:
     46     start = time.time()
---> 47     read_and_save_osu_file(mname.strip(), filename=os.path.join(mapdata_path, str(k)), divisor=divisor);
     48     end = time.time()
     49     print("Map data #" + str(k) + " saved! time = " + str(end - start) + " secs");

A:\Users\oykxf\Documents\osumapper\ipynb\osureader.py in read_and_save_osu_file(path, filename, divisor)
    327 #
    328 def read_and_save_osu_file(path, filename = "saved", divisor=4):
--> 329     osu_dict, wav_file = read_osu_file(path, convert = True);
    330     data, flow_data = get_map_notes(osu_dict, divisor=divisor);
    331     timestamps = [c[1] for c in data];

A:\Users\oykxf\Documents\osumapper\ipynb\osureader.py in read_osu_file(path, convert, wav_name, json_name)
     20     subprocess.call(["node", "load_map.js", "jq", path, json_name]);
     21 
---> 22     with open(json_name, encoding="utf-8") as map_json:
     23         map_dict = json.load(map_json); # not "loads" it is not a string
     24 

FileNotFoundError: [Errno 2] No such file or directory: 'temp_json_file.json'

I created a temp_json_file.json in the same directory with an empty object, and then got this error as a result:

Number of filtered maps: 146
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-6-7e5bd69e56b2> in <module>
     45 #     try:
     46     start = time.time()
---> 47     read_and_save_osu_file(mname.strip(), filename=os.path.join(mapdata_path, str(k)), divisor=divisor);
     48     end = time.time()
     49     print("Map data #" + str(k) + " saved! time = " + str(end - start) + " secs");

A:\Users\oykxf\Documents\osumapper\ipynb\osureader.py in read_and_save_osu_file(path, filename, divisor)
    327 #
    328 def read_and_save_osu_file(path, filename = "saved", divisor=4):
--> 329     osu_dict, wav_file = read_osu_file(path, convert = True);
    330     data, flow_data = get_map_notes(osu_dict, divisor=divisor);
    331     timestamps = [c[1] for c in data];

A:\Users\oykxf\Documents\osumapper\ipynb\osureader.py in read_osu_file(path, convert, wav_name, json_name)
     24 
     25         if convert:
---> 26             mp3_file = os.path.join(file_dir, map_dict["general"]["AudioFilename"]);
     27             subprocess.call([GLOBAL_VARS["ffmpeg_path"], "-y", "-i", mp3_file, wav_name]);
     28 

KeyError: 'general'

Any idea as to what I should do?

error on step 3

I've got this error on step 3:
Screenshot 2022-06-30 213954
It didn't even let me drag in the osu file.
It might have something to do with these separate issues in step 1:
Screenshot 2022-06-30 224552
Screenshot 2022-06-30 221828
I can get around this by using Microsoft Edge instead of Firefox, but if anyone has any solutions that don't double the time, that'd be great.
(This previously worked, but doesn't anymore for some reason.)

predictions = step5_predict_notes(model, npz, params) giving error

Hi,
When I try to run the code in Colab (for osu!mania), it gives me the following error in step 5:


ValueError                                Traceback (most recent call last)
in ()
      7 # params = step5_set_params(note_density=0.4, hold_favor=0.2, divisor_favor=[0] * divisor, hold_max_ticks=8, hold_min_return=1, rotate_mode=4);
      8 
----> 9 predictions = step5_predict_notes(model, npz, params);
     10 notes_each_key = step5_build_pattern(predictions, params, pattern_dataset=model_params["pattern_dataset"]);

10 frames

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    992     except Exception as e:  # pylint:disable=broad-except
    993       if hasattr(e, "ag_error_metadata"):
--> 994         raise e.ag_error_metadata.to_exception(e)
    995       else:
    996         raise

ValueError: in user code:

/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1586 predict_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1576 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1569 run_step  **
    outputs = model.predict_step(data)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1537 predict_step
    return self(x, training=False)
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:1020 __call__
    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py:269 assert_input_compatibility
    ', found shape=' + display_shape(x.shape))

ValueError: Input 0 is incompatible with layer functional_1: expected shape=(None, 16, 7, 32, 2), found shape=(None, 16, 7, 31, 2)

How to solve this?
Thanks in advance

a big problem!

Too much data is loaded at once!
Should make a read_some_npzs instead of read_all_npzs, then gradually read them while training each epoch!
wwwwwwww

Tomorrow, try to fix it!
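
A minimal sketch of what such incremental loading could look like (hypothetical; the npz key names "wav"/"lst" and the chunking are assumptions, not the repository's code):

# Hypothetical read_some_npzs: load a few .npz map files at a time instead of all at once.
import numpy as np

def read_some_npzs(npz_paths, chunk_size=8):
    """Yield training arrays a few maps at a time, so each epoch streams the dataset."""
    for i in range(0, len(npz_paths), chunk_size):
        chunk = [np.load(p) for p in npz_paths[i:i + chunk_size]]
        # "wav" and "lst" are placeholder key names for the saved map data.
        yield (np.concatenate([c["wav"] for c in chunk]),
               np.concatenate([c["lst"] for c in chunk]))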

"[Errno 2] No such file or directory: 'temp_json_file.json"

Whenever I try to run the first code block in v6.2, I'm getting this error message:

"Error on #0, path = C:\Beatmaps\BEATMAP_NAME.osu, error = [Errno 2] No such file or directory: 'temp_json_file.json'"

The path to ffmpeg.exe is correct. The ffmpeg version is 4.4.1.


Stuck on block 5

Anyone please help me. I'm very new to this and can't seem to find a tutorial.

NameError                                 Traceback (most recent call last)
in ()
      1 from act_rhythm_calc import *
      2 
----> 3 model = step5_load_model(model_file=model_params["rhythm_param"]);
      4 npz = step5_load_npz();
      5 params = model_params["rhythm_param"]

NameError: name 'model_params' is not defined

What do I need to do?

Error on step 4.

ValueError                                Traceback (most recent call last)
<ipython-input-8-994af1dafd16> in <module>()
      1 from act_newmap_prep import *
      2 
----> 3 step4_read_new_map(uploaded_osu_name);

2 frames
/content/osumapper/v7.0/map_analyze.py in get_all_ticks_and_lengths_from_ts(uts_array, ts_array, end_time, divisor)
     78     tick_len = [[uts["tickLength"]] * len(np.arange(uts["beginTime"], endtimes[i], uts["tickLength"] / divisor)) for i, uts in enumerate(uts_array)];
     79     # slider_len = [[ts["sliderLength"]] * len(np.arange(ts["beginTime"], endtimes[i], ts["tickLength"] / divisor)) for i, ts in enumerate(ts_array)];
---> 80     slider_len = [get_slider_len_ts(ts_array, timestamp) for timestamp in np.concatenate(timestamps)];
     81     return np.concatenate(ticks_from_uts), np.round(np.concatenate(timestamps)).astype(int), np.concatenate(tick_len), np.array(slider_len);
     82 

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

Setup Problems


ModuleNotFoundError                       Traceback (most recent call last)
in
      1 import import_ipynb
      2 import os, re, time
----> 3 from osureader import *
      4 
      5 GLOBAL_VARS["ffmpeg_path"] = "C:\ffmpeg\bin\ffmpeg.exe";

~\3D Objects\Osu Mapper\ipynb\osureader.py in
----> 1 import re, os, subprocess, json, soundfile
      2 import numpy as np
      3 
      4 workingdir = os.path.dirname(os.path.abspath(__file__));
      5 os.chdir(workingdir);

ModuleNotFoundError: No module named 'soundfile'

(The code block formatting in the GitHub text editor doesn't work, so the #s are spacing from the code.)
I assume this is because osureader.py doesn't even define soundfile??
I don't know, please help. I installed all requirements with pip, except Jupyter which came from Anaconda.

Error on step 5

TypeError Traceback (most recent call last)
in
----> 1 notes_each_key = step5_build_pattern(predictions, params);

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in step5_build_pattern(rhythm_data, params, pattern_dataset)
337 if tick % metronome_length == 0:
338 if len(current_group_note_begin) > 0:
--> 339 note_begin_pattern, note_end_pattern = group_notes_to_pattern(pattern_data, current_group_note_begin, current_group_note_end, current_group_note_hold, hold_min_return=hold_min_return, rotate_mode=rotate_mode)
340
341 map_pattern_note_begin.append(note_begin_pattern)

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in group_notes_to_pattern(data, b_group, e_group, h_group, hold_min_return, rotate_mode)
282 """
283 note_metronome_group, note_end_metronome_group, hold_metronome_group = get_metronome_groups(b_group, e_group, h_group)
--> 284 note_begin_pattern, note_end_pattern = get_converted_pattern_group(data, note_metronome_group, note_end_metronome_group, hold_metronome_group,
285 hold_min_return=hold_min_return,
286 rotate_mode=rotate_mode)

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in get_converted_pattern_group(data, note_metronome_group, note_end_metronome_group, hold_metronome_group, hold_min_return, rotate_mode)
265
266 def get_converted_pattern_group(data, note_metronome_group, note_end_metronome_group, hold_metronome_group=[], hold_min_return=1, rotate_mode=4):
--> 267 note_begin_pattern, note_end_pattern, convert = get_pattern_group(data, note_metronome_group, note_end_metronome_group,
268 hold_metronome_group=hold_metronome_group,
269 hold_min_return=hold_min_return)

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in get_pattern_group(data, note_metronome_group, note_end_metronome_group, hold_metronome_group, hold_min_return)
216 return randomized_group, randomized_group, False
217
--> 218 note_begin_patterns, note_end_patterns = get_data_pattern_groups(data, note_metronome_group, note_end_metronome_group, hold_metronome_group, hold_min_return)
219
220 if len(note_begin_patterns) > 0:

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in get_data_pattern_groups(data, note_metronome_group, note_end_metronome_group, hold_metronome_group, hold_min_return)
147 match_avail_flags = array_to_flags(match_avail)
148
--> 149 avail_filter = bitwise_contains(avail_note_begin, match_avail_flags)
150
151 pattern_note_begin_filtered = pattern_note_begin[avail_filter]

C:\UserData\OSUAI\v7.0\mania_act_rhythm_calc.py in bitwise_contains(container, item)
127
128 def bitwise_contains(container, item):
--> 129 return np.bitwise_and(np.bitwise_not(container), item) == 0
130
131 def get_data_pattern_groups(data, note_metronome_group, note_end_metronome_group, hold_metronome_group=[], hold_min_return=1):

TypeError: ufunc 'invert' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

An error occurred in "notes_each_key = step5_build_pattern(predictions, params);" in Rhythm Predictor

I don't know what I did to break it; osumapper was still running an hour ago.

problem that makes the whole thing not function

On step 4 of the Google Colab:
/content/osumapper/v7.0/act_newmap_prep.py in step4_read_new_map(file_path, divisor)
     26 
     27     start = time.time()
---> 28     read_and_save_osu_tester_file(file_path.strip(), filename="mapthis", divisor=divisor);
     29     end = time.time()

/content/osumapper/v7.0/audio_tools.py in read_and_save_osu_tester_file(path, filename, json_name, divisor)
    206 
    207     # ticks = ticks from each uninherited timing section
--> 208     ticks, timestamps, tick_lengths, slider_lengths = get_all_ticks_and_lengths_from_ts(osu_dict["timing"]["uts"], osu_dict["timing"]["ts"], file_len, divisor=divisor);
    209 
    210     # old version to determine ticks (all from start)

/content/osumapper/v7.0/map_analyze.py in get_all_ticks_and_lengths_from_ts(uts_array, ts_array, end_time, divisor)
     78     tick_len = [[uts["tickLength"]] * len(np.arange(uts["beginTime"], endtimes[i], uts["tickLength"] / divisor)) for i, uts in enumerate(uts_array)];
     79     # slider_len = [[ts["sliderLength"]] * len(np.arange(ts["beginTime"], endtimes[i], ts["tickLength"] / divisor)) for i, ts in enumerate(ts_array)];
---> 80     slider_len = [get_slider_len_ts(ts_array, timestamp) for timestamp in np.concatenate(timestamps)];
     81     return np.concatenate(ticks_from_uts), np.round(np.concatenate(timestamps)).astype(int), np.concatenate(tick_len), np.array(slider_len);
     82 

/usr/local/lib/python3.10/dist-packages/numpy/core/overrides.py in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

I have another question

What does this do?

import socket

file = open('website/index.html', 'r')

def start_server(HOST, PORT):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((HOST, PORT))
    s.listen(1)
    print('Serving HTTP on port %s ...' % PORT)
    while True:
        client_connection, client_address = s.accept()
        request = client_connection.recv(1024)
        print(request.decode('utf-8'))
        http_response = """\
200 OK
""" + file.read() + """
"""
        client_connection.sendall(bytes(http_response, 'utf-8'))
        client_connection.close()

Error on step 7

Hey! I've got this error on step 07, in the code after "Now we can train the model!":

# of groups: 48

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-25-9041bcd63a5f> in <module>
    166             print("{},{},{},2,0,L|{}:{},1,{},0:0:0".format(int(ai[0]), int(ai[1]), int(timestamps[i]), int(round(ai[0] + ai[2] * slider_lengths[i])), int(round(ai[1] + ai[3] * slider_lengths[i])), int(slider_length_base[i] * slider_ticks[i])));
    167 
--> 168 osu_a = generate_map();
    169 # generate_test();

<ipython-input-25-9041bcd63a5f> in generate_map()
    145     print("# of groups: {}".format(timestamps.shape[0] // note_group_size));
    146     for i in range(timestamps.shape[0] // note_group_size):
--> 147         z = generate_set(begin = i * note_group_size, start_pos = pos, length_multiplier = dist_multiplier, group_id = i, plot_map=False) * np.array([512, 384, 1, 1, 512, 384]);
    148         pos = z[-1, 0:2];
    149         o.append(z);

<ipython-input-25-9041bcd63a5f> in generate_set(begin, start_pos, group_id, length_multiplier, plot_map)
     84     c_false_batch = GAN_PARAMS["c_false_batch"];
     85 
---> 86     gmodel = generative_model(g_input_size, note_group_size * 4, loss_function_for_generative_model);
     87 
     88     for i in range(max_epoch):

<ipython-input-25-9041bcd63a5f> in generative_model(in_params, out_params, loss_func)
     30     model.compile(loss=loss_func,
     31                 optimizer=optimizer,
---> 32                 metrics=[keras.metrics.mae])
     33     return model
     34 

c:\users\axel\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\training\checkpointable\base.py in _method_wrapper(self, *args, **kwargs)
    440     self._setattr_tracking = False  # pylint: disable=protected-access
    441     try:
--> 442       method(self, *args, **kwargs)
    443     finally:
    444       self._setattr_tracking = previous_value  # pylint: disable=protected-access

c:\users\axel\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    447             else:
    448               weighted_loss = training_utils.weighted_masked_objective(loss_fn)
--> 449               output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
    450 
    451           if len(self.outputs) > 1:

c:\users\axel\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\training_utils.py in weighted(y_true, y_pred, weights, mask)
    645     """
    646     # score_array has ndim >= 2
--> 647     score_array = fn(y_true, y_pred)
    648     if mask is not None:
    649       mask = math_ops.cast(mask, y_pred.dtype)

<ipython-input-25-9041bcd63a5f> in loss_function_for_generative_model(y_true, y_pred)
     68 
     69     def loss_function_for_generative_model(y_true, y_pred):
---> 70         return construct_map_and_calc_loss(y_pred, extvar);
     71 
     72 #     classifier_true_set_group = special_train_data[np.random.randint(0, special_train_data.shape[0], (500,))];

<ipython-input-24-993c3cadb30e> in construct_map_and_calc_loss(var_tensor, extvar)
    163     # first make a map from the outputs of generator, then ask the classifier (discriminator) to classify it
    164     classifier_model = extvar["classifier_model"]
--> 165     out = construct_map_with_sliders(var_tensor, extvar=extvar);
    166     cm = classifier_model(out);
    167     predmean = 1 - tf.reduce_mean(cm, axis=1);

<ipython-input-24-993c3cadb30e> in construct_map_with_sliders(var_tensor, extvar)
     45         begin_offset = 0;
     46     batch_size = var_tensor.shape[0];
---> 47     note_distances_now = length_multiplier * np.tile(np.expand_dims(note_distances[begin_offset:begin_offset+half_tensor], axis=0), (batch_size, 1));
     48     note_angles_now = np.tile(np.expand_dims(note_angles[begin_offset:begin_offset+half_tensor], axis=0), (batch_size, 1));
     49 

c:\users\axel\appdata\local\programs\python\python37\lib\site-packages\numpy\lib\shape_base.py in tile(A, reps)
   1241                 c = c.reshape(-1, n).repeat(nrep, 0)
   1242             n //= dim_in
-> 1243     return c.reshape(shape_out)

TypeError: __index__ returned non-int (type NoneType)

Tons of errors

Yeah, it gives tons of errors. And yeah, I changed the direction of the slashes, because I'm also programming in the C# language. At the end it generates terrible beatmaps.

Make a simpler program that trains in one click and creates in one click. I shouldn't have to learn Python, Anaconda, Node, etc.

ModuleNotFoundError

ModuleNotFoundError                       Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_8812\1352973929.py in <module>
----> 1 from act_newmap_prep import *
      2 
      3 # input file here! (don't remove the "r" before string)
      4 file_path = r'C:\Users\justin\Downloads\kawaiikutegomen.osz'
      5 

~\Documents\GitHub\osumapper\v7.0\act_newmap_prep.py in <module>
      5 #
      6 
----> 7 from audio_tools import *;
      8 from os_tools import *
      9 

~\Documents\GitHub\osumapper\v7.0\audio_tools.py in <module>
      5 #
      6 
----> 7 import librosa;
      8 import re, os, subprocess, json;
      9 import numpy as np;

ModuleNotFoundError: No module named 'librosa'

Colab error in step 1

As the title says, I get a Colab error in step 1. I'll attach screenshots so you can see what I mean.

I followed everything it says to do in the Colab, even using Ctrl + Enter to run one block of code at a time.

No clue if I did something wrong or if the Colab is broken... please help.

help

It's just really hard to do this in general, and I can't run the file in the app; it just won't let me. Please help, or make a better app interface.

Add support for Google Colab

If possible, create just one notebook explaining and guiding the whole process with Google Drive and Google Colab; with this, it would be much easier and faster to create maps.
