
magenta-demos's Introduction

Status

This repository is currently inactive and serves only as a supplement to some of our papers. We have transitioned to using individual repositories for new projects. For our current work, see the Magenta website and the Magenta GitHub Organization.

Magenta


Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it's also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools in open source on this GitHub. If you’d like to learn more about Magenta, check out our blog, where we post technical details. You can also join our discussion group.

This is the home for our Python TensorFlow library. To use our models in the browser with TensorFlow.js, head to the Magenta.js repository.

Getting Started

Take a look at our colab notebooks for various models, including one on getting started. Magenta.js is also a good resource for models and demos that run in the browser. This and more, including blog posts and Ableton Live plugins, can be found at https://magenta.tensorflow.org.

Magenta Repo

Installation

Magenta maintains a pip package for easy installation. We recommend using Anaconda to install it, but it can work in any standard Python environment. We support Python 3 (>= 3.5). These instructions will assume you are using Anaconda.

Automated Install (w/ Anaconda)

If you are running Mac OS X or Ubuntu, you can try using our automated installation script. Just paste the following command into your terminal.

curl https://raw.githubusercontent.com/tensorflow/magenta/main/magenta/tools/magenta-install.sh > /tmp/magenta-install.sh
bash /tmp/magenta-install.sh

After the script completes, open a new terminal window so the environment variable changes take effect.

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.
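
As a quick smoke test of the install, you can run a short Python snippet. This is a minimal sketch; the exact module layout has moved between Magenta releases, so treat the import paths below as assumptions:

import magenta
print('magenta version:', magenta.__version__)

# The NoteSequence protobuf moved to the standalone note_seq package in
# newer releases; older releases keep it inside magenta itself.
try:
    from note_seq.protobuf import music_pb2
except ImportError:
    from magenta.protobuf import music_pb2

# Build a one-note sequence to confirm the protos import and work.
seq = music_pb2.NoteSequence()
seq.notes.add(pitch=60, velocity=80, start_time=0.0, end_time=0.5)
seq.tempos.add(qpm=120)
print(seq)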

Manual Install (w/o Anaconda)

If the automated script fails for any reason, or you'd prefer to install by hand, follow these steps.

Install the Magenta pip package:

pip install magenta

NOTE: In order to install the rtmidi package that we depend on, you may need to install headers for some sound libraries. On Ubuntu Linux, this command should install the necessary packages:

sudo apt-get install build-essential libasound2-dev libjack-dev portaudio19-dev

On Fedora Linux, use

sudo dnf group install "C Development Tools and Libraries"
sudo dnf install SAASound-devel jack-audio-connection-kit-devel portaudio-devel

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Using Magenta

You can now train our various models and use them to generate music, audio, and images. You can find instructions for each of the models by exploring the models directory.
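
The models can also be driven from Python rather than the command line. Below is a minimal sketch of generating a melody from a pre-trained bundle, assuming a Magenta 1.x-era module layout and that you have downloaded basic_rnn.mag; the exact import paths vary between releases, so treat them as assumptions:

import magenta.music as mm
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.protobuf import generator_pb2, music_pb2

# Load the pre-trained bundle and build the matching generator.
bundle = mm.sequence_generator_bundle.read_bundle_file('basic_rnn.mag')
generator = melody_rnn_sequence_generator.get_generator_map()['basic_rnn'](
    checkpoint=None, bundle=bundle)

# Ask for 30 seconds of melody from an empty primer sequence.
options = generator_pb2.GeneratorOptions()
options.args['temperature'].float_value = 1.0
options.generate_sections.add(start_time=0, end_time=30)
sequence = generator.generate(music_pb2.NoteSequence(), options)
mm.sequence_proto_to_midi_file(sequence, 'melody.mid')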

Development Environment

If you want to develop on Magenta, you'll need to set up the full Development Environment.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install the dependencies by changing to the base directory and executing the setup command:

pip install -e .

You can now edit the files and run scripts by calling Python as usual. For example, this is how you would run the melody_rnn_generate script from the base directory:

python magenta/models/melody_rnn/melody_rnn_generate --config=...

You can also install the (potentially modified) package with:

pip install .

Before creating a pull request, please also test your changes with:

pip install pytest-pylint
pytest

PIP Release

To build a new version for pip, bump the version and then run:

python setup.py test
python setup.py bdist_wheel --universal
twine upload dist/magenta-N.N.N-py2.py3-none-any.whl

magenta-demos's People

Contributors

adarob, amrzv, banus, cclauss, cghawthorne, chrisdonahue, dependabot[bot], douglaseck, falaktheoptimist, hardmaru, iansimon, icoxfog417, irapha, jesseengel, jrgillick, pkmital, sherol


magenta-demos's Issues

no iPad for MIRA

How required is MIRA?

As I understand it, MIRA is an app that just gives you a configurable MIDI controller interface, but this is the first time I'm seeing it.

I want to try this project but do not own an iDevice. Thank you.

piano-genie build doesn't seem to work

I just cloned the piano-genie demo and tried to build it, but yarn build gives me this error:

TypeScript error: node_modules/@tensorflow/tfjs-core/dist/kernels/webgl/gpgpu_context.d.ts(15,27): Error TS2304: Cannot find name 'WebGLLoseContext'.
error Command failed with exit code 1.

Is there a problem with the version of tfjs it's using?

cc @chrisdonahue

Magenta demos not working as `!pip install magenta` fails on Colab

EDIT: I went through all the notebooks linked from the website https://magenta.tensorflow.org/demos/colab/, opened them in Colab, and checked the environment setup steps:

broken (building numba and llvmlite fails, see below):

  • DDSP Timbre Transfer
  • Music Transformer (doesn't build the old numpy)
  • GANSynth
  • Multitrack MusicVAE
  • Performance RNN (also uses Python 2, which is not supported anymore)
  • E-Z NSynth
  • MusicVAE
  • Onsets and Frames
  • Latent Constraints (RuntimeError)

also broken: 'Getting Started' example linked from https://magenta.tensorflow.org/get-started/

works:

  • Music Transcription with Transformers

The command !pip install magenta fails on Colab (as well as on other platforms) because the wheels for numba and llvmlite cannot be built:

The output is:

error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  Building wheel for numba (setup.py) ... error
  ERROR: Failed building wheel for numba
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for llvmlite (setup.py) ... error
ERROR: Failed building wheel for llvmlite

Has someone found a working solution for this? It would be valuable to revive these notebooks for teaching, and in general for playing around with music / machine learning.
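
One workaround sketch (untested, and the package choice is an assumption): ask pip for pre-built wheels of the two failing packages before installing magenta, so their source builds are never attempted:

!pip install --only-binary :all: llvmlite numba
!pip install magenta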

How to use conditional model

Hello,

I wonder how to use the conditional model. As I supposed, a conditional model is included in the pre-trained models, but I don't know how to provide the input. For example, when I want the model to output a catbus, can I draw something like a catbus as input?

AI Ableton Jam Issues

Hi, I'm trying to set up playback in Max and Ableton, but when I run RUN_DEMO.sh it doesn't function. I'm also unsure where to place the RNN .mag files, and I am getting some errors; any ideas? Here's a printout:

Traceback (most recent call last):
File "/Users/Luke/miniconda2/envs/magenta/bin/magenta_midi", line 11, in
File "/Users/Luke/miniconda2/envs/magenta/bin/magenta_midi", line 11, in
sys.exit(console_entry_point())
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 392, in console_entry_point
sys.exit(console_entry_point())
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 392, in console_entry_point
tf.app.run(main)
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
tf.app.run(main)
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
_sys.exit(main(argv))
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 322, in main
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 322, in main
generators.append(_load_generator_from_bundle_file(bundle_file))
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 277, in _load_generator_from_bundle_file
generators.append(_load_generator_from_bundle_file(bundle_file))
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/interfaces/midi/magenta_midi.py", line 277, in _load_generator_from_bundle_file
bundle_file)
bundle_file)
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/music/sequence_generator_bundle.py", line 34, in read_bundle_file
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/magenta/music/sequence_generator_bundle.py", line 34, in read_bundle_file
bundle.ParseFromString(f.read())
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 120, in read
bundle.ParseFromString(f.read())
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 120, in read
self._preread_check()
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 80, in _preread_check
self._preread_check()
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 80, in _preread_check
compat.as_bytes(self.__name), 1024 * 512, status)
compat.as_bytes(self.__name), 1024 * 512, status)
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in exit
File "/Users/Luke/miniconda2/envs/magenta/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ./drum_kit_rnn.mag; No such file or directory
tensorflow.python.framework.errors_impl.NotFoundError: ./basic_rnn.mag; No such file or directory
[1]- Exit 1 magenta_midi --input_ports="IAC Driver IAC Bus 1" --output_ports="IAC Driver IAC Bus 2" --passthrough=false --qpm=120 --allow_overlap=true --enable_metronome=false --log=DEBUG --clock_control_number=1 --end_call_control_number=2 --min_listen_ticks_control_number=3 --max_listen_ticks_control_number=4 --response_ticks_control_number=5 --temperature_control_number=6 --tempo_control_number=7 --generator_select_control_number=8 --state_control_number=9 --loop_control_number=10 --panic_control_number=11 --mutate_control_number=12 --bundle_files=./basic_rnn.mag,./lookback_rnn.mag,./attention_rnn.mag,./rl_rnn.mag,./polyphony_rnn.mag,./pianoroll_rnn_nade.mag --playback_offset=-0.035 --playback_channel=1
[2]+ Exit 1 magenta_midi --input_ports="IAC Driver IAC Bus 3" --output_ports="IAC Driver IAC Bus 4" --passthrough=false --qpm=120 --allow_overlap=true --enable_metronome=false --clock_control_number=1 --end_call_control_number=2 --min_listen_ticks_control_number=3 --max_listen_ticks_control_number=4 --response_ticks_control_number=5 --temperature_control_number=6 --tempo_control_number=7 --generator_select_control_number=8 --state_control_number=9 --loop_control_number=10 --panic_control_number=11 --mutate_control_number=12 --bundle_files=./drum_kit_rnn.mag --playback_offset=-0.035 --playback_channel=2 --log=INFO

running Flask

When running Flask I get this error message:

Error: The file/path provided (hello.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py

Nsynth generate.py "operands could not be broadcast together with shapes"

I'm getting this error when running the generate.py script

Interpolating embeddings between instruments at each pitch...
generate.py:80: DeprecationWarning: object of type <class 'float'> cannot be safely interpreted as an integer.
  x, y = np.meshgrid(np.linspace(0, grid_size, res+1), np.linspace(0, grid_size, res+1))
Traceback (most recent call last):
  File "generate.py", line 279, in <module>
    interpolate_embeddings()
  File "generate.py", line 124, in interpolate_embeddings
    interp = (embeddings.T * weights).T.sum(axis=0)
  File "~/Coding/nsynth/VIRTUAL/lib/python3.7/site-packages/numpy/core/_methods.py", line 36, in _sum
    return umr_sum(a, axis, dtype, out, keepdims, initial)
ValueError: operands could not be broadcast together with shapes (116,16) (125,16)

This is my settings.json

{
        "instruments": [
                ["harp","guitar"],
                ["flute", "trumpet"]
        ],
        "checkpoint_dir":"~/Downloads/wavenet-ckpt",
        "pitches":      [24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68],
        "resolution":              9,
        "final_length":            64000,
        "gpus":                    1,
        "batch_size_embeddings":   32,
        "batch_size_generate":     256,
        "name":         "test_1"
}

I'm using:
python 3.7.6
numpy 1.16.0
tensorflow 1.15.3
magenta 1.3.1
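
The DeprecationWarning in the log points at the likely cause: newer NumPy requires the np.linspace sample count to be an integer, and res+1 arrives as a float. A hedged sketch of the local fix at generate.py:80, with illustrative stand-in values for grid_size and res:

import numpy as np

grid_size, res = 3.0, 24.0  # illustrative values; generate.py computes these
# Casting the count keeps the grid shape exact instead of silently truncated.
x, y = np.meshgrid(np.linspace(0, grid_size, int(res) + 1),
                   np.linspace(0, grid_size, int(res) + 1))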

Sketch-RNN ln7 Error: could not cast hparam

I am trying to run the Sketch-RNN demo Sketch_RNN.ipynb.
When I run:
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)

this is the error I get:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-c072ee5d7a3f> in <module>()
----> 1 [train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)

/Users/libphd/anaconda3/envs/magenta/lib/python2.7/site-packages/magenta/models/sketch_rnn/sketch_rnn_train.pyc in load_env(data_dir, model_dir)
     73   model_params = sketch_rnn_model.get_default_hparams()
     74   with tf.gfile.Open(os.path.join(model_dir, 'model_config.json'), 'r') as f:
---> 75     model_params.parse_json(f.read())
     76   return load_dataset(data_dir, model_params, inference_mode=True)
     77 

/Users/libphd/anaconda3/envs/magenta/lib/python2.7/site-packages/tensorflow/contrib/training/python/training/hparam.pyc in parse_json(self, values_json)
    585     """
    586     values_map = json.loads(values_json)
--> 587     return self.override_from_dict(values_map)
    588 
    589   def values(self):

/Users/libphd/anaconda3/envs/magenta/lib/python2.7/site-packages/tensorflow/contrib/training/python/training/hparam.pyc in override_from_dict(self, values_dict)
    537     """
    538     for name, value in values_dict.items():
--> 539       self.set_hparam(name, value)
    540     return self
    541 

/Users/libphd/anaconda3/envs/magenta/lib/python2.7/site-packages/tensorflow/contrib/training/python/training/hparam.pyc in set_hparam(self, name, value)
    499         raise ValueError(
    500             'Must pass a list for multi-valued parameter: %s.' % name)
--> 501       setattr(self, name, _cast_to_type_if_compatible(name, param_type, value))
    502 
    503   def parse(self, values):

/Users/libphd/anaconda3/envs/magenta/lib/python2.7/site-packages/tensorflow/contrib/training/python/training/hparam.pyc in _cast_to_type_if_compatible(name, param_type, value)
    173   # Avoid converting a number or string type to a boolean or vice versa.
    174   if issubclass(param_type, bool) != isinstance(value, bool):
--> 175     raise ValueError(fail_msg)
    176 
    177   # Avoid converting float to an integer (the reverse is fine).

ValueError: Could not cast hparam 'conditional' of type '<type 'bool'>' from value 1

Can't figure out the problem though...
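
A workaround sketch, assuming the root cause is that model_config.json stores "conditional": 1 where this TensorFlow version insists on a real JSON boolean; rewriting the value before load_env reads the file sidesteps the failing cast:

import json
import os

model_dir = './checkpoints'  # wherever model_config.json lives (assumption)
path = os.path.join(model_dir, 'model_config.json')
with open(path) as f:
    cfg = json.load(f)
# 'conditional' is the hparam named in the error; any other 0/1-valued
# boolean hparams would need the same treatment.
if isinstance(cfg.get('conditional'), int):
    cfg['conditional'] = bool(cfg['conditional'])
with open(path, 'w') as f:
    json.dump(cfg, f)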

has multigrid generation been removed for the NSYNTH?

Hi all,

A few months ago I found this git readme (I think in the NSYNTH folder) with instructions on how to generate a multigrid which could be loaded by the nsynth ableton plugin.
It described how it needed a number of different tunings for each .wav sample.

I've searched everywhere, but can't seem to find it. Where can I find these instructions now?
Thank you!

License?

Hi,

Very cool demos.

What is the license of this code?

Thanks!

Error using SketchRNN model trained using Sketch_RNN_TF_To_JS_Tutorial notes

@hardmaru I've tried following your super helpful Sketch_RNN_TF_To_JS_Tutorial notebook, but ran into errors using a model I've just trained.

With the version of sketch-rnn-js in this repo I get this error:

numjs.js:5040 Uncaught Error: all the input arrays must have same number of dimensions
    at new ValueError (numjs.js:5040)
    at Object.concatenate (numjs.js:6156)
    at new LSTMCell (sketch_rnn.js:383)
    at load_model (sketch_rnn.js:584)
    at new SketchRNN (sketch_rnn.js:1089)
    at sketch_rnn.js:345
    at XMLHttpRequest.xobj.onreadystatechange (sketch_rnn.js:275)

using your latest magenta-js version of sketch-rnn:

magentasketch.js:23262 Uncaught (in promise) Error: Constructing tensor of shape (NaN) should match the length of values (1024)
    at Object.assert (magentasketch.js:23262)
    at new Tensor (magentasketch.js:22128)
    at Function.Tensor.make (magentasketch.js:22143)
    at tensor (magentasketch.js:20430)
    at Object.tensor2d (magentasketch.js:20469)
    at SketchRNN.instantiateFromJSON (magentasketch.js:42407)
    at SketchRNN.<anonymous> (magentasketch.js:42430)
    at step (magentasketch.js:42371)
    at Object.next (magentasketch.js:42352)
    at fulfilled (magentasketch.js:42343)

More info on training this model:

  • it uses giraffe.npz from quickdraw_dataset/sketchrnn
  • I've used the following command to train: sketch_rnn_train --data_dir=datasets\quickdraw_dataset --hparams="data_set=[giraffe.npz],num_steps=200000,conditional=0,dec_rnn_size=1024"
  • the training environment is Windows 10 with Python 3.6.5 using magenta-gpu installed via pip: magenta version 0.3.12, tensorflow version 1.10.0
  • the Sketch RNN TF to JS python script has been minimally tweaked: it simply has a few print statements to check progress/errors and skips drawing the svg:
# import the required libraries
import numpy as np
import time
import random

import codecs
import collections
import os
import math
import json
import tensorflow as tf
from six.moves import xrange

# libraries required for visualisation:
from IPython.display import SVG, display
import svgwrite # conda install -c omnia svgwrite=1.1.6
import PIL
from PIL import Image
import matplotlib.pyplot as plt


from magenta.models.sketch_rnn.sketch_rnn_train import *
from magenta.models.sketch_rnn.model import *
from magenta.models.sketch_rnn.utils import *
from magenta.models.sketch_rnn.rnn import *

print('import complete')

# set numpy output to something sensible
np.set_printoptions(precision=8, edgeitems=6, linewidth=200, suppress=True)

# little function that displays vector images and saves them to .svg
def draw_strokes(data, factor=0.2, svg_filename = 'sample.svg'):
  tf.gfile.MakeDirs(os.path.dirname(svg_filename))
  min_x, max_x, min_y, max_y = get_bounds(data, factor)
  dims = (50 + max_x - min_x, 50 + max_y - min_y)
  dwg = svgwrite.Drawing(svg_filename, size=dims)
  dwg.add(dwg.rect(insert=(0, 0), size=dims,fill='white'))
  lift_pen = 1
  abs_x = 25 - min_x 
  abs_y = 25 - min_y
  p = "M%s,%s " % (abs_x, abs_y)
  command = "m"
  for i in xrange(len(data)):
    if (lift_pen == 1):
      command = "m"
    elif (command != "l"):
      command = "l"
    else:
      command = ""
    x = float(data[i,0])/factor
    y = float(data[i,1])/factor
    lift_pen = data[i, 2]
    p += command+str(x)+","+str(y)+" "
  the_color = "black"
  stroke_width = 1
  dwg.add(dwg.path(p).stroke(the_color,stroke_width).fill("none"))
  dwg.save()
  display(SVG(dwg.tostring()))

# generate a 2D grid of many vector drawings
def make_grid_svg(s_list, grid_space=10.0, grid_space_x=16.0):
  def get_start_and_end(x):
    x = np.array(x)
    x = x[:, 0:2]
    x_start = x[0]
    x_end = x.sum(axis=0)
    x = x.cumsum(axis=0)
    x_max = x.max(axis=0)
    x_min = x.min(axis=0)
    center_loc = (x_max+x_min)*0.5
    return x_start-center_loc, x_end
  x_pos = 0.0
  y_pos = 0.0
  result = [[x_pos, y_pos, 1]]
  for sample in s_list:
    s = sample[0]
    grid_loc = sample[1]
    grid_y = grid_loc[0]*grid_space+grid_space*0.5
    grid_x = grid_loc[1]*grid_space_x+grid_space_x*0.5
    start_loc, delta_pos = get_start_and_end(s)

    loc_x = start_loc[0]
    loc_y = start_loc[1]
    new_x_pos = grid_x+loc_x
    new_y_pos = grid_y+loc_y
    result.append([new_x_pos-x_pos, new_y_pos-y_pos, 0])

    result += s.tolist()
    result[-1][2] = 1
    x_pos = new_x_pos+delta_pos[0]
    y_pos = new_y_pos+delta_pos[1]
  return np.array(result)

# TODO: make these args
data_dir = './datasets/quickdraw_dataset'
model_dir = './checkpoints'

[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)
print('loaded env')
[hps_model, eval_hps_model, sample_hps_model] = load_model(model_dir)
print('loaded model',model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
print('preparing interactive session')
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print('interactive session ready')
def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
  z = None
  if z_input is not None:
    z = [z_input]
  sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len, temperature=temperature, z=z)
  strokes = to_normal_strokes(sample_strokes)
  if draw_mode:
    draw_strokes(strokes, factor)
  return strokes
print('loading checkpoints')
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
print('loaded checkpoints')
# randomly unconditionally generate 10 examples
N = 10
reconstructions = []
for i in range(N):
  reconstructions.append([decode(temperature=0.5, draw_mode=False), [0, i]])

# stroke_grid = make_grid_svg(reconstructions)
# draw_strokes(stroke_grid)

def get_model_params():
  # get trainable params.
  model_names = []
  model_params = []
  model_shapes = []
  with sess.as_default():
    t_vars = tf.trainable_variables()
    for var in t_vars:
      param_name = var.name
      p = sess.run(var)
      model_names.append(param_name)
      params = p
      model_params.append(params)
      model_shapes.append(p.shape)
  return model_params, model_shapes, model_names

def quantize_params(params, max_weight=10.0, factor=32767):
  result = []
  max_weight = np.abs(max_weight)
  for p in params:
    r = np.array(p)
    r /= max_weight
    r[r>1.0] = 1.0
    r[r<-1.0] = -1.0
    result.append(np.round(r*factor).flatten().astype(np.int).tolist())
  return result

model_params, model_shapes, model_names = get_model_params()
print('got model params')
print('model_names',model_names)

# scale factor converts "model-coordinates" to "pixel coordinates" for your JS canvas demo later on.
# the larger it is, the larger your drawings (in pixel space) will be.
# I recommend setting this to 100.0 and iterating the value in the json file later on when you build the JS part.
scale_factor = 200.0
metainfo = {"mode":2,"version":6,"max_seq_len":train_set.max_seq_length,"name":"custom","scale_factor":scale_factor}

model_params_quantized = quantize_params(model_params)
print('quantized params')
model_blob = [metainfo, model_shapes, model_params_quantized]

# TODO: add filename arg
with open("giraffe.gen.full.json", 'w') as outfile:
  json.dump(model_blob, outfile, separators=(',', ':'))

print('complete!')

There are a few details:

  1. The tensorflow graph looks different. The Python notebook graph looks like this:
['vector_rnn/RNN/output_w:0',
 'vector_rnn/RNN/output_b:0',
 'vector_rnn/RNN/LSTMCell/W_xh:0',
 'vector_rnn/RNN/LSTMCell/W_hh:0',
 'vector_rnn/RNN/LSTMCell/bias:0']

however the one I got looks like this:

['vector_rnn/ENC_RNN/fw/LSTMCell/W_xh:0',
 'vector_rnn/ENC_RNN/fw/LSTMCell/W_hh:0', 
'vector_rnn/ENC_RNN/fw/LSTMCell/bias:0', 
'vector_rnn/ENC_RNN/bw/LSTMCell/W_xh:0',
 'vector_rnn/ENC_RNN/bw/LSTMCell/W_hh:0', 
'vector_rnn/ENC_RNN/bw/LSTMCell/bias:0', 
'vector_rnn/ENC_RNN_mu/super_linear_w:0', 
'vector_rnn/ENC_RNN_mu/super_linear_b:0', 
'vector_rnn/ENC_RNN_sigma/super_linear_w:0',
 'vector_rnn/ENC_RNN_sigma/super_linear_b:0', 
'vector_rnn/linear/super_linear_w:0', 
'vector_rnn/linear/super_linear_b:0', 
'vector_rnn/RNN/output_w:0', 
'vector_rnn/RNN/output_b:0', 
'vector_rnn/RNN/LSTMCell/W_xh:0', 
'vector_rnn/RNN/LSTMCell/W_hh:0', 
'vector_rnn/RNN/LSTMCell/bias:0']
  2. The array after the model meta info in the json model looks different compared to existing pre-trained sketch-rnn models. For example hand.gen.json has this structure:
[[1024,123],[123],[5,4096],[1024,4096],[4096]]

while the recently generated giraffe.gen.json has this structure:

[[5,1024],[256,1024],[1024],[5,1024],[256,1024],[1024],[512,128],[128],[512,128],[128],[128,1024],[1024],[512,123],[123],[133,2048],[512,2048],[2048]]
  3. I have uploaded the model here

Somehow I'm not generating the same network? Maybe I just need the RNN and LSTM cell layers with their weights and biases, and not the whole encoder? If so, how do I do that?
Otherwise, what am I missing / doing wrong ?

How can I train a sketch-rnn model using a dataset from quickdraw?

Thank you,
George
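
One sketch of the decoder-only idea from the question above, continuing from get_model_params() in the posted script: keep only the five variables the notebook graph lists for the unconditional model. Whether the resulting JSON samples sensibly is untested:

# Keep only the decoder-side variables named in the notebook graph.
DECODER_VARS = [
    'vector_rnn/RNN/output_w:0',
    'vector_rnn/RNN/output_b:0',
    'vector_rnn/RNN/LSTMCell/W_xh:0',
    'vector_rnn/RNN/LSTMCell/W_hh:0',
    'vector_rnn/RNN/LSTMCell/bias:0',
]

keep = [i for i, name in enumerate(model_names) if name in DECODER_VARS]
model_names = [model_names[i] for i in keep]
model_params = [model_params[i] for i in keep]
model_shapes = [model_shapes[i] for i in keep]

Note that the presence of encoder variables at all suggests the conditional=0 hparam may not have taken effect during training, which could be worth checking first.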

AI Jam js not working

Hi there, I'm trying to run ai-jam-js on my Ubuntu 16.04 from my Magenta conda environment.

I want to use an Akai LPK25 as a MIDI controller, so I plug it into my PC with the environment active and run sh RUN_DEMO.sh in the appropriate directory. When accessing 127.0.0.1:8080 the interface is shown, but it seems like it is not receiving any kind of signal, because the piano keys only work when I use the mouse and they don't emit any sound at all (even with the mouse).


I don't know what's wrong, but I suspect it is something related to the hardware and synth connections. Here's the log:

File http://download.magenta.tensorflow.org/models/attention_rnn.mag already present
File http://download.magenta.tensorflow.org/models/performance.mag already present
File http://download.magenta.tensorflow.org/models/pianoroll_rnn_nade.mag already present
File http://download.magenta.tensorflow.org/models/drum_kit_rnn.mag already present
 * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
WARNING:tensorflow:No input port specified. Capture disabled.
INFO:tensorflow:Opening 'magenta_in' as a virtual MIDI port for output.
2018-02-01 01:04:04.104307: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-02-01 01:04:04.166210: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
WARNING:tensorflow:The saved meta_graph is possibly from an older release:
'model_variables' collection should be of type 'byte_list', but instead is of type 'node_list'.
INFO:tensorflow:Restoring parameters from /tmp/tmpIxuYso/model.ckpt
Loaded 'attention_rnn' generator bundle from file './attention_rnn.mag'.
WARNING:tensorflow:The saved meta_graph is possibly from an older release:
'model_variables' collection should be of type 'byte_list', but instead is of type 'node_list'.
INFO:tensorflow:Restoring parameters from /tmp/tmpZlPHdN/model.ckpt
Loaded 'drum_kit' generator bundle from file './drum_kit_rnn.mag'.
INFO:tensorflow:Opening 'magenta_drums_in' as a virtual MIDI port for input.
INFO:tensorflow:Opening 'magenta_clock' as a virtual MIDI port for input.
INFO:tensorflow:Opening 'magenta_out' as a virtual MIDI port for output.

Instructions:
Start playing  when you want to begin the call phrase.
When you want to end the call phrase, stop playing and wait one clock tick.
Once the response completes, the interface will wait for you to begin playing again to start a new call phrase.

To end the interaction, press CTRL-C.
INFO:tensorflow:Restoring parameters from /tmp/tmpAEHW5P/model.ckpt
Loaded 'rnn-nade_attn' generator bundle from file './pianoroll_rnn_nade.mag'.
WARNING:tensorflow:The saved meta_graph is possibly from an older release:
'model_variables' collection should be of type 'byte_list', but instead is of type 'node_list'.
INFO:tensorflow:Restoring parameters from /tmp/tmpd8bQ9A/model.ckpt
Loaded 'performance' generator bundle from file './performance.mag'.
INFO:tensorflow:Opening 'magenta_in' as a virtual MIDI port for input.
INFO:tensorflow:Opening 'magenta_out' as a virtual MIDI port for output.

Instructions:
Start playing  when you want to begin the call phrase.
When you want to end the call phrase, stop playing and wait one clock tick.
Once the response completes, the interface will wait for you to begin playing again to start a new call phrase.

To end the interaction, press CTRL-C.
127.0.0.1 - - [01/Feb/2018 01:04:13] "GET /build/1.js HTTP/1.1" 200 -
open: invalid option -- 'a'
Usage: open [OPTIONS] -- command

Please help @adarob @cghawthorne 🙏 😃

AttributeError: 'module' object has no attribute 'download_bundle'

Hi, I have tried your model, but I have found that although I have installed magenta correctly and put the performance_with_dynamics.mag file in the right directory, I still encounter this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-11-3eb1b89367f7> in <module>()
  1 dir(mm)
  ----> 2 mm.notebook_utils.download_bundle(BUNDLE_NAME, BUNDLE_DIR)
  3 bundle = mm.sequence_generator_bundle.read_bundle_file(os.path.join(BUNDLE_DIR, BUNDLE_NAME))
  4 generator_map = performance_sequence_generator.get_generator_map()
  5 generator = generator_map[MODEL_NAME](checkpoint=None, bundle=bundle)

   AttributeError: 'module' object has no attribute 'download_bundle'
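
A workaround sketch, assuming the failure is just that the top-level mm alias doesn't re-export notebook_utils in the installed magenta version; importing the modules directly sidesteps the attribute lookup (the module paths are an assumption and vary by release):

import os
from magenta.music import notebook_utils, sequence_generator_bundle

notebook_utils.download_bundle(BUNDLE_NAME, BUNDLE_DIR)
bundle = sequence_generator_bundle.read_bundle_file(
    os.path.join(BUNDLE_DIR, BUNDLE_NAME))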

Image Stylization Jupyter notebook doesn't work on Windows

Two issues.

1. OS agnostic path delimiting needed

In magenta-demos/jupyter-notebooks/Image_Stylization.ipynb at line 38 os.path.join(example_path, 'evaluation_images/guerrillero_heroico.jpg') should be os.path.join(example_path, 'evaluation_images', 'guerrillero_heroico.jpg')

2. image_utils.load_np_image failure

This is then followed by an error inside image_utils.load_np_image (line 48). Here the call attempts to load the image into a temp file but that temp file is garbage collected / disposed / deleted before the call returns.

~\Anaconda3\envs\tensorflow\lib\site-packages\magenta\models\image_stylization\image_utils.py in load_np_image(image_file)
    389     with values in [0, 1].
    390   """
--> 391   return np.float32(load_np_image_uint8(image_file) / 255.0)
    392 
    393 

~\Anaconda3\envs\tensorflow\lib\site-packages\magenta\models\image_stylization\image_utils.py in load_np_image_uint8(image_file)
    405     f.write(tf.gfile.GFile(image_file, 'rb').read())
    406     f.flush()
--> 407     image = scipy.misc.imread(f.name)
    408     # Workaround for black-and-white images
    409     if image.ndim == 2:

~\Anaconda3\envs\tensorflow\lib\site-packages\numpy\lib\utils.py in newfunc(*args, **kwds)
     99             """`arrayrange` is deprecated, use `arange` instead!"""
    100             warnings.warn(depdoc, DeprecationWarning, stacklevel=2)
--> 101             return func(*args, **kwds)
    102 
    103         newfunc = _set_function_name(newfunc, old_name)

~\Anaconda3\envs\tensorflow\lib\site-packages\scipy\misc\pilutil.py in imread(name, flatten, mode)
    162     """
    163 
--> 164     im = Image.open(name)
    165     return fromimage(im, flatten=flatten, mode=mode)
    166 

~\Anaconda3\envs\tensorflow\lib\site-packages\PIL\Image.py in open(fp, mode)
   2650 
   2651     if filename:
-> 2652         fp = builtins.open(filename, "rb")
   2653         exclusive_fp = True
   2654 

PermissionError: [Errno 13] Permission denied: 'C:\\Users\\lab\\AppData\\Local\\Temp\\tmpkvttffnd'

Note that C:\\Users\\lab\\AppData\\Local\\Temp\\tmpkvttffnd doesn't exist at the time of the error.
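
A minimal sketch of a Windows-safe variant of load_np_image_uint8, keeping the issue's scipy-era API: create the temp file with delete=False so PIL can reopen it by name, then remove it explicitly (the original function body is assumed from the traceback):

import os
import tempfile

import scipy.misc
import tensorflow as tf

def load_np_image_uint8(image_file):
  # delete=False matters on Windows: an open NamedTemporaryFile cannot be
  # reopened by name there, which is what PIL tries to do.
  f = tempfile.NamedTemporaryFile(delete=False)
  try:
    f.write(tf.gfile.GFile(image_file, 'rb').read())
    f.close()
    return scipy.misc.imread(f.name)
  finally:
    os.remove(f.name)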

Missing: From tensorflow model to json model file

Especially for beginners, it would be very helpful if you could describe the step from a pre-trained tensorflow model (as described, e.g., in the magenta/sketch-rnn readme) to the json model file (which is downloaded from Google storage in the readme of your Sketch-RNN-JS example).

This would enable people to try out your example with their own models.

Update:

  • I've found the Tensorflow-JS-Converter which seems to create the needed model from a SavedModel-File. Unfortunately sketch_rnn_train saves the model in checkpoint format which tfjs-converter is unable to handle.

style transfer demo broken on colab

Installing magenta and running the style transfer notebook code in Colab (GPU) gives:
ImportError: This version of TensorFlow Probability requires TensorFlow version >= 1.14; Detected an installation of version 1.13.1. Please upgrade TensorFlow to proceed.

Upgrading TF gives new errors.
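
The usual direction here is to pin TensorFlow and TensorFlow Probability to a mutually compatible pair before installing magenta. A hedged, untested sketch; the exact pins depend on the magenta release:

!pip install --upgrade "tensorflow-gpu>=1.14,<2.0" "tensorflow-probability==0.7.0"
!pip install magenta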

How to do the midi generation and midi playing simultaneously?

Hello, dear google-magenta guys,

I tried to use your APIs to generate music.
First, create an estimator:

# Create Estimator.
run_config = trainer_lib.create_run_config(hparams)
estimator = trainer_lib.create_estimator(
		model_name, hparams, run_config,
		decode_hparams=decode_hparams)

Second, I have a midi input interface:

# Create input generator (so we can adjust priming and
# decode length on the fly).

def input_generator():
	global targets
	global decode_length
	while True:
		yield {
				'targets': np.array([targets], dtype=np.int32),
				'decode_length': np.array(decode_length, dtype=np.int32)
		}

# These values will be changed by subsequent cells.
targets = []
decode_length = 0

# Start the Estimator, loading from the specified checkpoint.
input_fn = decoding.make_input_fn_from_generator(input_generator())

Third, do prediction:

unconditional_samples = estimator.predict(
		input_fn, checkpoint_path=ckpt_path)

Fourth, I have an API that generates the MIDI notes one by one; after the generation, it reads the MIDI file back for playing:

	# Generate sample events.
	sample_ids = next(unconditional_samples)['outputs']

	#print('-------------2', sample_ids)

	# Decode to NoteSequence.
	midi_filename = decode(
			sample_ids,
			encoder=unconditional_encoders['targets'])

	return open(midi_filename, 'rb').read()

To be frank, my question is: is it possible to iterate over the model output and play it as it is generated, i.e. do MIDI generation and decoding simultaneously with MIDI playback, instead of playing the MIDI file only after it is complete?

Thanks & Regards!
Jun Yan
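
A minimal sketch of the producer/consumer shape the question describes, continuing from the estimator code above: decode each predicted sample on one thread while a player consumes the queue. decode_chunk and play_notes are hypothetical helpers standing in for the decode and playback code already posted:

import queue
import threading

note_queue = queue.Queue()

def producer():
    # estimator.predict yields one sample dict at a time, so each result
    # can be decoded and queued as soon as it is available.
    for sample in estimator.predict(input_fn, checkpoint_path=ckpt_path):
        note_queue.put(decode_chunk(sample['outputs']))  # hypothetical helper

def consumer():
    while True:
        play_notes(note_queue.get())  # hypothetical playback helper

threading.Thread(target=producer, daemon=True).start()
consumer()

True note-by-note streaming would additionally require the decode step to run incrementally inside the sampling loop rather than on a finished output array.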

Trained models for Chair

Hi,

I am exploring the code; the trained files mentioned here are in JSON format, but I need the trained models in TensorFlow form. When I downloaded the trained models from TensorFlow, from this link, I found only 5 folders. Where can I find the trained model for chair?

[bug] - tensorflow/magenta-demos - AI piano doesn't work OOB (tmm2018)


when trying to run ai-duet (the accompaniment thing):

  • it compiles

However, when I get to run it on macOS High Sierra on a MacBook A1278 with a Core i5:

  • I open the default localhost path
  • it doesn't display anything at all
  • it reports some sort of error on the terminal
* Serving Flask app "server" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
127.0.0.1 - - [12/Jun/2018 12:18:52] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [12/Jun/2018 12:18:52] "GET /build/Main.js HTTP/1.1" 404 -
127.0.0.1 - - [12/Jun/2018 12:18:52] "GET /images/AIDuet_32.png HTTP/1.1" 200 -

Unable to load cat.gen.js and firetruck.gen.js

In basic_predict.html and in simple_predict.html, fetching the JavaScript files for the Cat and Firetruck classes gives a 403 error.

In other words, the default code in basic_predict.html is given below and runs fine:

<script language="javascript" type="text/javascript" src="https://storage.googleapis.com/quickdraw-models/sketchRNN/models/mosquito.gen.js"></script>

I get 200 for fetching mosquito.gen.js.

Whereas if I replace mosquito.gen.js with cat.gen.js like this:

<script language="javascript" type="text/javascript" src="https://storage.googleapis.com/quickdraw-models/sketchRNN/models/cat.gen.js"></script>

I get a 403 error in the debugger console when fetching cat.gen.js.

Same error happens in simple_predict.html for Cat and Firetruck.

the RL_Tuner.ipynb file seems to be broken

It does not open in the GitHub viewer, and in my local Jupyter notebook I get: NotJSONError('Notebook does not appear to be JSON: '{\n "cells": [\n {\n "cell_type": "c...',)

nsynth generate.py stuck, doesn't use GPU, and generates 31GB of long .wav files

Hi all!

I'm trying to run generate.py

"instruments": [["cupoo","bush"],
                 ["cu", "kemphur"]],
 "checkpoint_dir":"./wavenet-ckpt",
 "pitches":      [24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84],
 "resolution":              9,
 "final_length":            64000,
 "gpus":                    1,
 "batch_size_embeddings":   32,
 "batch_size_generate":     256,
 "name":         "synth_1"

the soxi output of my wav files looks pretty okay:

Input File     : 'bush_48.wav'
Channels       : 1
Sample Rate    : 16000
Precision      : 16-bit
Duration       : 00:00:04.17 = 66730 samples ~ 312.797 CDDA sectors
File Size      : 134k
Bit Rate       : 256k
Sample Encoding: 16-bit Signed Integer PCM

But for some reason the training won't work.
I get as far as generating the embeddings files, but when interpolating I've been waiting for over 12 hours to generate one batch and cancelled it. I'm running on an NVIDIA GeForce GTX 1080.

When checking the working directory I see a lot of wav files generated (31 GB) in the batch0 folder, but they each have a length of ~34 minutes. This must be wrong, right?

I think the main problem is that I'm getting a lot of TensorFlow warnings about how most resources are placed on the CPU. GPUs are not active during training when checking nvidia-smi. I've tried on two computers and both give the same error.

2020-05-05 08:47:40.259450: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU
Add: CPU
Const: CPU
RandomUniform: CPU
Sub: CPU
VariableV2: CPU
Mul: CPU
Identity: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  skip_30/biases/Initializer/random_uniform/shape (Const)
  skip_30/biases/Initializer/random_uniform/min (Const)
  skip_30/biases/Initializer/random_uniform/max (Const)
  skip_30/biases/Initializer/random_uniform/RandomUniform (RandomUniform)
  skip_30/biases/Initializer/random_uniform/sub (Sub)
  skip_30/biases/Initializer/random_uniform/mul (Mul)
  skip_30/biases/Initializer/random_uniform (Add)
  skip_30/biases (VariableV2) /device:GPU:0
  skip_30/biases/Assign (Assign) /device:GPU:0
  skip_30/biases/read (Identity) /device:GPU:0
  save/Assign_233 (Assign) /device:GPU:0

Any ideas of what's going wrong? Or ideas on how to debug? Thanks in advance!
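
Before digging into generate.py itself, a quick sanity check that this TensorFlow build registers the GPU at all may help; the colocation log above, with supported devices limited to CPU, is what a CPU-only package would produce. A minimal TF 1.x sketch:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())       # TF 1.x API
print(device_lib.list_local_devices())  # should include a /device:GPU:0 entry

If no GPU shows up, the fix is likely installing tensorflow-gpu (with a matching CUDA/cuDNN combination) rather than changing the script.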

Possible to save / record?

Hi - I've set up the AI Jam JS demo to run locally, and I can compile the code with webpack. Is there a way to save the output of a session? Either within this codebase, or with a third party application?

Using existing midi?

Would it be possible to use the ai jam demo on an existing midi file instead of a midi device?
For example's sake, would/could you be able to generate a MIDI file through use of magenta/models/melody_rnn/, and then use the AI Jam to create a drum track based on that input?
Or is this a question for the drums_rnn model?
Also, the link for the simplified js version is pointing to the wrong place. I believe it should be https://github.com/tensorflow/magenta-demos/tree/master/ai-jam-js instead of https://github.com/tensorflow/magenta-demos/blob/master/demos/ai-jam-js

performance_rnn midi timing broken?

hi,

I've recorded the audio output of performance_rnn (browser) in parallel to the MIDI output, and the two are not aligned.

MIDI is usually ahead, but sometimes lagging behind. The delay is not constant.

I haven't managed to test with the offline version of performance_rnn because my OS is too old (?), but I am wondering if it is possible to fix the variable delay in the online version.

cheers

Used Chrome and MaxMSP on MacOS 10.10.5, routing audio with Soundflower.

Porting magenta-demos to C#

Hey, I'm about to start porting the sketchRNN demo over to C#. Before I start, I was wondering whether you knew of any resources I could use as a starting point, or had any advice; it seems like it'll be at least a 40-hour job, so any help would be appreciated.

Thanks for making these demos public, they're really interesting and cool.

the test of sketch-rnn

Now I want to test an image of my own; it's a .png. Sketch-rnn uses test datasets in .npz format, and I want to know how to run a test with a single image.

Sketch RNN example for finishing incomplete sketches

Is there any code sample illustrating this?

Also, it is mentioned that the authors use only the decoder RNN to first encode an incomplete sketch into a hidden state h. Could someone please explain this?

Thanks,
Akilesh

Setting the environment to run the Sketch_RNN.ipynb

Hello there, thanks for the material, it is awesome.

I want to play around with Sketch_RNN.ipynb but I am having trouble setting up the environment.
I am having difficulties with these libraries/versions;
are these the correct versions?

tensorflow==1.8.0?
tensorflow_probability==XX? ## tensorflow 1.8.0 only works with 0.0.1, but I am kind of lost
magenta==1.2.2?

Is there a Colab version where I can see the library versions?
Thanks for your help!

ERROR: tensorflow-gan 2.0.0 has requirement tensorflow-probability>=0.7, but you'll have tensorflow-probability 0.0.1 which is incompatible.
ERROR: dm-sonnet 1.35 has requirement tensorflow-probability>=0.6.0, but you'll have tensorflow-probability 0.0.1 which is incompatible.
ERROR: tensor2tensor 1.15.4 has requirement tensorflow-probability==0.7.0, but you'll have tensorflow-probability 0.0.1 which is incompatible.
ERROR: magenta 1.2.2 has requirement tensorflow<2.0.0,>=1.15.0, but you'll have tensorflow 1.8.0 which is incompatible.
ERROR: magenta 1.2.2 has requirement tensorflow-probability==0.7.0, but you'll have tensorflow-probability 0.0.1 which is incompatible.

perfomance rnn js demo with own midi material

Dear magenta team,

thanks so much for all your efforts bringing magenta closer to artistic dudes like me.

I'm currently trying to port your performance_rnn browser model using tensorflow.js with my own trained midi composition.

I went through the whole conversion process and ended up with a (hopefully valid) midi -> training data -> sequences -> performance_rnn.mag file including my composition data.

Unfortunately I don't know how to use this dataset with the tensorflow.js library.
Could you give me a hint about where to swap in my *.mag file for the currently used trained model? Or did I miss some deeper understanding here?

Thanks and greetings from Berlin,
Christian

nsynth generate.py does not work

I want one instrument at each corner, four instruments in total, and not a multi-grid. I tried two configs, but both failed. I added some prints in generate.py to debug.

Here is the result for the two different configs:

First config:

{
"instruments": [["single"],
["chorus"],
["bird1"],
["bird5"]],

"checkpoint_dir":"wavenet-ckpt",
"pitches":      [36, 40, 44, 48, 52, 56],
"resolution":              9,
"final_length":            64000,
"gpus":                    1,
"batch_size_embeddings":   32,
"batch_size_generate":     256,
"name":         "bird1"

}

Interpolating embeddings between instruments at each pitch...
instrument grid = [['single'], ['chorus'], ['bird1'], ['bird5']]
grid_size = -1e-13
x=[[0.]]
y=[[0.]]
x=[0.]
y=[0.]
xy_grid=<zip object at 0x7f82c8ecaec8>
uv=(0, 0)
Traceback (most recent call last):
File "generate.py", line 289, in
interpolate_embeddings()
File "generate.py", line 131, in interpolate_embeddings
sub_grid, weights = get_instruments(xy), get_weights(xy)
File "generate.py", line 114, in get_instruments
return [instrument_grid[uv[0]][uv[1]],instrument_grid[uv[0]][uv[1]+1],
IndexError: list index out of range

Second config:

{
"instruments": [["single","chorus","bird1","bird5"]],
"checkpoint_dir":"wavenet-ckpt",
"pitches": [36, 40, 44, 48, 52, 56],
"resolution": 9,
"final_length": 64000,
"gpus": 1,
"batch_size_embeddings": 32,
"batch_size_generate": 256,
"name": "bird1"
}

Interpolating embeddings between instruments at each pitch...
instrument grid = [['single', 'chorus', 'bird1', 'bird5']]
grid_size = 2.9999999999999
x=[[0.    0.125 0.25  ... 2.75  2.875 3.   ]
 ...
 [0.    0.125 0.25  ... 2.75  2.875 3.   ]]
(25 identical rows running from 0 to 3 in steps of 0.125)
y=[[0.    0.    0.    ... 0.   ]
 [0.125 0.125 0.125 ... 0.125]
 ...
 [3.    3.    3.    ... 3.   ]]
(the transpose of x: 25 rows, each holding one constant value from 0 to 3)
[flattened x= and y= dumps of the same 25×25 grid omitted]
            2.375 2.375 2.375 2.375 2.375 2.375 2.375 2.375 2.5 2.5 2.5 2.5
            2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5
            2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.625 2.625 2.625
            2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625
            2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.625 2.75 2.75
            2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75
            2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.75 2.875
            2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875
            2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875 2.875
  12. ]
    xy_grid=<zip object at 0x7f77f1551f48>
    uv=(0, 0)
Traceback (most recent call last):
  File "generate.py", line 289, in <module>
    interpolate_embeddings()
  File "generate.py", line 131, in interpolate_embeddings
    sub_grid, weights = get_instruments(xy), get_weights(xy)
  File "generate.py", line 115, in get_instruments
    instrument_grid[uv[0]+1][uv[1]],instrument_grid[uv[0]+1][uv[1]+1]]
IndexError: list index out of range
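
One clue in the dump above: xy_grid prints as a zip object. Under Python 3, zip() returns a one-shot iterator (Python 2 returned a list), so any code that indexes the grid or walks it more than once will see an exhausted sequence, which can leave uv out of step with instrument_grid. This is a guess rather than a confirmed diagnosis, but materializing the grid is cheap to try; the flatten() construction below is an assumption, so match it to however generate.py actually builds xy_grid:

# Python 3's zip() is single-use; wrap it in list() so the grid can be
# indexed and re-iterated the way Python 2-era code expects.
xy_grid = list(zip(x.flatten(), y.flatten()))  # construction assumed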

Getting Black Images as Output for Image Stylization

I am getting black images after running the code (from Image Stylization) on my local computer, although the same code works perfectly in Google Colab. I am not sure where the problem is.

I installed the packages to match the Google Colab runtime. Here's a list of the relevant packages:

tensorflow                     2.2.0
tensorflow-addons              0.10.0
tensorflow-datasets            3.1.0
tensorflow-estimator           2.2.0
tensorflow-gan                 2.0.0
tensorflow-hub                 0.8.0
tensorflow-metadata            0.22.2
tensorflow-probability         0.10.0
magenta                        2.0.2
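
Since the identical code behaves on Colab, one cheap diagnostic is to look at the stylized array just before it is saved, to separate "the model produced zeros" (a checkpoint/loading problem) from "a 0..1 float image was written out without scaling" (a save problem). This is a sketch; the function name and the assumption that your script has such an array in hand are mine, not the library's:

import numpy as np

def diagnose_output(stylized):
    # Basic stats of the array the script is about to write to disk.
    print(stylized.dtype, stylized.min(), stylized.max())
    # All zeros -> suspect the model/checkpoint load. A healthy 0..1
    # float array that still renders black -> suspect the save step,
    # which should scale to uint8 first:
    return (np.clip(stylized, 0.0, 1.0) * 255).astype(np.uint8)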

Cannot read property 'playNote' of undefined

When solo is off, I get no sound, so I looked in the console:

Uncaught TypeError: Cannot read property 'playNote' of undefined
    at MagentaInstance.sendKeyDown (eval at <anonymous> (1.js:25), <anonymous>:97:23)
    at Keyboard.keyDown (eval at <anonymous> (1.js:37), <anonymous>:179:30)
    at KeyboardElement.eval (eval at <anonymous> (1.js:37), <anonymous>:94:10)
    at KeyboardElement.EventEmitter.emit (eval at <anonymous> (1.js:31), <anonymous>:81:17)
    at HTMLDivElement.eval (eval at <anonymous> (1.js:55), <anonymous>:146:12)
sendKeyDown @ Magenta.js?3a80:71
keyDown @ Keyboard.js?3167:125
(anonymous) @ Keyboard.js?3167:60
EventEmitter.emit @ events.js?7c71:81
(anonymous) @ Element.js?74ca:111
Magenta.js?3a80:75 Uncaught TypeError: Cannot read property 'stopNote' of undefined
    at MagentaInstance.sendKeyUp (eval at <anonymous> (1.js:25), <anonymous>:102:23)
    at Keyboard.keyUp (eval at <anonymous> (1.js:37), <anonymous>:201:30)
    at KeyboardElement.eval (eval at <anonymous> (1.js:37), <anonymous>:98:10)
    at KeyboardElement.EventEmitter.emit (eval at <anonymous> (1.js:31), <anonymous>:81:17)
    at HTMLDivElement.eval (eval at <anonymous> (1.js:55), <anonymous>:152:12)

magenta_out in ai-jam-js does not output midi events

Using Python 2 and the ai-jam-js project with Ableton Live.
The call and response work perfectly and there is audible sound, but I have an Ableton Live MIDI track listening on magenta_out and there are no MIDI events on magenta_out.

From the README.md on ai-jam-js, "Both of these instances output responses to the "magenta_out" port, which the browser is listening to for playback"

There is audible playback but no MIDI events on magenta_out, so I'm not sure what's happening. Any ideas?
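
One sanity check that takes Ableton out of the loop entirely: listen on the port directly with mido, the library Magenta's MIDI interface is built on. If note messages print here, generation is fine and the problem is on the Ableton routing side; if nothing prints, Magenta really isn't emitting. A sketch, assuming mido is importable in the same environment:

import mido

# Confirm the exact port name first; virtual ports are often suffixed.
names = mido.get_input_names()
print(names)

port_name = next(n for n in names if 'magenta_out' in n)
with mido.open_input(port_name) as port:
    for msg in port:        # blocks, printing every incoming message
        print(msg)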

NSynth generate.py "TypeError: 'float' object cannot be interpreted as an integer" with numpy 1.18.4

Hi, when using numpy 1.18.4 I'm getting the following error when trying to run the generate.py script for NSynth:

Interpolating embeddings between instruments at each pitch...
Traceback (most recent call last):
  File "~/Coding/nsynth/VIRTUAL/lib/python3.7/site-packages/numpy/core/function_base.py", line 117, in linspace
    num = operator.index(num)
TypeError: 'float' object cannot be interpreted as an integer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "generate.py", line 279, in <module>
    interpolate_embeddings()
  File "generate.py", line 80, in interpolate_embeddings
    x, y = np.meshgrid(np.linspace(0, grid_size, res+1), np.linspace(0, grid_size, res+1))
  File "<__array_function__ internals>", line 6, in linspace
  File "~/Coding/nsynth/VIRTUAL/lib/python3.7/site-packages/numpy/core/function_base.py", line 121, in linspace
    .format(type(num)))
TypeError: object of type <class 'float'> cannot be safely interpreted as an integer.

This is my settings.json:

{
        "instruments": [
                ["harp","guitar"],
                ["flute", "trumpet"]
        ],
        "checkpoint_dir":"~/Downloads/wavenet-ckpt",
        "pitches":      [24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68],
        "resolution":              9,
        "final_length":            64000,
        "gpus":                    1,
        "batch_size_embeddings":   32,
        "batch_size_generate":     256,
        "name":         "test_1"
}

I'm using:
python 3.7.6
numpy 1.18.4
tensorflow 1.15.3
magenta 1.3.1
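
For context, NumPy 1.18 turned a long-standing deprecation into a hard error: the num argument of np.linspace must now be an integer. The traceback points at the res+1 argument on line 80 of generate.py, so res evidently arrives there as a float. A minimal local patch, assuming that call is the only offender (pinning numpy<1.18 should also sidestep it):

import numpy as np

grid_size, res = 3, 9.0   # illustrative; in generate.py res ends up a float

num = int(res) + 1        # linspace's num must be an int on NumPy >= 1.18
x, y = np.meshgrid(np.linspace(0, grid_size, num),
                   np.linspace(0, grid_size, num))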

Image Stylization notebook doesn't work on Colab

I'm getting this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-3b16e573062a> in <module>()
     47 DownloadCheckpointFiles()
     48 image = np.expand_dims(image_utils.load_np_image(
---> 49           os.path.expanduser(input_image)), 0)
     50 if demo == 'monet':
     51     checkpoint = 'checkpoints/multistyle-pastiche-generator-monet.ckpt'

1 frames
/usr/local/lib/python3.6/dist-packages/magenta/models/image_stylization/image_utils.py in load_np_image_uint8(image_file)
    407     f.write(tf.gfile.GFile(image_file, 'rb').read())
    408     f.flush()
--> 409     image = scipy.misc.imread(f.name)
    410     # Workaround for black-and-white images
    411     if image.ndim == 2:

AttributeError: module 'scipy.misc' has no attribute 'imread'
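
The cause here is that scipy.misc.imread was deprecated in SciPy 1.0 and removed in later releases, and current Colab runtimes ship a SciPy without it. Until the call inside image_utils.py is updated, two workarounds, offered as sketches rather than official fixes:

# Option 1: in the first notebook cell, pin a SciPy old enough to still
# carry imread (plus Pillow, which it needs), then restart the runtime:
#   !pip install scipy==1.1.0 pillow

# Option 2: shim the missing attribute with imageio, which is
# preinstalled on Colab; this works because image_utils passes only a
# filename to imread.
import imageio
import scipy.misc
scipy.misc.imread = imageio.imread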

Google Colaboratory Notebook Examples Broken

Unfortunately, the Google Colaboratory notebooks (.ipynb) are no longer functional! The .ipynb versions from both the website (Magenta.tensorflow.org/demos/colab/) and the git repository (GitHub.com/magenta/magenta) do not work at present. I have tried Hello Magenta, GANSynth and GrooVAE, and all of them hit multiple errors during the Magenta/TensorFlow configuration/setup phase.

Please assist! Help fix?

A.I. DUET's piano in localhost:8080 doesn't respond

Description

I followed the guide and built the front-end JavaScript code. Opening localhost:8080 in my browser loaded the A.I. DUET page, and I clicked the play button after it finished loading. But when I played (clicked) the piano notes shown on that page, it didn't respond at all.

As far as I can tell, after I click the piano the page may not be POSTing anything to the server to get the predict method's results. But that's all I know; even hitting F12 to watch the GET and POST events didn't tell me why.
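
One way to split the problem is to hit the server directly and leave the browser out of it: if a hand-rolled POST gets a MIDI response back, the Flask side is healthy and the bug is in the built JavaScript. A rough sketch in Python (the /predict route comes from the behavior described above; the 'midi' form-field name is an assumption, so check it against server.py):

import requests

# POST a short MIDI file straight to the local server, bypassing the UI.
with open('test.mid', 'rb') as f:
    resp = requests.post('http://localhost:8080/predict',
                         files={'midi': f})   # field name assumed
print(resp.status_code, resp.headers.get('Content-Type'),
      len(resp.content), 'bytes')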

In Firefox (screenshot omitted).

Here is what I tried.

How I installed Node.js and Python with magenta, tensorflow and Flask

  • I am using a Windows 10 64-bit operating system and installed Anaconda3-5.1.0-Windows-x86_64.exe.
  • I followed the guide on this page https://github.com/tensorflow/magenta-demos/tree/master/ai-duet, first installing magenta, tensorflow and Flask by typing pip install magenta, pip install --upgrade tensorflow and pip install Flask in my Windows PowerShell, and it went without problems.
  • Then I installed node-v9.10.0-x64.exe, and it was added to my environment path automatically.

in the server folder and the static folder

  • I changed directory to the server folder and typed python server.py, and it displayed the output below, which looks fine, indicating the server is up and listening on port 8080.

PS C:\Users\User\Desktop\deut\server> python server.py
C:\Users\User\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
WARNING:tensorflow:From C:\Users\User\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.

  • Then, in my static folder, I installed [email protected] (it turned out to install this version) by typing
    npm install -g webpack@^1.12.14
    and got this message:

C:\Users\User\AppData\Roaming\npm\webpack -> C:\Users\User\AppData\Roaming\npm\node_modules\webpack\bin\webpack.js
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules\webpack\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

+ [email protected]
updated 1 package in 21.784s

  • Then, by typing npm install, I got the message in the PowerShell below:

npm WARN deprecated [email protected]: Please use postcss-loader instead of autoprefixer-loader
npm WARN deprecated [email protected]: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
[email protected] install C:\Users\User\Desktop\deut\static\node_modules\node-sass
node scripts/install.js
Cached binary found at C:\Users\User\AppData\Roaming\npm-cache\node-sass\4.8.3\win32-x64-59_binding.node
[email protected] postinstall C:\Users\User\Desktop\deut\static\node_modules\node-sass
node scripts/build.js
Binary found at C:\Users\User\Desktop\deut\static\node_modules\node-sass\vendor\win32-x64-59\binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN [email protected] requires a peer of node-sass@^3.4.2 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] No description
npm WARN [email protected] No license field.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
added 527 packages in 59.881s

  • Finally, I typed webpack -p and got the message below:

Hash: b86b94b068323f83d855
Version: webpack 1.15.0
Time: 5203ms
          Asset     Size  Chunks             Chunk Names
./build/Main.js   232 kB       0  [emitted]  Main
   ./build/1.js   5.4 MB       1  [emitted]
    + 422 hidden modules
PS C:\Users\User\Desktop\deut\static> webpack -p
Hash: cbdb4cf43fc3cb5e9e36
Version: webpack 1.15.0
Time: 17017ms
          Asset     Size  Chunks             Chunk Names
   ./build/0.js   845 kB       0  [emitted]
./build/Main.js  78.1 kB       1  [emitted]  Main
    + 422 hidden modules
WARNING in ./build/0.js from UglifyJs
Dropping unused variable frustumSize [./src/roll/Roll.js:108,8]
Side effects in initialization of unused variable aspect [./src/roll/Roll.js:109,8]
Dropping unused function scale [./src/roll/Roll.js:25,9]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/notsupported.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/notsupported.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/notsupported.css:7,0]
Condition always true [./~/three/build/three.js:2,0]
Dropping unused variable edges [./~/three/build/three.js:17399,0]
Side effects in initialization of unused variable Tutorial [./src/ai/Tutorial.js:60,13]
Side effects in initialization of unused variable About [./src/interface/About.js:33,13]
Side effects in initialization of unused variable Note [./src/keyboard/Note.js:19,13]
Side effects in initialization of unused variable RollNote [./src/roll/RollNote.js:24,13]
Condition always true [/source/AudioKeys.js:15,0]
Condition always true [./~/buckets-js/dist/buckets.min.js:2,15]
Condition always true [./~/midiconvert/build/MidiConvert.js:1,15]
Condition always true [./~/pepjs/dist/pep.js:7,0]
Condition always true [./~/startaudiocontext/StartAudioContext.js:8,0]
Dropping unreachable code [./~/startaudiocontext/StartAudioContext.js:10,3]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/about.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/about.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/about.css:7,0]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/glow.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/glow.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/glow.css:7,0]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/keyboard.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/keyboard.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/keyboard.css:7,0]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/tutorial.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/tutorial.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/tutorial.css:7,0]
Condition always true [./~/webmidi/webmidi.min.js:31,1635]
WARNING in ./build/Main.js from UglifyJs
Side effects in initialization of unused variable main [./src/FeatureTest.js:28,9]
Condition always false [./~/style-loader/addStyles.js:24,0]
Dropping unreachable code [./~/style-loader/addStyles.js:25,0]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/splash.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/splash.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/splash.css:7,0]
Condition always true [./~/domready/ready.js:6,0]
Dropping unreachable code [./~/domready/ready.js:7,0]
Condition always false [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/main.css:10,0]
Dropping unreachable code [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/main.css:12,0]
Side effects in initialization of unused variable update [./~/style-loader!./~/css-loader!./~/autoprefixer-loader!./~/sass-loader!./style/main.css:7,0]

DEBUG:tensorflow:Control change 30: 127

Hello,

I've installed Magenta as instructed, and after the source activate magenta command I started the .sh file for the Ableton Live jam demo, but I'm getting this message continuously; the number varies once in a while.

Also, as far as I can see from the Max patch, it doesn't start 'listening'.

I haven't changed a thing in the Ableton file; I just routed my MIDI keyboard as the MIDI input for the drums track.

What might I be doing wrong?

Thanks.

DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 127
DEBUG:tensorflow:Control change 30: 126
DEBUG:tensorflow:Control change 30: 1
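
For what it's worth, the alternating 126/127 values suggest something upstream is transmitting control change 30 continuously (a jittery knob or pedal, or a feedback loop between the Max patch's ports and Magenta's). A quick way to see which input port the flood arrives on, sketched with mido (the MIDI library Magenta's interface uses):

import mido

# Open every input port and tag each message with its origin, to find
# out which device or virtual port is spamming control change 30.
ports = [mido.open_input(name) for name in mido.get_input_names()]
for port, msg in mido.ports.multi_receive(ports, yield_ports=True):
    if msg.type == 'control_change' and msg.control == 30:
        print(port.name, msg)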
