
python-midi's Issues

Calling Track.make_ticks_abs twice gives incorrect tick values

Calling Track.make_ticks_abs twice gives incorrect tick values; Track.make_ticks_rel shows similar problematic behavior.

I would expect the second call to either of these functions to do nothing, since the ticks are already absolute/relative.

This is the current behavior for Track t:

print t
[midi.NoteOnEvent(tick=0, channel=0, data=[45, 95]),
midi.NoteOffEvent(tick=960, channel=0, data=[45, 80]),
midi.NoteOnEvent(tick=0, channel=0, data=[57, 95]),
midi.NoteOffEvent(tick=480, channel=0, data=[57, 80])]
t.make_ticks_abs()
print t #correct
[midi.NoteOnEvent(tick=0, channel=0, data=[45, 95]),
midi.NoteOffEvent(tick=960, channel=0, data=[45, 80]),
midi.NoteOnEvent(tick=960, channel=0, data=[57, 95]),
midi.NoteOffEvent(tick=1440, channel=0, data=[57, 80])]
t.make_ticks_abs()
print t #incorrect
[midi.NoteOnEvent(tick=0, channel=0, data=[45, 95]),
midi.NoteOffEvent(tick=960, channel=0, data=[45, 80]),
midi.NoteOnEvent(tick=1920, channel=0, data=[57, 95]),
midi.NoteOffEvent(tick=3360, channel=0, data=[57, 80])]

Note off events of this MIDI file are not parsed

Only note-on events are read from this MIDI file: http://download.jzy3d.org/debug-deb_passMINp_align.mid

A third-party application (Ableton Live) loads the file properly, including the note-off events.

[Screenshot: the clip as displayed in Ableton Live, 2015-09-21]

When the MIDI clip is re-exported from Ableton, it can be read properly by python-midi (see the file as exported by Live: http://download.jzy3d.org/debug-poliner-debussy.mid).

To reproduce:
```
p = midi.read_midifile("../data/debug-deb_passMINp_align.mid")
print "tracks : " + repr(len(p))
print p[0]
print p[1][0:20]
```

This will output:

tracks : 2
midi.Track(
[midi.TimeSignatureEvent(tick=0, data=[4, 2, 24, 8]),
midi.SetTempoEvent(tick=0, data=[9, 39, 192]),
midi.EndOfTrackEvent(tick=0, data=[])])
midi.Track(
[midi.NoteOnEvent(tick=0, channel=0, data=[42, 38]),
midi.NoteOnEvent(tick=22, channel=0, data=[42, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[49, 32]),
midi.NoteOnEvent(tick=22, channel=0, data=[49, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[57, 34]),
midi.NoteOnEvent(tick=22, channel=0, data=[57, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[49, 33]),
midi.NoteOnEvent(tick=22, channel=0, data=[49, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[44, 42]),
midi.NoteOnEvent(tick=22, channel=0, data=[44, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[49, 34]),
midi.NoteOnEvent(tick=22, channel=0, data=[49, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[59, 38]),
midi.NoteOnEvent(tick=22, channel=0, data=[59, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[49, 36]),
midi.NoteOnEvent(tick=22, channel=0, data=[49, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[45, 44]),
midi.NoteOnEvent(tick=22, channel=0, data=[45, 0]),
midi.NoteOnEvent(tick=38, channel=0, data=[54, 40]),
midi.NoteOnEvent(tick=22, channel=0, data=[54, 0])])

The MIDI file is taken from the training set of the Poliner & Ellis dataset: http://labrosa.ee.columbia.edu/projects/piano/
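Looking at the dump above, the "missing" note-offs may in fact be present in disguise: every other NoteOnEvent has velocity 0 (data=[pitch, 0]), which the MIDI spec treats as equivalent to a note-off. A workaround sketch (my own, not a python-midi feature) that rebuilds such events as NoteOffEvents:

```
import midi

def normalize_note_offs(track):
    # Replace NoteOn events carrying velocity 0 with equivalent NoteOff
    # events; everything else is passed through unchanged.
    fixed = midi.Track()
    for event in track:
        if isinstance(event, midi.NoteOnEvent) and event.velocity == 0:
            event = midi.NoteOffEvent(tick=event.tick, channel=event.channel,
                                      pitch=event.pitch, velocity=0)
        fixed.append(event)
    return fixed
```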

Bug in Event.is_event() method?

Hi! I recently got a chance to try your code out, and it seems that the Event.is_event() method is unable to catch several valid MIDI events such as NoteOn and NoteOff. I believe this is due to a faulty mask in the bitwise AND operation with the statusmsg variable.

In line 97 of events.py it reads:
return (cls.statusmsg == (statusmsg & 0xF0))

Since you're comparing with a statusmsg variable of the parent class set to 0x0, it should be:
return (cls.statusmsg == (statusmsg & 0x0F)) (note the inverted mask: 0xF0 becomes 0x0F)

Without this modification, an event like NoteOn, which has a signature of 0x90, would never get caught, because 0x90 & 0xF0 equals 0x90 rather than 0x0, so the method returns False. In fact, a NoteOn event actually raises an 'Unknown MIDI Event' exception.
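For reference, a MIDI status byte splits into two nibbles, which is what the masks above are selecting; this is plain MIDI arithmetic, independent of python-midi:

```
status = 0x92                  # a NoteOn on channel 2 (0x90 | 0x02)
event_family = status & 0xF0   # 0x90 -> the event-type nibble
channel = status & 0x0F        # 0x02 -> the channel nibble
```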

Is that correct, or did I misunderstand the code?

Cheers,
Mauricio

Add the ability to find matching NoteOff event

From #56:

The easiest (but not the most efficient) method that comes to mind right now would be a helper function findMatchingNoteOff(track, noteOn) that scans track for the first NoteOffEvent with the same channel and pitch as noteOn, occurring after its absolute tick value.
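A rough sketch of that helper (my own, not part of the library; it assumes make_ticks_abs() has already been called so event ticks are absolute):

```
import midi

def find_matching_note_off(track, note_on):
    # Naive linear scan over a track whose ticks are absolute
    # (i.e. track.make_ticks_abs() was called beforehand).
    for event in track:
        if (isinstance(event, midi.NoteOffEvent)
                and event.channel == note_on.channel
                and event.pitch == note_on.pitch
                and event.tick > note_on.tick):
            return event
    return None
```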

I'm opening an issue for this since it feels like having this in the library would be useful to others. PR to follow soon.

Question: how to set a note to "Minus C2"

Apparently virtual keyboards go down to "minus c2". How would I specify a note with this pitch in python-midi?

Conversation snippet with the composer I'm working with:

Minus C2 is the lowest note. The virtual MIDI keyboards (not real ones) go all the way down to -C2. A0 is the lowest note on a piano, although I think now that is -A1.

C2 is too high- that's one octave below Middle C these days.

You are starting 4 octaves too high.
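For what it's worth, "minus C2" usually refers to MIDI note number 0 under the octave-numbering convention in which middle C (note 60) is written C3. A small sketch of that mapping (plain arithmetic; note that python-midi's own constants use a different octave offset, as a later issue points out):

```
NOTE_OFFSETS = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def note_number(name, octave):
    # Assumed convention: the lowest octave is -2, so C-2 == 0 and
    # middle C == C3 == 60.  Other tools label the same keys one octave higher.
    return NOTE_OFFSETS[name] + (octave + 2) * 12

lowest = note_number('C', -2)    # 0  -> the "minus C2" described above
middle_c = note_number('C', 3)   # 60
```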

test_mary will always pass!

I think you were the victim of a cut-and-paste bug. Lines 22-23 of tests.py read:

event1 = pattern1[track_idx][event_idx]
event2 = pattern1[track_idx][event_idx]

but it should be

event1 = pattern1[track_idx][event_idx]
event2 = pattern2[track_idx][event_idx]

The way you have it, event1 == event2, so the test will always pass.

Btw, thanks for this most excellent work. It has been very useful to me.

Question: dump MIDI commands in realtime

How can I output the raw MIDI commands to stdout?
Use case: I am implementing a little toy device that understands note-on, note-off and all-off commands, but I only have a serial port to communicate with it. I want to send the raw commands with the correct timing to make it play.
Thanks in advance for your help
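One rough starting point, sketched with several assumptions: the file name is a placeholder, the tempo is fixed rather than read from SetTempoEvent, and the hex printout stands in for a serial-port write:

```
import time
import midi

pattern = midi.read_midifile("song.mid")      # placeholder path
resolution = pattern.resolution               # ticks per quarter note
bpm = 120.0                                   # assumed fixed tempo
seconds_per_tick = 60.0 / (bpm * resolution)

for event in pattern[0]:
    time.sleep(event.tick * seconds_per_tick)            # ticks are relative here
    if isinstance(event, (midi.NoteOnEvent, midi.NoteOffEvent)):
        status = event.statusmsg | event.channel          # e.g. 0x90 | channel
        raw = bytearray([status] + event.data)            # status, pitch, velocity
        print(" ".join("%02X" % b for b in raw))          # replace with a serial write
```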

class AfterTouchEvent extended with pitch and value functions

Hi all,

Thanks for developing this wonderful Python module! I've extended the AfterTouchEvent class with new get_pitch, set_pitch, get_value and set_value functions in events.py:

events.py:

class AfterTouchEvent(Event):
    __slots__ = ['pitch', 'value']
    statusmsg = 0xA0
    length = 2
    name = 'After Touch'

    def get_pitch(self):
        return self.data[0]
    def set_pitch(self, val):
        self.data[0] = val
    pitch = property(get_pitch, set_pitch)

    def get_value(self):
        return self.data[1]
    def set_value(self, val):
        self.data[1] = val
    value = property(get_value, set_value)

Now I can write in an application:

if event.name == "After Touch":
    print "Channel %d" % (event.channel + 1)
    print "Note: %d" % event.get_pitch()
    print "Value: %d" % event.get_value()

Can someone review and commit this code?
Thanks!

Documentation??

Am I just missing documentation somewhere? It's almost impossible to do anything not described in your example without it. For example, I'm trying to figure out how to use SetTempoEvent, but I can't tell what any of the arguments are supposed to be. Help!
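For what it's worth, here is what I have pieced together from the source and from dumps in other issues; treat it as an untested sketch rather than official documentation. The event's three data bytes hold microseconds per quarter note, big-endian, and there appears to be a bpm property that fills them in:

```
import midi

tempo = midi.SetTempoEvent(tick=0)
tempo.bpm = 120          # assumed property; should yield data=[0x07, 0xA1, 0x20]

# Equivalently, the bytes can be written directly:
mpqn = int(60e6 / 120)   # 500000 microseconds per quarter note
tempo.data = [(mpqn >> 16) & 0xFF, (mpqn >> 8) & 0xFF, mpqn & 0xFF]
```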

Octave is incorrect for constants

If you run the following, you would expect a result of 60 (the correct MIDI value for C4); however, it actually returns the value for C3.

>>> import midi
>>> midi.C_4
48

This seems to be a fault in how the globals are generated in constants.py, which causes every note-name constant to be shifted down one octave.

SetTempoEvent data interpretation?

I am confused about how to make use of the SetTempoEvent data; I can't determine how the three values listed in each event relate to the tempo. For example, with a pattern resolution of 384, an event where the tempo should be set to 60 bpm (and therefore approximately 2604 microseconds per tick) has data=[15, 66, 64]. What correlation do those numbers have to the tempo?
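The three data bytes are the tempo itself: a 24-bit big-endian count of microseconds per quarter note. Checking the numbers from the example (plain arithmetic, nothing python-midi-specific):

```
data = [15, 66, 64]
mpqn = (data[0] << 16) | (data[1] << 8) | data[2]   # microseconds per quarter note
# mpqn == 1000000 -> one second per quarter note, i.e. 60 bpm
bpm = 60e6 / mpqn                                   # 60.0
microseconds_per_tick = float(mpqn) / 384           # ~2604.2 at resolution 384
```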

Question on how to change the pitch of certain notes (per-track)

I'm new to midi, so excuse the n00b question.

Here's a snippet of the actual algorithm I'm trying to implement:

Each measure of four beats represents a decade in time, starting from the year 1300A.D. and ending at 2015.

We used a simple algorithm in which a rise or fall in the value of each climate variable is matched by a rise or fall of equal magnitude (measured as a percentage) in the pitch of the corresponding musical voice. The reference values used in this model are the pre-industrial (1300-1760) averages for temperature and CO2 concentrations. So, for example, if the temperature value rises by 10% against the reference value over a decade, then the pitch of the trumpet will increase by 10% for the four beats of the corresponding measure in the score.

My current plan is to:

  • Read an input midi file
  • For each track
    • Create a new "modified" track that corresponds to this track
    • Read every note in the track
    • If the criteria above requires that the note must be changed:
      • Modify the note by changing the pitch a half-step up or down (are there any helper functions for doing this?)
      • Otherwise, copy the note over to the modified track as-is
  • Write the modified tracks to an output midi file

Do you have any examples of iterating through all notes in a track? If not, can you at least point me to the function/method to call? What I had in mind was basically something similar to the mxm python-midi example
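To make the question concrete, here is roughly the loop I have in mind; it is only a sketch, with placeholder file names and a placeholder should_shift() rule, and it edits events in place rather than building the separate "modified" track described above:

```
import midi

def should_shift(event):
    return True                                    # placeholder for the climate rule

pattern = midi.read_midifile("input.mid")          # placeholder path
for track in pattern:
    for event in track:
        if isinstance(event, (midi.NoteOnEvent, midi.NoteOffEvent)):
            if should_shift(event):
                event.pitch += 1                   # half step up (use -1 for down)
midi.write_midifile("output.mid", pattern)         # placeholder path
```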

As far as grouping the notes into groups of 4 beats, is there anything provided by the library to make this easier? What I mean is: is there a "wrapper event" that invokes a callback after X number of beats and passes along all the notes in those beats? Or would I have to handle that on my own?

By the way, this is for an art project to raise awareness about Climate Change.

How to copy Event object?

I want to extract part of a MIDI file and save it to a new file.

I am trying to copy events from one pattern to another, but it seems deepcopy and copy aren't supported.

python-rtmidi integration

It would be useful to be able to send midi events encoded as midi messages into python-rtmidi output ports, and to parse midi messages received from python-rtmidi input ports as midi events.
It would be especially useful on the Windows platform, where the sequencer module isn't supported.

What do you think?

Support for MMC or CC

If I understand correctly, there isn't currently support for CC or MMC messages. Is that correct, and are there any plans to add that yet?

Which license?

This looks really handy and I'd like to package it for the NixOS distro. I just wonder what license to put down for it. Thanks!

mididump.py does not load entire file

I can't seem to get midiplay.py to play the whole file. It stops 45 seconds in. mididump.py has the same issue: it doesn't load my file completely before dumping it.

My file is about 1min 30secs.

How to get the start time and duration of NoteEvent?

Hi vishnubob, I'm trying to use this library to parse MIDI files in my project, but when I run this example I only get NoteOnEvents. Where are the NoteOffEvents? And can you show me how to get the start time and duration of a NoteEvent?

thanks!
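A sketch of one way to get start times and durations in ticks (my own approach, not a library feature; it assumes note-offs arrive either as NoteOffEvent or as NoteOn with velocity 0, as discussed in another issue above):

```
import midi

pattern = midi.read_midifile("example.mid")        # placeholder path
track = pattern[0]                                 # pick the track with the notes
track.make_ticks_abs()                             # ticks become absolute

active = {}                                        # (channel, pitch) -> start tick
for event in track:
    if isinstance(event, midi.NoteOnEvent) and event.velocity > 0:
        active[(event.channel, event.pitch)] = event.tick
    elif isinstance(event, (midi.NoteOffEvent, midi.NoteOnEvent)):
        start = active.pop((event.channel, event.pitch), None)
        if start is not None:
            duration = event.tick - start          # duration in ticks
```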

Installation on Linux has unnecessary swig dependency for Python-only functionality

When I attempt to do a pip install on Ubuntu, there's an error because I don't have swig installed. So then I do sudo apt-get install swig, and try again, and it works.

However, my application is not actually using those features of the library that require swig.

So possibly it would make sense to split this library into two parts: the first with the pure Python functionality, and the second with the Alsa sequencer and the swig dependency.

If this issue is fixed, along with my previous issue (PyPI downloadability), then this library would get closer to the "it just works" stage, with the Python 3 issue perhaps being the only other problem.

Repo lacks versioning

The repository, as it currently stands, has no proper versioning system. This makes it hard for packagers (such as myself; I am packaging this for Arch Linux) to package "stable" versions of the code, because it's hard to know which commits are stable and which aren't. Therefore, I ask that you tag your releases with a specific version number to make releases easier.

Best way to deepcopy a track

I'm trying to get a deep copy of a track via:

copy.deepcopy(track)

but I'm getting this error:

  File "/Users/tleyden/Development/climate_music/src/transformer.py", line 52, in transform
    track_copy = copy.deepcopy(track)
  File "/Users/tleyden/DevLibraries/anaconda/lib/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/Users/tleyden/DevLibraries/anaconda/lib/python2.7/copy.py", line 351, in _reconstruct
    item = deepcopy(item, memo)
  File "/Users/tleyden/DevLibraries/anaconda/lib/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/Users/tleyden/DevLibraries/anaconda/lib/python2.7/copy.py", line 346, in _reconstruct
    setattr(y, key, value)
  File "/Users/tleyden/DevLibraries/anaconda/lib/python2.7/site-packages/midi/events.py", line 125, in set_velocity
    self.data[1] = val
AttributeError: data

As an alternative, I'm writing my own kludge and just propagating the events I care about:

def copy_track(track):
    """
    copy.deepcopy() didn't work, so I hand rolled this as a workaround
    """
    track_copy = midi.Track()
    for event in track:
        if isinstance(event, midi.NoteOnEvent):
            on = midi.NoteOnEvent(tick=event.tick, velocity=event.velocity, pitch=event.pitch)
            track_copy.append(on)
        if isinstance(event, midi.NoteOffEvent):
            off = midi.NoteOffEvent(tick=event.tick, velocity=event.velocity, pitch=event.pitch)
            track_copy.append(off)
    return track_copy

If it's not feasible to make copy.deepcopy() work, is there a more elegant way of rolling my own?
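One somewhat more general hand-rolled approach, sketched (not a library feature): rebuild each event from its public fields, copying the channel only when the event has one, so that meta events pass through as well:

```
import midi

def copy_event(event):
    kwargs = dict(tick=event.tick, data=list(event.data))
    if hasattr(event, 'channel'):                  # meta events carry no channel
        kwargs['channel'] = event.channel
    return type(event)(**kwargs)

def copy_track(track):
    track_copy = midi.Track()
    for event in track:
        track_copy.append(copy_event(event))
    return track_copy
```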

NoteOn events almost always get tick=0

This is a great library, but I think I found an issue:
When parsing MIDI files, most (if not all) NoteOn events get a tick value of 0.
This is not correct: the tick should follow the timeline and increase.
Can this be fixed?

example:
midi.ControlChangeEvent(tick=3072, channel=1, data=[7, 75]),
midi.NoteOnEvent(tick=0, channel=1, data=[78, 127]),
midi.NoteOffEvent(tick=384, channel=1, data=[78, 64]),
midi.NoteOnEvent(tick=0, channel=1, data=[76, 127]),
midi.NoteOffEvent(tick=384, channel=1, data=[76, 64]),
midi.NoteOnEvent(tick=0, channel=1, data=[74, 127]),
midi.NoteOffEvent(tick=384, channel=1, data=[74, 64]),
midi.NoteOnEvent(tick=0, channel=1, data=[73, 127]),

a comparable perl-based parser (midicsv.exe) does it right:
2, 3072, Control_c, 1, 7, 75
2, 3072, Note_on_c, 1, 78, 127
2, 3456, Note_off_c, 1, 78, 64
2, 3456, Note_on_c, 1, 76, 127
2, 3840, Note_off_c, 1, 76, 64
2, 3840, Note_on_c, 1, 74, 127
2, 4224, Note_off_c, 1, 74, 64
2, 4224, Note_on_c, 1, 73, 127
2, 4608, Note_off_c, 1, 73, 64
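The difference from midicsv appears to be relative versus absolute ticks: python-midi reports each event's tick as the gap since the previous event, so tick=0 means "at the same time as the event before it". Converting to absolute ticks reproduces the midicsv numbers; a quick check (placeholder path, track index as in the dump above):

```
import midi

pattern = midi.read_midifile("song.mid")   # placeholder path
track = pattern[1]                         # the track shown above
track.make_ticks_abs()                     # ticks become cumulative, so the NoteOn
                                           # following the ControlChange at 3072
                                           # now reports tick=3072 as well
```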

Question: converting from ticks to milliseconds

Hi,
I'm having some trouble using the framework to do what I want:
I read a MIDI file and convert the ticks to absolute values, but now I want to convert those ticks to milliseconds.

The documentation is confusing me: in several places it mentions the importance of a good time model and says that conversions taking tempo and resolution into account are handled by the framework, but I don't see how to get the millisecond value of an event at all. As for calculating it with the formula, it's not clear to me how to use the TempoMap or EventStreamIterator classes to track changing tempos. What are the streams they expect as a parameter? And what window length does the EventStreamIterator require?

So the question is: how can I get a millisecond time value (relative to the start of the piece) for an event?
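For the single-tempo case the arithmetic is straightforward; here is a sketch that assumes one fixed tempo for the whole file (so it ignores any later SetTempoEvents, which would need a tempo map):

```
import midi

pattern = midi.read_midifile("piece.mid")   # placeholder path
resolution = pattern.resolution             # ticks per quarter note
mpqn = 500000                               # microseconds per quarter note; read this
                                            # from the SetTempoEvent data bytes
                                            # (500000 corresponds to 120 bpm)

def tick_to_ms(abs_tick):
    # abs_tick is an absolute tick, e.g. after track.make_ticks_abs()
    return abs_tick * (mpqn / 1000.0) / resolution
```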

Multiple connections

Hi,

First of all, thanks for this nice Python library. I'm currently using it to write a tool that integrates my monome (http://monome.org/) as a MIDI interface in my live-looping setup. It is starting to work :) . During development I came across the following issues:

  • It does not seem possible to connect one output port of a sequencer to multiple input ports. Using JACK, the ALSA connections are shown, but MIDI events are only sent to the last port that was connected.
  • I noticed that I cannot set up connections via JACK (or, I guess, aconnect); these need to be made via the Python objects. Is there a workaround for that?
  • Can I make multiple input/output ports?

best regards :)
Alex

Unknown Meta MIDI Event

In my file I have some weird MIDI events. How do I get the parser to just skip the incorrect events?

My MIDI editor can read the file just fine, so I know it can still be read.

Missing files?

When I run sudo python setup.py install it fails with the error:
swig -python -o src/sequencer_alsa/sequencer_alsa_wrap.c src/sequencer_alsa/sequencer_alsa.i
Unable to execute swig: No such file or directory.

I checked the src/sequencer_alsa/ directory and there is no sequencer_alsa_wrap.c, but there is a sequencer_alsa.i

Note Values Mapping

Dear Vishnu,

First of all, thanks for this fantastic library. It's really good, and altogether different from the rest of the open-source stuff out there.

I am trying to capture, to a certain degree, the mapping of MIDI note notations to their numerical values. I see that the mappings are already there within the package; however, what I would precisely like to know is what alphabetical notation you are using to map those numerical values (for instance, what's the difference between Bb_1, Bs_1 and B_1). My interpretation is that B_1, B_2, ..., B_N correspond to the 'B' in the Nth octave. What I am having trouble interpreting is what Bb_1 and Bs_1 are.

Hope you can help me out in this.

Thanks.

Saif

setup.py not working, missing characters.

I was trying to install the module, as I plan to do some work with MIDI files and it seemed useful, but I couldn't get setup.py to install. For some reason, in the version I downloaded at least, the brackets around the print statement on line 65 were missing, and so it wouldn't work. When I added them back in, it worked fine.

the readme examples don't work

import midi

t = midi.new_stream(resolution=120, tempo=200)

AttributeError                            Traceback (most recent call last)
in <module>()
----> 1 t = midi.new_stream(resolution=120, tempo=200)

AttributeError: 'module' object has no attribute 'new_stream'

Question : Converting timestamps to ticks

Hi, I am trying to convert mic input into a monophonic MIDI file. I have a script that does the pitch tracking for me. I ran it on a WAV file and it gave me a dictionary of timestamps and pitches (which I converted into MIDI note numbers). How should I go about creating a MIDI file from this?
I looked up the formula for converting time in seconds to ticks:
ticks = (timestamp * bpm) / 60
It looks like the tick value should always increase with time, but I found that the ticks sometimes decrease and sometimes increase for successive NoteOnEvents.
Could you help me out please?
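Two things that may explain the confusion (hedged, since I can't see the script): the formula above gives beats rather than ticks, so it still needs to be multiplied by the file's resolution, and events in a track normally carry relative ticks (the gap since the previous event), which is why raw tick values can go up and down. A sketch of building a monophonic track under those assumptions:

```
import midi

BPM = 120.0                              # assumed tempo for the output file
RESOLUTION = 480                         # ticks per quarter note, chosen arbitrarily

def seconds_to_ticks(seconds):
    return int(round(seconds * BPM / 60.0 * RESOLUTION))

def build_track(notes):
    # notes: list of (start_seconds, duration_seconds, midi_pitch), sorted by start
    track = midi.Track()
    previous_tick = 0
    for start, duration, pitch in notes:
        on_tick = seconds_to_ticks(start)
        off_tick = seconds_to_ticks(start + duration)
        # relative ticks: each event stores the gap since the previous event
        track.append(midi.NoteOnEvent(tick=on_tick - previous_tick,
                                      velocity=90, pitch=pitch))
        track.append(midi.NoteOffEvent(tick=off_tick - on_tick,
                                       velocity=0, pitch=pitch))
        previous_tick = off_tick
    track.append(midi.EndOfTrackEvent(tick=1))
    return track

my_notes = [(0.0, 0.5, 60), (0.5, 0.5, 64)]     # example (start_s, duration_s, pitch) data
pattern = midi.Pattern(resolution=RESOLUTION)   # assuming Pattern takes a resolution kwarg
pattern.append(build_track(my_notes))
midi.write_midifile("mic_capture.mid", pattern) # placeholder path
```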

midilisten and midiplay examples nonfunctional, undocumented

My best guesses about how to actually use the example scripts fail and suggest no solution. For example:

[russell@peregrin scripts (master) ]$ ./midiplay 0 0 ../mary.mid
Traceback (most recent call last):
  File "./midiplay", line 27, in <module>
    seq.subscribe_port(client, port)
  File "/home/russell/opt/lib/python2.7/site-packages/midi/sequencer/sequencer.py", line 430, in subscribe_port
    self._subscribe_port(subscribe)
  File "/home/russell/opt/lib/python2.7/site-packages/midi/sequencer/sequencer.py", line 115, in _subscribe_port
    if err < 0: self._error(err)
  File "/home/russell/opt/lib/python2.7/site-packages/midi/sequencer/sequencer.py", line 74, in _error
    raise RuntimeError, msg
RuntimeError: ALSAError[-1]: Operation not permitted

Now what?

Generated MIDI file doesn't start playing at the beginning of file.

Perhaps I'm overlooking some details, but I wanted to see if anyone knew of this. When I generate MIDI files using this API, I'm noticing that the MIDI files don't begin playing until approx. half a second into the file. Is there a fix or workaround for this particular problem? Is this a bug, or am I misunderstanding the usage of the library?

I'm appending NoteOn events with a tick of 0 while the files are set to tick_relative = true. I'm going to try making the files using an absolute timing and see if that solves my issues.

I'm playing the files using Quicktime Player 7 and Max. I'm going to see if other programs reproduce the problem as well.

Installation problems with pip

Hi,

it seems that there are some problems when I try to install the midi package via pip.

Here is the output:

$ pip install midi
Collecting midi
  Could not find a version that satisfies the requirement midi (from versions: )
  Some externally hosted files were ignored as access to them may be unreliable (use --allow-external midi to allow).
No matching distribution found for midi

Interestingly, doing a pip search midi does list the package.

When I clone the repo and run pip install -e . from inside the directory, the installation finishes. However, when I install it like this I cannot import midi (the scripts/ commands also fail for the same reason).

$ python
>>> import midi
ImportError: No module named midi

or the script

$ mididump.py data/confuta.mid 
Traceback (most recent call last):
  File "/usr/local/bin/mididump.py", line 6, in <module>
    exec(compile(open(__file__).read(), __file__, 'exec'))
  File "/home/ckoerner/software/python-midi/scripts/mididump.py", line 5, in <module>
    import midi
ImportError: No module named midi

When I install the package the recommended way (python setup.py install), everything works fine. However, I would really like to put the package into a requirements.txt file and install it with pip.

Best,
Christoph

Question : Is there a way I can extract pitches from midi?

I want to perform some mathematical operations on the pitches of a MIDI file. Is there a way I can extract the pitches from a MIDI file, store them in a list, perform some operations, and write them back to their corresponding events?
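A minimal sketch of that round trip (placeholder paths; it relies on the pitch property that note events expose, as seen in other issues):

```
import midi

pattern = midi.read_midifile("input.mid")                 # placeholder path
note_events = [event for track in pattern for event in track
               if isinstance(event, (midi.NoteOnEvent, midi.NoteOffEvent))]

pitches = [event.pitch for event in note_events]
pitches = [min(127, p + 12) for p in pitches]             # example operation: up an octave

for event, pitch in zip(note_events, pitches):
    event.pitch = pitch
midi.write_midifile("output.mid", pattern)                # placeholder path
```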

Questions on mididump.py

I downloaded this America the Beautiful.midi, and ran:

mididump.py america_the_beautiful_three_instruments.midi

which gave me this output

and I had a few questions:

  1. In this line: midi.NoteOnEvent(tick=720, channel=0, data=[62, 126]), what do the list elements in the data field represent?
  2. Why are there duplicate events? this event seems identical to this event
  3. Why aren't the events sorted by tick?
  4. In Side Note: What is a MIDI Tick?, you mention:

First, a saved MIDI file encodes an initial Resolution and Tempo

In the dumped file, I'm only seeing a resolution:

midi.Pattern(format=1, resolution=240, tracks=\

Where should I look for the Tempo?
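On the tempo question: if no SetTempoEvent appears in the dump, the file simply doesn't set a tempo, and players fall back to the MIDI default of 500000 microseconds per quarter note (120 bpm). (On question 1, the data pair of a NoteOnEvent is [pitch, velocity].) A quick way to check for tempo events, sketched with the file name from the question:

```
import midi

pattern = midi.read_midifile("america_the_beautiful_three_instruments.midi")
tempo_events = [event for track in pattern for event in track
                if isinstance(event, midi.SetTempoEvent)]
if not tempo_events:
    print("no SetTempoEvent; players assume the default of 120 bpm")
```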

Question: how to show the current beat in realtime

I want to create something which can show the current beat while a song is playing back. It doesn't have to play back the actual song, but it needs to be synchronized with the song playback. For example, if I start the program and the song at the same time, the program would show the current beat in the song playing back.

It seems like it could be done with the sequencer, although some guidance or examples would be helpful. According to the README, only an ALSA sequencer is working, but I did see some code for an OSX sequencer -- is that known to work?

Also, does it seem feasible without a sequencer so I could avoid messing around with drivers?

Python 3

Any plans on Python 3 compatibility?

Is README out of date?

I'm trying to use the examples in the README, but calling the new_stream function gives

AttributeError: 'module' object has no attribute 'new_stream'

Looking through the tests and the source, it looks like new_stream has been replaced by the Pattern class?
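For anyone landing here, the Pattern/Track-based construction used by the tests looks roughly like this; it is a sketch assembled from the constructors seen elsewhere in these issues, not copied from official docs:

```
import midi

pattern = midi.Pattern(resolution=120)              # replaces new_stream(resolution=...)
track = midi.Track()
pattern.append(track)

track.append(midi.SetTempoEvent(tick=0, bpm=200))   # assuming the bpm property
track.append(midi.NoteOnEvent(tick=0, velocity=90, pitch=60))
track.append(midi.NoteOffEvent(tick=120, velocity=0, pitch=60))
track.append(midi.EndOfTrackEvent(tick=1))

midi.write_midifile("example.mid", pattern)
```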
