
musicpy

中文 (Chinese version of this README)

Have you ever thought about writing music with code in a very concise, human-readable syntax?

Musicpy is a music programming language in Python designed for writing music in a very handy syntax through music theory and algorithms. It is easy to learn and write, easy to read, and incorporates a fully computerized music theory system.

Musicpy can do way more than just writing music. This package can also be used to analyze music through music theory logic, and you can design algorithms to explore the endless possibilities of music, all with musicpy.

With musicpy, you can express the notes, chords, melodies, rhythms, volumes and other information of a piece of music in a very concise syntax. It can generate music through music theory logic and perform advanced music theory operations. You can easily output musicpy code to a MIDI file, and just as easily load any MIDI file and convert it to musicpy's data structures for advanced music theory operations. The syntax of musicpy is very concise and flexible, which makes musicpy code very human-readable, and musicpy is fully compatible with Python, so you can write Python code that interacts with musicpy. Because musicpy touches everything in music theory, I recommend learning at least some fundamentals of music theory before using this package, so you can use musicpy more clearly and comfortably. On the other hand, if you are already familiar with music theory, you should be able to play around with it after a look at the documentation I wrote.

Documentation

See musicpy wiki or Read the Docs documentation for complete and detailed tutorials about syntax, data structures and usages of musicpy.

This wiki is updated frequently, since new functions and abilities are added to musicpy regularly. The syntax and abilities described in the wiki are synchronized with the latest released version of musicpy.

You can click here to download the entire wiki of musicpy, which I maintain in PDF and markdown format and update continuously.

Installation

Make sure you have Python (version >= 3.7) installed on your PC first. Run the following line in the terminal to install musicpy with pip.

pip install musicpy

Note 1: On Linux, you need to make sure the installed pygame version is older than 2.0.3, otherwise the play function of musicpy won't work properly; this is due to an existing bug in newer versions of pygame. You can run pip install pygame==2.0.2 in the terminal to install pygame 2.0.2 or any version older than 2.0.3. You also need to install freepats to make the play function work on Linux; on Ubuntu you can run sudo apt-get install freepats.

Note 2: If you cannot hear any sound when running the play function, it is because some IDEs won't wait until pygame's playback ends; they stop the whole process as soon as all of the code has executed, without waiting for the playback. You can set wait=True in the parameters of the play function, which blocks the function until the playback ends, so you can hear the sounds.
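
For example, a minimal sketch using the same chord syntax as the composition example below:

from musicpy import *

play(C('CM7'), bpm=100, wait=True)  # wait=True blocks until playback finishes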

In addition, I also wrote a musicpy editor for writing and compiling musicpy code more easily than in a regular Python IDE, with real-time automatic compilation and execution. It has some syntactic sugar, and you can listen to the music generated from your musicpy code on the fly, which makes it more convenient and interactive. I strongly recommend using this editor to write musicpy code. You can download it from the repository musicpy_editor; the preparation steps are in its README.

Musicpy is compatible with Windows, macOS and Linux.

Musicpy now also supports reading and writing musicxml files; note that you need to install partitura (pip install partitura) to use this functionality.

Importing

Place this line at the start of each file in which you want to use musicpy:

from musicpy import *

or

import musicpy as mp

to avoid possible conflicts with the function names and variable names of other modules.
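
For example, with the prefixed import, the same names are reached through mp (as in several of the issue examples later on this page):

import musicpy as mp

chord1 = mp.C('CM7')      # same chord constructor, behind the mp prefix
mp.play(chord1, bpm=100)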

Composition Examples

Because musicpy has too many features to introduce here, I will just give a simple example of music programming in musicpy:

# a nylon string guitar plays broken chords on a chord progression

guitar = (C('CM7', 3, 1/4, 1/8)^2 |
          C('G7sus', 2, 1/4, 1/8)^2 |
          C('A7sus', 2, 1/4, 1/8)^2 |
          C('Em7', 2, 1/4, 1/8)^2 | 
          C('FM7', 2, 1/4, 1/8)^2 |
          C('CM7', 3, 1/4, 1/8)@1 |
          C('AbM7', 2, 1/4, 1/8)^2 |
          C('G7sus', 2, 1/4, 1/8)^2) * 2

play(guitar, bpm=100, instrument=25)

Click here to hear what this sounds like (Microsoft GS Wavetable Synth)

If you think this is too simple, musicpy can also produce music like this within 30 lines of code (it could be even shorter if you don't care about readability). Anyway, this is just an example of a very short piece of electronic dance music, not a showcase of complexity.

For more musicpy composition examples, please refer to the musicpy composition examples chapters in wiki.

Brief Introduction of Data Structures

note, chord and scale are the basic classes in musicpy that build up the foundation of music programming, and there are many more musical classes in musicpy.

Because of musicpy's data structure design, the note class is congruent to the integers, which means a note can be used directly as an int.

The chord class is a set of notes, which means it can be seen as a set of integers, a vector, or even a matrix (e.g. a chord progression can be seen as a combination of multiple vectors, which results in a matrix with indexed rows and columns).

Because of that, the note, chord and scale classes can all be used arithmetically in calculations, for example with linear algebra and discrete mathematics. It is also possible to write an algorithm following music theory logic using musicpy's data structures, or to perform experiments on music with the help of purely mathematical logic.
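
For instance, a minimal sketch of this arithmetic (N, degree and up follow the note/chord interface described in the wiki; treat the exact method names as assumptions):

from musicpy import *

a = N('C5')      # a single note
print(a.degree)  # its underlying integer (the MIDI note number), 72 here
b = a.up(2)      # transpose up 2 semitones, giving D5

c = C('CM7')     # a chord is a set of notes
d = c.up(2)      # the same transposition applies to every note at once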

Many experimental music styles of today, like serialism, aleatoric music and postmodern music (such as minimalist music), are theoretically possible to build on the arithmetically manipulable data structures that musicpy provides. Of course, musicpy can also be used to write any kind of classical, jazz, or pop music.

For more detailed descriptions of data structures of musicpy, please refer to wiki.

Summary

I started to develop musicpy in October 2019. Musicpy currently has a complete set of music theory logic syntax, along with many composing and arranging functions as well as advanced music theory operations. For details, please refer to the wiki. I will continue to update musicpy's video tutorials and wiki.

I'm working on musicpy continuously and updating it very frequently; more and more musical features will be added so that musicpy can do more with music.

Thank you for your support~

If you are interested in the latest progress and development thoughts on musicpy, you can take a look at the repository musicpy_dev.

Contact

Discord: Rainbow Dreamer#7122

qq: 2180502841

Bilibili account: Rainbow_Dreamer

email: [email protected] / [email protected]

Discussion group:

Join our Discord server!

QQ discussion group: 364834221

Donation

This project is developed by Rainbow Dreamer in his spare time to create an interesting music composition library and a high-level MIDI toolkit. If you feel this project is useful to you and want to support it and its future development, please consider buying me a coffee; I appreciate any amount.

Support via PayPal

Reasons Why I Develop This Language and Keep Working on This Project

There are two main reasons why I developed this language. First, compared with project files and MIDI files that simply store unitary information such as notes, intensity, tempo, etc., it is more meaningful to represent how a piece of music is realized from a compositional point of view, in terms of music theory. Most music is extremely regular in terms of music theory, as long as it is not modernist atonal music, and these regularities can be greatly simplified by abstracting them into logical statements of music theory. (A MIDI file with 1000 notes, for example, can often be reduced to a few lines of code from a music theory perspective.) Second, I developed the language so that a composing AI could compose with a real understanding of music theory, instead of relying on deep learning and feeding it a lot of data; the language is also an interface that allows an AI to compose with a human-like mind once it understands the syntax of music theory. We can tell an AI the rules of music theory, what is good to do and what is not, and these things can be quantified, so this music theory library can also be used as an interface for communicating music between people and AI. For example, if you want an AI to learn someone's composing style, you can quantify that person's style in terms of music theory; each style corresponds to a different set of music theory logic rules, and once these are written into the AI, it can imitate that person's style through this library. If it is the AI's own original style, then it is looking for possibilities within various complex composition rules.

I think that, without deep learning or neural networks, teaching an AI music theory and someone's stylized music theory rules may let it do better than deep learning and big-data training. That is why I want to use this library to teach AI human music theory, so that the AI understands music theory in a real sense and its composing won't be stiff and random; avoiding deep learning was one of my original reasons for writing this library. That said, abstracting the music theory rules of different musicians is really difficult, and I will keep working on this algorithm qwq. In fact, a musician can himself tell the AI how he likes to write, that is, his own unique music theory preference rules, and then the AI will imitate him very well, because by that point the AI really does know music theory, so its compositions are unlikely to feel mechanical or random. At that point, what the AI is thinking in its head is exactly the same as what the musician is thinking in his.

The AI does not have to follow the music theory rules we give it; instead, we can give the AI a concept of "preference". The AI will prefer certain styles to a certain degree, but beyond that it will have its own unique style, found within the rules of "correct music theory", so that the AI can be said to "have been influenced by some musicians while composing in its own original style". When this preference is 0, the AI's compositions will be exactly the style it found through music theory itself, just like a person who learns music theory on his own and gradually figures out his own composition style. An AI that knows music theory can easily find its own unique style to compose in, and we don't even need to give it data to train on; we just teach it music theory.

So how do we teach music theory to an AI? Ignoring the category of modernist music for the moment, most music follows some very basic rules of music theory. By rules I mean what is correct to write and what is a mistake in music theory terms. For example, when writing harmony, parallel motion of all four voices is often to be avoided, especially when writing orchestral parts in arrangements. For example, a chord that contains a minor second (or minor ninth) will sound more clashing. For example, when the AI decides to write a piece starting in A major, it should pick chords from the A major scale step by step, possibly going off-key, add a few secondary dominant chords, and after writing the verse it may modulate by the circle of fifths, by major or minor thirds, to the parallel major or minor key, and so on. What we need to do is tell the AI how to write correctly, and furthermore, how to write in a way that sounds good. An AI taught this way learns music theory well, will not forget it, and will be less likely to make mistakes, so it can write music that is truly its own; it will really know what music is and what music theory is. Because what this library's language does is abstract music theory into logical statements, every time we give "lessons" to the AI we are expressing a person's own music theory concepts in the language of this library and writing them into the AI's database. In this way, the AI really learns music theory. A composing AI built this way needs no deep learning, no training set and no big data; by contrast, a composing AI trained by deep learning does not actually know what music theory is and has no concept of music, it just draws from a huge amount of training data. One more point: when things can be described by concrete logic, there is no need for machine learning. Text recognition and image classification, which are much harder to describe with abstract logic, are the places where deep learning is useful.

musicpy's People

Contributors

jenniferkuo · kant · kianmeng · nkid00 · olemb · oxygen-dioxide · rainbow-dreamer


musicpy's Issues

Cannot hear any sound from sample code

Hello again,

Similar to Rainbow-Dreamer/sf2_loader#1, I cannot hear any sound when using the sample code found in the README:

from musicpy import *

guitar = (C('CM7', 4, 1/4, 1/8) ^ 2 | C('G7sus', 3, 1/4, 1/8) ^ 2
          | C('A7sus', 3, 1/4, 1/8) ^ 2 | C('Em7', 3, 1/4, 1/8) ^ 2 |
          C('FM7', 3, 1/4, 1/8) ^ 2 | C('CM7', 4, 1/4, 1/8)@1 |
          C('AbM7', 3, 1/4, 1/8) ^ 2 | C('G7sus', 3, 1/4, 1/8) ^ 2)
play((guitar * 2)-octave, 100, instrument=25)

Error when calling play: sh: temp.mid: command not found

Hello author, following the Bilibili tutorial, I ran the following code:

from musicpy import *

a=note('A',5,duration=3)
b=chd("C5","maj7",duration=1,intervals=1)
play(a)
# print(b)

The play line raises the error: sh: temp.mid: command not found

How should I handle this?

I took a look at the source code of the play function, which is as follows:

def play(chord1,
         tempo=80,
         track=0,
         channel=0,
         time1=0,
         track_num=1,
         name='temp.mid',
         modes='quick',
         instrument=None,
         save_as_file=True):
    file = write(name_of_midi=name,
                 chord1=chord1,
                 tempo=tempo,
                 track=track,
                 channel=channel,
                 time1=time1,
                 track_num=track_num,
                 mode=modes,
                 instrument=instrument,
                 save_as_file=save_as_file)
    if save_as_file:
        result_file = name
        if sys.platform.startswith('win'):
            os.startfile(result_file)
        elif sys.platform.startswith('linux'):
            import subprocess
            subprocess.Popen(result_file)
        elif sys.platform == 'darwin':
            os.system(result_file)
    else:
        return file

My computer is a Mac; how should I handle this problem? A temp.mid file does get generated in the same directory, it just won't play; I have to drag it into third-party recording software to listen to it.
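
One possible workaround (an assumption based on the traceback, not an official fix): on macOS, os.system(result_file) tries to execute the MIDI file itself as a shell command, which produces exactly this "command not found" error. Handing the file to the default player with the macOS open command should work:

elif sys.platform == 'darwin':
    os.system(f'open {result_file}')  # open passes the file to the default macOS app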

Sampler doesn't properly handle 32bit wav files

When playing 32-bit/44.1 kHz wav samples, the sampler plays them back at the wrong speed, which pitches them down. I haven't checked, but I would guess that 8/12-bit files will play back pitched up.

error: mixer not initialized

environment
google colab

code
same as on your git and pypi

pip install -U musicpy 
from musicpy import *
guitar = (C('CM7', 3, 1/4, 1/8)^2 |
          C('G7sus', 2, 1/4, 1/8)^2 |
          C('A7sus', 2, 1/4, 1/8)^2 |
          C('Em7', 2, 1/4, 1/8)^2 | 
          C('FM7', 2, 1/4, 1/8)^2 |
          C('CM7', 3, 1/4, 1/8)@1 |
          C('AbM7', 2, 1/4, 1/8)^2 |
          C('G7sus', 2, 1/4, 1/8)^2) * 2
play(guitar, bpm=100, instrument=25)

result

error                                     Traceback (most recent call last)
[<ipython-input-9-e2f1c6daee5e>](https://localhost:8080/#) in <cell line: 1>()
----> 1 play(guitar, bpm=100, instrument=25)

[/usr/local/lib/python3.10/dist-packages/musicpy/musicpy.py](https://localhost:8080/#) in play(current_chord, bpm, channel, start_time, name, instrument, i, save_as_file, msg, nomsg, ticks_per_beat, ignore_instrument, ignore_bpm, ignore_track_names, wait, **midi_args)
    291     if save_as_file:
    292         result_file = name
--> 293         pygame.mixer.music.load(result_file)
    294         pygame.mixer.music.play()
    295         if wait:

error: mixer not initialized

best regards
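
A likely cause (an assumption based on the traceback, since a headless Colab VM has no audio device): pygame's mixer was never initialized there. Initializing it manually, or skipping playback and writing a MIDI file instead, should avoid the error; the keyword arguments below mirror the play call above:

import pygame
pygame.mixer.init()  # may still fail on a machine with no audio device at all

# or bypass audio entirely and export a file:
write(guitar, bpm=100, instrument=25, name='out.mid')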

Question on using `build` and `interval` notation as pertains to `chord_analysis`

I have a question regarding the interval description:

Starting from a JSON structure, I would like to convert it to a musicpy piece and then do a chord analysis, as follows:

import json as js
import musicpy as mp
json=js.loads("""[{
			"chd": "Cm7",
			"time": 0.5,
			"arp": 0.125,
			"start": 0,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Dsus",
			"time": 0.5,
			"arp": 0.125,
			"start": 0.5,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Caug7",
			"time": 0.5,
			"arp": 0.125,
			"start": 1,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Dadd2",
			"time": 0.5,
			"arp": 0.125,
			"start": 1.5,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Cm7",
			"time": 0.5,
			"arp": 0.125,
			"start": 2,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Dsus",
			"time": 0.5,
			"arp": 0.125,
			"start": 2.5,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "Caug7",
			"time": 0.5,
			"arp": 0.125,
			"start": 3,
			"inst": "Acoustic Grand Piano"
		},
		{
			"chd": "D,G,A,A# / Dadd2",
			"time": 0.5,
			"arp": 0.25,
			"start": 3.5,
			"inst": "Acoustic Grand Piano"
		}
	]""")
def convert_to_mp(chp : list, bpm : float=138, name: str='Progression 0') -> mp.piece:
    """Convert a chord-progression from a dict `chd` to a musicpy object

    Args:
        chp (list): chord progression
        bpm (float): BPM to play the song at.

    Returns:
        mp.piece: musicpy piece from chord progression
    """
    chord = [mp.C(i['chd']) % (i['time'], i['arp']) for i in chp]
    inst = [i['inst'] for i in chp]
    start = [i['start'] for i in chp]
    bpm, chdnotes, _ = mp.piece(chord,inst,bpm,start,['0']*len(chord)).merge()
    return mp.build(mp.track(chdnotes),bpm=bpm,name=name)
    
piece=convert_to_mp(chp=json, bpm=138, name='Progression 0')
mp.chord_analysis(piece.tracks[0])

But this returns ['Cmaj13#11 omit B, G sort as [1, 3, 2, 4, 5]']. I noticed that Track 0 never has any 0-intervals, which I see in other songs.
(output from piece.tracks[0]):

[C4, D#4, G4, A#4, D4, G4, A4, C4, E4, G#4, A#4, D4, E4, F#4, A4, C4, D#4, G4, A#4, G4, C5, D5, G4, B4, D#5, F5, D4, E4, F#4, A4, D5, F#5, A5] with interval [0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25, 0.125, 0.125, 0.125, 0.125, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5]

Could you please explain what I am missing?
Thank you so much!

Can you provide a basic example of how to use the sampler

I'm trying to get started with the sampler. I'm finding it hard to get started; the information is here, but it's a bit too complicated for someone who doesn't know this module. It would help if there was a very simple example. I've spent the last couple of days trying to figure out how to start.

If I had an example that shows how to do the following, it would be much easier to start.

1 drum loaded as a WAV file with a very simple sequence.
1 sf2 loaded, with an explanation of how to load a specific preset.
A simple chord progression on the sf2.
A short loop made from the 2 sequences.
Play the loop.
Export the loop.

🙏🏼🙏🏼

[feature req] Integration with hookpad ?

Hi, first of all, kudos for the work you've done. I am not sure if you are aware of Hookpad from Hooktheory? Basically it is for sketching notes and chords, with tons of options. It helps in understanding theory in a visual way. I am wondering if the sketch pad can be converted directly into a MIDI file with musicpy?

An example for one of the songs is here :https://www.hooktheory.com/theorytab/view/the-beatles/here-comes-the-sun where you can edit the chords and notes.

I think it could be super useful to use both musicpy and Hooktheory for faster learning. It was just a thought, as Hooktheory also has lots of music resources.

Several questions about the track type and the build function (possibly bugs)

  1. When creating a piece with the build function and passing in tracks, the tracks of the output piece only get names when every input track has a track_name. If any input track has no name, the names of the other tracks are lost in the piece.
    Looking at the code, this behavior is hard-coded:

    musicpy/musicpy/musicpy.py

    Lines 3019 to 3020 in ead467d

    if all(i.track_name for i in tracks_list):
    track_names = [i.track_name for i in tracks_list]

    Is it designed this way on purpose?
  2. build does not support building a piece from an indeterminate number of tracks, e.g. tracks that were not written by the user in code but converted from other files and stored in a list, which can only be added one by one with append. (This design is not very pythonic; a library like numpy would ask the user to put all the tracks into a list first and then pass the list to the function, i.e. build([track1, track2, track3]) rather than build(track1, track2, track3).) A possible workaround is sketched below.
  3. build does not support mixing tracks and lists as input.
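
For point 2 above, a possible stopgap in plain Python (assuming build takes tracks as positional arguments, as in the examples elsewhere on this page): unpack the list with the star operator:

track_list = [track1, track2, track3]    # e.g. tracks converted from other files
new_piece = build(*track_list, bpm=120)  # equivalent to build(track1, track2, track3)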

Playback control

Once a melody starts playing, it cannot be paused; you can only wait for it to finish, and you cannot drag a progress bar either.


A solution for Jupyter notebooks: first check whether we are running inside a Jupyter notebook:

def is_jupyter_notebook() -> bool:
    try:
        get_ipython().__class__.__name__
        # Jupyter notebook
        return True
    except NameError:
        # plain command line
        return False

If not, keep playing through the original audio API.

If so, play through Jupyter's API instead.

Playing the audio:

# the audio data is audiodata, a numpy array
if is_jupyter_notebook():
    from IPython.display import Audio
    return Audio(data=audiodata, rate=44100)

Result: the notebook then shows an interactive audio player (with pause and seeking).

'str' object has no attribute 'track_number'

musicpy\musicpy.py", line 771, in write
current_chord.track_number, current_chord.start_times, current_chord.instruments_numbers, current_chord.bpm, current_chord.tracks, current_chord.track_names, current_chord.channels, current_chord.pan, current_chord.volume
AttributeError: 'str' object has no attribute 'track_number'

Why does the chord class's setvolume method type-check self.notes when the ind parameter is set to 'all'?

As the title says, line 1204 of structures.py contains this statement:

available_notes = [i for i in self.notes if type(i) == note]

From the context, this means that when ind is set to 'all', every element of self.notes is type-checked and only objects of type note get their volume changed. But self.notes should already have converted its input into note objects at creation time; if a type check is really necessary, it should happen when self.notes is created or appended to, not inside a method like setvolume.

Also, in line 1199, each = self.notes[current - 1], the variable current has not been initialized; if vol is of type int and ind is of type list, entering that branch will raise an error.

A note's degree attribute can be modified, but the other attributes do not update with it

After creating a note, the degree attribute can be modified, but the other attributes do not follow, which leaves you with an incorrect note. For example, the following code:

>>> from musicpy import *
>>> a = note('A', 5)
>>> a.__dict__
{'name': 'A', 'num': 5, 'degree': 81, 'duration': 0.25, 'volume': 100, 'channel': None}
>>> a.degree=80
>>> a.__dict__
{'name': 'A', 'num': 5, 'degree': 80, 'duration': 0.25, 'volume': 100, 'channel': None}
  1. If the design intent is that users should not modify degree, this attribute should be made read-only.
  2. Even better, it would be smarter to automatically update name and num when the user modifies degree, as sketched below.
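
A minimal sketch of that idea (not musicpy's actual implementation): store only the degree and derive name and num from it, so they can never go stale:

NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

class synced_note:
    def __init__(self, name, num):
        self.degree = NAMES.index(name) + (num + 1) * 12  # ('A', 5) -> 81, as above

    @property
    def name(self):
        return NAMES[self.degree % 12]

    @property
    def num(self):
        return self.degree // 12 - 1

a = synced_note('A', 5)
a.degree = 80
print(a.name, a.num)  # G# 5: the derived attributes follow the new degree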

Couldn't open /etc/timidity/freepats.cfg

Installed with pip on Arch, Python 3.9; this happens when using the play function.
The code is as follows:

from musicpy import *
a = note('A', 5)
play(a)

The error:

/usr/lib/python3.9/site-packages/musicpy/musicpy.py in play(chord1, tempo, track, channel, time1, track_num, name, modes, instrument, i, save_as_file, deinterleave)
    235     if save_as_file:
    236         result_file = name
--> 237         pygame.mixer.music.load(result_file)
    238         pygame.mixer.music.play()
    239     else:

error: Couldn't open /etc/timidity/freepats.cfg

I checked, and the /etc/timidity directory does not exist. Personally I think these files could simply be shipped inside the package.

pip install fails

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting musicpy
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/c1/d1/46c78c9c1d6968c88436a0e93e7c314e26f1c441692ced349565a39eb98e/musicpy-1.60.tar.gz (100 kB)
     |████████████████████████████████| 100 kB 576 kB/s 
Requirement already satisfied: mido in /usr/local/lib/python3.5/dist-packages (from musicpy) (1.2.9)
Requirement already satisfied: midiutil in /usr/local/lib/python3.5/dist-packages (from musicpy) (1.2.1)
Requirement already satisfied: pygame in /usr/local/lib/python3.5/dist-packages (from musicpy) (2.0.1)
Requirement already satisfied: pillow in /usr/lib/python3/dist-packages (from musicpy) (3.1.2)
Building wheels for collected packages: musicpy
  Building wheel for musicpy (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"'; __file__='"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-7ke3swbq
       cwd: /tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/
  Complete output (28 lines):
  /usr/lib/python3.5/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
    warnings.warn(msg)
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib
  creating build/lib/musicpy
  copying musicpy/__init__.py -> build/lib/musicpy
  copying musicpy/structures.py -> build/lib/musicpy
  copying musicpy/match.py -> build/lib/musicpy
  copying musicpy/database.py -> build/lib/musicpy
  copying musicpy/musicpy.py -> build/lib/musicpy
  running egg_info
  writing top-level names to musicpy.egg-info/top_level.txt
  writing musicpy.egg-info/PKG-INFO
  writing dependency_links to musicpy.egg-info/dependency_links.txt
  writing requirements to musicpy.egg-info/requires.txt
  reading manifest file 'musicpy.egg-info/SOURCES.txt'
  reading manifest template 'MANIFEST.in'
  writing manifest file 'musicpy.egg-info/SOURCES.txt'
  creating build/lib/musicpy/__pycache__
  copying musicpy/__pycache__/__init__.cpython-37.pyc -> build/lib/musicpy/__pycache__
  copying musicpy/__pycache__/database.cpython-37.pyc -> build/lib/musicpy/__pycache__
  copying musicpy/__pycache__/match.cpython-37.pyc -> build/lib/musicpy/__pycache__
  copying musicpy/__pycache__/musicpy.cpython-37.pyc -> build/lib/musicpy/__pycache__
  copying musicpy/__pycache__/structures.cpython-37.pyc -> build/lib/musicpy/__pycache__
  error: can't copy 'musicpy/__pycache__': doesn't exist or not a regular file
  ----------------------------------------
  ERROR: Failed building wheel for musicpy
  Running setup.py clean for musicpy
Failed to build musicpy
Installing collected packages: musicpy
    Running setup.py install for musicpy ... error
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"'; __file__='"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-gamuuxo2/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.5/musicpy
         cwd: /tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/
    Complete output (28 lines):
    /usr/lib/python3.5/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
      warnings.warn(msg)
    running install
    running build
    running build_py
    creating build
    creating build/lib
    creating build/lib/musicpy
    copying musicpy/__init__.py -> build/lib/musicpy
    copying musicpy/structures.py -> build/lib/musicpy
    copying musicpy/match.py -> build/lib/musicpy
    copying musicpy/database.py -> build/lib/musicpy
    copying musicpy/musicpy.py -> build/lib/musicpy
    running egg_info
    writing musicpy.egg-info/PKG-INFO
    writing requirements to musicpy.egg-info/requires.txt
    writing top-level names to musicpy.egg-info/top_level.txt
    writing dependency_links to musicpy.egg-info/dependency_links.txt
    reading manifest file 'musicpy.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'musicpy.egg-info/SOURCES.txt'
    creating build/lib/musicpy/__pycache__
    copying musicpy/__pycache__/__init__.cpython-37.pyc -> build/lib/musicpy/__pycache__
    copying musicpy/__pycache__/database.cpython-37.pyc -> build/lib/musicpy/__pycache__
    copying musicpy/__pycache__/match.cpython-37.pyc -> build/lib/musicpy/__pycache__
    copying musicpy/__pycache__/musicpy.cpython-37.pyc -> build/lib/musicpy/__pycache__
    copying musicpy/__pycache__/structures.cpython-37.pyc -> build/lib/musicpy/__pycache__
    error: can't copy 'musicpy/__pycache__': doesn't exist or not a regular file
    ----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"'; __file__='"'"'/tmp/pip-install-i9geki1a/musicpy_7ae0f7c0837b41c5bb9dd7ab1acbc3bd/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-gamuuxo2/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.5/musicpy Check the logs for full command output.

This is the error output. Is something bugged somewhere?

The note objects returned by split_melody

Hello, for the melody note objects that I get from .split_melody(mode='notes'), the duration in the note object is a float. Is this in seconds, or in bars as the wiki documentation says? If it is in bars, is it computed with the default 480 ticks_per_beat and 4/4 time, or according to each song's own settings?

Apply rhythm to drum / note object?

r1 = rhythm('b b b 0 b b b 0 b b b 0 b 0 b 0')

mp.chord('C2').apply_rhythm(r1)
>>> chord(notes=[C2], interval=[1/4], start_time=0)

mp.chord('C2, C2, C2, C2').apply_rhythm(r1)
>>> chord(notes=[C2, C2, C2, C2], interval=[1/4, 1/4, 1/2, 1/4], start_time=0)

drum('K').apply_rhythm(r1)
>>> AttributeError: 'drum' object has no attribute 'apply_rhythm'

Is there a QOL feature where you can apply a rhythm to a note / drum object?

Expected behavior:

r1 = rhythm('b b b 0 b b b 0')
>>>
[rhythm]
rhythm: beat(1/4), beat(1/4), beat(1/4), rest(1/4), beat(1/4), beat(1/4), beat(1/4), rest(1/4)
total bar length: 2
time signature: 4 / 4

mp.note('C', 2).apply_rhythm(r1) # not supported yet
>>> 
chord(notes=[C2, C2, C2, C2, C2, C2], interval=[1/4, 1/4, 1/2, 1/4, 1/4, 1/2], start_time=0)

drum('K')
>>>
[drum] 
chord(notes=[C2], interval=[1/8], start_time=0)

drum('K').apply_rhythm(r1) # not supported yet 
>>>
[drum]
chord(notes=[C2, C2, C2, C2, C2, C2], interval=[1/4, 1/4, 1/2, 1/4, 1/4, 1/2], start_time=0)
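
A possible stopgap until this is supported (an assumption based on drum('...').notes being used as a chord elsewhere on this page, so treat it as untested): apply the rhythm to the chord wrapped inside the drum object:

d = drum('K')
d.notes = d.notes.apply_rhythm(r1)  # d.notes is a chord, which already supports apply_rhythm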

Demos won't work without an explicit sleep

Hi, I tried some of the demos of sf2_loader and musicpy, but without sleeping, none of the notes/pieces get played and the program exits early. After looking into the code and doing some googling, I found that the audio is played under the hood by pygame.mixer, which plays audio in the background without blocking (which makes sense in a game, as discussed in this stackoverflow thread). So after adding a sleep(some_seconds), the demos work as expected.

I tried this on Windows and Linux, with pygame [email protected]; sleep is essential on both systems. Is there anything else I have to take care of if I want to try the demos without sleep? Or, if this is by design, then maybe it's better to add a line of code to the tutorial demos?

Oh, almost forgot to say, it's a great project, good job!
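
For reference, a minimal sketch of the sleep workaround described above (the 5-second length is an arbitrary assumption for the demo; the README's Note 2 offers wait=True as a built-in alternative):

import time
from musicpy import *

play(C('CM7'), bpm=100)  # pygame.mixer starts playback in the background and returns
time.sleep(5)            # keep the process alive until playback has finished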

Unable to set volume for tracks

When using this code from the github example, I was unable to set the 'volume' argument for tracks.
new_piece = build(track(A1, instrument='Acoustic Grand Piano', start_time=0, channel=1, track_name='piano'))
This works, as does setting volume= for new_piece.
new_piece = build(track(A1, instrument='Acoustic Grand Piano', start_time=0, channel=1, track_name='piano',volume=40))
The error came up when using the 'play' method

  File "C:\Users\vlign\AppData\Local\Programs\Python\Python310\lib\site-packages\musicpy\musicpy.py", line 240, in play
    file = write(current_chord=current_chord,
  File "C:\Users\vlign\AppData\Local\Programs\Python\Python310\lib\site-packages\musicpy\musicpy.py", line 849, in write
    current_volume_channel = current_channel if each.channel is None else each.channel
AttributeError: 'int' object has no attribute 'channel'

The play function is unusable on Linux

On Linux, the play function raises an error:

In [8]: play(note('A',5))                                                                
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-e5677dd3da72> in <module>
----> 1 play(note('A',5))

~/.pyenv/versions/3.8.0/lib/python3.8/site-packages/musicpy/musicpy.py in play(chord1, tempo, track, channel, time1, track_num, name, modes)
    198                  track_num=1,
    199                  mode=modes)
--> 200     os.startfile(f'{name}.mid')
    201 
    202 

AttributeError: module 'os' has no attribute 'startfile'

However, a temp.mid that can be played is generated in the current directory. It looks like Python on Linux does not have the os.startfile function; it only exists on Windows. Hoping for cross-platform support.

Real-time tempo changes / a volume question

Could you add a kind of object that can be inserted into a chord like a note, with zero length, whose effect is to change the tempo of the subsequent sequence when it is reached? If the chord is packed into a piece, it would take effect for the whole piece, like a tempo track in a DAW.

Another small question: why can musicpy's volume be set as high as 255? The MIDI standard only goes up to 127. A MIDI file containing notes with volume above 127 drops notes when played in fluidsynth, with a warning about exceeding 127, and importing it into MuseScore also drops notes on a large scale, yet a Java player calling the JVM's built-in Gervill plays it normally. If I cap all the notes at 127, fluidsynth plays normally, but the sound is very quiet, only about half as loud as some other MIDI files I have. What is going on here, and will it be fixed?

A suggestion for making chord spellings more consistent with music theory

Building chords with the chord syntax currently seems to default to sharps for the altered tones. Taking triads as an example, a triad is generally regarded as an alteration of 1 3 5, e.g.

C major triad: 1 3 5
Cm minor triad: 1 b3 5
Caug augmented triad: 1 3 #5
Cdim diminished triad: 1 b3 b5

When building a chord with musicpy.C('Cdim'), its notes come out as [C4, D#4, F#4]. Could the default be changed to display [C4, Eb4, Gb4]? That would make later analysis from a music theory angle more convenient. The same goes for sevenths, ninths, elevenths and other complex chords.

[FEATURE REQ] MIDI to musicpy notation parser

Hey @Rainbow-Dreamer !!!

It's been a while since we have talked, so I figured I would pop in and say hi! :)

First of all, I wanted to thank you for all the work you did on musicpy. It's very nice now and very mature/feature-rich :)

The reason I am writing is that I wanted to ask you if you can make a MIDI to musicpy notation parser/converter?

I think it would be a nice feature/option and it can be very useful for ML/DL applications of musicpy.

For example, I want to be able to take an Iron Maiden MIDI and get a result similar to your example:


a12 = translate('D, G, D, a:1/8;., E[l:1/4; i:.]')

a1 = a11 * 2 + a12

a20 = translate('D2, G2, D2, a:1/8;.')

a21 = translate('E2[l:.4; i:.] | E2[l:.16; i:.; r:2], E2[l:.8; i:.], n:1, u:1, r:4, E2[l:.16; i:.; r:2], D2[l:.8; i:.; r:3]')

a22 = translate('C2[l:.4; i:.] | C2[l:.16; i:.; r:2], C2[l:.8; i:.], n:1, u:1, r:4, C2[l:.16; i:.; r:2], D2[l:.8; i:.; r:3]')

a23 = translate('D2, G2, D2, a:1/8;., E2[l:.4; i:.]')

a2 = a20 + a21 * 2 + a22 + a21[:-3] + a23

a3 = drum('C;S;K[l:1/4; i:.], H, S, H, K, H, S, H, K, H, S, H, S[r:3], r:4, C;S;K[l:1/4; i:.]').notes

a41 = translate('F#, G, F#, a:1/8;., G[l:1/4; i:.], B4[l:1/8; i:.] | A, B, A, B, a:1/16;., n:1, D5[l:1/8; i:.], B4[l:1/8; i:.], u:1, B[l:1/8;i:.], G[l:1/8; i:.], G[l:1/4; i:.]') * 2

a42 = translate('F#, G, F#, a:1/8;., G[l:1/4; i:.]')

a4 = a41 * 2 + a42

a4.set_volume(80)

result = P([a1, a2, a3, a4], [31, 34, 1, 31], channels=[0, 1, 9, 2], bpm=165, start_times=[0, 0, 3/8, 0])

play(result)

Another thing I wanted to say is that I really liked the algorithms.write_pop(scale('C', 'minor')) option. I think you should really develop it and make it work well.

Anyway, please let me know.

Sincerely,

Alex

Pitch bend within a single note

I just saw that segment pitch bend was added in 2021, which can subtly detune the notes after a certain point; this feature is great.
So how can I realize a pitch controlled by a linear or other mathematical function inside a single note? For example, a long note that starts at B4, begins falling in pitch at half of its duration, and ends at A4; or a D4 that quickly bends toward D#4 and back at its start, to fake a vibrato on a plucked-string sound. Speaking of which, does MIDI even support such commands?
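
MIDI does support this at the channel level: a glide inside one note is approximated by sending a series of pitch-wheel messages while the note sounds. A minimal sketch with mido (one of musicpy's dependencies), assuming the default +/-2 semitone bend range and 480 ticks per beat:

import mido

mid = mido.MidiFile()  # defaults to 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('note_on', note=71, velocity=100, time=0))  # B4
track.append(mido.Message('pitchwheel', pitch=0, time=240))           # first half: no bend
steps = 16
for i in range(1, steps + 1):
    # fall one whole tone (B4 -> A4): -8192 equals -2 semitones at the default bend range
    track.append(mido.Message('pitchwheel', pitch=int(-8192 * i / steps), time=15))
track.append(mido.Message('note_off', note=71, velocity=0, time=0))
mid.save('glide.mid')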

Any support for stacking chords so they can be played simultaneously within the same track?

Currently, you cannot play 2 chords in parallel unless you assign them to different tracks, but this causes a lot of export problems.

As far as I know, chords can only be appended sequentially, not stacked simultaneously. Meaning if I have a chord playing a melody, and a second chord playing its counter melody, I must assign both in different tracks to allow simultaneous playing.

It is very hard for me to merge the two chords into a single chord.

My current workaround is to resample both chords and reconstruct them.

import math
import musicpy as mp


class ChordStacker:
    def __init__(self, chord_list:list):
        """ 
        chord_list: list of chord objects

        [
            chord(notes=[C#3, C#3, C#3, C#3, C#3], interval=[15/8, 3/8, 1/4, 1/2, 1], start_time=0), 
            chord(notes=[C2, C2, C2, C2, C2, C2, C2, C2, C2, C2, ...], interval=[1/4, 1/4, 1/4, 1/4, 1/4, 1/4, 3/8, 1/4, 1/4, 1/4, ...], start_time=0), 
            chord(notes=[F#2, F#2, F#2, F#2, F#2]
        ]
        """
        self.chord_list = chord_list
        self.chord = None 


    def _get_chord_info(self, chord) -> list:
        """ 
        Get the note position for each chord

        Returns: list of dict

        Example: [
            {'note': 'C#3', 'start': 0}, 
            {'note': 'C#3', 'start': 1.875}, 
            {'note': 'C#3', 'start': 2.25}, {'note': 'C#3', 'start': 2.5},
        ]
        """
        track = chord.copy()
        intervals = track.interval
        notes = [] 

        # BUG WARNING! start counting from the start time of the chord instead of 0!
        cumulative_interval = chord.start_time

        for note, interval in zip(track.notes, intervals):
            notes.append({
                'note': str(note),
                'start': cumulative_interval,
            })
            cumulative_interval += interval

        return notes
    

    def _group_notes_by_start(self, notes_info) -> dict:
        """ 
        Used sequentially after getting chord info

        Returns: dict 
            key: start time
            value: dict of notes and start time
        
        Example: 
            {0: {'notes': ['C#3', 'C2'], 'start': 0}, 0.25: {'notes': ['C2', 'F#2', 'D2'], 'start': 0.25}, 0.375: {'notes': ['A#2']}}
        """
        grouped_chords = {}
        
        for note_info in notes_info:
            start = note_info['start']
            
            if start not in grouped_chords:
                grouped_chords[start] = {
                    'notes': [],
                    'start': start
                }
            grouped_chords[start]['notes'].append(note_info['note'])

        return grouped_chords
    

    def _convert_grouped_to_dict(self, grouped_chords) -> list:
        """ 
        Convert grouped chords to a list of dicts, from group_notes_by_start()

        Returns: list of dict
        
        Example: [{'notes': ['C#3', 'C2'], 'interval': 0.25}, {'notes': ['C2', 'F#2', 'D2'], 'interval': 0.125}]
        """

        chords_list = []
        sorted_durations = sorted(grouped_chords.keys())
        
        for i in range(len(sorted_durations)):
            start = sorted_durations[i]
            if i < len(sorted_durations) - 1:
                interval = sorted_durations[i+1] - start
            else:
                interval = math.ceil(start) - start
            
            chords_list.append({
                'notes': grouped_chords[start]['notes'],
                'interval': interval,
            })
        return chords_list
    

    def dict_to_chord(self,dict_list):
        """
        Converts a list of dictionaries representing chords into a single chord object.

        dict_list: list of dict
            List of dictionaries where each dictionary has 'notes' and 'interval' keys.

        Example:
            Input: [{'notes': ['B1'], 'interval': 0.0625},
                    {'notes': ['G#2'], 'interval': 0.0625},
                    {'notes': ['F#2'], 'interval': 0.0625},
                    {'notes': ['G#2'], 'interval': 0.0625},
                    {'notes': ['D2', 'E2', 'A#2'], 'interval': 0.125},
            
            Output: mp.chord('B1[1/16]') + mp.chord('G#2[1/16]') + mp.chord('F#2[1/16]') + ...
        """

        chd = mp.chord('')
        for chord_dict in dict_list:
            notes = ','.join(chord_dict['notes'])
            interval = chord_dict['interval']
            chord_str = f"{notes}[{interval}]"
            try:
                chord = mp.chord(chord_str, duration=interval)
                chd += chord

            except Exception as e:
                print(f"Error parsing chord: {chord_str}")
                print(e)    
        
        max_bar = math.ceil(chd.bars())
        remainder = max_bar - chd.bars() 
        chd.notes[-1].duration += remainder
        return chd



    def stack_chords(self):
        """ 
        Stack chords from the chord list into a single representation.
        """
        stacked_chord_info = []

        for chord in self.chord_list:
            chord_info = self._get_chord_info(chord)
            stacked_chord_info.extend(chord_info)  # Append to the final list

        # Sort the notes based on their start time before grouping
        stacked_chord_info.sort(key=lambda x: x['start'])

        grouped = self._group_notes_by_start(stacked_chord_info)
        dict_list = self._convert_grouped_to_dict(grouped)        
        chord = self.dict_to_chord(dict_list)
        self.chord = chord

        return chord

Example usage


r1 = mp.rhythm('b - b - b - b -', 1) 
r2 = mp.rhythm('- - b - - - b - b - - b - - b b', 1) 
r3 = mp.rhythm('b - - - b - - - b - b - - b - -', 1)

c1 = mp.chord('C4').from_rhythm(r1)
c2 = mp.chord('C5').from_rhythm(r2)
c3 = mp.chord('D4').from_rhythm(r3)

chds = [c1, c2, c3]
cs = ChordStacker(chds)
cs.stack_chords()

print(cs.chord)
>> chord(notes=[C4, D4, C5, C4, D4, C5, C4, C5, D4, D4, ...], interval=[0, 1/8, 1/8, 0, 1/8, 1/8, 0, 0, 1/8, 1/16, ...], start_time=0)

cs.chord.play(bpm=100, channel=1)

Can't load the module if there is no audio interface

I'd like to use this module to generate loops on a server. But when you try to load the module, it tries to initialize the audio interface, which fails when no audio interface exists.

I don't need to playback the audio, I just need to export the loops I generate as WAV file.

The workaround is to create a dummy audio interface, but it's complicated and I'm having issues keeping the dummy interface running.
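
A simpler possible workaround (an assumption based on pygame being built on SDL, not a tested fix for this project): point SDL at its built-in dummy audio driver before the module is imported, so no real audio interface is needed:

import os
os.environ['SDL_AUDIODRIVER'] = 'dummy'  # SDL's no-op audio backend for headless machines

import musicpy as mp  # imported after the env var is set, so pygame initializes with it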

Note example

Could you give a simple example on your wiki of how to create a simple melody? It is not clear to me how to play a simple sequence of notes. I tried the only_notes function but get the same output. If I want to play notes without the chords, how would you do this?

Translating the chord detection algorithm documentation in the GitHub wiki

Hello,
I am particularly interested in the chord detection algorithm logic, and I would like to help translate that part of the documentation from English to Chinese.
But I am not familiar enough with GitHub wikis. Is there some embedding feature that lets Chinese content be added to the original English page, so that language switching shows only the content of one language, or does a new page have to be created for the Chinese entries?

Per-note pitch bend and vibrato

For example, add pitch bend and vibrato parameters that accept either a lambda expression (taking a value that varies with the note's playback time) or a numeric value.

Any support for adding leading or trailing rests to chord objects?

It is possible to work around the problem using start times in piece(), but certain segments of a chord sometimes don't take up the whole bar.

chd.bars()
> 10.935 

chd2 = mp.rest(duration=0.065)
> chd2 = mp.rest(duration=0.065)

# expected behavior
chd3 = chd2 + chd
# chd3 = chd + chd2
chd3.bars() 
> 11.0 

# current 
chd3.bars()
> 10.935 # rest is ignored. 

The reason why I think this feature is critical is that MIDI files with trailing or leading rests will have their notes reindexed after being imported as a chord object and exported back to a MIDI file.

Current solution is

chd3 = mp.piece([chd], start_times=[0.065]) # to apply leading rest 
chd3.bars()
> 11.0

One idea is to implement a chord method:

def fill_bar(chord, leading=True):
    """
    Adjust chord to ensure it spans a whole number of bars
    """
    import math

    total_bars = chord.bars()
    # pad up to the next whole bar, e.g. 11 - 10.935 = 0.065
    pad = math.ceil(total_bars) - total_bars

    if pad == 0:
        return chord  # already spans a whole number of bars, nothing to add

    if leading:
        chord = mp.chord('', duration=pad) + chord
    else:
        chord = chord + mp.chord('', duration=pad)
    return chord

Feature - ASCII guitars tabs to midi ?

Hi, two questions: does musicpy have the ability to convert guitar tabs in ASCII format to MIDI?

Second, is there a way to emulate the strumming of guitar strings, that is, to play each note of a chord with a certain delay?

Hope to hear from you

Drum parts will show up as piano if it starts earlier than piano during piece.

The current fix is to change the drum instrument so it doesn't collide with the piano (but the exported midi will not play sound in a DAW), or to use a different piano instrument and reserve instrument 1 for drums.

# p = piano 
# b = bass 
# s = string 
# d = drum

intro = mp.piece(
    tracks=[
        intro_p,
        intro_b,
        mp.chord(''), # empty instrument channel
        intro_d,
    ],
    instruments=[
        1, 
        35,
        41,
        1
    ],
    start_times=[
        0, # piano starts first
        8,
        24,
        8, # drum starts later
    ],
    channels=[1, 2, 3, 9], # channel 9 is drum channel
    bpm=180
)

verse = mp.piece(
    tracks=[
        verse_p,
        verse_b,
        verse_s,
        verse_d,
    ],
    instruments=[
        1, 
        35,
        41,
        1 # export bug with drum if piano starts later than drum
    ],
    start_times=[
        16, # piano starts later
        0,
        0,
        0
    ],
    channels=[1,2,3,9],
    bpm=180
)

chorus = mp.piece(
    tracks=[
        ch_p,
        ch_b,
        ch_s,
        ch_d,
    ],
    instruments=[
        1, 
        35,
        41,
        1
    ],
    start_times=[
        0, # simultaneous start also share the same issue
        0,
        0,
        0
    ],
    channels=[1,2,3,9],
    bpm=180
)

song = (intro | verse | chorus)


I'm curious about why "math.log" is applied to the denominator of the time_signature message

I would like to develop a score analysis program, and I would like to get the time signature of each bar.
I found that time_signature is a message, which can be accessed via bar.other_messages if the message is present. (If my way of accessing it is wrong, thanks for pointing it out.)

I have a minuet of 3/4.

Logic pro x gives the correct time signature.

But in musicpy,

# musicpy.py:948
        current_message = time_signature(
            track=track_ind,
            time=time,
            numerator=message.numerator,
            denominator=int(math.log(message.denominator, 2)),  #this line
            clocks_per_tick=message.clocks_per_click,
            notes_per_quarter=message.notated_32nd_notes_per_beat)

A logarithm is applied to the denominator, and if I print(msg.numerator, msg.denominator), I get 3/2, which is different from what I expected.
I am curious about the logarithm. It must be intentional, but I don't quite understand it. Thanks!
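
A probable explanation (based on the MIDI file format, not confirmed by musicpy's docs): in a raw MIDI time-signature event the denominator byte stores a power-of-two exponent, and mido converts it to the human-readable value, so message.denominator is 4 for 3/4 time. The int(math.log(message.denominator, 2)) above converts it back to the raw exponent form, which is why the stored value reads as 2. If that is the case, the actual denominator can be recovered with:

actual_denominator = 2 ** msg.denominator  # 2 ** 2 = 4, giving the expected 3/4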
