
Comments (16)

steveb commented on June 8, 2024

Just coming back to this, it turns out /sys/devices/platform/soc/soc:firmware/get_throttled makes the value available to read, see raspberrypi/linux#2397

It looks like reading this sysfs path would be a very low-overhead alternative to calling vcgencmd.
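
A minimal sketch of what that could look like from Python (untested; it assumes the file exposes the same bitmask that vcgencmd prints, formatted as a hex string):

THROTTLED_PATH = "/sys/devices/platform/soc/soc:firmware/get_throttled"

def read_throttled(path=THROTTLED_PATH):
    # Plain file read, no fork/exec involved.
    with open(path) as f:
        raw = f.read().strip()
    # Assumption: the kernel prints the same bitmask as "vcgencmd get_throttled",
    # as hex (bit 0 = under-voltage now, bit 16 = under-voltage has occurred).
    return int(raw, 16)

if __name__ == "__main__":
    flags = read_throttled()
    print("throttled=0x%x" % flags)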

jofemodo commented on June 8, 2024

Ohhh!! This is very interesting research. Let me test your fix; it looks promising!! Congrats and thanks!!

jofemodo commented on June 8, 2024

OK! I just tried this library:

https://pypi.org/project/vcgencmd/

but it also forks a vcgencmd process. Perhaps we could access the vcgencmd API directly:

https://github.com/raspberrypi/firmware/blob/master/opt/vc/include/interface/vmcs_host/vcgencmd.h

Should we re-implement the vcgencmd Python library to avoid the fork calls? It doesn't seem too difficult, and doing it this way would benefit the whole RBPi user community.
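
For reference, every call through that library (or through our own code) currently boils down to something like this fork/exec, which is exactly what we want to avoid:

import subprocess

def get_throttled_via_fork():
    # Forks the whole Python process and execs vcgencmd on every call.
    out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
    # Output looks like "throttled=0x0"
    return int(out.strip().split("=")[1], 16)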

Regards,

jofemodo commented on June 8, 2024

BTW, I didn't know that forking a process blocks all threads, including RT ones. I'm still thinking about it because I think there are several places in the zynthian code where things could be improved by knowing this important detail.

Thanks a lot for opening my eyes to this!!
@riban-bw , what do you think about it? ;-)

jofemodo commented on June 8, 2024

Here is the vcgencmd source code:

https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/gencmd/gencmd.c

Not complex at all. I propose creating a "libvcgencmd.so" that implements the vcgencmd functionality as a simple API that we could use from the zynthian UI or, better, use to re-implement the vcgencmd Python library without any forking at all.
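
As a rough illustration of how such a library could be consumed from Python via ctypes; the library name and the vcgencmd_send() signature below are hypothetical placeholders for whatever the proposed libvcgencmd.so would actually export (e.g. a thin wrapper around gencmd.c's vc_gencmd_send()/vc_gencmd_read_response()):

import ctypes

# Hypothetical library and entry point, for illustration only.
_lib = ctypes.CDLL("libvcgencmd.so")
_lib.vcgencmd_send.restype = ctypes.c_int

def vcgencmd(command, maxlen=1024):
    buf = ctypes.create_string_buffer(maxlen)
    rc = _lib.vcgencmd_send(command.encode(), buf, maxlen)
    if rc != 0:
        raise RuntimeError("vcgencmd call failed: %s" % command)
    return buf.value.decode()

# e.g. vcgencmd("get_throttled") -> "throttled=0x0", with no fork involved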

Regards,

davidfurey commented on June 8, 2024

Thank you for replying so quickly. I'm glad the research is useful. I'm finding it interesting to explore just how low-latency the RBPi can go.

> BTW, I didn't know that forking a process blocks all threads, including RT ones. I'm still thinking about it because I think there are several places in the zynthian code where things could be improved by knowing this important detail.

I've struggled to find any documentation that confirms that forking a process blocks all threads; this is just what my investigation suggests. It would be great if someone could confirm that this is expected behaviour.

It seems like implementing the vcgencmd functionality in a library would solve the issue, and be useful to the wider RBPi community.

> I'm still thinking about it because I think there are several places in the zynthian code where things could be improved by knowing this important detail.

This is why I was pondering whether it would be good to split zynthian-ui into two applications (or at least two processes): one responsible only for time-sensitive audio processing, and another for the UI, which is less latency-sensitive. I presume there are other system calls, or even patterns of memory access, that might cause similar issues.

riban-bw commented on June 8, 2024

Hi guys!

Thanks David for your investigation. It is good to see this analysis. It certainly makes sense that running a process from Python and waiting for its result will block. As jofemodo says, we could create a library to access this data or, as you suggest, we could obtain the info in a separate thread. Maybe a combination of the two would be best, i.e. a library that regularly gets the required data, running in its own thread and updating a data point / variable that the main UI code can access.
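
A sketch of that pattern in Python (illustrative names, not actual zynthian-ui code): a daemon thread polls the value and the UI only ever reads the cached copy:

import threading
import time

class ThrottledMonitor:
    # Polls the throttled status in a background thread; the main/UI thread
    # only reads the cached value and never blocks on the underlying call.
    def __init__(self, reader, interval=5.0):
        self._reader = reader        # e.g. a fork-free reader like the sysfs sketch above
        self._interval = interval
        self._value = 0
        self._lock = threading.Lock()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            value = self._reader()
            with self._lock:
                self._value = value
            time.sleep(self._interval)

    @property
    def value(self):
        with self._lock:
            return self._value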

Regarding splitting UI and core functionality - we are working on that right now...

riban-bw commented on June 8, 2024

I did a quick and dirty test disabling the call to vcgencmd, and xruns are reduced / less likely to occur. I think it may be prudent to do this as an interim measure until we resolve the underlying issue, e.g. implement direct access to the VideoCore via a library.

By disabling calls to vcgencmd we lose the indication of under-voltage / over-temperature but it is better to not be warned of a symptom than to have the monitoring of the symptom cause an actual issue.

jofemodo commented on June 8, 2024

I don't think this is needed anymore. I moved the vcgencmd call to a separate thread instead of calling it from the main thread, so the main thread no longer gets blocked. This is implemented on the z2rf branch.

Anyway, I still think we should avoid forking to get this kind of info, so we should leave this open until we implement a library with the vcgencmd functionality. It should be quite easy, mostly cut and paste from here:

https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/gencmd/gencmd.c

riban-bw commented on June 8, 2024

@jofemodo you marked this ticket with urgent and hot-fix tags due to the impact it had on core operation. I agree, which is why I suggested temporarily removing the call to allow users to experience better audio fidelity. It's a simple patch to disable this until we merge the fix in z2rf, which I guess will be a while.

jofemodo commented on June 8, 2024

The z2rf branch is now merged into testing. It includes the fix I mentioned above:

eedd72a

Sorry, I did it while refactoring the zynthian_gui code, so the change is mixed with a lot of unrelated changes.
This could be a problem for easily merging into stable, especially if we want to "cherry-pick". I'm thinking about it ...

Regards,

riban-bw commented on June 8, 2024

I tried z2rf and didn't notice a substantial difference in xruns, so I'm not convinced the change has had much impact. @davidfurey would you have time to perform a similar analysis on the latest testing branch?

davidfurey commented on June 8, 2024

I've switched to the latest testing branch, and the pattern has changed slightly, but fundamentally I'm still seeing a correlation between xruns and the fork that happens as part of calling vcgencmd. I've verified this using KernelShark, and by commenting out the call to vcgencmd and observing the xruns disappear.

The issue was not the main thread being blocked, so I'm not too surprised that moving the fork off the main thread didn't reduce the number of xruns.

I think we either need to find a way of getting the under-voltage / over-temperature info without forking, or ensure the fork happens in a different process, not just a different thread.
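
A sketch of the second option, assuming a small helper process is started before any RT/audio threads exist; the fork/exec of vcgencmd then happens in that helper's tiny address space rather than in the main audio/UI process (illustrative only, not zynthian code):

import multiprocessing as mp
import subprocess

def _monitor(conn):
    # Runs in the small helper process: the fork of vcgencmd happens here.
    while True:
        request = conn.recv()
        if request == "quit":
            break
        out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
        conn.send(out.strip())

if __name__ == "__main__":
    parent, child = mp.Pipe()
    # Start the helper early, before JACK clients / RT threads are created.
    helper = mp.Process(target=_monitor, args=(child,))
    helper.start()

    parent.send("get")       # later, the main process asks without forking itself
    print(parent.recv())     # e.g. "throttled=0x0"

    parent.send("quit")
    helper.join()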

[Screenshot from 2021-12-18 16-13-25]

steveb commented on June 8, 2024

It looks like vcgencmd get_throttled is just reading /dev/vchiq and parsing the result to build the output value. I bet zynthian could read that device file directly for significantly lower overhead. The vcgencmd source might need to be read to know what to parse.


root@zynthian-1:~# strace vcgencmd get_throttled
execve("/usr/bin/vcgencmd", ["vcgencmd", "get_throttled"], 0xbeeeecb4 /* 105 vars */) = 0
brk(NULL)                               = 0xd5f000
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6fd9000
access("/etc/ld.so.preload", R_OK)      = 0
openat(AT_FDCWD, "/etc/ld.so.preload", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=54, ...}) = 0
mmap2(NULL, 54, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0) = 0xb6fd8000
close(3)                                = 0
readlink("/proc/self/exe", "/opt/vc/bin/vcgencmd", 4096) = 20
openat(AT_FDCWD, "/usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\254\3\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=17708, ...}) = 0
mmap2(NULL, 81964, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f96000
mprotect(0xb6f9a000, 61440, PROT_NONE)  = 0
mmap2(0xb6fa9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0xb6fa9000
close(3)                                = 0
munmap(0xb6fd8000, 54)                  = 0
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=98423, ...}) = 0
mmap2(NULL, 98423, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb6f7d000
close(3)                                = 0
openat(AT_FDCWD, "/opt/vc/lib/libvchiq_arm.so", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0(\26\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=30288, ...}) = 0
mmap2(NULL, 88456, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f67000
mprotect(0xb6f6d000, 61440, PROT_NONE)  = 0
mmap2(0xb6f7c000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0xb6f7c000
close(3)                                = 0
openat(AT_FDCWD, "/opt/vc/lib/libvcos.so", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\210*\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=169364, ...}) = 0
mmap2(NULL, 102288, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f4e000
mprotect(0xb6f57000, 61440, PROT_NONE)  = 0
mmap2(0xb6f66000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8000) = 0xb6f66000
close(3)                                = 0
openat(AT_FDCWD, "/lib/arm-linux-gnueabihf/libpthread.so.0", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\224O\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=130416, ...}) = 0
mmap2(NULL, 168560, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f24000
mprotect(0xb6f3b000, 61440, PROT_NONE)  = 0
mmap2(0xb6f4a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0xb6f4a000
mmap2(0xb6f4c000, 4720, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb6f4c000
close(3)                                = 0
openat(AT_FDCWD, "/lib/arm-linux-gnueabihf/libdl.so.2", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0 \n\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=9768, ...}) = 0
mmap2(NULL, 73924, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f11000
mprotect(0xb6f13000, 61440, PROT_NONE)  = 0
mmap2(0xb6f22000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0xb6f22000
close(3)                                = 0
openat(AT_FDCWD, "/lib/arm-linux-gnueabihf/librt.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0 \30\0\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=26600, ...}) = 0
mmap2(NULL, 90648, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6efa000
mprotect(0xb6f00000, 61440, PROT_NONE)  = 0
mmap2(0xb6f0f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0xb6f0f000
close(3)                                = 0
openat(AT_FDCWD, "/lib/arm-linux-gnueabihf/libc.so.6", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\274x\1\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1296004, ...}) = 0
mmap2(NULL, 1364764, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6dac000
mprotect(0xb6ee4000, 65536, PROT_NONE)  = 0
mmap2(0xb6ef4000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x138000) = 0xb6ef4000
mmap2(0xb6ef7000, 8988, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb6ef7000
close(3)                                = 0
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6fd7000
set_tls(0xb6fd7500)                     = 0
mprotect(0xb6ef4000, 8192, PROT_READ)   = 0
mprotect(0xb6f4a000, 4096, PROT_READ)   = 0
mprotect(0xb6f0f000, 4096, PROT_READ)   = 0
mprotect(0xb6f22000, 4096, PROT_READ)   = 0
mprotect(0xb6fa9000, 4096, PROT_READ)   = 0
mprotect(0xb6fdb000, 4096, PROT_READ)   = 0
munmap(0xb6f7d000, 98423)               = 0
set_tid_address(0xb6fd70a8)             = 13986
set_robust_list(0xb6fd70b0, 12)         = 0
rt_sigaction(SIGRTMIN, {sa_handler=0xb6f288e8, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO, sa_restorer=0xb6dd9120}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {sa_handler=0xb6f289a4, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0xb6dd9120}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
ugetrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
futex(0xb6f66770, FUTEX_WAKE_PRIVATE, 2147483647) = 0
openat(AT_FDCWD, "/dev/vchiq", O_RDWR|O_LARGEFILE) = 3
ioctl(3, _IOC(_IOC_READ|_IOC_WRITE, 0xc4, 0xa, 0x8), 0xbed33ae4) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0x10, 0), 0x8) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0, 0), 0) = 0
mmap2(NULL, 8392704, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0xb65ab000
mprotect(0xb65ac000, 8388608, PROT_READ|PROT_WRITE) = 0
brk(NULL)                               = 0xd5f000
brk(0xd80000)                           = 0xd80000
clone(child_stack=0xb6daaf78, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0xb6dab4a8, tls=0xb6dab900, child_tidptr=0xb6dab4a8) = 13987
ioctl(3, _IOC(_IOC_READ|_IOC_WRITE, 0xc4, 0x2, 0x1c), 0xbed33a64) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xd, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xc, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_WRITE, 0xc4, 0x4, 0xc), 0xbed33ac4) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xd, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xc, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_READ|_IOC_WRITE, 0xc4, 0x8, 0x10), 0xbed33ac0) = 18
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xd, 0), 0x3806d006) = 0
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(0x88, 0x1), ...}) = 0
write(1, "throttled=0x0\n", 14throttled=0x0
)         = 14
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xc, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0xb, 0), 0x3806d006) = 0
ioctl(3, _IOC(_IOC_NONE, 0xc4, 0x1, 0), 0) = 0
futex(0xb6dab4a8, FUTEX_WAIT, 13987, NULL) = -1 EAGAIN (Resource temporarily unavailable)
close(3)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++
root@zynthian-1:~# 
root@zynthian-1:~# 
root@zynthian-1:~# cat /dev/vchiq
State 0: CONNECTED
  tx_pos=c48968(@2372983a), rx_pos=c49ed0(@511b0a12)
  Version: 8 (min 3)
  Stats: ctrl_tx_count=459026, ctrl_rx_count=462170, error_count=0
  Slots: 30 available (29 data), 0 recyclable, 0 stalls (0 data)
  Platform: 2835 (VC master)
  Local: slots 34-64 tx_pos=c48968 recycle=c67
    Slots claimed:
    DEBUG: SLOT_HANDLER_COUNT = 918080(e0240)
    DEBUG: SLOT_HANDLER_LINE = 1877(755)
    DEBUG: PARSE_LINE = 1851(73b)
    DEBUG: PARSE_HEADER = -354873656(ead90ec8)
    DEBUG: PARSE_MSGID = 67108870(4000006)
    DEBUG: AWAIT_COMPLETION_LINE = 1253(4e5)
    DEBUG: DEQUEUE_MESSAGE_LINE = 950(3b6)
    DEBUG: SERVICE_CALLBACK_LINE = 688(2b0)
    DEBUG: MSG_QUEUE_FULL_COUNT = 0(0)
    DEBUG: COMPLETION_QUEUE_FULL_COUNT = 0(0)
  Remote: slots 2-32 tx_pos=c49ed0 recycle=c68
    Slots claimed:
      16: 69/68
    DEBUG: SLOT_HANDLER_COUNT = 688574(a81be)
    DEBUG: SLOT_HANDLER_LINE = 1851(73b)
    DEBUG: PARSE_LINE = 1827(723)
    DEBUG: PARSE_HEADER = -354748064(eadaf960)
    DEBUG: PARSE_MSGID = 67133440(4006000)
    DEBUG: AWAIT_COMPLETION_LINE = 0(0)
    DEBUG: DEQUEUE_MESSAGE_LINE = 0(0)
    DEBUG: SERVICE_CALLBACK_LINE = 0(0)
    DEBUG: MSG_QUEUE_FULL_COUNT = 0(0)
    DEBUG: COMPLETION_QUEUE_FULL_COUNT = 0(0)
Service 0: LISTENING (ref 1) 'KEEP' remote n/a (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance a364ee7f
Service 1: OPEN (ref 5) 'SMEM' remote 27 (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=1, tx_bytes=12, rx_count=1, rx_bytes=12
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance f90d4ed4
Service 2: OPEN (ref 1) 'mmal' remote 81 (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=10, tx_bytes=500, rx_count=10, rx_bytes=2136
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance 7deb57bb
Service 3: OPEN (ref 1) 'mmal' remote 85 (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=7, tx_bytes=376, rx_count=7, rx_bytes=1164
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance 5016aeda
Service 4: OPEN (ref 1) 'mmal' remote 84 (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=8, tx_bytes=412, rx_count=8, rx_bytes=1448
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance a8c6fb76
Service 5: OPEN (ref 1) 'mmal' remote 83 (msg use 0/3840, slot use 0/15)
  Bulk: tx_pending=0 (size 0), rx_pending=0 (size 0)
  Ctrl: tx_count=9, tx_bytes=448, rx_count=9, rx_bytes=2044
  Bulk: tx_count=0, tx_bytes=0, rx_count=0, rx_bytes=0
  0 quota stalls, 0 slot stalls, 0 bulk stalls, 0 aborted, 0 errors
  instance 29ae8968

steveb commented on June 8, 2024

On second thoughts, it's not just reading /dev/vchiq, it's calling ioctls which set up DMA for the reads. This would be hard and risky to do directly from Python.

jofemodo commented on June 8, 2024

Ouoaaooo!! I wasn't aware of this sysfs interface! Very good point, @steveb!!
Let's try it and see if we get rid of these annoying XRUNs.
