
openmodelz's Introduction

OpenModelZ

discord invitation link trackgit-views docs all-contributors CI PyPI version Coverage Status

What is OpenModelZ?

OpenModelZ (mdz) is a tool to deploy your models to any cluster (GCP, AWS, Lambda Labs, your home lab, or even a single machine).

Getting models into production is hard for data scientists and SREs. You need to configure the monitoring, logging, and scaling infrastructure, with the right security and permissions, and then set up the domain, SSL, and load balancer. This can take weeks or months of work, even for a single model deployment.

You can now use mdz deploy to effortlessly deploy your models. OpenModelZ handles all the infrastructure setup for you. Each deployment gets a public subdomain, like http://jupyter-9pnxd.2.242.22.143.modelz.live, making it easily accessible.

OpenModelZ

Benefits

OpenModelZ provides the following features out-of-the-box:

  • 📈 Auto-scaling from 0: The number of inference servers scales with the workload. You could start from 0 and scale up to 10+ replicas easily.
  • 📦 Support for any machine learning framework: You could deploy any machine learning framework (e.g. vLLM, triton-inference-server, mosec, etc.) with a single command, or deploy your own custom inference server.
  • 🔬 Gradio/Streamlit/Jupyter support: We provide a robust prototyping environment with support for Gradio, Streamlit, Jupyter, and more. You could visualize your model's performance and debug it easily in a notebook, or deploy a web app for your model with a single command.
  • 🏃 Start from a single machine and grow to a cluster of machines: You could start on a single machine and scale up to a cluster of machines without any hassle, with a single mdz server start command.
  • 🚀 Publicly accessible subdomain for each deployment (optional): We provision a separate subdomain for each deployment without any extra cost or effort, making each deployment easily accessible from the outside.

OpenModelZ is the foundational component of the ModelZ platform available at modelz.ai.

How it works

Get a server (could be a cloud VM, a home lab, or even a single machine) and run the mdz server start command. OpenModelZ will bootstrap the server for you.

$ mdz server start
🚧 Creating the server...
🚧 Initializing the load balancer...
🚧 Initializing the GPU resource...
🚧 Initializing the server...
🚧 Waiting for the server to be ready...
🐋 Checking if the server is running...
🐳 The server is running at http://146.235.213.84.modelz.live
🎉 You could set the environment variable to get started!

export MDZ_URL=http://146.235.213.84.modelz.live
$ export MDZ_URL=http://146.235.213.84.modelz.live

Then you could deploy your model with a single command mdz deploy and get the endpoint:

$ mdz deploy --image modelzai/gradio-stable-diffusion:23.03 --name sdw --port 7860 --gpu 1
Inference sdw is created
$ mdz list
 NAME  ENDPOINT                                                 STATUS  INVOCATIONS  REPLICAS 
 sdw   http://sdw-qh2n0y28ybqc36oc.146.235.213.84.modelz.live   Ready           174  1/1      
       http://146.235.213.84.modelz.live/inference/sdw.default                                

Quick Start 🚀

Install mdz

You can install OpenModelZ using the following command:

pip install openmodelz

You could verify the installation by running the following command:

mdz

Once you've installed mdz, you can start deploying models and experimenting with them.

Bootstrap mdz

It's super easy to bootstrap the mdz server. You just need to find a server (could be a cloud VM, a home lab, or even a single machine) and run the mdz server start command.

Notice: Root permission may be required to bootstrap the mdz server on port 80.

$ mdz server start
🚧 Creating the server...
🚧 Initializing the load balancer...
🚧 Initializing the GPU resource...
🚧 Initializing the server...
🚧 Waiting for the server to be ready...
🐋 Checking if the server is running...
Agent:
 Version:       v0.0.13
 Build Date:    2023-07-19T09:12:55Z
 Git Commit:    84d0171640453e9272f78a63e621392e93ef6bbb
 Git State:     clean
 Go Version:    go1.19.10
 Compiler:      gc
 Platform:      linux/amd64
๐Ÿณ The server is running at http://192.168.71.93.modelz.live
๐ŸŽ‰ You could set the environment variable to get started!

export MDZ_URL=http://192.168.71.93.modelz.live

The internal IP address will be used as the default endpoint of your deployments. To make them accessible from the outside world, you could provide the public IP address of your server to the mdz server start command.

# Provide the public IP as an argument
$ mdz server start 1.2.3.4

You could also specify the registry mirror to speed up the image pulling process. Here is an example:

$ mdz server start --mirror-endpoints https://docker.mirrors.sjtug.sjtu.edu.cn

Create your first UI-based deployment

Once you've bootstrapped the mdz server, you can start deploying your first applications. We will use a Jupyter notebook as an example in this tutorial. You could use any Docker image as your deployment.

$ mdz deploy --image jupyter/minimal-notebook:lab-4.0.3 --name jupyter --port 8888 --command "jupyter notebook --ip='*' --NotebookApp.token='' --NotebookApp.password=''"
Inference jupyter is created
$ mdz list
 NAME     ENDPOINT                                                   STATUS  INVOCATIONS  REPLICAS
 jupyter  http://jupyter-9pnxdkeb6jsfqkmq.192.168.71.93.modelz.live  Ready           488  1/1
          http://192.168.71.93/inference/jupyter.default                                                                         

You could access the deployment by visiting the endpoint URL. The endpoint will be automatically generated for each deployment with the following format: <name>-<random-string>.<ip>.modelz.live.

It is http://jupyter-9pnxdkeb6jsfqkmq.192.168.71.93.modelz.live in this case. The endpoint could be accessed from the outside world as well if you've provided the public IP address of your server to the mdz server start command.
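
The naming scheme can be reproduced with a small helper. This is only a sketch for illustration; the actual random-string generation inside mdz may differ.

```python
import secrets
import string

def deployment_endpoint(name, ip, domain="modelz.live"):
    """Build an endpoint in the <name>-<random-string>.<ip>.<domain> format."""
    # A 16-character lowercase alphanumeric suffix, mirroring the examples above.
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(16))
    return f"http://{name}-{suffix}.{ip}.{domain}"

url = deployment_endpoint("jupyter", "192.168.71.93")
# e.g. http://jupyter-9pnxdkeb6jsfqkmq.192.168.71.93.modelz.live
```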

jupyter notebook

Create your first OpenAI compatible API server

You could also create API-based deployments. We will use an OpenAI-compatible API server backed by Bloomz 560M as an example in this tutorial.

$ mdz deploy --image modelzai/llm-bloomz-560m:23.07.4 --name simple-server
Inference simple-server is created
$ mdz list
 NAME           ENDPOINT                                                         STATUS  INVOCATIONS  REPLICAS 
 jupyter        http://jupyter-9pnxdkeb6jsfqkmq.192.168.71.93.modelz.live        Ready           488  1/1      
                http://192.168.71.93/inference/jupyter.default                                                 
 simple-server  http://simple-server-lagn8m9m8648q6kx.192.168.71.93.modelz.live  Ready             0  1/1      
                http://192.168.71.93/inference/simple-server.default                                           

You could use the OpenAI Python package, pointed at the endpoint (http://simple-server-lagn8m9m8648q6kx.192.168.71.93.modelz.live in this case), to interact with the deployment.

# Requires the pre-1.0 OpenAI Python SDK, which supports overriding api_base
import openai

openai.api_base = "http://simple-server-lagn8m9m8648q6kx.192.168.71.93.modelz.live"
openai.api_key = "any"

# create a chat completion against the Bloomz deployment
chat_completion = openai.ChatCompletion.create(
    model="bloomz",
    messages=[
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "I am a student"},
        {"role": "user", "content": "What do you learn?"},
    ],
    max_tokens=100,
)

Scale your deployment

You could scale your deployment by using the mdz scale command.

$ mdz scale simple-server --replicas 3

The requests will be load balanced between the replicas of your deployment.

You could also tell mdz to autoscale your deployment based on the inflight requests. Please check out the Autoscaling documentation for more details.
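
Conceptually, spreading requests across replicas round-robin looks like this toy sketch (not the actual mdz load balancer):

```python
from itertools import cycle

# Three replicas of the same inference deployment.
replicas = ["simple-server-0", "simple-server-1", "simple-server-2"]
next_replica = cycle(replicas)

# Six incoming requests are handed out in turn, so each replica
# receives exactly two of them.
targets = [next(next_replica) for _ in range(6)]
```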

Debug your deployment

Sometimes you may want to debug your deployment. You could use the mdz logs command to get the logs of your deployment.

$ mdz logs simple-server
simple-server-6756dd67ff-4bf4g: 10.42.0.1 - - [27/Jul/2023 02:32:16] "GET / HTTP/1.1" 200 -
simple-server-6756dd67ff-4bf4g: 10.42.0.1 - - [27/Jul/2023 02:32:16] "GET / HTTP/1.1" 200 -
simple-server-6756dd67ff-4bf4g: 10.42.0.1 - - [27/Jul/2023 02:32:17] "GET / HTTP/1.1" 200 -

You could also use the mdz exec command to execute a command in the container of your deployment. You do not need to ssh into the server to do that.

$ mdz exec simple-server ps
PID   USER     TIME   COMMAND
    1 root       0:00 /usr/bin/dumb-init /bin/sh -c python3 -m http.server 80
    7 root       0:00 /bin/sh -c python3 -m http.server 80
    8 root       0:00 python3 -m http.server 80
    9 root       0:00 ps
$ mdz exec simple-server -ti bash
bash-4.4# 

Or you could port-forward the deployment to your local machine and debug it locally.

$ mdz port-forward simple-server 7860
Forwarding inference simple-server to local port 7860

Add more servers

You could add more servers to your cluster with the mdz server join command. mdz will be bootstrapped on the new server, which joins the cluster automatically.

$ mdz server join <internal ip address of the previous server>
$ mdz server list
 NAME   PHASE  ALLOCATABLE      CAPACITY        
 node1  Ready  cpu: 16          cpu: 16         
               mem: 32784748Ki  mem: 32784748Ki 
               gpu: 1           gpu: 1      
 node2  Ready  cpu: 16          cpu: 16         
               mem: 32784748Ki  mem: 32784748Ki 
               gpu: 1           gpu: 1      

Label your servers

You could label your servers to deploy your models to specific servers. For example, you could label your servers with gpu=true and deploy your models to servers with GPUs.

$ mdz server label node3 gpu=true type=nvidia-a100
$ mdz deploy ... --node-labels gpu=true,type=nvidia-a100
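
Label selection presumably follows the usual Kubernetes-style subset rule: a node is eligible only if it carries every requested label. A toy sketch (not the actual scheduler):

```python
def node_matches(node_labels, required):
    # A node is eligible only if every required key/value pair is present.
    return all(node_labels.get(k) == v for k, v in required.items())

nodes = {
    "node1": {"gpu": "false"},
    "node3": {"gpu": "true", "type": "nvidia-a100"},
}
required = {"gpu": "true", "type": "nvidia-a100"}

eligible = [name for name, labels in nodes.items() if node_matches(labels, required)]
# only node3 carries both labels
```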

Architecture

OpenModelZ is inspired by k3s and OpenFaaS, but designed specifically for machine learning deployment. We keep the core of the system simple and easy to extend.

You do not need to read this section if you just want to deploy your models. But if you want to understand how OpenModelZ works, this section is for you.

OpenModelZ

OpenModelZ is composed of two components:

  • Data Plane: The data plane is responsible for the servers. You could use mdz server to manage them. The data plane is designed to be stateless and scalable, so you could easily scale it by adding more servers to the cluster. It uses k3s under the hood to support VMs, bare metal, and (in the future) IoT devices. You could also deploy OpenModelZ on an existing Kubernetes cluster.
  • Control Plane: The control plane is responsible for the deployments. It manages the deployments and the underlying resources.

Requests are routed to the inference servers by the load balancer, and the autoscaler scales the number of inference servers based on the workload. We provide the *.modelz.live domain by default, backed by a wildcard DNS server, so that each deployment gets a publicly accessible subdomain. You could also use your own domain.
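
The wildcard DNS approach works like sslip.io: the server's IP address is embedded in the hostname itself, so resolving any *.modelz.live name only requires parsing the IP back out. A minimal sketch of that resolution logic:

```python
import re

def resolve_embedded_ip(hostname):
    """Extract the IPv4 address embedded in a *.modelz.live hostname."""
    m = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})\.modelz\.live$", hostname)
    return m.group(1) if m else None

ip = resolve_embedded_ip("jupyter-9pnxdkeb6jsfqkmq.192.168.71.93.modelz.live")
# → '192.168.71.93'
```

A wildcard DNS server answering every *.modelz.live query this way means no per-deployment DNS records ever need to be created.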

You could check out the architecture documentation for more details.

Roadmap 🗂️

Please check out the ROADMAP.

Contribute 😊

We welcome all kinds of contributions from the open-source community, individuals, and partners.

Contributors โœจ

  • Ce Gao: 💻 👀 ✅
  • Jinjing Zhou: 💬 🐛 🤔
  • Keming: 💻 🎨 🚇
  • Nadeshiko Manju: 🐛 🎨 🤔
  • Teddy Xinyuan Chen: 📖
  • Wei Zhang: 💻
  • Xuanwo: 🖋 🎨 🤔
  • cutecutecat: 🤔
  • xieydd: 🤔

Acknowledgements 🙏

  • K3s for the single control-plane binary and process.
  • OpenFaaS for their work on serverless function services. It laid the foundation for OpenModelZ.
  • sslip.io for the wildcard DNS service. It makes it possible to access the server from the outside world without any setup.


openmodelz's Issues

bug(cli): server list -v will panic

https://github.com/tensorchord/openmodelz/blob/main/mdz/pkg/cmd/server_list.go#L103

panic: runtime error: slice bounds out of range [:-1]

goroutine 1 [running]:
github.com/tensorchord/openmodelz/mdz/pkg/cmd.labelsString(0xc0005c56b8?)
	/home/runner/work/openmodelz/openmodelz/mdz/pkg/cmd/server_list.go:103 +0x165
github.com/tensorchord/openmodelz/mdz/pkg/cmd.commandServerList(0x1bfed40, {0xc0004676d0?, 0x1?, 0x1?})
	/home/runner/work/openmodelz/openmodelz/mdz/pkg/cmd/server_list.go:72 +0x885
github.com/spf13/cobra.(*Command).execute(0x1bfed40, {0xc0004676b0, 0x1, 0x1})
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:940 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0x1c00ce0)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1068 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:992
github.com/tensorchord/openmodelz/mdz/pkg/cmd.Execute()
	/home/runner/work/openmodelz/openmodelz/mdz/pkg/cmd/root.go:46 +0x25
main.main()
	/home/runner/work/openmodelz/openmodelz/mdz/cmd/mdz/main.go:9 +0x17

A bit confused when starting an HTTP server and getting 404 not found

After starting a server, I got output like the following:

:) mdz server start
🚧 The server is already created, skip...
🚧 Initializing the load balancer...
🚧 Nvidia Toolkit is missing, skip the GPU initialization.
🚧 Initializing the server...
🚧 No domain provided, using the server IP...
🚧 Waiting for the server to be ready...
🚧 Verifying the load balancer...
🐋 Checking if the server is running...
Server:
 Name:          agent
 Orchestration: kubernetes
 Version:       v0.0.15
 Build Date:    2023-08-01T09:11:21Z
 Git Commit:    12b3ded93ddb64ae379955713487796bfe50223a
 Git State:     clean
 Go Version:    go1.19.11
 Compiler:      gc
 Platform:      linux/amd64
๐Ÿณ The server is running at http://192.168.1.120
๐ŸŽ‰ You could set the environment variable to get started!

export MDZ_URL=http://192.168.1.120

There isn't enough instruction provided here, so my most obvious action was to try visiting the URL. However, I received a 404 error, and my most obvious assumption was that I did something wrong.

Maybe we should improve this part?

This issue is written by a newcomer without any context of mdz.


Ideas:

  • Add a quick start, e.g. deploying a simple HTTP server.
  • Add a landing page for MDZ_URL.

Quickstart is not attractive enough

Starting a simple-server may be simple, but it is not attractive and is too far from our users.

Wearing User Hat

As a user interested in deploying a model, I would like to see a quick start guide that deploys an actual model (such as a LLaMA) within seconds, instead of just a simple server.

Wearing Advocator Hat

As an advocate, I want to show something off. For example:

Hey bro, I can start a LLaMA model with OpenModelZ in seconds!

I would never say something like:

Hey bro, I can start a simple-server with OpenModelZ in seconds!

Use ports higher than `1024` as our server's default port

On Linux, starting a server that listens on a port below 1024, such as 80, requires sudo permission. This can be frustrating for new users who attempt to start the server with mdz server start and then realize they need admin privileges.

:) mdz server start
[sudo] password for xuanwo:

bug: `mdz exec` will always echo the input string first

./bin/mdz exec llm -t bash                                       
(base) root@llm-8598f68565-45zmn:/workspace# ls
ls
MANIFEST.in  Makefile  README.md  build.envd  client.py  docs  images  pyproject.toml  requirements-cpu.txt  requirements.txt  src
(base) root@llm-8598f68565-45zmn:/workspace# ps      
ps
    PID TTY          TIME CMD
   1929 pts/97   00:00:00 bash
   1942 pts/97   00:00:00 ps

bug: connection 504 gateway timeout

I followed the steps in the documentation Create your first OpenAI compatible API server.

Here's the detailed output of mdz list:

╰─ mdz list
 NAME      ENDPOINT                                                    STATUS   INVOCATIONS  REPLICAS
 jupyter   http://jupyter-2j8g8a2w664nwlsu.192.168.0.239.modelz.live   Scaling            5  0/1
           http://localhost:80/inference/jupyter.default
 jupyter1  http://jupyter1-s37e8r27fc0bt7ap.192.168.0.239.modelz.live  Ready            187  1/1
           http://localhost:80/inference/jupyter1.default

Then when I access the deployment using the endpoint, the following error is raised:

{"message":"no addresses for \"mdz-jupyter.default\"","request":"GET /inference/jupyter.default/","op":"inference-proxy","error":{}}

I guess this is related to the scaling status, but I can't find the root cause by using the command mdz logs jupyter

feat: Host openmodelz registry

To share some templates and deployments.

Users could launch the inference services from the registry (or template) directly.

feat(doc): Mention how modelz.live works

To tell users that we don't forward traffic the way gradio.live does; we just provide DNS resolution for them.

And I also think llm-xxx.1-2-3-4.modelz.live looks better than llm-xxxx.1.2.3.4.modelz.live. We should consider using this dashed format if possible.
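
For illustration, a hypothetical helper mapping the dotted form onto the proposed dashed form:

```python
def dashed_ip_host(name, ip):
    # 1.2.3.4 → 1-2-3-4, the sslip.io-style dashed form
    return f"{name}.{ip.replace('.', '-')}.modelz.live"

host = dashed_ip_host("llm-xxxx", "1.2.3.4")
# → 'llm-xxxx.1-2-3-4.modelz.live'
```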

feat: Support verbose for all command

Just like this:

$ kubectl -v 8 get pods
I0719 12:06:17.841068 1550455 loader.go:373] Config loaded from file:  /etc/rancher/k3s/k3s.yaml
I0719 12:06:17.848270 1550455 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods?limit=500
I0719 12:06:17.848291 1550455 round_trippers.go:469] Request Headers:
I0719 12:06:17.848303 1550455 round_trippers.go:473]     Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0719 12:06:17.848309 1550455 round_trippers.go:473]     User-Agent: kubectl/v1.27.3 (linux/amd64) kubernetes/25b4e43
I0719 12:06:17.853202 1550455 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0719 12:06:17.853224 1550455 round_trippers.go:577] Response Headers:
I0719 12:06:17.853230 1550455 round_trippers.go:580]     Content-Type: application/json
I0719 12:06:17.853235 1550455 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3c57e337-0877-4d0f-ba0f-a86bc1c664e3
I0719 12:06:17.853241 1550455 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f6900bdc-cf1e-4548-8e4b-50f2d3812a84
I0719 12:06:17.853246 1550455 round_trippers.go:580]     Date: Wed, 19 Jul 2023 04:06:17 GMT
I0719 12:06:17.853251 1550455 round_trippers.go:580]     Audit-Id: c80551f6-bb55-45ba-a9fc-4a6748954242
I0719 12:06:17.853255 1550455 round_trippers.go:580]     Cache-Control: no-cache, private
I0719 12:06:17.853497 1550455 request.go:1188] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"57412"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{" [truncated 8307 chars]
NAME                                      READY   STATUS    RESTARTS      AGE
release-name-openmodelz-test-connection   0/1     Error     0             24h
llm-8598f68565-45zmn                      1/1     Running   2 (20h ago)   21h

bug: The endpoint without a domain gets some errors

For now, I have two jupyter instances:

╰─ mdz list
segment 2023/08/05 14:45:48 ERROR: sending request - Post "https://api.segment.io/v1/batch": x509: certificate is valid for ingress.local, not api.segment.io
segment 2023/08/05 14:45:48 ERROR: 1 messages dropped because they failed to be sent and the client was closed
segment 2023/08/05 14:45:48 ERROR: sending request - Post "https://api.segment.io/v1/batch": x509: certificate is valid for ingress.local, not api.segment.io
segment 2023/08/05 14:45:48 ERROR: 1 messages dropped because they failed to be sent and the client was closed
 NAME      ENDPOINT                                                    STATUS   INVOCATIONS  REPLICAS
 jupyter   http://jupyter-2j8g8a2w664nwlsu.192.168.0.239.modelz.live   Scaling            6  0/1
           http://localhost:80/inference/jupyter.default
 jupyter1  http://jupyter1-s37e8r27fc0bt7ap.192.168.0.239.modelz.live  Ready              2  1/1
           http://localhost:80/inference/jupyter1.default

When I access http://localhost:80/inference/jupyter1.default, it is redirected to http://localhost:80/tree by Jupyter.

feat: Investigate if we could use envd environments in mdz

Our http probe is a blocker for this.

Besides this, we only support one port in mdz. It will be 2222 in this case (the sshd server in envd). We cannot expose other ports if jupyter or streamlit is in the envd environment.

And I still cannot authenticate with the key. Not sure if it is caused by the reverse proxy in mdz.

ssh -vvv envd-u6z0k9k4zrvhvfsu.192.168.71.93.modelz.live -i $HOME/.config/envd/id_rsa_envd

pip install openmodelz==0.0.22 failed on macOS (Intel; Python 3.11)

pipx install openmodelz

Fatal error from pip prevented installation. Full pip output in file:
    /Users/tscp/.local/pipx/logs/cmd_2023-08-10_16.57.39_pip_errors.log

pip failed to build package:
    openmodelz

Some possibly relevant errors from pip install:
    error: subprocess-exited-with-error
    AssertionError: mdz build failed with code 2

Error installing openmodelz.
[1]    17081 exit 1     pipx install openmodelz

openmodelz PyPI version: 0.0.22
Full log:
cmd_2023-08-10_16.57.39_pip_errors.log

Unit in output of `mdz server list` is too small

:) mdz server list
 NAME         PHASE  ALLOCATABLE      CAPACITY
 xuanwo-work  Ready  cpu: 32          cpu: 32
                     mem: 65699508Ki  mem: 65699508Ki

Using Ki here is a bit strange.

  • Ki is too small; it's better to use MiB or even GiB.
  • Ki is not a correct unit for memory; it should be KiB.
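
For reference, the conversion the issue asks for (assuming Ki here means KiB):

```python
def kib_to_gib(kib):
    # 1 GiB = 1024 * 1024 KiB
    return kib / (1024 * 1024)

gib = round(kib_to_gib(65699508), 2)
# → 62.66
```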

feat: Support remove a node from the cluster

In some circumstances, one of the servers is in a crashed state and we cannot connect to it to execute mdz server stop or mdz server remove.

I think we could add a new command to help people remove the faulty node manually.

feat(mdz scale): Support scale subcommand

You could scale your deployment by using the mdz scale command.

$ mdz scale skilled-slug --replicas 3
Inference skilled-slug is scaled

You could configure the autoscaler to scale your deployment based on the number of requests in flight.

$ mdz scale skilled-slug --min 2 --max 5 --target-inflight-requests 10
Inference skilled-slug is scaled
