oscar's Introduction

OSCAR - Open Source Serverless Computing for Data-Processing Applications

Introduction

OSCAR is an open-source platform to support the event-driven serverless computing model for data-processing applications. It can be automatically deployed on multi-Clouds, and even on low-powered devices, to create highly-parallel event-driven data-processing serverless applications along the computing continuum. These applications execute on customized runtime environments provided by Docker containers that run on elastic Kubernetes clusters.

Information on how to deploy an OSCAR cluster using the Infrastructure Manager can be found at: https://grycap.github.io/oscar/deploy-im-dashboard/

For more documentation visit https://grycap.github.io/oscar/

NOTE: If you detect inaccurate or unclear information in the documentation, please report it back to us, either by opening an issue or by contacting us at [email protected]

Overview

Why OSCAR

FaaS platforms are typically oriented to the execution of short-lived functions, coded in a certain programming language, in response to events. Scientific applications can greatly benefit from this event-driven computing paradigm to trigger, on demand, the execution of a resource-intensive application that needs to process a file just uploaded to a storage service. However, executing generic applications in this way requires additional support beyond what existing open-source FaaS frameworks provide.

To this end, OSCAR supports the High Throughput Computing Programming Model initially introduced by the SCAR framework to create highly-parallel event-driven data-processing serverless applications that execute on customized runtime environments provided by Docker containers run on AWS Lambda.

With OSCAR, users upload files to a data storage back-end and this automatically triggers the execution of parallel invocations to a service responsible for processing each file. Output files are delivered to a data storage back-end for the convenience of the user. The user only specifies the Docker image and the script to be executed, inside a container created out of that image, to process a file that will be automatically made available to the container. The deployment of the computing infrastructure and its scalability are abstracted away from the user. Synchronous invocations are also supported to create scalable HTTP-based endpoints for triggering containerised applications.
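
For illustration, a minimal sketch of how such a service could be described in a YAML definition (the structure loosely follows OSCAR's Functions Definition Language; the cluster alias, service name, image, script and bucket paths are placeholders, not values taken from this document):

functions:
  oscar:
  - my-cluster:
      name: grayify                     # service name (placeholder)
      memory: 1Gi
      cpu: '1.0'
      image: myrepo/image-to-gray       # Docker image with the application (placeholder)
      script: grayify.sh                # script executed inside the container for each file
      input:
      - storage_provider: minio.default
        path: grayify/input             # uploading a file here triggers an invocation
      output:
      - storage_provider: minio.default
        path: grayify/output            # results are delivered to this bucket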

Components

OSCAR Components

OSCAR runs on an elastic Kubernetes cluster that is deployed using:

  • IM, an open-source virtual infrastructure provisioning tool for multi-Clouds.

The following components are deployed inside the Kubernetes cluster to support the enactment of the OSCAR platform:

  • CLUES, an elasticity manager that horizontally scales in and out the number of nodes of the Kubernetes cluster according to the workload.
  • MinIO, a high-performance distributed object storage server that provides an API compatible with S3.
  • Knative, a serverless framework to serve container-based applications for synchronous invocations (default Serverless Backend).
  • OSCAR Manager, the main API, responsible for the management of the services and the integration of the different components.
  • OSCAR UI, an easy-to-use web-based graphical user interface aimed at end users.

As external storage providers, the following services can be used:

  • External MinIO servers, which may be deployed in clusters other than the one hosting the platform.
  • Amazon S3, an object storage service that offers industry-leading scalability, data availability, security, and performance in the AWS public Cloud.
  • Onedata, the global data access solution for science, used in the EGI Federated Cloud.
  • dCache, a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.

An OSCAR cluster can be easily deployed via the IM Dashboard on any major public and on-premises Cloud provider, including the EGI Federated Cloud.

Licensing

OSCAR is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Acknowledgements

This development is partially funded by the EGI Strategic and Innovation Fund.

Partially funded by the project AI-SPRINT "AI in Secure Privacy-Preserving Computing Continuum" that has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant 101016577.

Also, Grant PDC2021-120844-I00 funded by Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación/ 10.13039/501100011033 and by “European Union NextGenerationEU/PRTR” and Grant PID2020-113126RB-I00 funded by Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación/ 10.13039/501100011033.

Silver Badge

This software has received a silver badge according to the Software Quality Baseline criteria defined by the EOSC-Synergy project. Please acknowledge the use of OSCAR by citing the following scientific publications (preprints available):

Sebastián Risco, Germán Moltó, Diana M. Naranjo and Ignacio Blanquer. (2021). Serverless Workflows for Containerised Applications in the Cloud Continuum. Journal of Grid Computing, 19(3), 30. https://doi.org/10.1007/s10723-021-09570-2

Alfonso Pérez, Sebastián Risco, Diana M. Naranjo, Miguel Caballer and Germán Moltó. (2019). Serverless Computing for Event-Driven Data Processing Applications. In 2019 IEEE International Conference on Cloud Computing (CLOUD 2019). https://ieeexplore.ieee.org/document/8814513/

oscar's People

Contributors

catttam, dependabot[bot], dialdroid, dianamariand92, gmolto, micafer, rajitha1998, sergiolangaritabenitez, srisco, vicente87

oscar's Issues

Failed "wait for tiller-deploy ready status"

When deploying an OSCAR cluster, the following error arises:

TASK [grycap.kubefaas : wait for tiller-deploy ready status] *******************
Monday 10 December 2018  16:28:41.059528
fatal: [158.42.104.153_0]: FAILED! => {"attempts": 20, "changed": true, "cmd": ["kubectl", "get", "pods", "--namespace=kube-system", "-o", "jsonpath={.status.containerStatuses[*].ready}"], "delta": "0:00:00.406186", "end": "2018-12-10 16:38:49.235268", "rc": 0, "start": "2018-12-10 16:38:48.829082", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

Make the function creation asynchronous

Creating a function takes between 60 and 120 seconds.
The API does not respond during all that time, and some clients may assume that the server has failed.

This could be solved with an asynchronous creation method that returns OK as soon as the request is received. Another method could then be used to check the function deployment progress, something like /functions/{functionName}/status.
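
A hedged sketch of how such an asynchronous flow could look from a client's perspective; the host name, payload file and response bodies are hypothetical, and the status endpoint is the one proposed above, not an existing OSCAR API:

# 1. Submit the creation request; the server acknowledges it immediately.
curl -X POST https://oscar.example.com/functions -d @function.json
# -> 202 Accepted  {"name": "my-function", "status": "deploying"}

# 2. Poll the proposed status endpoint until deployment finishes.
curl https://oscar.example.com/functions/my-function/status
# -> {"name": "my-function", "status": "ready"}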

Documentation improvement for interLink integration

Regarding this document: https://github.com/grycap/oscar/blob/master/docs/interlink_integration.md

I suggest some improvements:

  • Replace Interlink with interLink, as the authors prefer.
  • Replace "HPC Vega" with "Remote host" in the image to generalize.
  • The sentence "Once the Virtual node and OSCAR are installed correctly, you use this node by adding the name of the virtual node in the InterLinkNodeName variable. Otherwise, to use a normal node of the Kubernetes cluster, let in blank """ is not clear; indicate where the InterLinkNodeName variable should be set. I would add a code example for clarification (a service description, maybe? A rough sketch follows this list).
  • In general, only capitalize proper nouns and words at the beginning of a sentence, so in "Annotations, Restrictions, and other things to keep in mind.", "Restrictions" should be "restrictions".
  • Please clarify the following sentence: "The OSCAR services annotations persist in the virtual node and affect the behavior of the offload jobs."
  • Regarding the section "Annotations, Restrictions, and other things to keep in mind." I think it would make sense to use a list of items here or write it as an FAQ since, currently, there are many different things in one section.
  • Where it reads "As a reminder, Interlink uses singularity to run a container with this characteristic", it should say "As a reminder, interLink uses singularity to run a container with these characteristics".
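
For reference, a rough sketch of the kind of example that could clarify the point above; the placement of the variable inside a service description is an assumption for illustration only, not the actual syntax documented by OSCAR or interLink:

functions:
  oscar:
  - my-cluster:
      name: offloaded-service
      image: myrepo/worker
      script: script.sh
      # Hypothetical placement: name of the interLink virtual node to offload
      # jobs to; leave blank to schedule on a regular Kubernetes node.
      InterLinkNodeName: my-virtual-node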

Restrict the Origin of Docker Images to a Set of Repositories

Describe the solution you'd like

Currently, an OSCAR service can be created out of an arbitrary Docker image from a repository. The owner of the OSCAR cluster may want to limit the scope of the Docker images employed to those available in a curated set of Docker image repositories (e.g. deephdc/*). A list of repositories and the usage of wildcards should be supported (e.g. deephdc/deep-oc-posenet-*) for greater flexibility.
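
A minimal sketch of how such a check could work, assuming the allow-list is a set of glob-like patterns; this is illustrative only and not part of the current OSCAR codebase:

// imageAllowed reports whether an image name matches any of the allowed
// repository patterns (e.g. "deephdc/*" or "deephdc/deep-oc-posenet-*").
package main

import (
	"fmt"
	"path"
)

func imageAllowed(image string, patterns []string) bool {
	for _, p := range patterns {
		if ok, _ := path.Match(p, image); ok {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"deephdc/*", "deephdc/deep-oc-posenet-*"}
	fmt.Println(imageAllowed("deephdc/deep-oc-posenet-tf", allowed)) // true
	fmt.Println(imageAllowed("someuser/cryptominer", allowed))       // false
}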

MinIO connection in local testing


Hello team,
I'm currently trying to run OSCAR locally.
I would like some clarification about connecting to MinIO on localhost.

  1. I followed the doc and executed: curl -sSL http://go.oscar.grycap.net | bash
  2. Then, when I create a bucket, the following error appears:
request.js:150
PUT https://minio.minio:9000/abc
net::ERR_NAME_NOT_RESOLVED

Thank you.

Specifications

  • Platform: Ubuntu 20.04, Chrome

Screenshots

Screenshot taken on 2022-07-18 at 23:53:58


Unable to Process Files with Unicode Characters

After trying to upload a file with a non-ASCII character to an input bucket, the pod responsible for processing the file fails with the following message:

sudo kubectl logs oscar-image-to-gray-708ad53a-0e7f-4c98-bcba-270c51862c90-zgqq9 -n oscar-fn
[7] Failed to execute script to_build
Traceback (most recent call last):
  File "to_build.py", line 106, in <module>
  File "to_build.py", line 94, in get_stdin
  File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcc in position 624: ordinal not in range(128)

The input file was named: "Foto cuádruple el 01-05-16 a las 10.57 #6"

Project setup guide | Documentation

Description

Hello, Grycap community! I have been trying to set up this project on my computer. However, the documentation found here only explains the OSCAR architecture and how to get started using the OSCAR framework. It would be nice to have a guide covering the project structure from a developer's perspective, to help contribute to this project. Is there anything I can refer to in order to understand the project structure and code? Thanks :)

Restrict Access to a Subset of VO members

Describe the solution you'd like

Currently, an OSCAR cluster with multi-tenancy support is integrated with one or several EGI-based Virtual Organizations (VOs). Therefore, all the members of the VO can access the cluster. It would be useful to be able to provide a set of EGI User IDs so that, among the members of the allowed VOs, only those users can access the cluster, for restricted visibility.
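
A hedged sketch of what such a restriction might look like in the cluster configuration; the field name and the ID values below are assumptions for illustration only, not an existing OSCAR option:

allowed_vo_users:
- 0123456789abcdef0123456789abcdef@egi.eu   # placeholder EGI User ID
- fedcba9876543210fedcba9876543210@egi.eu   # placeholder EGI User ID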

Error Pushing the OSCAR image to the Docker Registry

Attempting to deploy an OSCAR cluster results in the following error:

TASK [grycap.kubeoscar : Build the OSCAR image and push it to the docker registry] ***
fatal: [158.42.104.167_0]: FAILED! => {"changed": false, "msg": "Error pushing image registry.docker-registry/oscar: Get https://registry.docker-registry/v1/_ping: dial tcp 10.106.195.1:443: getsockopt: connection refused"}

Failed Deployment on AWS: Unable to start service nfs-kernel-server

Issue

Deploying the OSCAR stack on AWS results in the following error (while deploying the front-end):

TASK [grycap.nfs : Restart NFS server service] *********************************
fatal: [35.172.233.247_0]: FAILED! => {"changed": false, "msg": "Unable to start service nfs-kernel-server: Job for nfs-server.service canceled.\n"}

Way to reproduce

Deploy on AWS with the following command:

ec3 launch oscar-test-aws-1 -a /opt/im_devel/auth-solo-ec2.dat kubernetes_oscar aws-ubuntu

where aws-ubuntu is this file: aws-ubuntu.radl.

Full deployment log

Creating infrastructure
Infrastructure successfully created with ID: b9260084-d482-11e8-9dc0-0a580af40191
Error while configuring the infrastructure: 2018-10-20 16:11:39.473135: Select master VM
2018-10-20 16:11:39.475986: Wait master VM to boot
2018-10-20 16:12:30.443300: Wait master VM to have the SSH active.
2018-10-20 16:12:59.808859: Creating and copying Ansible playbook files
2018-10-20 16:13:01.486870: Copying YAML, hosts and inventory files.
2018-10-20 16:13:08.063545: Galaxy role grycap.im detected setting to install.
2018-10-20 16:13:08.063911: Galaxy role grycap.kubeminio detected setting to install.
2018-10-20 16:13:08.064124: Galaxy role grycap.clues detected setting to install.
2018-10-20 16:13:08.064318: Galaxy role grycap.oscarui detected setting to install.
2018-10-20 16:13:08.064501: Galaxy role grycap.kubeoscar detected setting to install.
2018-10-20 16:13:08.064754: Galaxy role grycap.kubernetes,nfs detected setting to install.
2018-10-20 16:13:08.064963: Galaxy role grycap.kubefaas detected setting to install.
2018-10-20 16:13:08.065164: Galaxy role grycap.kubeventgateway detected setting to install.
2018-10-20 16:13:08.065371: Galaxy role grycap.nfs detected setting to install.
2018-10-20 16:13:08.065596: Galaxy role grycap.kuberegistry detected setting to install.
2018-10-20 16:13:08.065774: Performing preliminary steps to configure Ansible.
2018-10-20 16:13:10.325116: Configure Ansible in the master VM.
2018-10-20 16:15:43.722167: Ansible successfully configured in the master VM.
VM 0:
Contextualization agent output processed successfullyGenerate and copy the ssh key

Launch task: wait_all_ssh
Waiting SSH access to VM: 35.172.233.247
Testing SSH access to VM: 172.31.3.49:22
Remote access to VM: 35.172.233.247 Open!
Changing the IP 172.31.3.49 for 35.172.233.247 in config files.
Task wait_all_ssh finished successfully
Process finished
Contextualization agent output processed successfullyGenerate and copy the ssh key
Launch task: basic
Waiting SSH access to VM: 35.172.233.247
Testing SSH access to VM: 172.31.3.49:22
Remote access to VM: 35.172.233.247 Open!
Requiretty successfully removed
Install grycap.im with ansible-galaxy.
Install grycap.kubeminio with ansible-galaxy.
Install grycap.clues with ansible-galaxy.
Install grycap.oscarui with ansible-galaxy.
Install grycap.kubeoscar with ansible-galaxy.
Install grycap.kubernetes,nfs with ansible-galaxy.
Install grycap.kubefaas with ansible-galaxy.
Install grycap.kubeventgateway with ansible-galaxy.
Install grycap.nfs with ansible-galaxy.
Install grycap.kuberegistry with ansible-galaxy.
Galaxy depencies file: - {src: grycap.im}
- {src: grycap.kubeminio}
- {src: grycap.clues}
- {src: grycap.oscarui}
- {src: grycap.kubeoscar}
- {src: grycap.kubernetes, version: nfs}
- {src: grycap.kubefaas}
- {src: grycap.kubeventgateway}
- {src: grycap.nfs}
- {src: grycap.kuberegistry}

Call Ansible

PLAY [35.172.233.247_0] ********************************************************

TASK [Check Python is installed] ***********************************************
ok: [35.172.233.247_0]

TASK [Delete apt processes] ****************************************************
changed: [35.172.233.247_0]

TASK [Bootstrap with python] ***************************************************
skipping: [35.172.233.247_0]

TASK [Install libselinux-python on redhat systems] *****************************
fatal: [35.172.233.247_0]: FAILED! => {"changed": false, "msg": "The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.. The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
...ignoring

TASK [Set the hostname of the node] ********************************************
changed: [35.172.233.247_0]

TASK [Disable SELinux in REL systems] ******************************************
fatal: [35.172.233.247_0]: FAILED! => {"changed": false, "msg": "libselinux-python required for this module"}
...ignoring

TASK [Add the authorized_key to the nodes] *************************************
changed: [35.172.233.247_0]

TASK [Add the authorized_key to the nodes again] *******************************
changed: [35.172.233.247_0]

TASK [Gather Facts] ************************************************************
ok: [35.172.233.247_0]

TASK [Ubuntu apt update] *******************************************************
ok: [35.172.233.247_0]

TASK [Ubuntu force apt update (avoid apt lock)] ********************************
skipping: [35.172.233.247_0]

TASK [Create YAML file to install the roles with ansible-galaxy] ***************
changed: [35.172.233.247_0]

TASK [Install galaxy roles] ****************************************************
changed: [35.172.233.247_0]

PLAY RECAP *********************************************************************
35.172.233.247_0           : ok=11   changed=6    unreachable=0    failed=0
35.172.233.247_0           : ok=11   changed=6    unreachable=0    failed=0

Task basic finished successfully
Process finished
Contextualization agent output processed successfullyGenerate and copy the ssh key
Launch task: main_front
Call Ansible

PLAY [allnowindows] ************************************************************

PLAY [35.172.233.247_0] ********************************************************

TASK [Copy the original /etc/hosts] ********************************************
changed: [35.172.233.247_0]

TASK [Copy the /etc/hosts] *****************************************************
changed: [35.172.233.247_0]

TASK [Merge /etc/hosts] ********************************************************
changed: [35.172.233.247_0]

TASK [Copy the /etc/hosts in windows native] ***********************************
skipping: [35.172.233.247_0]

TASK [debug] *******************************************************************
ok: [35.172.233.247_0] => {
    "changed": false,
    "msg": "Install user requested apps"
}

PLAY RECAP *********************************************************************
35.172.233.247_0           : ok=4    changed=3    unreachable=0    failed=0
35.172.233.247_0           : ok=4    changed=3    unreachable=0    failed=0

Task main_front finished successfully
Process finished
Contextualization agent output processed successfullyGenerate and copy the ssh key
Launch task: front_front
Call Ansible

PLAY [allnowindows] ************************************************************

PLAY [35.172.233.247_0] ********************************************************

TASK [iptables] ****************************************************************
[DEPRECATION WARNING]: The __init__.pyc callback plugin should be updated to
use the _get_item_label method instead. This feature will be removed in version
 2.11. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
skipping: [35.172.233.247_0] => (item=6443/tcp)
skipping: [35.172.233.247_0] => (item=8800/tcp)

TASK [firewalld] ***************************************************************
skipping: [35.172.233.247_0] => (item=6443/tcp)
skipping: [35.172.233.247_0] => (item=8800/tcp)

PLAY [35.172.233.247_0] ********************************************************

TASK [Create dir for the etcd NFS PV Top dir] **********************************
changed: [35.172.233.247_0]

TASK [grycap.nfs : include] ****************************************************
included: /etc/ansible/roles/grycap.nfs/tasks/front.yaml for 35.172.233.247_0

TASK [grycap.nfs : export the directories editing the file /etc/exports] *******
changed: [35.172.233.247_0] => (item={u'path': u'/pv/minio', u'export': u'wn*.localdomain(rw,async,no_root_squash,no_subtree_check,insecure)'})
changed: [35.172.233.247_0] => (item={u'path': u'/pv/registry', u'export': u'wn*.localdomain(rw,async,no_root_squash,no_subtree_check,insecure)'})
changed: [35.172.233.247_0] => (item={u'path': u'/pv/etcd-event-gateway', u'export': u'wn*.localdomain(rw,async,no_root_squash,no_subtree_check,insecure)'})

TASK [grycap.nfs : include] ****************************************************
included: /etc/ansible/roles/grycap.nfs/tasks/front-Debian.yaml for 35.172.233.247_0

TASK [grycap.nfs : update repositories cache and install NFS in Deb systems] ***
changed: [35.172.233.247_0]

TASK [grycap.nfs : set_fact] ***************************************************
ok: [35.172.233.247_0]

TASK [grycap.nfs : Ensure rpcbind is running] **********************************
ok: [35.172.233.247_0]

TASK [grycap.nfs : Restart NFS server service] *********************************
fatal: [35.172.233.247_0]: FAILED! => {"changed": false, "msg": "Unable to start service nfs-kernel-server: Job for nfs-server.service canceled.\n"}

PLAY RECAP *********************************************************************
35.172.233.247_0           : ok=7    changed=3    unreachable=0    failed=1
35.172.233.247_0           : ok=7    changed=3    unreachable=0    failed=1

ERROR executing playbook (1/1)
ERROR executing task front_front: (1/5)
Launch task: front_front

Unable to resolve Docker registry DN while IM is recontextualizing

Function jobs remain in "ErrImagePull" and "ImagePullBackOff" states while the cluster is in the process of recontextualization.

This error occurs because the corresponding line for "registry.docker-registry" is deleted from the /etc/hosts file while a new node is being created or deleted.
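
For context, this is the kind of /etc/hosts entry that disappears during recontextualization; the IP address below is illustrative (chosen to resemble the ClusterIP shown in the registry error above), not a fixed value:

10.106.195.1   registry.docker-registry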

The Docker daemon check in the local deployment script does not work in macOS

Expected Behavior

The local deployment approach (curl -sSL http://go.oscar.grycap.net | bash) should correctly detect Docker on macOS.

Actual Behavior

Executing curl -sSL http://go.oscar.grycap.net | bash throws the error "Error: Docker daemon is not working!" even when the Docker daemon is up and running.

Specifications

  • Platform: macOS
  • Subsystem: Sonoma 14.5

Possible Solution

  • Use docker info instead of curl -s --unix-socket /var/run/docker.sock http://ping > /dev/null to check the Docker status.
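
A minimal sketch of the proposed check, assuming a POSIX shell context in the deployment script:

# Succeeds only if the Docker CLI can talk to a running daemon,
# regardless of how the daemon is exposed (socket, Docker Desktop, etc.).
if ! docker info > /dev/null 2>&1; then
  echo "Error: Docker daemon is not working!" >&2
  exit 1
fi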

The pod name of each function is random

Describe the solution you'd like

When running kubectl get pod -n=oscar-svc, the pod name of each function is random.
If the pod name contained the service name, we could tell which pod belongs to which function through kubectl.


Enable Kubernetes Dashboard

Kubernetes Dashboard should be enabled as an additional service. This is useful to easily access the pod logs (among other goodies).

Edit function doesn't work

The edit functionality is not working.
We should update the code to support it or remove the button until it is supported.

Problem when deploying an OSCAR cluster via the IM Dashboard

Following the steps in https://docs.oscar.grycap.net/deploy-im-dashboard/ to deploy an OSCAR cluster in AWS (with a user with enough permissions to provision the corresponding EC2 instances) results in the following problem:

TASK [grycap.kubeminio : Install (or upgrade) the chart] ***********************
Saturday 01 June 2024 17:47:00.316064
fatal: [44.203.210.214_0]: FAILED! => {"changed": true, "cmd": ["helm", "upgrade", "minio", "--install", "minio/minio", "--namespace", "minio", "--create-namespace", "--values", "/tmp/minio-values.yaml", "--version", "5.0.14"], "delta": "0:05:02.038117", "end": "2024-06-01 17:52:02.676162", "msg": "non-zero return code", "rc": 1, "start": "2024-06-01 17:47:00.638045", "stderr": "Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition", "stderr_lines": ["Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
44.203.210.214_0 : ok=6 changed=3 unreachable=0 failed=1
44.203.210.214_0 : ok=6 changed=3 unreachable=0 failed=1

ERROR executing playbook (1/1)
ERROR executing task oscar_front_conf_front: (2/3)
Sleeping 20 secs.
Launch task: oscar_front_conf_front
Call Ansible

Steps to Reproduce the Problem

Following the steps in https://docs.oscar.grycap.net/deploy-im-dashboard/ to deploy an OSCAR cluster in AWS.

Specifications

  • Version:
  • Platform: AWS
  • Subsystem:

This same problem was reported by Giuseppe Caccia on 15 Nov 2022 (e-mail thread: TASK [grycap.kubeminio : Install (or upgrade) the chart] --> FAILURE). At that time it was a problem with version 4.0.6 of the Helm chart (hinted by @micafer). The version parameter was removed to make it work again, but apparently the problem has been reintroduced.

IM Infrastructure ID: 4b993674-203c-11ef-91ef-5a3710828b91

Mismatch between SCAR_OUTPUT_DIR and SCAR_OUTPUT_FOLDER

SCAR examples assume the existence of the environment variable SCAR_OUTPUT_DIR while OSCAR examples assume the existence of the variable SCAR_OUTPUT_FOLDER. We should use the same convention so that we can reuse the same script files across frameworks.
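
Until the conventions converge, a processing script can tolerate both variables; a small sketch (the output file name is a placeholder):

# Use whichever output variable the framework defines.
OUTPUT_DIR="${SCAR_OUTPUT_DIR:-$SCAR_OUTPUT_FOLDER}"
cp result.png "$OUTPUT_DIR/"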

Create functions with default resource values

It would be convenient to apply default resource values when new functions are created without specifying resource limits and requests. The default values could be 256Mi for memory and 250m for CPU.
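
In Kubernetes terms, the proposal amounts to applying something like the following resources block to the job's container whenever a service omits its own values; where exactly OSCAR would set this default is not specified here:

resources:
  requests:
    memory: 256Mi
    cpu: 250m
  limits:
    memory: 256Mi
    cpu: 250m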
