openshift-kni / baremetal-deploy
Deployment artifacts for OpenShift KNI bare metal clusters
Home Page: https://openshift-kni.github.io/baremetal-deploy/
License: Apache License 2.0
When running the TASK [installer : Create OpenShift Manifest] ***********************************
task, it fails with:
"stderr": "level=fatal msg=\"failed to fetch Master Machines: failed to load asset \\\"Install Config\\\": invalid \\\"install-config.yaml\\\" file: platform.baremetal.provisioningNetworkInterface: Invalid value: \\\"\\\": no provisioning network interface is configured, please set this value to be the interface on the provisioning network on your cluster's baremetal hosts\"",
My install-config.yaml generated by the playbook contains:
apiVersion: v1
baseDomain: xxx
metadata:
  name: xxx
networking:
  machineCIDR: xxx
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: xxx
    ingressVIP: xxx
    dnsVIP: xxx
    hosts:
      xxx
And according to https://github.com/openshift/installer/blob/master/docs/user/metal/install_ipi.md#install-config, platform.baremetal.provisioningNetworkInterface is required.
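For reference, a minimal sketch of the field the installer is asking for; the interface name eno1 here is just a placeholder for whatever NIC sits on the provisioning network:

platform:
  baremetal:
    provisioningNetworkInterface: eno1
    apiVIP: xxx
    ingressVIP: xxx
    dnsVIP: xxx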
I'd say to merge those two files as the purpose is almost the same.
Add networkType: OVNKubernetes as the default in the install-config.yaml
$subject
Environment Used
OCP 4.3 IPI on BM.
What is the issue?
While bringing up the control plane, the bootstrap node is unable to verify the CA, leading to API timeouts.
Host/Provisioner Node OS version
[kni@worker-0 root]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.1"
PLATFORM_ID="platform:el8"
What is happening
Bootstrap node is up and running:
[kni@worker-0 root]$ sudo virsh list
Id Name State
----------------------------------------------------
11 kni4-ccb7p-bootstrap running
All the containers inside this VM are also up:
[core@localhost ~]$ sudo podman ps -a | awk '{print $NF}'
NAMES
ironic-api
ironic-inspector
ironic-conductor
sad_jones
sweet_kalam
amazing_curran
condescending_hugle
xenodochial_ptolemy
coreos-downloader
ipa-downloader
mystifying_northcutt
sad_brahmagupta
stupefied_kepler
httpd
dnsmasq
mariadb
The logs inside the VM state:
DEBUG
DEBUG Apply complete! Resources: 12 added, 0 changed, 0 destroyed.
DEBUG OpenShift Installer v4.4.0
DEBUG Built from commit 8aace3c8a9497f3290cd0751fd45da1a4d7c6132
INFO Waiting up to 30m0s for the Kubernetes API at https://api.kni4.cloud.lab.eng.bos.redhat.com:6443...
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer")
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer")
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer")
I had made sure to clean up all the older volumes and VM residues before starting this installation.
What was expected instead
After the API is up, I was expecting a fully functional OCP 4.3 cluster.
Additional Info
For reference, here is my install-config.yaml file
apiVersion: v1
baseDomain: <YourDomain>
metadata:
  name: kni4
networking:
  machineCIDR: 10.19.136.0/21
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 10.19.138.86
    ingressVIP: 10.19.138.87
    dnsVIP: 10.19.138.88
    provisioningBridge: provisioning
    externalBridge: baremetal
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<URL>
          username: ***
          password: ******
        bootMACAddress: ec:f4:bb:da:0c:58
        hardwareProfile: default
      - name: ********
        role: master
        bmc:
          address: ipmi://<URL>
          username: *****
          password: *****
        bootMACAddress: ec:f4:bb:da:32:88
        hardwareProfile: default
      - name: *********
        role: master
        bmc:
          address: ipmi://<URL>
          username: ******
          password: ********
        bootMACAddress: ec:f4:bb:da:0d:98
        hardwareProfile: default
pullSecret: '<YourPullSecret>'
sshKey: '<YourSSH>'
I have removed a few lines in between, so it will not work as is.
I just noticed selinux is disabled in the baremetal-prep.sh script here https://github.com/openshift-kni/baremetal-deploy/blob/master/baremetal-prep/baremetal-prep.sh#L222
Is this really needed? Maybe we need a comment explaining why that is so.
Each feature needs to be linked in the top-level README.
Right now when the ansible-ipi-install playbook kicks off, it creates a tmp directory that stores the oc binaries (oc, kubelet, openshift-baremetal-install). However, if an error occurs during deployment, this tmp directory isn't cleaned up. Add logic that checks whether these tmp dirs exist and removes them in order to save space on the local HD.
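A rough sketch of what that cleanup could look like; the /tmp path and tmp* glob are assumptions about where the playbook unpacks the binaries, not its actual layout:

# Hypothetical cleanup tasks; adjust paths/patterns to where the playbook extracts
# oc, kubelet, and openshift-baremetal-install.
- name: Find leftover extraction directories
  find:
    paths: /tmp
    patterns: "tmp*"
    file_type: directory
  register: leftover_tmp_dirs

- name: Remove leftover extraction directories
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ leftover_tmp_dirs.files }}"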
The top-level README should point to the Ansible install documentation.
It should also state that it can be used for both 4.3 and 4.4.
I'd like to see caching and disconnected installs added to the Ansible playbook.
Installation requires downloading two machine OS images to install the bootstrap, control plane, and worker nodes, which typically happens over the internet.
For the bootstrap we use the QEMU RHCOS image, and for the other hosts we use the OpenStack image, which has the appropriate support for disconnected installs.
To set local locations for these images, update the install-config:
platform:
  baremetal:
    bootstrapOSImage: http://<local mirror>/rhcos-43.81.201912131630.0-qemu.x86_64.qcow2.gz?sha256=XYZ
    clusterOSImage: http://<local mirror>/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=XYZ
Notes:
You may provide a URL to the gzipped image or to the uncompressed image, but in BOTH cases the installer only verifies the hash after decompressing (this may change, see openshift/installer#2845).
To identify which images to download and the appropriate hashes, you can see how dev-scripts does it at https://github.com/openshift-metal3/dev-scripts/blob/master/rhcos.sh
Currently the install-config.yaml only uses IPMI. We want the ability to use Redfish or IPMI.
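As a sketch, a Redfish host entry could look like the following; the host, port, system path, and flag values are illustrative, not taken from this repo:

    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          # Redfish instead of IPMI; the Systems path depends on the BMC vendor.
          address: redfish://192.168.123.1:8000/redfish/v1/Systems/1
          username: admin
          password: password
          disableCertificateVerification: true
        bootMACAddress: 52:54:00:88:41:e0
        hardwareProfile: default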
I want to add tags so there is more control over the execution of top-level tasks.
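For illustration, a sketch of how the top-level roles could be tagged and then run selectively (the play layout below is assumed, only the node-prep and installer role names come from the repo):

# playbook.yml (sketch) -- tag each top-level role
- hosts: provisioner
  roles:
    - { role: node-prep, tags: ['node-prep'] }
    - { role: installer, tags: ['installer'] }

# then run only what you need, for example:
#   ansible-playbook -i inventory/hosts playbook.yml --tags installer
#   ansible-playbook -i inventory/hosts playbook.yml --skip-tags node-prep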
It would be great to have the openstack command available to debug issues,
for example: openstack baremetal node list
It looks like the 4.4 version of the SR-IOV operator needs to be deployed to work with OCP 4.4; otherwise, when you create the policy, the affected node becomes NotReady and unschedulable because of a CNI issue.
I can confirm 4.4 works.
Add networkType: OVNKubernetes to the install-config.yaml example so it reflects the default type we want others to use.
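A minimal snippet of what that would look like in the example's networking block:

networking:
  machineCIDR: 10.19.136.0/21
  networkType: OVNKubernetes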
In the event of a failed deployment, it is possible that the bootstrap VM and volumes are left behind. We should have the Ansible tool check for these old resources and remove them during execution of the installer tasks.
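A rough shell sketch of the kind of check the installer tasks could run; the "bootstrap" name match is an assumption (the actual domain name includes the cluster's infra ID):

# Sketch: remove a leftover bootstrap VM and its volumes before re-deploying.
for vm in $(sudo virsh list --all --name | grep bootstrap); do
  sudo virsh destroy "$vm" 2>/dev/null || true
  sudo virsh undefine "$vm" --remove-all-storage
done
# Any orphaned volumes left in the default pool can be listed with:
#   sudo virsh vol-list default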
Attempting to use roles via include_role and tasks_from fails in 10_validation.yml
with:
TASK [node-prep : Verify DNS records for API VIP, Wildcard (Ingress) VIP] **************************
Thursday 06 February 2020 00:32:37 +0000 (0:00:00.130) 0:00:03.060 *****
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'dig'. Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig lookup requires the python 'dnspython' library and it is not installed"}
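The quick fix is to install dnspython on the host running the playbook; on RHEL 8 the package should be python3-dns, and pip works as a fallback:

# package name is an assumption for RHEL 8; dnspython is what the dig lookup needs
sudo dnf install -y python3-dns
# or, if the package is not available in an enabled repo:
pip3 install --user dnspython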
The playbook currently takes the oc binaries (oc, openshift-baremetal-install, kubelet) and places them in your /usr/local/bin directory. However, if you wanted to reuse this provision host to install a new version of OCP, the existing playbook would notice that oc, openshift-baremetal-install, and kubelet binaries already exist in /usr/local/bin and would not place the newly extracted ones there. The solution is to add logic that checks whether those files exist and, if they do, removes them prior to untarring the new binaries into /usr/local/bin.
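A minimal sketch of that logic; removing the old binaries up front makes a separate existence check unnecessary:

# Sketch of the proposed check-and-remove logic for the extracted binaries.
- name: Remove stale oc binaries before extracting the new ones
  become: true
  file:
    path: "/usr/local/bin/{{ item }}"
    state: absent
  loop:
    - oc
    - kubelet
    - openshift-baremetal-install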
Each feature should be linked in the top-level README.
See https://github.com/openshift-kni/baremetal-deploy/blame/master/install-steps.md#L719
There's a lot of copy/pasting going on in the procedure. It's very easy to miss, since there isn't a note or step telling you to adjust this value; the installer will get 99% complete and then fail on machine-api.
Since we're using setenv, how about making this an env var and landing the value as well?
@rlopez133
This is the job executed:
if [ -f "features/performance/Makefile" ]; then
    cd features/performance
    ISOLATED_CPUS=2 RESERVED_CPUS=2 make
else
    exit 1
fi
This is the Jenkins Output:
Started by upstream project "TE-UC01" build number 36
originally caused by:
Started by user admin
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/workspace/E2E-Steps/Worker-Performance
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/openshift-kni/baremetal-deploy.git
> git init /var/lib/jenkins/workspace/E2E-Steps/Worker-Performance # timeout=10
Fetching upstream changes from https://github.com/openshift-kni/baremetal-deploy.git
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/openshift-kni/baremetal-deploy.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/openshift-kni/baremetal-deploy.git # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/openshift-kni/baremetal-deploy.git # timeout=10
Fetching upstream changes from https://github.com/openshift-kni/baremetal-deploy.git
> git fetch --tags --progress -- https://github.com/openshift-kni/baremetal-deploy.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 6ceaead20d5a777505315e137d08b2ede4772a0b (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 6ceaead20d5a777505315e137d08b2ede4772a0b # timeout=10
Commit message: "Provide scripts to generate and deploy performance manifests (#36)"
> git rev-list --no-walk 6ceaead20d5a777505315e137d08b2ede4772a0b # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content
KUBECONFIG=/var/lib/jenkins/.kube/e2e_kubeconfig
[EnvInject] - Variables injected successfully.
[Worker-Performance] $ /bin/sh -xe /tmp/jenkins212272282312616800.sh
+ '[' -f features/performance/Makefile ']'
+ cd features/performance
+ ISOLATED_CPUS=2
+ RESERVED_CPUS=2
+ make
./hack/generate.sh
./hack/deploy.sh
node/worker-0 labeled
machineconfigpool.machineconfiguration.openshift.io/master patched
machineconfigpool.machineconfiguration.openshift.io/worker patched
error: must specify one of -f and -k
make: *** [Makefile:3: deploy] Error 1
Build step 'Execute shell' changed build result to UNSTABLE
Finished: UNSTABLE
Have the ability to have the bmc address in install-config.yaml use something other than IPMI.
With the following install-config.yaml:
[kni@worker-1 ~]$ cat install-config.yaml
apiVersion: v1
baseDomain: qe.lab.redhat.com
networking:
  machineCIDR: 192.168.123.0/24
metadata:
  name: ocp-edge-cluster
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 192.168.123.5
    dnsVIP: 192.168.123.6
    ingressVIP: 192.168.123.10
    hosts:
      - name: openshift-master-2
        role: master
        bmc:
          address: ipmi://192.168.123.1:6232
          username: admin
          password: password
        bootMACAddress: 52:54:00:88:41:e0
        hardwareProfile: default
      - name: openshift-worker-0
        role: worker
        bmc:
          address: ipmi://192.168.123.1:6233
          username: admin
          password: password
        bootMACAddress: 52:54:00:cd:0a:b1
        hardwareProfile: unknown
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://192.168.123.1:6230
          username: admin
          password: password
        bootMACAddress: 52:54:00:5f:b0:ef
        hardwareProfile: default
      - name: openshift-master-1
        role: master
        bmc:
          address: ipmi://192.168.123.1:6231
          username: admin
          password: password
        bootMACAddress: 52:54:00:73:53:97
        hardwareProfile: default
Deployment fails because the installer deployed:
Note: deployment works correctly when I put master nodes first in the hosts list:
apiVersion: v1
baseDomain: qe.lab.redhat.com
networking:
  machineCIDR: 192.168.123.0/24
metadata:
  name: ocp-edge-cluster
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 192.168.123.5
    dnsVIP: 192.168.123.6
    ingressVIP: 192.168.123.10
    hosts:
      - name: openshift-master-2
        role: master
        bmc:
          address: ipmi://192.168.123.1:6232
          username: admin
          password: password
        bootMACAddress: 52:54:00:88:41:e0
        hardwareProfile: default
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://192.168.123.1:6230
          username: admin
          password: password
        bootMACAddress: 52:54:00:5f:b0:ef
        hardwareProfile: default
      - name: openshift-master-1
        role: master
        bmc:
          address: ipmi://192.168.123.1:6231
          username: admin
          password: password
        bootMACAddress: 52:54:00:73:53:97
        hardwareProfile: default
      - name: openshift-worker-0
        role: worker
        bmc:
          address: ipmi://192.168.123.1:6233
          username: admin
          password: password
        bootMACAddress: 52:54:00:cd:0a:b1
        hardwareProfile: unknown
[kni@worker-1 ~]$ echo $VERSION
4.3.0-0.nightly-2019-12-09-035405
[kni@worker-1 ~]$ echo $RELEASE_IMAGE
quay.io/openshift-release-dev/ocp-release-nightly@sha256:52d9ac31e14658a3e48bc9a2ce041220b697008e59319a9ce009e269097c3706
TASK [installer : Set Fact for RHCOS_URI and RHCOS_PATH] ***********************
task path: /home/jenkins/baremetal-deploy/ansible-ipi-install/roles/installer/tasks/30_create_metal3.yml:13
Wednesday 12 February 2020 11:38:52 +0000 (0:00:00.942) 0:01:52.304 ****
fatal: [xxx.redhat.com]: FAILED! => {
"msg": "You need to install \"jmespath\" prior to running json_query filter"
}
In RHEL 8 it is provided by the python3-jmespath package.
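On the provision host that runs the playbook:

sudo dnf install -y python3-jmespath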
Each feature should be linked in the top level README
https://github.com/openshift-kni/baremetal-deploy/blob/master/features/performance/README.md says to label nodes intended to be worker-rt as:
oc label node <node_name> machineconfiguration.openshift.io/role=worker-rt
vs MCP README https://github.com/openshift-kni/baremetal-deploy/blob/master/features/mcp/README.md
oc label $node node-role.kubernetes.io/worker-rt=""
Are both needed?
Each feature should be linked in the top level README
Currently there are no NTP/chrony configuration instructions. It would be nice to include how to add the chrony/NTP configuration as part of the installation with custom manifests; otherwise, if the clocks differ, it can lead to an unstable/unusable environment.
I use this for masters:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: null
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-etc-chrony-conf
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLmNvcnAucmVkaGF0LmNvbSBpYnVyc3QKc3RyYXR1bXdlaWdodCAwCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKcnRjc3luYwptYWtlc3RlcCAxMCAzCmJpbmRjbWRhZGRyZXNzIDEyNy4wLjAuMQpiaW5kY21kYWRkcmVzcyA6OjEKa2V5ZmlsZSAvZXRjL2Nocm9ueS5rZXlzCmNvbW1hbmRrZXkgMQpnZW5lcmF0ZWNvbW1hbmRrZXkKbm9jbGllbnRsb2cKbG9nY2hhbmdlIDAuNQpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg==
          verification: {}
        filesystem: root
        group:
          name: root
        mode: 420
        path: /etc/chrony.conf
        user:
          name: root
    systemd: {}
  osImageURL: ""
And this one for workers:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: null
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-etc-chrony-conf
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLmNvcnAucmVkaGF0LmNvbSBpYnVyc3QKc3RyYXR1bXdlaWdodCAwCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKcnRjc3luYwptYWtlc3RlcCAxMCAzCmJpbmRjbWRhZGRyZXNzIDEyNy4wLjAuMQpiaW5kY21kYWRkcmVzcyA6OjEKa2V5ZmlsZSAvZXRjL2Nocm9ueS5rZXlzCmNvbW1hbmRrZXkgMQpnZW5lcmF0ZWNvbW1hbmRrZXkKbm9jbGllbnRsb2cKbG9nY2hhbmdlIDAuNQpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg==
          verification: {}
        filesystem: root
        group:
          name: root
        mode: 420
        path: /etc/chrony.conf
        user:
          name: root
    systemd: {}
  osImageURL: ""
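In case it helps anyone reusing these manifests, the source field is just the base64 of a plain chrony.conf; something like:

# Encode a local chrony.conf for the MachineConfig 'source' data URL
base64 -w0 /path/to/chrony.conf
# paste the output after: data:text/plain;charset=utf-8;base64,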
It would be nice to add some examples. For instance, for ISOLATED_CPUS, is "0-2" valid, or does it need to be "0,1,2"?
Also, is it 'all or nothing'? What if I want to deploy just the RT kernel?
Currently the PTP operator is deployed from the PTP operator GitHub repository; it should be deployed from OperatorHub instead.
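For reference, deploying from OperatorHub is roughly an OperatorGroup plus a Subscription. The namespace, channel, and package name below are assumptions to illustrate the shape, not verified values for this operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp        # assumed namespace
spec:
  targetNamespaces:
  - openshift-ptp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp        # assumed namespace
spec:
  channel: "4.4"                  # assumed channel
  name: ptp-operator              # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace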
In 4.4, metal3-config.yaml is no longer required. Instead, the designation of the provisioning interface name has been moved to install-config.yaml as provisioningNetworkInterface. We should inject this value into the generated install-config.
error: unable to recognize "STDIN": no matches for kind "SriovNetworkNodePolicy" in version "sriovnetwork.openshift.io/v1"
The "Deploying Routers on Worker Nodes" mentions setting the replicas for the routers to be the same as the number of worker nodes deployed.
If no worker nodes are deployed during installation, what value should this be set to?
For instance, if i will have one worker node added after installation because it is the node i am provisioning from.
Can this be adjusted later? Do i set it to 0 during installation and then how do i increment it when i reconfigure the provisioning system to be a worker.
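It can be adjusted after installation; something along these lines should bump the router count once the extra worker joins (the ingresscontroller is named default):

oc patch ingresscontroller/default -n openshift-ingress-operator \
  --type merge -p '{"spec":{"replicas": 1}}'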
There are a few packages that are no longer needed; they were used back when an ISO was required. To improve the installation time, the set of installed packages should be reduced.
Also, if they are installed as a dependency of other packages, they should not be explicitly added.
openshift-machine-api kni1-worker-0 error provisioning kni1-worker-0-r7gcg ipmi://10.19.143.29 unknown true Image provisioning failed: node f7819fc2-50ee-4b25-bb9b-3269f3ffac85 command status errored: {'type': 'ImageWriteError', 'code': 500, 'message': 'Error writing image to device: Writing image to device /dev/sda failed with exit code 1. stdout: write_image.sh: Erasing existing GPT and MBR data structures from /dev/sda\n. stderr: blockdev: cannot open /dev/sda: No medium found\n', 'details': 'Writing image to device /dev/sda failed with exit code 1. stdout: write_image.sh: Erasing existing GPT and MBR data structures from /dev/sda\n. stderr: blockdev: cannot open /dev/sda: No medium found\n'}
Client Version: 4.3.1
Server Version: 4.3.1
Kubernetes Version: v1.16.2
- name: kni1-worker-0
  role: worker
  bmc:
    address: ipmi://xxx
    username: root
    password: xxx
  bootMACAddress: xxx
  hardwareProfile: unknown
When configuring the bridges on the provision node via ssh, the connection will drop when removing the interface the connection is on. This can prevent all the commands from running if copy/pasted and prevents the ability to finish the configuration.
This can be fixed by wrapping the commands using nohup.
example: nohup bash -c 'nmcli commands'
Using nohup, once the configuration is done, the connection should resume.
Also, if the interface is still active when it is removed, the steps will create a temporary connection called "Wired connection ...".
The interfaces need to be brought down before removing them.
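A sketch of what that looks like for the baremetal bridge; the interface name eno1 and the "Wired connection 1" connection name are placeholders:

# Run the whole reconfiguration in one detached shell so the dropped SSH
# session doesn't kill it halfway through.
nohup bash -c '
  nmcli con down "Wired connection 1"
  nmcli con delete "Wired connection 1"
  nmcli con add type bridge ifname baremetal con-name baremetal
  nmcli con add type bridge-slave ifname eno1 master baremetal
  nmcli con up baremetal
' &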
Each feature should be linked in the top level README
Step 4 under "Preparing the Provision node for Openshift Install" does not show attaching the system to a subscription.
Should add --auto-attach to the subscription-manager register command and make a note prior that could also use --activationkey option. Or "subscription-manager attach" command could be run after the system is registered.
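For example:

sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
# or, with an activation key instead of credentials:
sudo subscription-manager register --org=<org_id> --activationkey=<key>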
A note exists under step 4 when creating the metal3-config.yaml.sample file.
The note states that the provision_ip should be modified to an available IP.
First, this should be provisioning_ip in the note. Also, it should be noted that in the config file, the deploy_kernel_url, deploy_ramdisk_url, ironic_endpoint, and ironic_inspector_endpoint should use the same IP as the provisioning_ip.
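Roughly, the values need to line up like this; 172.22.0.3 and the ports are illustrative, the point is that every URL reuses the provisioning_ip:

  provisioning_ip: 172.22.0.3/24
  deploy_kernel_url: http://172.22.0.3:6180/images/ironic-python-agent.kernel
  deploy_ramdisk_url: http://172.22.0.3:6180/images/ironic-python-agent.initramfs
  ironic_endpoint: http://172.22.0.3:6385/v1/
  ironic_inspector_endpoint: http://172.22.0.3:5050/v1/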
Currently there is a function that installs required packages (https://github.com/openshift-kni/baremetal-deploy/blob/master/baremetal-prep/baremetal-prep.sh#L159) and another yum install here (https://github.com/openshift-kni/baremetal-deploy/blob/master/baremetal-prep/baremetal-prep.sh#L56).
It would be nice to add the required packages to the list of dependencies instead of installing them in a separate yum call.
Add OVNKubernetes as the default networkType so that it matches what our customers are deploying. Provide an option to switch to OpenShiftSDN.
In 4.4, the metal3-config.yaml config map isn't needed anymore as the installer will pass and calculate all the required values to the machine-api-operator for standing up the Metal3 provisioning services.
Each feature should be linked in the top level README
Step 5 is unnecessary. Combine with Step 7 and reduce the number of steps overall.
Currently the baremetal-prep.sh script requires Ansible to create the install-config.yaml file.
As it is just a simple template substitution, I think a bash script using envsubst/sed, or a Python script, would be better, so you won't need to subscribe to the Ansible channel or download a few unneeded packages.
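A minimal sketch of the envsubst approach; the template file name and variable names are made up for illustration:

# install-config.yaml.tmpl would carry ${BASE_DOMAIN}, ${CLUSTER_NAME}, ... placeholders
export BASE_DOMAIN=example.com CLUSTER_NAME=kni1
envsubst < install-config.yaml.tmpl > install-config.yaml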
When attempting to re-run an ansible-ipi-install playbook, the TASK [installer : Deploy OpenShift Cluster]
will fail if a previous run has left old terraform.tfstate files behind in the {{ansible_user}}/clusterconfigs
directory.
Tuesday 04 February 2020 11:24:34 -0500 (0:00:00.102) 0:02:17.262 ******
...
"level=debug ms
g=\" Loading Install Config...\"", "level=debug msg=\" Loading Platform Credentials Check...\"", "level=debug msg=\" Loading Terraform Variables...\"", "level=debug msg=\" Loading Kubeadmin Password...\"", "l
evel=fatal msg=\"failed to fetch Cluster: failed to load asset \\\"Cluster\\\": \\\"terraform.tfstate\\\" already exists. There may already be a running cluster\""], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=66 changed=24 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
Removing the directory and re-running the playbook fixes the issue.
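A sketch of a pre-flight task that would avoid this; the path layout is assumed from the error above:

# Remove stale terraform state from a previous failed run so the installer starts clean.
- name: Remove leftover terraform state from previous runs
  file:
    path: "/home/{{ ansible_user }}/clusterconfigs/terraform.tfstate"
    state: absent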
Each feature should be linked in the top level README
Right now you can only specify the DNS VIP in the inventory. The playbook should be flexible enough to allow the user to specify all the required VIPs in the inventory (DNS, API, and Ingress) and, if they are not provided, fall back to the lookup method.
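For illustration, the inventory could grow two more optional variables next to the existing DNS one; the variable names below are hypothetical, not the playbook's current ones:

# inventory/hosts (sketch; apivip/ingressvip variable names are hypothetical)
[all:vars]
dnsvip=192.168.123.6
apivip=192.168.123.5
ingressvip=192.168.123.10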