
metrics-server-exporter's People

Contributors

brodul, deadc, gkope, julianoborba, olxbr-bot, parinapatel, pedrokiefer, pforman-zymergen, rochacon, rsvalerio, sdf611097, souzabrunoftw, wiltoninfra

metrics-server-exporter's Issues

exception calling self._socket.sendall() - broken pipe

Exception happened during processing of request from ('10.244.3.1', 39114)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/socketserver.py", line 654, in process_request_thread
self.finish_request(request, client_address)
File "/usr/local/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/local/lib/python3.6/socketserver.py", line 724, in init
self.handle()
File "/usr/local/lib/python3.6/http/server.py", line 418, in handle
self.handle_one_request()
File "/usr/local/lib/python3.6/http/server.py", line 406, in handle_one_request
method()
File "/usr/local/lib/python3.6/site-packages/prometheus_client/exposition.py", line 102, in do_GET
self.wfile.write(output)
File "/usr/local/lib/python3.6/socketserver.py", line 803, in write
self._sock.sendall(b)
BrokenPipeError: [Errno 32] Broken pipe
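This traceback is usually harmless: it appears when the scraper or a health check closes the connection before the exporter has finished writing the /metrics response. A minimal sketch of how the noise could be silenced, assuming the metrics are served with prometheus_client's MetricsHandler on the standard library HTTP server (as the traceback suggests); this is not the project's actual server setup:

import sys
from http.server import HTTPServer
from socketserver import ThreadingMixIn
from prometheus_client.exposition import MetricsHandler

class QuietThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    """Threaded metrics server that ignores clients disconnecting mid-response."""

    def handle_error(self, request, client_address):
        # BrokenPipeError only means the client went away; skip the traceback.
        if isinstance(sys.exc_info()[1], BrokenPipeError):
            return
        super().handle_error(request, client_address)

if __name__ == "__main__":
    QuietThreadingHTTPServer(("0.0.0.0", 8000), MetricsHandler).serve_forever()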

exception

Exception happened during processing of request from ('10.244.3.1', 50760)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/socketserver.py", line 654, in process_request_thread
self.finish_request(request, client_address)
File "/usr/local/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/local/lib/python3.6/socketserver.py", line 724, in init
self.handle()
File "/usr/local/lib/python3.6/http/server.py", line 418, in handle
self.handle_one_request()
File "/usr/local/lib/python3.6/http/server.py", line 406, in handle_one_request
method()
File "/usr/local/lib/python3.6/site-packages/prometheus_client/exposition.py", line 95, in do_GET
output = generate_latest(registry)
File "/usr/local/lib/python3.6/site-packages/prometheus_client/exposition.py", line 69, in generate_latest
for metric in registry.collect():
File "/usr/local/lib/python3.6/site-packages/prometheus_client/core.py", line 102, in collect
for metric in collector.collect():
File "app.py", line 83, in collect
ret = self.kube_metrics()
File "app.py", line 66, in kube_metrics
pod_data = session.get(self.set_namespaced_pod_url(namespace), headers=headers, params=query).json()
File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/local/lib/python3.6/json/init.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
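The JSONDecodeError here means the metrics-server API answered with something that is not JSON (an empty body or a plain-text error). A hedged sketch of a defensive wrapper around the call shown in the traceback; fetch_pod_metrics is a hypothetical helper, not a function in app.py:

import requests

def fetch_pod_metrics(session, url, headers, params):
    """Return the decoded JSON body, or raise a readable error if it is not JSON."""
    resp = session.get(url, headers=headers, params=params, timeout=10)
    resp.raise_for_status()          # surface 401/403/5xx instead of decoding their bodies
    try:
        return resp.json()
    except ValueError:               # json.JSONDecodeError is a subclass of ValueError
        raise RuntimeError("non-JSON response from %s: %r" % (url, resp.text[:200]))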

Grafana dashboard

This is very cool and useful, rather than running many agents to check usage.
Do we have any Grafana dashboards already designed for viewing these metrics?

How to scrape in Prometheus

Hello, I'm trying to scrape this metrics-server-exporter using Prometheus outside the Kubernetes cluster.
I have already exposed metrics-server-exporter as a NodePort service.
I have tried the Prometheus YAML file from this link: https://grafana.com/grafana/dashboards/12363
But Prometheus is not scraping the metrics. What should I do?

Here is my Prometheus config (a quick check of the target itself is sketched after it):

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus-pushgateway'

    static_configs:
    - targets: ['myip:port']

  - job_name: 'prometheus-node-exporter'

    static_configs:
    - targets: ['myip:port']

  - job_name: 'prometheus-metrics-server-exporter'

    static_configs:
    - targets: ['myip:port']

    kubernetes_sd_configs:
    - role: endpoints
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: pod
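Since the question is why nothing gets scraped, it may help to first confirm the NodePort target is reachable from the Prometheus host. A hedged Python sketch, where NODE_IP and NODE_PORT are placeholders for your node address and the assigned NodePort (not values from this issue):

import requests

# Fetch the exposition text exactly as Prometheus would; if this fails, the
# problem is network reachability or the Service, not the scrape job itself.
resp = requests.get("http://NODE_IP:NODE_PORT/metrics", timeout=5)
resp.raise_for_status()
print(resp.text[:400])    # should contain "# HELP kube_metrics_server_..." lines

If that works, note that the keep rule on __meta_kubernetes_service_annotation_prometheus_io_scrape also applies to the static target, which has no such meta label and is therefore dropped. With Prometheus running outside the cluster, the static_configs entry on its own (without the kubernetes_sd_configs block and its relabel_configs) is likely what is wanted here.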

LICENSE terms?

There's no LICENSE file attached to this repository.

It's a pretty useful exporter, so I'd like to be clear what license it's covered by before I start using it beyond basic testing.

docker log ERR

Traceback (most recent call last):
File "app.py", line 121, in
REGISTRY.register(MetricsServerExporter())
File "/usr/local/lib/python3.6/site-packages/prometheus_client/core.py", line 54, in register
names = self._get_names(collector)
File "/usr/local/lib/python3.6/site-packages/prometheus_client/core.py", line 91, in _get_names
for metric in desc_func():
File "app.py", line 71, in collect
nodes = json.loads(ret['nodes'].text)
File "/usr/local/lib/python3.6/json/init.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.6/json/decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)
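For reference, "Extra data: line 1 column 5 (char 4)" is what json.loads reports when the body is plain text that starts with a number, which is consistent with the API returning an error page such as "404 page not found" instead of a metrics document. A hypothetical reproduction, not the project's actual response:

import json

try:
    json.loads("404 page not found")   # the leading "404" parses; the rest is "extra data"
except json.JSONDecodeError as exc:
    print(exc)                         # Extra data: line 1 column 5 (char 4)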

Pod time-series missing on /metrics page

👋 Thanks for the awesome project. I have a similar issue to #26.

I have deployed the project with:

$ kubectl apply -f deploy/

When I check /metrics via port-forwarding the exporter service, I see:

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 27615232.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 22327296.0
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1582117194.62
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 31.54
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="6",patchlevel="9",version="3.6.9"} 1.0
# HELP kube_metrics_server_response_time Metrics Server API Response Time
# TYPE kube_metrics_server_response_time gauge
kube_metrics_server_response_time{api_url="https://kubernetes.default.svc/metrics.k8s.io"} 0.19948
# HELP kube_metrics_server_nodes_mem Metrics Server Nodes Memory
# TYPE kube_metrics_server_nodes_mem gauge
# HELP kube_metrics_server_nodes_cpu Metrics Server Nodes CPU
# TYPE kube_metrics_server_nodes_cpu gauge
# HELP kube_metrics_server_pods_mem Metrics Server Pods Memory
# TYPE kube_metrics_server_pods_mem gauge
# HELP kube_metrics_server_pods_cpu Metrics Server Pods CPU
# TYPE kube_metrics_server_pods_cpu gauge

I have some pods:

$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
metrics-server-exporter-6d97fb5cf7-fnqbb         1/1     Running   0          65m
prometheus-alertmanager-64497676f8-ttfpf         2/2     Running   0          72m
prometheus-kube-state-metrics-5d49966699-jpn9w   1/1     Running   0          72m
prometheus-node-exporter-8dkmm                   1/1     Running   0          72m
prometheus-pushgateway-5bb46ff89f-zjgdb          1/1     Running   0          72m
prometheus-server-7fbcdd878-whvrj                2/2     Running   0          72m
wordpress-8bcc6cf8c-rss26                        1/1     Running   0          78m
wordpress-mariadb-0                              1/1     Running   0          78m
$ kubectl top pods
NAME                                             CPU(cores)   MEMORY(bytes)   
metrics-server-exporter-6d97fb5cf7-fnqbb         6m           15Mi            
prometheus-alertmanager-64497676f8-ttfpf         1m           9Mi             
prometheus-kube-state-metrics-5d49966699-jpn9w   0m           6Mi             
prometheus-node-exporter-8dkmm                   0m           7Mi             
prometheus-pushgateway-5bb46ff89f-zjgdb          0m           4Mi             
prometheus-server-7fbcdd878-whvrj                6m           115Mi           
wordpress-8bcc6cf8c-rss26                        9m           169Mi           
wordpress-mariadb-0                              5m           76Mi            
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"
{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/pods"},"items":[{"metadata":{"name":"prometheus-pushgateway-5bb46ff89f-z
jgdb","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-pushgateway-5bb46ff89f-zjgdb","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp
":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"prometheus-pushgateway","usage":{"cpu":"0","memory":"4548Ki"}}]},{"metadata":{"name":"kube-addon-manager-minikube","namespac
e":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-addon-manager-minikube","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05
:00Z","window":"1m0s","containers":[{"name":"kube-addon-manager","usage":{"cpu":"13m","memory":"3660Ki"}}]},{"metadata":{"name":"wordpress-mariadb-0","namespace":"default","selfLink":"/api
s/metrics.k8s.io/v1beta1/namespaces/default/pods/wordpress-mariadb-0","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"
mariadb","usage":{"cpu":"5m","memory":"78392Ki"}}]},{"metadata":{"name":"prometheus-node-exporter-8dkmm","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-node-exporter-8dkmm","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"prometheus-node-exporter","usage":{"cpu":"1m","memory":"8416Ki"}}]},{"metadata":{"name":"wordpress-8bcc6cf8c-rss26","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/wordpress-8bcc6cf8c-rss26","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"wordpress","usage":{"cpu":"14m","memory":"173780Ki"}}]},{"metadata":{"name":"prometheus-kube-state-metrics-5d49966699-jpn9w","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-kube-state-metrics-5d49966699-jpn9w","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"prometheus-kube-state-metrics","usage":{"cpu":"1m","memory":"6900Ki"}}]},{"metadata":{"name":"prometheus-alertmanager-64497676f8-ttfpf","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-alertmanager-64497676f8-ttfpf","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"prometheus-alertmanager","usage":{"cpu":"2m","memory":"8384Ki"}},{"name":"prometheus-alertmanager-configmap-reload","usage":{"cpu":"0","memory":"1412Ki"}}]},{"metadata":{"name":"storage-provisioner","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/storage-provisioner","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"storage-provisioner","usage":{"cpu":"0","memory":"15712Ki"}}]},{"metadata":{"name":"kube-scheduler-minikube","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"kube-scheduler","usage":{"cpu":"3m","memory":"10512Ki"}}]},{"metadata":{"name":"kube-proxy-qfzzv","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-qfzzv","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"kube-proxy","usage":{"cpu":"3m","memory":"9512Ki"}}]},{"metadata":{"name":"kube-controller-manager-minikube","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-controller-manager-minikube","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"kube-controller-manager","usage":{"cpu":"28m","memory":"36260Ki"}}]},{"metadata":{"name":"etcd-minikube","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/etcd-minikube","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"etcd","usage":{"cpu":"39m","memory":"42812Ki"}}]},{"metadata":{"name":"kube-apiserver-minikube","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-minikube","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1
m0s","containers":[{"name":"kube-apiserver","usage":{"cpu":"74m","memory":"188612Ki"}}]},{"metadata":{"name":"prometheus-server-7fbcdd878-whvrj","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-server-7fbcdd878-whvrj","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"prometheus-server-configmap-reload","usage":{"cpu":"0","memory":"1480Ki"}},{"name":"prometheus-server","usage":{"cpu":"12m","memory":"117292Ki"}}]},{"metadata":{"name":"metrics-server-exporter-6d97fb5cf7-fnqbb","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/metrics-server-exporter-6d97fb5cf7-fnqbb","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"metrics-server-exporter","usage":{"cpu":"9m","memory":"16364Ki"}}]},{"metadata":{"name":"coredns-5c98db65d4-4n5m2","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-5c98db65d4-4n5m2","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"coredns","usage":{"cpu":"7m","memory":"9756Ki"}}]},{"metadata":{"name":"coredns-5c98db65d4-9txf7","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-5c98db65d4-9txf7","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"coredns","usage":{"cpu":"6m","memory":"9720Ki"}}]},{"metadata":{"name":"metrics-server-84bb785897-rp54x","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/metrics-server-84bb785897-rp54x","creationTimestamp":"2020-02-19T14:05:39Z"},"timestamp":"2020-02-19T14:05:00Z","window":"1m0s","containers":[{"name":"metrics-server","usage":{"cpu":"1m","memory":"10648Ki"}}]}]}

Service account, role and rolebinding are present as described in deploy.

There are no logs. Any ideas?
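One way to narrow this down would be to query the metrics API from inside the exporter pod with the mounted service-account token, which is roughly what the exporter does on each collect. A hedged sketch; the token and CA paths are the standard in-cluster defaults, not settings taken from this project:

import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

resp = requests.get(
    "https://kubernetes.default.svc/apis/metrics.k8s.io/v1beta1/pods",
    headers={"Authorization": "Bearer " + token},
    verify=CA_PATH,
    timeout=10,
)
print(resp.status_code)   # 200 with a PodMetricsList body if RBAC is in place
print(resp.text[:300])

If this returns 200 with pod items while the exporter's pod gauges stay empty, the service account is fine and the gap is somewhere between the API response and the collector.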

How to run in Minikube?

How do I provide the following variables:

K8S_ENDPOINT - is this the minikube IP?
K8S_TOKEN - what is this in minikube?
K8S_FILEPATH_TOKEN - what is this in minikube?

Metrics Endpoint does not show any pod or node data

I am sure this is a configuration issue.

Environment

I am running on Kubernetes v1.14.6 (in Azure AKS, if that matters).

YAML changes

I installed all 3 YAML files from the deployment directory. The only change I made from the master branch is in the deployment YAML:
under spec/template/spec I added

nodeSelector:
  "beta.kubernetes.io/os": linux

This is required since we have Windows node pools.

I also updated the image tag from v0.0.5 to v0.0.6

Output

If I port-forward to port 8000 on the pod
kubectl --namespace default port-forward metrics-server-exporter-real-pod-name 8000
then I see the following output:

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 26980352.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 21794816.0
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1572639086.09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 7.92
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="6",patchlevel="9",version="3.6.9"} 1.0
# HELP kube_metrics_server_response_time Metrics Server API Response Time
# TYPE kube_metrics_server_response_time gauge
kube_metrics_server_response_time{api_url="https://kubernetes.default.svc/metrics.k8s.io"} 0.035198
# HELP kube_metrics_server_nodes_mem Metrics Server Nodes Memory
# TYPE kube_metrics_server_nodes_mem gauge
# HELP kube_metrics_server_nodes_cpu Metrics Server Nodes CPU
# TYPE kube_metrics_server_nodes_cpu gauge
# HELP kube_metrics_server_pods_mem Metrics Server Pods Memory
# TYPE kube_metrics_server_pods_mem gauge
# HELP kube_metrics_server_pods_cpu Metrics Server Pods CPU
# TYPE kube_metrics_server_pods_cpu gauge

I know these values are being scraped because I can see python_info in my Prometheus server.

There are no messages on the console of the container.

Request

Is there a way to enable debug output so I can troubleshoot the issue?
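There is no debug flag I can point to, but a generic way to surface the HTTP traffic of a requests-based collector is to enable urllib3 debug logging at the top of app.py. A sketch of a general technique, not a feature of this exporter:

import logging

# Log every request made through requests/urllib3 to the container console,
# including the URL and response status returned by the metrics-server API.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)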

Scraping doesn't work on rke2?

Hi,

I set this up via

helm upgrade --install --namespace kube-system --set image.tag=v0.0.7  metrics-server-exporter ./helm

It seems to be functional, since the expected metrics show up if I port-forward to the deployment.

Additionally, I set up a ServiceMonitor (my Prometheus is running under kube-telemetry):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics-server-exporter
  namespace: kube-telemetry
  labels:
    name: metrics-server-exporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: metrics-server-exporter
      app.kubernetes.io/name: metrics-server-exporter
  namespaceSelector:
    matchNames:
      - kube-system
  endpoints:
    - port: "8000"

But even with the ServiceMonitor, no metrics show up in Prometheus.
