logstash-output-datadog_logs's Introduction

logstash-output-datadog_logs

Link to the Datadog documentation

DatadogLogs lets you send logs to Datadog based on LogStash events.

Requirements

The plugin relies on the zlib library to compress data. It has been successfully tested with Logstash 6.x, 7.x, and 8.x.

How to install it?

logstash-plugin install logstash-output-datadog_logs

How to use it?

By default, the datadog_logs plugin sends logs to a US endpoint over an SSL-encrypted HTTP connection, with logs batched and compressed.

Configure the plugin with your Datadog API key:

output {
    datadog_logs {
        api_key => "<DATADOG_API_KEY>"
    }
}

To enable TCP forwarding, configure your forwarder with:

output {
    datadog_logs {
        api_key => "<DATADOG_API_KEY>"
        host => "tcp-intake.logs.datadoghq.com"
        port => 10516
        use_http => false
    }
}

To send logs to Datadog's EU HTTP endpoint, override the default host:

output {
    datadog_logs {
        api_key => "<DATADOG_API_KEY>"
        host => "http-intake.logs.datadoghq.eu"
    }
}

Configuration properties

Property            Default                     Description
api_key             nil                         The API key of your Datadog platform.
host                intake.logs.datadoghq.com   Endpoint to use when logs are not forwarded directly to Datadog.
port                443                         Port to use when logs are not forwarded directly to Datadog.
use_ssl             true                        If true, the plugin initializes a secure connection to Datadog. Be sure to update the port if you disable it.
max_retries         5                           The number of retries before the output plugin stops.
max_backoff         30                          The maximum time, in seconds, to wait between retries.
use_http            true                        Enable HTTP forwarding. If you disable it, set the port to 10516 if use_ssl is enabled, or 10514 otherwise.
use_compression     true                        Enable log compression for HTTP.
compression_level   6                           The log compression level for HTTP (1 to 9, 9 being the best ratio).
no_ssl_validation   false                       Disable SSL validation (useful for proxy forwarding).
http_proxy          none                        Proxy address for HTTP proxies.
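Several of these properties can be combined in a single output block. For example, a forwarder sending compressed logs to the EU site through an HTTP proxy might look like the following (the proxy address is illustrative, not a real default):

output {
    datadog_logs {
        api_key => "<DATADOG_API_KEY>"
        host => "http-intake.logs.datadoghq.eu"
        use_compression => true
        compression_level => 6
        http_proxy => "http://<PROXY_HOST>:<PROXY_PORT>"
    }
}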

For additional options, see the Datadog endpoint documentation.

Add metadata to your logs

To get the most out of your logs in Datadog, it is important to attach the proper metadata (including hostname, service, and source). Add these fields to your logs with a mutate filter:

filter {
  mutate {
    add_field => {
      "host"     => "<HOST>"
      "service"  => "<SERVICE>"
      "ddsource" => "<MY_SOURCE_VALUE>"
      "ddtags"   => "<KEY1:VALUE1>,<KEY2:VALUE2>"
    }
  }
}

Need Help?

If you need any support, please contact us at [email protected].

logstash-output-datadog_logs's People

Contributors

achntrl, ajacquemot, belkirill, carlosroman, cedricvanrompay-datadog, gaetan-deputier, ganeshkumarsv, jszwedko, mstbbs, nbparis, remeh, tmsch


logstash-output-datadog_logs's Issues

Feature: Support for Tags

Do you support tags with this plugin?

output {
    datadog_logs {
        api_key => "<your_datadog_api_key>"
    }
    tags => ["a","b"]
}

Similar to the other Datadog output plugin that targets logs ingestion.

Concurrency level limits performance

Hi,
I've noticed the output is single-threaded, which limits performance significantly. In our case, the lag in sending log messages to DD is sometimes greater than 30 minutes.
This is mainly because only one worker thread sends messages to DD; all other worker threads that have finished processing inputs and filters are blocked.
This is due to the default concurrency level used by the plugin, which is :legacy.

Is it possible to change the plugin's concurrency level to :shared to allow sending requests to DD in parallel?
The code change is described here: https://www.elastic.co/guide/en/logstash/current/output-new-plugin.html
From the doc:

# This sets the concurrency behavior of this plugin. By default it is :legacy, which was the standard
  # way concurrency worked before Logstash 2.4
  #
  # You should explicitly set it to either :single or :shared as :legacy will be removed in Logstash 6.0
  #
  # When configured as :single a single instance of the Output will be shared among the
  # pipeline worker threads. Access to the `#multi_receive/#multi_receive_encoded/#receive` method will be synchronized
  # i.e. only one thread will be active at a time making threadsafety much simpler.
  #
  # You can set this to :shared if your output is threadsafe. This will maximize
  # concurrency but you will need to make appropriate uses of mutexes in `#multi_receive/#receive`.
  #
  # Only the `#multi_receive/#multi_receive_encoded` methods need to actually be threadsafe, the other methods
  # will only be executed in a single thread
  concurrency :single

If you agree this is a good idea, I can make the change and create a pull request.

I'm using Logstash v6.x.x.
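As a rough illustration of the threadsafety requirement described above (plain Ruby, no Logstash dependency; the class and field names are hypothetical), a :shared output must guard any mutable state touched inside multi_receive with a mutex, since many pipeline workers call it concurrently:

```ruby
# Hypothetical sketch of a thread-safe output. With concurrency :shared,
# multi_receive is called from many pipeline workers at once, so shared
# state (here, a buffer of sent events) must be protected by a mutex.
class SharedOutputSketch
  def initialize
    @mutex = Mutex.new
    @sent = []
  end

  def multi_receive(events)
    # Encoding can happen outside the lock; only the shared buffer needs it.
    encoded = events.map(&:to_s)
    @mutex.synchronize { @sent.concat(encoded) }
  end

  def sent_count
    @mutex.synchronize { @sent.size }
  end
end

out = SharedOutputSketch.new
threads = 8.times.map do |i|
  Thread.new { 100.times { out.multi_receive(["event-#{i}"]) } }
end
threads.each(&:join)
puts out.sent_count  # 800: no events lost despite concurrent writers
```

Without the mutex, concurrent Array#concat calls could interleave and drop events; with it, the output can safely declare `concurrency :shared`.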

Debugging lines printed for every message

Describe what happened:
I am getting these debugging lines printed in the Logstash log with every outgoing message:

[2018-07-18T15:53:00,141][INFO ][logstash.outputs.datadog ] DD convo {:request=>"#<Net::HTTP::Post POST>", :response=>"#<Net::HTTPAccepted 202 Accepted readbody=true>"}
[2018-07-18T15:53:00,350][INFO ][logstash.outputs.datadog ] DD convo {:request=>"#<Net::HTTP::Post POST>", :response=>"#<Net::HTTPAccepted 202 Accepted readbody=true>"}
[2018-07-18T15:53:00,423][INFO ][logstash.outputs.datadog ] DD convo {:request=>"#<Net::HTTP::Post POST>", :response=>"#<Net::HTTPAccepted 202 Accepted readbody=true>"}
[2018-07-18T15:53:00,517][INFO ][logstash.outputs.datadog ] DD convo {:request=>"#<Net::HTTP::Post POST>", :response=>"#<Net::HTTPAccepted 202 Accepted readbody=true>"}

Describe what you expected:
Debugging disabled by default or a way to disable these.

Steps to reproduce the issue:
Install plugin and start using normally.

Additional environment details (Operating System, Cloud provider, etc):
Plugin version is 3.0.5, but the code actually seems to be different from the code in this repository. The plugin installed on the system contains this line:

@logger.info("DD convo", :request => request.inspect, :response => response.inspect)

I would guess this should use the debug log level instead.

Logstash - datadog - Logs missing

Good morning,

In my logstash environment (version 7.4 and 7.5), we installed these Datadog plugins:
• logstash-output-datadog (3.0.5)
• logstash-output-datadog_logs (0.3.1)

After installing, we installed Filebeat on a Linux server; this Filebeat process sends all syslog to Logstash, and Logstash takes care of sending these logs to my Datadog platform.

The issue we have is that Logstash doesn't send all logs to my Datadog platform. Some logs are missing when I look in the Log Explorer in Datadog.

In the log file of logstash, when we don’t receive the log in Datadog, we see this error:
Dec 12 14:39:11 lvz-logstash-p001 logstash[1112]: [2019-12-12T14:39:11,133][WARN ][logstash.outputs.datadoglogs][main] Could not send payload {:exception=>#<IOError: Broken pipe>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:950:in syswrite'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:322:in do_write'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:339:in block in write'", "org/jruby/RubyArray.java:1800:in each'", "org/jruby/RubyEnumerable.java:1093:in inject'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:338:in write'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.3.1/lib/logstash/outputs/datadog_logs.rb:36:in block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:31:in block in encode'", "org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:45:in time'", "org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:44:in time'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:30:in encode'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.3.1/lib/logstash/outputs/datadog_logs.rb:55:in receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in block in multi_receive'", "org/jruby/RubyArray.java:1800:in each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:250:in `block in start_workers'"]}
Dec 12 14:39:12 lvz-logstash-p001 logstash[1112]: [2019-12-12T14:39:12,164][INFO ][logstash.outputs.datadoglogs][main] Starting SSL connection {:host=>"intake.logs.datadoghq.com", :port=>10516}

The message before this error was not sent to Datadog. We don't receive this error all the time.
Do you have any idea about this problem?

We have this error with both Logstash version 7.4 and 7.5.

@ajacquemot

I speak French...

Erreur logstash - DD - logs missing.txt

output installation plugin logstash-output-datadog.txt

Connection via Proxy

Describe what happened:
This plugin does not seem to inherit logstash proxy configuration.

Describe what you expected:
Use logstash proxy config

Steps to reproduce the issue:

Additional environment details (Operating System, Cloud provider, etc):

Incomplete log event sent. Truncates at 16k boundary.

Describe what happened:
Some JSON events did not show up in datadog correctly. JSON was truncated / garbled.

Describe what you expected:
Data showing up without issues

Steps to reproduce the issue:
Send an event that serializes to more than 16kb. Only the first 16kb will be sent.

I added some log statements around
https://github.com/DataDog/logstash-output-datadog_logs/blob/master/lib/logstash/outputs/datadog_logs.rb#L37

sent_bytes = client_socket.syswrite(message)
@logger.warn("Sent", :sent_bytes => sent_bytes, :message_length => message.length)

23:47:37.514 [[main]>worker10] WARN  logstash.outputs.datadoglogs - Sent {:sent_bytes=>16384, :message_length=>25922}
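The numbers above point to a classic partial-write bug: IO#syswrite may write fewer bytes than requested and returns how many it actually wrote, so the caller must loop until the whole message is sent. A minimal sketch of the fix (FakeSocket is a hypothetical stand-in that caps each write at 16 KiB, mimicking the observed behavior):

```ruby
# Stand-in for an SSL socket whose syswrite sends at most 16 KiB per call,
# mimicking the truncation observed above.
class FakeSocket
  MAX_CHUNK = 16_384
  attr_reader :received

  def initialize
    @received = +""
  end

  def syswrite(data)
    chunk = data[0, MAX_CHUNK]
    @received << chunk
    chunk.bytesize  # syswrite returns the number of bytes actually written
  end
end

# Keep calling syswrite until the whole message has been sent.
def write_fully(socket, message)
  written = 0
  while written < message.bytesize
    written += socket.syswrite(message.byteslice(written..-1))
  end
  written
end

sock = FakeSocket.new
msg = "x" * 25_922  # same length as the truncated payload in the report
write_fully(sock, msg)
puts sock.received.bytesize  # 25922
```

A single `client_socket.syswrite(message)` call, as in the line linked above, silently drops everything past the first chunk; the loop is what guarantees delivery of the full payload.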

The plugin doesn't support interruption. Not being interruptible prevents it from interacting nicely with logstash.

Describe what happened:

  • A Logstash pipeline has been defined to output to the console and to DataDog (via DataDog Logstash Output Plugin)
  • DataDog's Logstash Output Plugin's "max_retries" is configured to a relatively high number (e.g. 420)
  • Logstash instance is intentionally running with limited connectivity (to simulate being unable to reach DataDog servers)
  • Logstash instance's persistent queue is set to "persisted" (https://www.elastic.co/guide/en/logstash/current/persistent-queues.html)
  • A log message is sent to Logstash
  • Logstash shutdown process is initiated
  • Logstash logs show that it is waiting on DataDog's plugin
  • Logstash is forcefully terminated
  • Logstash is started
  • Logstash logs show that Logstash isn't trying to send the previously received log message that wasn't sent to DataDog due to limited connectivity

Describe what you expected:

  • Logstash shutdown process ends gracefully (i.e. DataDog Logstash Output Plugin doesn't block Logstash from shutting down gracefully)
  • After restarting, a Persistent-Queue-enabled Logstash attempts to resend the logs that weren't sent (i.e. DataDog Logstash Output Plugin now properly lets Logstash know that certain logs weren't sent)

Steps to reproduce the issue:

  • Set queue.type: persisted in Logstash configuration
  • Add a Logstash pipeline and configure the DataDog Logstash Output Plugin parameter max_retries => 420
  • Start Logstash (with limited internet connectivity) in an interactive command prompt
  • Send a log message to Logstash
  • Look at Logstash's command prompt and watch DataDog Logstash Output Plugin retry
  • Attempt to stop Logstash
  • Forcefully stop Logstash
  • Start Logstash in an interactive command prompt
  • Look at Logstash's command prompt and watch it not retry to send the previously received log message

Additional environment details (Operating System, Cloud provider, etc):

  • Ran in a linux container with limited connectivity

[logstash.outputs.datadoglogs] TCP exception {:exception=>#<EOFError: End of file reached>

Every now and then I get the following error:

logstash_1_872cd354b170 | [2018-11-23T12:35:09,942][WARN ][logstash.outputs.datadoglogs] TCP exception {:exception=>#<EOFError: End of file reached>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:857:in `sysread'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-datadog_logs-0.2.1/lib/logstash/outputs/datadog_logs.rb:33:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-datadog_logs-0.2.1/lib/logstash/outputs/datadog_logs.rb:54:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in `block in multi_receive'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:114:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:97:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:373:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:372:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:324:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:286:in `block in start_workers'"]}

Unfortunately, it completely crashes processing of further logs of that type (rsyslog).
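A robust TCP sender typically treats EOFError as a signal to reconnect and retry, rather than letting the exception propagate and stall processing. A minimal sketch of that pattern (FlakySocket is a hypothetical stand-in that fails on its first write; a real implementation would re-open the connection in the rescue branch):

```ruby
# Hypothetical stand-in: raises EOFError on the first write, as when the
# remote end has closed the connection, then succeeds afterwards.
class FlakySocket
  def initialize
    @writes = 0
  end

  def syswrite(data)
    @writes += 1
    raise EOFError, "End of file reached" if @writes == 1
    data.bytesize
  end
end

# Retry the write on EOFError (up to max_retries times) instead of letting
# the exception crash the worker thread and halt further logs.
def send_with_retry(socket, message, max_retries: 3)
  attempts = 0
  begin
    socket.syswrite(message)
  rescue EOFError
    attempts += 1
    raise if attempts > max_retries
    # A real sender would reconnect here before retrying.
    retry
  end
end

puts send_with_retry(FlakySocket.new, "hello")  # 5 bytes sent on the retry
```

The key point is that the rescue keeps the failure local to one send attempt, so a dropped connection degrades to a retry instead of killing the pipeline for that log type.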

Support for logstash 8.X version

Describe what happened:
In the Datadog-Logstash integration, the compatibility (https://docs.datadoghq.com/integrations/logstash/#compatibility) is listed as Logstash 5.x, 6.x and 7.x.

Describe what you expected:
I wonder when Logstash 8.x will be supported by the Datadog integration.

Thanks.

Steps to reproduce the issue:

Additional environment details (Operating System, Cloud provider, etc):

Crashes with v0.4.0

Describe what happened:

With the latest plugin version, v0.4.0, Logstash 7.1.1 crashes (tested on several Logstash deployments):

[2020-02-27T11:14:01,985][ERROR][org.logstash.execution.WorkerLoop] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
org.jruby.exceptions.StandardError: (SocketTimeout) Read timed out
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.response.initialize(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37) ~[?:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.response.call(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79) ~[?:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.send(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:197) ~[?:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.send_retries(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:158) ~[?:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.process_encoded_payload(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:70) ~[?:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:56) ~[?:?]
	at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1792) ~[jruby-complete-9.2.7.0.jar:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:55) ~[?:?]
	at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:118) ~[logstash-core.jar:?]
	at org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101) ~[logstash-core.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235) ~[?:?]
warning: thread "[main]>worker0" terminated with exception (report_on_exception is true):
java.lang.IllegalStateException: org.jruby.exceptions.StandardError: (SocketTimeout) Read timed out
	at org.logstash.execution.WorkerLoop.run(org/logstash/execution/WorkerLoop.java:85)
	at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
	at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:425)
	at org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:292)
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235)
	at org.jruby.RubyProc.call(org/jruby/RubyProc.java:295)
	at org.jruby.RubyProc.call(org/jruby/RubyProc.java:274)
	at org.jruby.RubyProc.call(org/jruby/RubyProc.java:270)
	at java.lang.Thread.run(java/lang/Thread.java:748)
Caused by: org.jruby.exceptions.StandardError: (SocketTimeout) Read timed out
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.response.initialize(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.response.call(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.send(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:197)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.send_retries(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:158)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.process_encoded_payload(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:70)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:56)
	at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1792)
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_datadog_logs_minus_0_dot_4_dot_0.lib.logstash.outputs.datadog_logs.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.4.0/lib/logstash/outputs/datadog_logs.rb:55)
	at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:118)
	at org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101)
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235)

Workaround: pin the plugin to the previous version:

bin/logstash-plugin install --version 0.3.1 logstash-output-datadog_logs

Datadog logstash plugin installation failed with logstash 7.16.1

Describe what happened:
Datadog Logstash plugin installation started failing suddenly about two hours ago.
Logstash version used: 7.16.1

Command for installation:
sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-datadog_logs

Error message during installation:
Unhandled Java exception: java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer; java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;

Describe what you expected:
It should ideally work fine, and logs should be pushed from the EC2 instance to Datadog. This version had been working fine for quite some time and suddenly stopped working.

Steps to reproduce the issue:
See the command above.

Additional environment details (Operating System, Cloud provider, etc):
OS details:
NAME="Amazon Linux AMI"
VERSION="2018.03"
