
Dnsmonster

Passive DNS monitoring framework built in Go. dnsmonster implements a packet sniffer for DNS traffic. It can accept traffic from a pcap file, a live interface or a dnstap socket, and can index and store hundreds of thousands of DNS queries per second; it has been shown to index 200k+ DNS queries per second on a commodity computer. It aims to be scalable, simple and easy to use, and to help security teams understand the details of an enterprise's DNS traffic. dnsmonster doesn't try to follow DNS conversations; instead, it indexes DNS packets as soon as they come in. It also doesn't aim to breach the privacy of end-users: with the ability to mask Layer 3 IPs (IPv4 and IPv6), teams can perform trend analysis on aggregated data without being able to trace queries back to an individual. Blogpost

The code before version 1.x is considered beta quality and is subject to breaking changes. Please visit the release notes for each tag to see the list of breaking scenarios between each release, and how to mitigate potential data loss.

graph TD
    subgraph Input
        B1["network input"]
        B2["pcap file"]
        B3["dnstap socket"]
    end
    
    subgraph "Process"
        C1["Sampling based of ratio"]
        C2["Packet Process"]
        C3["Dispatcher"]
        O11["Output1"]
        O12["Domain Skip (optional)"]
        O13["Domain Allow (optional)"]
        O21["Output2"]
        O22["Domain Skip (optional)"]
        O23["Domain Allow (optional)"]
        O31["Output3"]
        O32["Domain Skip (optional)"]
        O33["Domain Allow (optional)"]
    end
    
    B1 --> Process
    B2 --> Process
    B3 --> Process
    
    C1 --> C2
    C2 --> C3
    C3 --> O11
    C3 --> O21
    C3 --> O31
    
    O11 --> O12 --> O13
    O21 --> O22 --> O23
    O31 --> O32 --> O33
    
    subgraph Output
        Splunk
        Syslog
        H["ClickHouse"]
        Postgres
        Kafka
        I["JSON File"]
        Influx
        Elastic
        J["stdout"]
        Parquet
        Sentinel
    end
    
    O13 --> H
    O23 --> I
    O33 --> J

Main features

  • Ability to use Linux's afpacket and zero-copy packet capture.
  • Supports BPF
  • Ability to mask IP addresses to enhance privacy
  • Ability to have a pre-processing sampling ratio
  • Ability to have a list of "skip" fqdns to avoid writing some domains/suffix/prefix to storage
  • Ability to have a list of "allow" domains, used to log access to certain domains
  • Hot-reload of skip and allow domain files/urls
  • Modular output with configurable logic per output stream.
  • Automatic data retention policy using ClickHouse's TTL attribute
  • Built-in Grafana dashboard for ClickHouse output.
  • Ability to be shipped as a single, statically linked binary
  • Ability to be configured using environment variables, command line options or configuration file
  • Ability to sample outputs using ClickHouse's SAMPLE capability
  • Ability to send metrics using Prometheus and statsd
  • High compression ratio thanks to ClickHouse's built-in LZ4 storage
  • Supports DNS Over TCP, Fragmented DNS (udp/tcp) and IPv6
  • Supports dnstap over Unix socket or TCP
  • Built-in SIEM integration with Splunk and Microsoft Sentinel

Installation

Linux

The best way to get started with dnsmonster is to download the binary from the release section. The binary is statically built against musl, so it should work out of the box on many distros. For afpacket support, you must use kernel 3.x+. Any modern Linux distribution (CentOS/RHEL 7+, Ubuntu 14.04+, Debian 7+) ships with a 3.x+ kernel, so it should work out of the box. If your distro doesn't work with the pre-compiled version, please submit an issue with the details and build dnsmonster manually using the Build manually section below.
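
For example, a typical install might look like this (the release asset name below is illustrative; check the releases page for the actual file names):

# Hypothetical asset name; pick the correct one from https://github.com/mosajjal/dnsmonster/releases
wget -O dnsmonster https://github.com/mosajjal/dnsmonster/releases/latest/download/dnsmonster-linux-amd64.bin
chmod +x dnsmonster
sudo ./dnsmonster --devName=lo --stdoutOutputType=1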

Container

Since dnsmonster uses raw packet capture functionality, the Docker/Podman daemon must grant the NET_RAW and NET_ADMIN capabilities to the container:

sudo docker run --rm -it --net=host --cap-add NET_RAW --cap-add NET_ADMIN --name dnsmonster ghcr.io/mosajjal/dnsmonster:latest --devName lo --stdoutOutputType=1

Build manually

  • with libpcap: Make sure you have go, libpcap-devel and linux-headers packages installed. The name of the packages might differ based on your distribution. After this, simply clone the repository and run go build ./cmd/dnsmonster
git clone https://github.com/mosajjal/dnsmonster --depth 1 /tmp/dnsmonster 
cd /tmp/dnsmonster
go get
go build -o dnsmonster ./cmd/dnsmonster
  • without libpcap: dnsmonster only uses one function from libpcap: converting tcpdump-style filters into BPF bytecode. If you can live without BPF support, you can build dnsmonster without libpcap. Note that on any other platform (*BSD, Windows, Darwin), packet capture falls back to libpcap, so there it remains a hard dependency.
git clone https://github.com/mosajjal/dnsmonster --depth 1 /tmp/dnsmonster 
cd /tmp/dnsmonster
go get
go build -o dnsmonster -tags nolibpcap ./cmd/dnsmonster

The above build also works on ARMv7 (RPi4) and AArch64.

Build statically

If you have a copy of libpcap.a, you can statically link it into dnsmonster and build a fully static binary. In the command below, change /root/libpcap-1.9.1/libpcap.a to the location of your copy.

git clone https://github.com/mosajjal/dnsmonster --depth 1 /tmp/dnsmonster
cd /tmp/dnsmonster/
go get
go build --ldflags "-L /root/libpcap-1.9.1/libpcap.a -linkmode external -extldflags \"-I/usr/include/libnl3 -lnl-genl-3 -lnl-3 -static\"" -a -o dnsmonster ./cmd/dnsmonster

For more information on how the statically linked binary is created, take a look at this Dockerfile.

Windows

Building on Windows is much the same as on Linux. Just make sure that you have npcap installed. Clone the repository (--depth 1 works), then run go get and go build ./cmd/dnsmonster

As mentioned, the Windows release of the binary depends on npcap being installed. After installation, the binary should work out of the box. It's been tested in a Windows 10 environment and executed without an issue. To find the interface name to pass to the --devName parameter and start sniffing, you'll need to do the following:

  • open cmd.exe as Administrator and run getmac.exe; you'll see a table with your interfaces' MAC addresses and a Transport Name column with something like this: \Device\Tcpip_{16000000-0000-0000-0000-145C4638064C}
  • run dnsmonster.exe in cmd.exe like this:
dnsmonster.exe --devName \Device\NPF_{16000000-0000-0000-0000-145C4638064C}

Note that you must change \Tcpip from getmac.exe to \NPF and then pass it to dnsmonster.exe.

FreeBSD and MacOS

Much the same as Linux and Windows, make sure you have git, libpcap and go installed, then follow the same instructions:

git clone https://github.com/mosajjal/dnsmonster --depth 1 /tmp/dnsmonster 
cd /tmp/dnsmonster
go get
go build -o dnsmonster ./cmd/dnsmonster

Architecture

All-in-one Installation using Docker

Basic AIO Diagram

In the example diagram, the egress/ingress traffic of the DNS server is captured; after that, an optional layer of packet aggregation is added before it hits the DNSMonster server. The outbound data leaving the DNS servers is quite useful for cache and performance analysis of the DNS fleet. If an aggregator isn't available to you, you can connect both TAPs directly to DNSMonster and have two DNSMonster agents looking at the traffic.

Running ./autobuild.sh creates multiple containers:

  • multiple instances of dnsmonster to look at the traffic on any interface. The interface list will be prompted for as part of autobuild.sh
  • an instance of clickhouse to collect dnsmonster's output and save all the logs/data to a data and logs directory. Both will be prompted for as part of autobuild.sh
  • an instance of grafana looking at the clickhouse data with a pre-built dashboard.

All-in-one Demo

AIO Demo

Enterprise Deployment

Basic AIO Diagram

Configuration

DNSMonster can be configured using 3 different methods: command line options, environment variables and a configuration file. The order of precedence is as follows (an example follows the list):

  • Command line options (Case-insensitive)
  • Environment variables (Always upper-case)
  • Configuration file (Case-sensitive, lowercase)
  • Default values (No configuration)
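
For example, the capture device can be set in any of the three ways; if more than one is present, the command line value wins:

--devname=eth0                     # command line option (case-insensitive)
export DNSMONSTER_DEVNAME=eth0     # environment variable (upper-case)
devname=eth0                       # line in the configuration file (lowercase)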

Command line options

Note that command line arguments are case-insensitive as of v0.9.5

# [capture]
# Device used to capture
--devname=

# Pcap filename to run
--pcapfile=

# dnstap socket path. Example: unix:///tmp/dnstap.sock, tcp://127.0.0.1:8080
--dnstapsocket=

# Port selected to filter packets
--port=53

# Capture Sampling by a:b. eg sampleRatio of 1:100 will process 1 percent of the incoming packets
--sampleratio=1:1

# Cleans up packet hash table used for deduplication
--dedupcleanupinterval=1m0s

# Set the dnstap socket permission, only applicable when unix:// is used
--dnstappermission=755

# Number of routines used to handle received packets
--packethandlercount=2

# Size of the tcp assembler
--tcpassemblychannelsize=10000

# Size of the tcp result channel
--tcpresultchannelsize=10000

# Number of routines used to handle tcp packets
--tcphandlercount=1

# Size of the channel to send packets to be defragged
--defraggerchannelsize=10000

# Size of the channel where the defragged packets are returned
--defraggerchannelreturnsize=10000

# Size of the packet handler channel
--packetchannelsize=1000

# Afpacket Buffersize in MB
--afpacketbuffersizemb=64

# BPF filter applied to the packet stream. If port is selected, the packets will not be defragged.
--filter=((ip and (ip[9] == 6 or ip[9] == 17)) or (ip6 and (ip6[6] == 17 or ip6[6] == 6 or ip6[6] == 44)))

# Use AFPacket for live captures. Supported on Linux 3.0+ only
--useafpacket

# The PCAP capture does not contain ethernet frames
--noetherframe

# Deduplicate incoming packets, Only supported with --devName and --pcapFile. Experimental 
--dedup

# Do not put the interface in promiscuous mode
--nopromiscuous

# [clickhouse_output]
# Address of the clickhouse database to save the results. multiple values can be provided.
--clickhouseaddress=localhost:9000

# Username to connect to the clickhouse database
--clickhouseusername=

# Password to connect to the clickhouse database
--clickhousepassword=

# Database to connect to the clickhouse database
--clickhousedatabase=default

# Interval between sending results to ClickHouse. If non-0, Batch size is ignored and batch delay is used
--clickhousedelay=0s

# Clickhouse connection LZ4 compression level, 0 means no compression
--clickhousecompress=0

# Debug Clickhouse connection
--clickhousedebug

# Use TLS for Clickhouse connection
--clickhousesecure

# Save full packet query and response in JSON format.
--clickhousesavefullquery

# What should be written to clickhouse. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--clickhouseoutputtype=0

# Minimum capacity of the cache array used to send data to clickhouse. Set close to the queries per second received to prevent allocations
--clickhousebatchsize=100000

# Number of Clickhouse output Workers
--clickhouseworkers=1

# Channel Size for each Clickhouse Worker
--clickhouseworkerchannelsize=100000

# [elastic_output]
# What should be written to elastic. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--elasticoutputtype=0

# elastic endpoint address, example: http://127.0.0.1:9200. Used if elasticOutputType is not none
--elasticoutputendpoint=

# elastic index
--elasticoutputindex=default

# Send data to Elastic in batch sizes
--elasticbatchsize=1000

# Interval between sending results to Elastic if Batch size is not filled
--elasticbatchdelay=1s

# [file_output]
# What should be written to file. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--fileoutputtype=0

# Path to output folder. Used if fileoutputType is not none
--fileoutputpath=

# Interval to rotate the file in cron format
--fileoutputrotatecron=0 0 * * *

# Number of files to keep. 0 to disable rotation
--fileoutputrotatecount=4

# Output format for file. options:json, csv, csv_no_header, gotemplate. note that the csv splits the datetime format into multiple fields
--fileoutputformat=json

# Go Template to format the output as needed
--fileoutputgotemplate={{.}}

# [influx_output]
# What should be written to influx. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--influxoutputtype=0

# influx Server address, example: http://localhost:8086. Used if influxOutputType is not none
--influxoutputserver=

# Influx Server Auth Token
--influxoutputtoken=dnsmonster

# Influx Server Bucket
--influxoutputbucket=dnsmonster

# Influx Server Org
--influxoutputorg=dnsmonster

# Number of Influx output workers
--influxoutputworkers=8

# Minimum capacity of the cache array used to send data to Influx
--influxbatchsize=1000

# [kafka_output]
# What should be written to kafka. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--kafkaoutputtype=0

# kafka broker address(es), example: 127.0.0.1:9092. Used if kafkaOutputType is not none
--kafkaoutputbroker=

# Kafka topic for logging
--kafkaoutputtopic=dnsmonster

# Minimum capacity of the cache array used to send data to Kafka
--kafkabatchsize=1000

# Output format. options:json, gob. 
--kafkaoutputformat=json

# Kafka connection timeout in seconds
--kafkatimeout=3

# Interval between sending results to Kafka if Batch size is not filled
--kafkabatchdelay=1s

# Compress Kafka connection
--kafkacompress

# Compression Type Kafka connection [snappy gzip lz4 zstd]; default(snappy).
--kafkacompressiontype=snappy

# Use TLS for kafka connection
--kafkasecure

# Path of CA certificate that signs Kafka broker certificate
--kafkacacertificatepath=

# Path of TLS certificate to present to broker
--kafkatlscertificatepath=

# Path of TLS certificate key
--kafkatlskeypath=

# [parquet_output]
# What should be written to parquet file. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--parquetoutputtype=0

# Path to output folder. Used if parquetoutputtype is not none
--parquetoutputpath=

# Number of records to write to parquet file before flushing
--parquetflushbatchsize=10000

# Number of workers to write to parquet file
--parquetworkercount=4

# Size of the write buffer in bytes
--parquetwritebuffersize=256000

# [psql_output]
# What should be written to PostgreSQL (psql). options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--psqloutputtype=0

# Psql endpoint used. must be in uri format. example: postgres://username:password@hostname:port/database?sslmode=disable
--psqlendpoint=

# Number of PSQL workers
--psqlworkers=1

# Psql Batch Size
--psqlbatchsize=1

# Interval between sending results to Psql if Batch size is not filled. Any value larger than zero takes precedence over Batch Size
--psqlbatchdelay=0s

# Timeout for any INSERT operation before we consider them failed
--psqlbatchtimeout=5s

# Save full packet query and response in JSON format.
--psqlsavefullquery

# [sentinel_output]
# What should be written to Microsoft Sentinel. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--sentineloutputtype=0

# Sentinel Shared Key, either the primary or secondary, can be found in Agents Management page under Log Analytics workspace
--sentineloutputsharedkey=

# Sentinel Customer Id. can be found in Agents Management page under Log Analytics workspace
--sentineloutputcustomerid=

# Sentinel Output LogType
--sentineloutputlogtype=dnsmonster

# Sentinel Output Proxy in URI format
--sentineloutputproxy=

# Sentinel Batch Size
--sentinelbatchsize=100

# Interval between sending results to Sentinel if Batch size is not filled. Any value larger than zero takes precedence over Batch Size
--sentinelbatchdelay=0s

# [splunk_output]
# What should be written to HEC. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--splunkoutputtype=0

# splunk endpoint address, example: http://127.0.0.1:8088. Used if splunkOutputType is not none, can be specified multiple times for load balance and HA
--splunkoutputendpoint=

# Splunk HEC Token
--splunkoutputtoken=00000000-0000-0000-0000-000000000000

# Splunk Output Index
--splunkoutputindex=temp

# Splunk Output Proxy in URI format
--splunkoutputproxy=

# Splunk Output Source
--splunkoutputsource=dnsmonster

# Splunk Output Sourcetype
--splunkoutputsourcetype=json

# Send data to HEC in batch sizes
--splunkbatchsize=1000

# Interval between sending results to HEC if Batch size is not filled
--splunkbatchdelay=1s

# [stdout_output]
# What should be written to stdout. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--stdoutoutputtype=0

# Output format for stdout. options: json, csv, csv_no_header, gotemplate. note that the csv splits the datetime format into multiple fields
--stdoutoutputformat=json

# Go Template to format the output as needed
--stdoutoutputgotemplate={{.}}

# Number of workers
--stdoutoutputworkercount=8

# [syslog_output]
# What should be written to Syslog server. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--syslogoutputtype=0

# Syslog endpoint address, example: udp://127.0.0.1:514, tcp://127.0.0.1:514. Used if syslogOutputType is not none
--syslogoutputendpoint=udp://127.0.0.1:514

# [zinc_output]
# What should be written to zinc. options:
#	0: Disable Output
#	1: Enable Output without any filters
#	2: Enable Output and apply skipdomains logic
#	3: Enable Output and apply allowdomains logic
#	4: Enable Output and apply both skip and allow domains logic
--zincoutputtype=0

# index used to save data in Zinc
--zincoutputindex=dnsmonster

# zinc endpoint address, example: http://127.0.0.1:9200/api/default/_bulk. Used if zincOutputType is not none
--zincoutputendpoint=

# zinc username, example: [email protected]. Used if zincOutputType is not none
--zincoutputusername=

# zinc password, example: password. Used if zincOutputType is not none
--zincoutputpassword=

# Send data to Zinc in batch sizes
--zincbatchsize=1000

# Interval between sending results to Zinc if Batch size is not filled
--zincbatchdelay=1s

# Zinc request timeout
--zinctimeout=10s

# [general]
# Garbage Collection interval for tcp assembly and ip defragmentation
--gctime=10s

# Duration to calculate interface stats
--capturestatsdelay=1s

# Mask IPv4s by bits. 32 means all the bits of the IP are saved in the DB
--masksize4=32

# Mask IPv6s by bits. 128 means all the bits of the IP are saved in the DB
--masksize6=128

# Name of the server used to index the metrics.
--servername=default

# Set debug Log format
--logformat=text

# Set debug Log level, 0:PANIC, 1:ERROR, 2:WARN, 3:INFO, 4:DEBUG
--loglevel=3

# Size of the result processor channel
--resultchannelsize=100000

# write cpu profile to file
--cpuprofile=

# write memory profile to file
--memprofile=

# GOMAXPROCS variable
--gomaxprocs=-1

# Limit of packets logged to clickhouse every iteration. Default 0 (disabled)
--packetlimit=0

# Skip outputting domains matching items in the CSV file path. Can accept a URL (http:// or https://) or path
--skipdomainsfile=

# Hot-Reload skipdomainsfile interval
--skipdomainsrefreshinterval=1m0s

# Allow Domains logic input file. Can accept a URL (http:// or https://) or path
--allowdomainsfile=

# Hot-Reload allowdomainsfile file interval
--allowdomainsrefreshinterval=1m0s

# Skip TLS verification when making HTTPS connections
--skiptlsverification

# [metric]
# Metric Endpoint Service
--metricendpointtype=

# Statsd endpoint. Example: 127.0.0.1:8125 
--metricstatsdagent=

# Prometheus Registry endpoint. Example: http://0.0.0.0:2112/metric
--metricprometheusendpoint=

# Format for the metric output.
--metricformat=json

# Interval between sending results to Metric Endpoint
--metricflushinterval=10s

Environment variables

All the flags can also be set via environment variables. Keep in mind that the name of each parameter is always all upper case and the prefix for all the variables is "DNSMONSTER_".

Example:

$ export DNSMONSTER_PORT=53
$ export DNSMONSTER_DEVNAME=lo
$ sudo -E dnsmonster

Configuration file

You can run dnsmonster with a configuration file using the following command:

$ sudo dnsmonster --config=dnsmonster.ini

# Or you can use environment variables to set the configuration file path
$ export DNSMONSTER_CONFIG=dnsmonster.ini
$ sudo -E dnsmonster
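
As a rough sketch, a minimal dnsmonster.ini could look like the following (key names are the lowercase flag names listed above; the section headers mirror the groups shown in the flag listing):

[capture]
devname=lo
port=53

[stdout_output]
stdoutoutputtype=1
stdoutoutputformat=json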

What's the retention policy?

The default retention policy for the ClickHouse tables is set to 30 days. You can change the number when building the containers with ./autobuild.sh. Since the TTL is evaluated against the packet's date (DnsDate) rather than the insertion time, importing old pcap files may cause ClickHouse to start removing the data as it is being written, and you won't see any actual data in Grafana. To fix that, change the TTL to a day older than the earliest packet inside the pcap file.

NOTE: to change a TTL at any point in time, you need to directly connect to the Clickhouse server using a clickhouse client and run the following SQL statement (this example changes it from 30 to 90 days):

ALTER TABLE DNS_LOG MODIFY TTL DnsDate + INTERVAL 90 DAY;

NOTE: The above command only changes TTL for the raw DNS log data, which is the majority of your capacity consumption. To make sure that you adjust the TTL for every single aggregation table, you can run the following:

ALTER TABLE DNS_LOG MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_DOMAIN_COUNT` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_DOMAIN_UNIQUE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_PROTOCOL` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_GENERAL_AGGREGATIONS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_EDNS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_OPCODE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_TYPE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_CLASS` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_RESPONSECODE` MODIFY TTL DnsDate + INTERVAL 90 DAY;
ALTER TABLE `.inner.DNS_SRCIP_MASK` MODIFY TTL DnsDate + INTERVAL 90 DAY;

UPDATE: in recent versions of ClickHouse, the .inner tables don't have the same names as the corresponding aggregation views. To modify the TTL you have to find the table names (in UUID form) using SHOW TABLES and repeat the ALTER command with those names.
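
A hedged example of that lookup (the UUID below is a placeholder; use the names returned by SHOW TABLES on your instance):

SHOW TABLES LIKE '.inner%';
ALTER TABLE `.inner_id.00000000-0000-0000-0000-000000000000` MODIFY TTL DnsDate + INTERVAL 90 DAY;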

Sampling and Skipping

pre-process sampling

dnsmonster supports pre-processing sampling of packets using a simple parameter: sampleRatio. This parameter accepts a "ratio" value like 1:2, which means that for every 2 packets that arrive, only one is processed (50% sampling). Note that this sampling happens AFTER the BPF filter, not before. If you have an issue keeping up with the volume of your DNS traffic, you can set this to something like 2:10, meaning 20% of the packets that pass your BPF filter will be processed by dnsmonster.
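
For example, the following hypothetical invocation processes roughly 20% of the packets that pass the BPF filter and prints the results to stdout:

sudo dnsmonster --devName=eth0 --sampleRatio=2:10 --stdoutOutputType=1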

skip domains

dnsmonster supports a post-processing domain skip list to avoid writing noisy, repetitive data to your database. The domain skip list is a CSV-formatted file with only two columns: a string and the matching logic for that string. dnsmonster supports three logics: prefix, suffix and fqdn. prefix and suffix mean that domains starting/ending with the given string will be skipped and not written to the DB, while fqdn requires a full match. Note that since matching is done on DNS questions, your string will most likely need a trailing . in the skip list row as well (take a look at skipdomains.csv.sample for a better view). A full FQDN match is useful to avoid writing highly noisy FQDNs into your database.
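
A hypothetical skipdomains.csv illustrating the three logics (entries are illustrative; note the trailing dots, since matching is done against the question name):

in-addr.arpa.,suffix
ntp.,prefix
telemetry.example.com.,fqdn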

allow domains

dnsmonster also has the concept of allowdomains, which helps build detections for when certain FQDNs, prefixes or suffixes appear in the DNS traffic. Given that dnsmonster supports multiple output streams with different logic for each one, it's possible to collect all DNS traffic in ClickHouse, but collect only allowlisted domains in stdout or in a file, in the same instance of dnsmonster.
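
For example, a hedged single-instance setup that writes everything to ClickHouse but only allowlisted domains to a file (paths and addresses are illustrative):

sudo dnsmonster --devName=eth0 \
  --clickhouseAddress=localhost:9000 --clickhouseOutputType=1 \
  --fileOutputPath=/var/log/dnsmonster --fileOutputType=3 \
  --allowDomainsFile=/etc/dnsmonster/allowdomains.csv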

SAMPLE in clickhouse SELECT queries

By default, the main table created by the tables.sql file (DNS_LOG) has the ability to sample down a result as needed, since each DNS question has a semi-unique UUID associated with it. For more information about SAMPLE queries in ClickHouse, please check out this document.
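
As a hedged illustration (assuming the stock DNS_LOG schema), a query against a 10% sample could look like this:

SELECT Question, count(*) AS c
FROM DNS_LOG SAMPLE 0.1
WHERE DnsDate = today()
GROUP BY Question
ORDER BY c DESC
LIMIT 10;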

Supported Inputs

  • Live capture via libpcap/npcap (Ethernet and raw IP are supported)
  • Live capture via afpacket (Ethernet and raw IP are supported)
  • Dnstap socket (listen mode)
  • Pcap file (Ethernet frame)

NOTE: if your pcap file was captured on one of Linux's meta-interfaces (for example tcpdump -i any), dnsmonster won't be able to read the Ethernet frame off of it since it doesn't exist. You can use a tool like tcprewrite to convert the pcap file to Ethernet.
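
For example, something along these lines with tcprewrite (file names are illustrative):

tcprewrite --dlt=enet --infile=any-interface.pcap --outfile=ethernet.pcap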

Supported Outputs

  • ClickHouse
  • Kafka
  • Elasticsearch
  • Splunk HEC
  • Stdout
  • File
  • Syslog (Linux Only)
  • Microsoft Sentinel
  • InfluxDB
  • PostgreSQL
  • Parquet file
  • Zinc

Roadmap

  • Down-sampling capability for SELECT queries
  • Adding afpacket support
  • Configuration file option
  • Exclude FQDNs from being indexed
  • FQDN whitelisting to only log certain domains
  • dnstap support
  • Kafka output support
  • Ability to load allowDomains and skipDomains from HTTP(S) endpoints
  • Elasticsearch output support
  • Splunk HEC output support
  • Syslog output support
  • Grafana dashboard performance improvements
  • remove libpcap dependency and move to pcapgo for packet processing
  • Getting the data ready to be used for ML & Anomaly Detection
  • De-duplication support (WIP)
  • Optional SSL for Clickhouse
  • statsd and Prometheus support
  • Splunk Dashboard
  • Kibana Dashboard
  • Clickhouse versioning and migration tool
  • tests and benchmarks


dnsmonster's Issues

Compilation failed on MacOS M1

Sorry to raise this but I'm failing to compile the main branch.

Error:

go build -o dnsmonster .
# github.com/mosajjal/dnsmonster/capture
capture/livecap_bsd.go:69:26: h.sniffer.Stats undefined (type bsdbpf.BPFSniffer has no field or method Stats)

Mac OS 12.6
Go 1.19.3 darwin/arm64

Not logged all data from Interface Device

Hi ..
We are observing an issue with dnsmonster while logging data to ClickHouse. We have two standalone VMs in the same location, both configured with the latest version of dnsmonster, but one VM logs fewer packets than the other. For example, out of 100 packets, one VM logged 80 packets and the other logged only 30. I would expect both VMs to log the full 100 packets without any loss.

Clickhouse flushing algo

It seems like the default config is just batching clickhouse output and only writing when the batch is full, which is a bit confusing - changing to using 1s flush interval makes the flushes work correctly in low-quantity testing.

May I suggest that you have a flush algorithm that flushes EITHER when flush interval OR batch size is reached as that is the standard way to do things? And that flush interval is set to eg 5s as default?
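
A minimal sketch of such an either/or flush loop in Go (not dnsmonster's actual code; the Record type and send callback are placeholders):

package main

import (
	"fmt"
	"log"
	"time"
)

// Record stands in for one parsed DNS result row (placeholder type).
type Record struct{ Name string }

// flushLoop sends the current batch when it is full OR when the flush
// interval elapses, whichever comes first, and does a final flush when
// the input channel is closed.
func flushLoop(in <-chan Record, batchSize int, interval time.Duration, send func([]Record) error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	batch := make([]Record, 0, batchSize)
	flush := func() {
		if len(batch) == 0 {
			return
		}
		if err := send(batch); err != nil {
			log.Printf("flush failed: %v", err)
		}
		batch = batch[:0]
	}
	for {
		select {
		case rec, ok := <-in:
			if !ok {
				flush() // final flush on shutdown / end of input
				return
			}
			batch = append(batch, rec)
			if len(batch) >= batchSize {
				flush()
			}
		case <-ticker.C:
			flush()
		}
	}
}

func main() {
	in := make(chan Record)
	go func() {
		for i := 0; i < 25; i++ {
			in <- Record{Name: fmt.Sprintf("example%d.com.", i)}
		}
		close(in)
	}()
	// Batch size 10, flush interval 1s: 25 records produce 2 full batches
	// plus a final flush of the remaining 5.
	flushLoop(in, 10, time.Second, func(b []Record) error {
		fmt.Println("flushing", len(b), "records")
		return nil
	})
}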

PCAP to JSON workflow - Split/maximum packets per output file?

Would it be possible to be able to limit the size of the output json files by packet count?

As you'd be aware, pcap-to-JSON conversion balloons the size of the files; for us it takes a gzip-compressed pcap from 1GB to a JSON file of 26GB.

Would be great if we could split that out with the following example:

Input:
example.pcap

Output:
example_001.json
example_002.json
etc etc

Also helps with downstream horizontal scaling as the files are processed off S3, so smaller but more numerous files allow more simultaneous workers to process the output.

Support GeoIP maxmind

This feature would be useful for getting geolocation information for DNS requests using MaxMind databases.

The output would ideally include at least the following fields:

Continent
CountryISOCode
City
AS Number
AS Owner

PR: #105

File output supporting compression

Fairly simple question, would it be possible to have gzip compression options on the file output option?

Saves me having to use a file watcher on a directory to collect files and compress them prior to AWS S3 upload.

Build manually Error !!!

Dear Mosajjal,

Error During Installation:-

[root@Home-Server tmp]# cd
[root@Home-Server ~]# git clone https://github.com/mosajjal/dnsmonster --depth 1 /tmp/dnsmonster
Cloning into '/tmp/dnsmonster'...
remote: Enumerating objects: 209, done.
remote: Counting objects: 100% (209/209), done.
remote: Compressing objects: 100% (188/188), done.
remote: Total 209 (delta 21), reused 146 (delta 9), pack-reused 0
Receiving objects: 100% (209/209), 3.90 MiB | 3.40 MiB/s, done.
Resolving deltas: 100% (21/21), done.
[root@Home-Server ~]# cd /tmp/dnsmonster
[root@Home-Server dnsmonster]# go get

go: no package to get in current directory

[root@Home-Server dnsmonster]# go build -o dnsmonster .
no Go files in /tmp/dnsmonster
[root@Home-Server dnsmonster]# go build -o dnsmonster ./cmd/dnsmonster

github.com/gopacket/gopacket/pcap

/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:30:22: undefined: pcapErrorNotActivated
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:52:17: undefined: pcapTPtr
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:64:10: undefined: pcapPkthdr
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:103:6: undefined: pcapBpfProgram
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:110:7: undefined: pcapPkthdr
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:268:33: undefined: pcapErrorActivated
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:269:33: undefined: pcapWarningPromisc
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:270:33: undefined: pcapErrorNoSuchDevice
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:271:33: undefined: pcapErrorDenied
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:750:14: undefined: pcapTPtr
/root/go/pkg/mod/github.com/gopacket/[email protected]/pcap/pcap.go:271:33: too many errors
+++++++++++++++++++++++++++++++++++++++++++
Brother, a long time ago I installed it the same way and it worked, but this time it is giving the error mentioned above. Maybe I am making a mistake on my side; I apologize for that, but I searched the whole day for this error and could not figure it out, so please advise.

ClickHouse over network

I made a small change that leverages builtin driver's support. Most likely I'll make a change to the way the Address is provided so we can have different credentials and TLS support for each address later on. Please let me know if the latest commit solves your issue. Happy to re-open if it doesn't

Originally posted by @mosajjal in #27 (comment)

higher precision timestamps in clickhouse

It would be nice to have higher precision timestamps for the package's time in clickhouse.

The current DNS_Log table:

CREATE TABLE IF NOT EXISTS DNS_LOG (
PacketTime DateTime,
IndexTime DateTime64,

I'm not sure why IndexTime is higher precision, but PacketTime is just seconds.

The PostgreSQL output uses timestamp for both

CREATE TABLE IF NOT EXISTS DNS_LOG (PacketTime timestamp, IndexTime timestamp,

which is already higher precision, see https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-INPUT

Maybe the PacketTime column in clickhouse's DNS_LOG could be changed to the higher-resolution https://clickhouse.com/docs/en/sql-reference/data-types/datetime64 as well?

Just changing the data type for the column seems to work.
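
For reference, a hedged example of that change in ClickHouse (millisecond precision chosen arbitrarily; test on a copy first):

ALTER TABLE DNS_LOG MODIFY COLUMN PacketTime DateTime64(3);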

Clickhouse datasource plugin showing as unsigned in grafana

First of all I would like to say a big thank you for making this.

Everything gets installed fine; I'm even getting the logs in the clickhouse container when checking with clickhouse-client, so that works brilliantly. However, when looking at the datasource in Grafana, the local ClickHouse datasource doesn't load. I tried adding a new one and then noticed the following error at the top:
[screenshot of the plugin error]

Tried changing the flag "allow_loading_unsigned_plugins" too, but the error is not going away. Clickhouse is not available in the list of datasources.

[screenshot of the datasource list]

pcap by stdin?

Is it possible to pipe pcap data to dnsmonster? Sometimes a network stream needs first to be decapsulated.

Something like pcap.OpenOfflineFile(os.Stdin)?
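
A rough sketch of what reading a pcap stream from stdin could look like with gopacket's pcapgo reader (not a feature dnsmonster currently exposes):

package main

import (
	"fmt"
	"io"
	"os"

	"github.com/gopacket/gopacket"
	"github.com/gopacket/gopacket/pcapgo"
)

func main() {
	// Read a pcap stream piped in on stdin, e.g. `decap-tool | ./pcap-stdin`.
	r, err := pcapgo.NewReader(os.Stdin)
	if err != nil {
		panic(err)
	}
	for {
		data, ci, err := r.ReadPacketData()
		if err == io.EOF {
			return
		}
		if err != nil {
			panic(err)
		}
		// Decode using the link type recorded in the pcap global header.
		pkt := gopacket.NewPacket(data, r.LinkType(), gopacket.Lazy)
		fmt.Println(ci.Timestamp, pkt.NetworkLayer() != nil)
	}
}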

can dnsmonster handle 5,000,000 QPS?

Hi,
Can dnsmonster handle 5,000,000 QPS? We have a passive server that receives DNS traffic, and we want to analyze the traffic data with dnsmonster. I'm not sure if it's the best way?
Below is my server network info:
[screenshot of the server's network info]

Splunk Output mode >1 broken, 8.4.0

Hello,

It seems the filtering/allow logic is broken for the Splunk HEC Output as once the splunkOutputType is increased above 1, all domains get skipped, no matter what is in/not in the allow/skip files:
{"level":"info","msg":"output: {Name:splunk SentToOutput:0 Skipped:99441}","time":"2021-07-13T15:03:29+10:00"}
{"level":"info","msg":"{PacketsGot:99485 PacketsLost:0 PacketLossPercent:0}","time":"2021-07-13T15:03:29+10:00"}

Config file:
useAfpacket=true
devName=myerspan
splunkOutputType=3
skipDomainsFile=/app/dnsmonster/filterDomains.csv
splunkOutputEndpoint=:8088
splunkOutputToken=
skipTlsVerification=true
splunkOutputIndex=
splunkOutputSource=
splunkOutputSourceType=

filterDomains:
empty

Are you able to please advise if something is wrong with the config or if this bug has been fixed in commits past the 8.4.0 release?

Thanks,
Lachlan

Warning error while logging data from a DNStap socket to ClickHouse

Hi ,
Kindly look into this !!!
Dnsmonster Service.
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="Starting DNStap capture"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="listening on DNStap socket unix:///var/cache/bind/dnstap1.sock"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="socket exists, will try to overwrite the socket"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="Creating handler #0"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="Creating handler #1"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="Creating the dispatch Channel"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="Creating Clickhouse Output Channel"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="skipping skipDomains refresh since it's not provided"
Jun 22 14:12:50 two dnsmonster[19067]: time="2023-06-22T14:12:50+05:30" level=info msg="skipping allowDomains refresh since it's not provided"
Jun 22 14:13:00 two dnsmonster[19067]: time="2023-06-22T14:13:00+05:30" level=warning msg="failed to convert metrics to JSON."
Jun 22 14:13:00 two dnsmonster[19067]: 2023-06-22T14:13:00+05:30 metrics:
Jun 22 14:13:00 two dnsmonster[19067]: time="2023-06-22T14:13:00+05:30" level=info msg="ipv4 flushed: 0, closed: 0"
Jun 22 14:13:00 two dnsmonster[19067]: time="2023-06-22T14:13:00+05:30" level=info msg="ipv6 flushed: 0, closed: 0"
Jun 22 14:13:10 two dnsmonster[19067]: time="2023-06-22T14:13:10+05:30" level=warning msg="failed to convert metrics to JSON."
Jun 22 14:13:10 two dnsmonster[19067]: 2023-06-22T14:13:10+05:30 metrics:
Jun 22 14:13:10 two dnsmonster[19067]: time="2023-06-22T14:13:10+05:30" level=info msg="ipv4 flushed: 0, closed: 0"
Jun 22 14:13:10 two dnsmonster[19067]: time="2023-06-22T14:13:10+05:30" level=info msg="ipv6 flushed: 0, closed: 0"
Jun 22 14:13:20 two dnsmonster[19067]: time="2023-06-22T14:13:20+05:30" level=warning msg="failed to convert metrics to JSON."
Jun 22 14:13:20 two dnsmonster[19067]: 2023-06-22T14:13:20+05:30 metrics:
Jun 22 14:13:20 two dnsmonster[19067]: time="2023-06-22T14:13:20+05:30" level=info msg="ipv4 flushed: 0, closed: 0"
Jun 22 14:13:20 two dnsmonster[19067]: time="2023-06-22T14:13:20+05:30" level=info msg="ipv6 flushed: 0, closed: 0"
Jun 22 14:13:30 two dnsmonster[19067]: time="2023-06-22T14:13:30+05:30" level=warning msg="failed to convert metrics to JSON."
Jun 22 14:13:30 two dnsmonster[19067]: 2023-06-22T14:13:30+05:30 metrics:
Jun 22 14:13:30 two dnsmonster[19067]: time="2023-06-22T14:13:30+05:30" level=info msg="ipv4 flushed: 0, closed: 0"
Jun 22 14:13:30 two dnsmonster[19067]: time="2023-06-22T14:13:30+05:30" level=info msg="ipv6 flushed: 0, closed: 0"

Clickhouse schema weirdness

I've noticed that on a number of the CH tables you are including timestamp in the order by. Rather than doing this you should probably have a truncated timestamp such as by minute (or at least truncated to per-second) otherwise there's not much point in the MV's compared to just sampling from the raw table itself.

Additionally there are a number of times when you sum() a value such as DoBit which is a UInt8 in the primary table. It would be better to cast those to UInt64 and then sum that to avoid overflows.

Question regarding CNAME and Query response on Clickhouse

Hello,
First, I would like to thank you for this excellent and useful project!
Regarding Clickhouse output, I am running a local stack and while monitoring my main interface, I cannot find CNAME queries in DNS_LOG table (Types are either 1 or 28 which are mapped to A and AAAA respectively). Is this an issue with my configuration or is this by default?

Best,

dnsmonster not sending packets from live interface to clickhouse

Hi !!

Hope You are doing Well.
I downloaded the latest dnsmonster (v0.9.5) binary from the release section, but dnsmonster is not sending packets from the live interface to ClickHouse.
The error is given below:
[4515]: time="2022-10-07T18:11:53+05:30" level=warning msg="Error while executing batch: clickhouse [Append]: clickhouse: expected 18 arguments, got 17"

The previous version binaries (v0.9.2 and v0.9.3) work fine.

Kindly check and advise.

Thanks !!!!

Save dnstap `identity` as server name

If we have multiple servers sending data over dnstap to dnsmonster it would be good to have an option to use the dnstap identity field as the server name which gets recorded in the logs.

"IPv6 Packet Destination Top 20 Prefix" -- IPv6 addresses are incorrect

In "IPv6 Packet Destination Top 20 Prefix" panel, the IPv6 addresses are incorrect.

e.g. in my panel, it is showing random IPs.

[screenshot of the panel]

In csv.go, the code for converting the IPv6 address to decimal is:

SrcIP = binary.BigEndian.Uint64(d.SrcIP[:8]) // limitation of clickhouse-go doesn't let us go more than 64 bits for ipv6 at the moment
DstIP = binary.BigEndian.Uint64(d.DstIP[:8])

As per my understanding of the above comment, clickhouse is not allowing more than 64 bits for this field. Is there a way to show the correct data on the panel?

Clickhouse Cloud

I noticed that the schema appears to have changed and I tried to apply that to the replicated example, but it is not working as expected.

Found wrong sql in grafana example panel.json

When I use the grafana panel.json (https://github.com/mosajjal/dnsmonster/blob/main/grafana/panel.json), I found 2 charts that Grafana can't draw:
[screenshot of the broken panels]
Then I found that two SQL statements encountered errors while executing.

SELECT 0, groupArray((IP, total)) FROM (SELECT IPv6NumToString(toFixedString(SrcIP, 16)) AS IP,sum(c) as total FROM DNS_SRCIP_MASK PREWHERE IPVersion=4 WHERE $timeFilter      GROUP BY SrcIP order by SrcIP desc limit 20);
SELECT 0, groupArray((IP, total)) FROM (SELECT IPv6NumToString(toFixedString(SrcIP, 16)) AS IP,sum(c) as total FROM DNS_SRCIP_MASK PREWHERE IPVersion=6 WHERE $timeFilter     GROUP BY SrcIP order by SrcIP desc limit 20)

After converting to regular SQL statements and executing them in ClickHouse, the following error is displayed below:

SELECT
    0,
    groupArray((IP, total))
FROM
(
    SELECT
        IPv6NumToString(toFixedString(SrcIP, 16)) AS IP,
        sum(c) AS total
    FROM DNS_SRCIP_MASK
    PREWHERE IPVersion = 4
    WHERE (DnsDate >= toDate(1697869430)) AND (DnsDate <= toDate(1697880230)) AND (timestamp >= toDateTime(1697869430)) AND (timestamp <= toDateTime(1697880230))
    GROUP BY SrcIP
    ORDER BY SrcIP DESC
    LIMIT 20

Query id: 82ad98fe-6377-4980-9325-a9ca63682f27


0 rows in set. Elapsed: 0.002 sec.

Received exception from server (version 23.3.14):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: toFixedString is only implemented for types String and FixedString: While processing IPv6NumToString(toFixedString(SrcIP, 16)) AS IP, sum(c) AS total. (NOT_IMPLEMENTED)
)

7a3c86218e97 :) select * from DNS_SRCIP_MASK

SELECT *
FROM DNS_SRCIP_MASK

Query id: a0453307-81e5-4347-a840-e5cd9381946b

┌────DnsDate─┬───────────timestamp─┬─Server──┬─IPVersion─┬─SrcIP─────────────────┬───c─┐
│ 2023-10-21 │ 2023-10-21 06:51:18 │ default │         4 │ ::ffff:172.23.160.1   │ 136 │
│ 2023-10-21 │ 2023-10-21 06:51:18 │ default │         4 │ ::ffff:172.23.162.110 │ 126 │
└────────────┴─────────────────────┴─────────┴───────────┴───────────────────────┴─────┘
┌────DnsDate─┬───────────timestamp─┬─Server──┬─IPVersion─┬─SrcIP───────────────┬─c─┐
│ 2023-10-21 │ 2023-10-21 09:18:32 │ default │         4 │ ::ffff:172.23.160.1 │ 1 │
└────────────┴─────────────────────┴─────────┴───────────┴─────────────────────┴───┘
┌────DnsDate─┬───────────timestamp─┬─Server──┬─IPVersion─┬─SrcIP─────────────────┬─c─┐
│ 2023-10-21 │ 2023-10-21 09:18:32 │ default │         4 │ ::ffff:172.23.162.110 │ 1 │
└────────────┴─────────────────────┴─────────┴───────────┴───────────────────────┴───┘

Question around pcap file behaviour

Thanks for writing a really useful tool for pcap parsing into dns json!

I just have a question, I'm running the command as such below:

$ dnsmonster --pcapfile="output.pcap" --fileoutputpath=dns.json --fileoutputformat=json --fileoutputtype=1
INFO[2022-11-04T15:44:41Z] Creating the dispatch Channel
INFO[2022-11-04T15:44:41Z] Creating File Output Channel
INFO[2022-11-04T15:44:41Z] Using File: output.pcap

INFO[2022-11-04T15:44:41Z] skipping skipDomains refresh since it's not provided
INFO[2022-11-04T15:44:41Z] skipping allowDomains refresh since it's not provided
WARN[2022-11-04T15:44:41Z] BPF Filter is not supported in offline mode.
INFO[2022-11-04T15:44:41Z] Reading off Pcap file
INFO[2022-11-04T15:44:41Z] Creating handler #0
INFO[2022-11-04T15:44:41Z] Creating handler #1
2022-11-04T15:44:51Z metrics: {"fileSentToOutput":{"count":136405},"fileSkipped":{"count":0},"packetLossPercent":{"value":0},"packetsCaptured":{"value":0},"packetsDropped":{"value":0},"packetsDuplicate":{"count":0},"packetsOverRatio":{"count":0}}
2022-11-04T15:45:01Z metrics: {"fileSentToOutput":{"count":136405},"fileSkipped":{"count":0},"packetLossPercent":{"value":0},"packetsCaptured":{"value":0},"packetsDropped":{"value":0},"packetsDuplicate":{"count":0},"packetsOverRatio":{"count":0}}
...

But this never ends? According to top/iotop the process has finished writing and I can confirm the output json file seems to have stopped writing - but dnsmonster never terminates back to the shell.

Is this expected behaviour?

dnsmonster not sending all packets from pcap to clickhouse

I noticed that the number of DNS packets stored in my local clickhouse instance was always a multiple of clickhousebatchsize:
When using dnsmonster to process pcap files with clickhousebatchsize set to non-zero, the clickhouse output will not send all results to clickhouse.

Scenario:

  • clickhousebatchsize = 100000 (default)
  • output to clickhouse
  • pcaps with number of DNS packets which is not multiple of the batch size, e.g. 1234567

Result:
dnsmonster sends info for only 1200000 packets to clickhouse, in batches of 100000.
Missing the 34567 packets as the batch size was not reached.

The code section responsible is

if int(c%chConfig.ClickhouseBatchSize) == div {
    now = time.Now()
    err = batch.Send()
    if err != nil {
        log.Warnf("Error while executing batch: %v", err)
        clickhouseFailed.Inc(int64(c))
    }
    c = 0
    batch, _ = conn.PrepareBatch(ctx, "INSERT INTO DNS_LOG")
}

I did not see anything that makes sure the remainder is being sent to clickhouse when the end of the pcap file has been reached.

Minor documentation fix

In the README.MD in the Configuration File section, the command line is listed as

sudo dnsmonster -config=dnsmonster.ini

It should be

sudo dnsmonster --config=dnsmonster.ini

dnsmonster to clickhouse data replication

First of all thank you for this amazing dnsmonster !!!
We have a cluster of three clickhouse nodes and our DNS data goes through dnsmonster, but in the dnsmonster configuration only one IP address of a clickhouse node is specified. The problem is that if that clickhouse node goes down, will our data replicate to the other cluster nodes?
