
go-ipam's Introduction

go-ipam

go-ipam is a module to handle IP address management. It can operate on networks, prefixes and IPs.

It also comes as a ready-to-go microservice which offers a gRPC API.

IP

Most obviously, this library is all about IP management. Its main purpose is to acquire and release an IP, or a bunch of IPs, from prefixes.

Prefix

A prefix is a network with IP and mask, typically in the form of 192.168.0.0/24. To be able to manage IPs you have to create a prefix first.

Library Example usage:

package main

import (
    "context"
    "fmt"
    "time"

    goipam "github.com/metal-stack/go-ipam"
)

func main() {
    // create an ipamer with in-memory storage
    ipam := goipam.New()


    bgCtx := context.Background()
    // Optional with Namespace
    ctx := goipam.NewContextWithNamespace(bgCtx, "tenant-a")

    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    prefix, err := ipam.NewPrefix(ctx, "192.168.0.0/24")
    if err != nil {
        panic(err)
    }

    ip, err := ipam.AcquireIP(ctx, prefix.Cidr)
    if err != nil {
        panic(err)
    }
    fmt.Printf("got IP: %s\n", ip.IP)

    prefix, err = ipam.ReleaseIP(ctx, ip)
    if err != nil {
        panic(err)
    }
    fmt.Printf("IP: %s released.\n", ip.IP)

    // Now an IPv6 Super Prefix with Child Prefixes
    prefix, err = ipam.NewPrefix(ctx, "2001:aabb::/48")
    if err != nil {
        panic(err)
    }
    cp1, err := ipam.AcquireChildPrefix(ctx, prefix.Cidr, 64)
    if err != nil {
        panic(err)
    }
    fmt.Printf("got Prefix: %s\n", cp1)
    cp2, err := ipam.AcquireChildPrefix(ctx, prefix.Cidr, 72)
    if err != nil {
        panic(err)
    }
    fmt.Printf("got Prefix: %s\n", cp2)
    ip21, err := ipam.AcquireIP(ctx, cp2.Cidr)
    if err != nil {
        panic(err)
    }
    fmt.Printf("got IP: %s\n", ip21.IP)
}

GRPC Service

First start the go-ipam container with the database backend of your choice already up and running. For example if you have a postgres database for storing the ipam data, you could run the grpc service like so:

docker run -it --rm ghcr.io/metal-stack/go-ipam postgres

From a client perspective you can now talk to this service via grpc.

GRPC Example usage:

package main

import (
    "context"
    "fmt"
    "net/http"

    "github.com/bufbuild/connect-go"
    goipam "github.com/metal-stack/go-ipam"
    v1 "github.com/metal-stack/go-ipam/api/v1"
    "github.com/metal-stack/go-ipam/api/v1/apiv1connect"
)
func main() {

    c := apiv1connect.NewIpamServiceClient(
            http.DefaultClient,
            "http://localhost:9090",
            connect.WithGRPC(),
    )

    bgCtx := context.Background()

    // Optional with Namespace
    ctx := goipam.NewContextWithNamespace(bgCtx, "tenant-a")

    result, err := c.CreatePrefix(ctx, connect.NewRequest(&v1.CreatePrefixRequest{Cidr: "192.168.0.0/16",}))
    if err != nil {
        panic(err)
    }
    fmt.Println("Prefix:%q created", result.Msg.GetPrefix().GetCidr())
}

GRPC client

There is also a cli provided in the container which can be used to make calls to the grpc endpoint manually:

docker run -it --rm --entrypoint /cli ghcr.io/metal-stack/go-ipam

Docker Compose example

Ensure you have docker with compose support installed. Then execute the following command:

docker compose up -d

# check if up and running
docker compose ps

NAME                 IMAGE             COMMAND                  SERVICE    CREATED          STATUS                    PORTS
go-ipam-ipam-1       go-ipam           "/server postgres"       ipam       14 seconds ago   Up 13 seconds (healthy)   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp
go-ipam-postgres-1   postgres:alpine   "docker-entrypoint.s…"   postgres   8 minutes ago    Up 13 seconds             5432/tcp


# Then execute the cli to create prefixes and acquire ips

docker compose exec ipam /cli prefix create --cidr 192.168.0.0/16
prefix:"192.168.0.0/16" created

docker compose exec ipam /cli ip acquire --prefix  192.168.0.0/16
ip:"192.168.0.1" acquired

# Queries can also be made against the REST API like so:

curl -v -X POST -d '{}' -H 'Content-Type: application/json' localhost:9090/api.v1.IpamService/ListPrefixes

Supported Databases & Performance

| Database    | Acquire Child Prefix | Acquire IP  | New Prefix  | Prefix Overlap | Production-Ready | Geo-Redundant |
|-------------|----------------------|-------------|-------------|----------------|------------------|---------------|
| In-Memory   | 106,861/sec          | 196,687/sec | 330,578/sec | 248/sec        | N                | N             |
| File        |                      |             |             |                | N                | N             |
| KeyDB       | 777/sec              | 975/sec     | 2,271/sec   |                | Y                | Y             |
| Redis       | 773/sec              | 958/sec     | 2,349/sec   |                | Y                | N             |
| MongoDB     | 415/sec              | 682/sec     | 772/sec     |                | Y                | Y             |
| Etcd        | 258/sec              | 368/sec     | 533/sec     |                | Y                | N             |
| Postgres    | 203/sec              | 331/sec     | 472/sec     |                | Y                | N             |
| CockroachDB | 170/sec              | 300/sec     | 470/sec     |                | Y                | Y             |

The benchmarks above were performed using:

  • cpu: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
  • postgres:16-alpine
  • cockroach:v23.1.0
  • redis:7.2-alpine
  • keydb:alpine_x86_64_v6.3.1
  • etcd:v3.5.9
  • mongodb:7

Database Version Compatibility

Database Details
KeyDB
Redis
MongoDB mongodb-go compatibility
Etcd
Postgres
CockroachDB

Testing individual Backends

It is possible to test an individual backend only, to speed up the development round trip.

backend can be one of Memory, Postgres, Cockroach, Etcd, Redis, and MongoDB.

BACKEND=backend make test

go-ipam's People

Contributors

alejandrojnm, dave-tucker, davidefalcone1, dependabot[bot], droid42, f00actual, gerrit91, majst01, mwindower, nazarewk, nerdalert, ralfonso, realharshthakur, rene-at-dell, sh4d1, suckowbiz, ulrichschreiner


go-ipam's Issues

Create high-level functions or specific errors

hi,

in metal-api we have some cleanup code that looks like this:

// release and delete
	err := a.ReleaseIP(*ip)
	if err != nil {
		return fmt.Errorf("cannot release IP %q: %w", ip.IPAddress, err)
	}
	err = a.DeleteIP(ip)
	if err != nil {
		return fmt.Errorf("cannot delete IP %q: %w", ip.IPAddress, err)
	}

the problem with ReleaseIP and DeleteIP is that they only return errors or OK, but the caller does not know whether the error can be ignored or not.

suppose a function which runs the block above until it succeeds. this is a problem if the ReleaseIP succeeds but the DeleteIP does not: the next call of this block will fail at ReleaseIP, because the previous call already released the IP, so the DeleteIP will never be reached again.

  • the functions should return specific error codes like NotFound so the caller can ignore these specific errors (see the sketch below)
    --- OR ---
  • the library should contain higher-level functions like ReleaseAndDeleteIP which work in one transaction and either succeed or fail.
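A minimal sketch of the first option: with an exported sentinel error, the cleanup code could ignore the "not found" case via errors.Is. The names ErrNotFound and releaseIP below are illustrative stand-ins, not go-ipam's actual API:

package main

import (
	"errors"
	"fmt"
)

// ErrNotFound is a hypothetical sentinel error; go-ipam does not necessarily export this name.
var ErrNotFound = errors.New("not found")

// releaseIP stands in for the library call. It wraps the sentinel, so callers
// can still detect it with errors.Is even when extra context is added.
func releaseIP(ip string) error {
	return fmt.Errorf("release %q: %w", ip, ErrNotFound)
}

func main() {
	if err := releaseIP("10.0.0.1"); err != nil {
		if errors.Is(err, ErrNotFound) {
			fmt.Println("IP already released, safe to ignore and continue with the delete")
		} else {
			panic(err)
		}
	}
}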

Postgres Connection String Issues (again)

#102 has caused a regression in handling special characters in passwords.

In this case, my username/password comes from a kubernetes operator - I don't have control over how it's generated.

2023/02/20 17:16:42 Error in cli: parse "postgres://ipam:+S-@u]JBpWo^kduE7+([email protected]:5432/ipam?sslmode=require": net/url: invalid userinfo: unable to parse base URL:postgres://ipam:+S-@u]JBpWo^kduE7+([email protected]:5432/ipam?sslmode=require

Read and work with errors

Hi. First of all I want to thank you for this project ;-)

I would like to know how to read your error messages and work with them.
The scenario is the following: I want to acquire a specific IP with AcquireSpecificIP(), and it is possible that this IP is already acquired, which is ok. So I would like to handle the error like this:

_, err = ipam.Ipamer.AcquireSpecificIP(ctx, prefix, ip)
if err != nil {
	if err == goipam.ErrAlreadyAllocated {
		log.Warnln("this is ok")
	} else {
		log.Errorln(err)
		return err
	}
}

But this does not work as expected, since the error message is an assembled string. Can you tell me how to work with error variables and your error messages?

Support classless prefixes

For prefixes which are not for machines, but only for prefix ranges available for other purposes, wasting the network and the broadcast address is bad.

Add a flag for prefix acquire to skip this reservation

Is this lib stateless ready with Redis storage?

I'm trying to implement IPAM in a stateless environment using Redis as the central DB. After analyzing the redis module it seems that there will be race and traffic-congestion issues.

  1. To AcquireIP(), the library first calls ReadPrefix(), then locally picks the next available IP, and finally calls UpdatePrefix() to write the data back.
    The problem here is that another node might have called ReadPrefix() with the same state and will attempt the same UpdatePrefix().
  2. UpdatePrefix() relies on an 'optimistic lock' by checking a version. On failure (another node changed the version) the whole process is restarted, up to 10 times.
    If many nodes work with the same CIDR there will be a race: the current node can easily exhaust its 10 retries and fail to allocate an IP.
    Also, Redis only locks data during read-write operations, so between the version read and the prefix write another node might update the same prefix data. This can result in 2 nodes allocating the same IP from the CIDR pool, with both considering the operation successful. (See the WATCH/MULTI sketch below.)
  3. Traffic congestion from lock checks and retries.

The library is OK for single-node operation, but won't do for multi-node use due to the lack of synchronization. An easy and dirty solution would be SETNX as a mutex, but this slows operations while other nodes wait for the lock to be released.

There is also the PubSub model, where nodes can passively watch for key changes. This way nodes can track updates and react to them faster.

And finally we should keep in mind latency. Extra checks and retries increase traffic. It would be preferable to pipeline or script the Redis commands instead.
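For reference, a minimal sketch of the check-and-set pattern touched on above, using the go-redis client's WATCH/MULTI support. This only illustrates the generic pattern under an assumed key layout and client setup; it says nothing about how go-ipam's redis backend is actually implemented:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	key := "prefix:192.168.0.0/24" // illustrative key layout

	// WATCH aborts the queued writes if another client modifies the key
	// between the read and EXEC, so the caller knows it has to retry.
	err := rdb.Watch(ctx, func(tx *redis.Tx) error {
		current, err := tx.Get(ctx, key).Result()
		if err != nil && err != redis.Nil {
			return err
		}
		updated := current + ",10.0.0.1" // stand-in for "allocate the next IP"
		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, key, updated, 0)
			return nil
		})
		return err
	}, key)
	if err != nil {
		fmt.Println("concurrent update or connection problem, retry:", err)
	}
}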

mongodb storage backend

I would like to add support for a mongodb storage back-end, and I'm also willing to contribute the code. I've tested it locally, with tests passing and benchmarks appearing favorable. Are you open to such a contribution? Cheers.

Storage cannot be implemented by others

The current Storage interface returns Prefix structs. But the Prefix struct contains private fields which cannot be filled/returned by external implementations, as they do not see these fields. Yet the Prefix implementation relies on these fields and their correctness.

i see the following options:

  • extend the Storage interface with methods for manipulating the current private fields
  • create a new StoragePrefix which is returned/consumed by a storage and contains only the data from the storage. the Prefix always copies the data to its own fields (a rough sketch of this is below).
  • make the interface private and remove the NewWithStorage function. new storage backends have to be provided by the library and must be provided as PRs

i'd propose the third option, as atm i cannot see any other implementations of Storage from outside of our project.

ipam not working with etcd service

I have run etcd on my local mac and I can curl it without an issue. I tried to access ipam with the etcd backend using a docker compose file and I am getting an error:

{"level":"warn","ts":"2024-05-05T11:58:26.550315Z","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400037a8c0/localhost:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused""}
ipam-1 | 2024/05/05 11:58:26 Error in cli: context deadline exceeded
ipam-1 exited with code 1

ipam:
  image: go-ipam
  environment:
    - GOIPAM_GRPC_SERVER_ENDPOINT=0.0.0.0:9090
    - GOIPAM_ETCD_HOST=localhost
    - GOIPAM_ETCD_PORT=2379
    # - GOIPAM_ETCD_CERT_FILE=cert.pem
    # - GOIPAM_ETCD_KEY_FILE=key.pem
    - GOIPAM_ETCD_INSECURE_SKIP_VERIFY=false
  restart: always
  command: etcd
  ports:
    - 9090:9090
  healthcheck:
    test: ["CMD", "/bin/grpc_health_probe", "-addr", "localhost:9090"]
    interval: 10s
    timeout: 2s
    retries: 3
    start_period: 10s

this is a part of the compose file I used.

Missing `AcquireSpecificIP` implementation in `IpamServiceClient`

Hi, first off, awesome project, you all have done a really nice job!!

Quick question, are there plans to implement AcquireSpecificIP in the grpc IpamServiceClient Interface?

Looks like apiv1.AcquireIPRequest{...} accepts the specific /32 address requested from a prefix to lease, but doesn't use it.

Here we pass in a specific /32 but get back the next IP in the pool.

	res, err := ct.ipam.AcquireIP(ctx, connect.NewRequest(&apiv1.AcquireIPRequest{
		PrefixCidr: ipamPrefix,
		Ip:         &nodeAddress,
	}))

Happy to hack, just wondering if this was a todo or specifically left out of the IpamServiceClient implementation. Thanks again for the neat project! šŸ„‡

Add Dump and Load funcs

Add the ability to create a backup which does not depend on the storage engine in use, enabling end-users to migrate from one storage backend to another.

Support overlapping cidrs/prefixes

Currently the cidr/prefix is the primary key, therefore it is not possible to allocate the same prefix for two or more usage scenarios.
This restriction makes sense for internet prefixes, for example, but not for the private network prefixes of nodes. These can overlap; the node prefixes only need to be disjoint if communication between the clusters is required.

A possible solution would be to make the primary key a joined primary key of UUID and CIDR/prefix. The UUID is either totally unique, or a grouping, for example a project or an internet prefix.

Migration of the existing data to the new schema is a hard requirement.

This approach would also help to identify IPs more easily in scenarios where they were released and acquired by different users.

@Gerrit91
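As a side note, the namespace support shown at the top of this page already keys prefixes per namespace, which may cover part of the grouping idea described above. A minimal sketch, assuming the same CIDR can be created once per namespace (tenant names are arbitrary, error handling for namespace creation omitted):

package main

import (
	"context"
	"fmt"

	goipam "github.com/metal-stack/go-ipam"
)

func main() {
	ipam := goipam.New()
	ctx := context.Background()

	// The same CIDR, created once per tenant namespace.
	for _, tenant := range []string{"tenant-a", "tenant-b"} {
		ipam.CreateNamespace(ctx, tenant) // error handling omitted for brevity
		nsCtx := goipam.NewContextWithNamespace(ctx, tenant)
		p, err := ipam.NewPrefix(nsCtx, "10.0.0.0/24")
		if err != nil {
			panic(err)
		}
		fmt.Println(tenant, p.Cidr)
	}
}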

AcquireSpecificIP will block forever if IPv6 is already taken

Hey everyone!
I am playing around with this library in my IPAM controller for Kubernetes https://github.com/routerd/kube-ipam.

Today I stumbled across a strange Bug.
When a specific IPv6 address is already allocated AcquireSpecificIP will block forever...
(or at least longer than I am patient for :D -> 5min+ )

Hope I am not just grossly misusing this library :)

Tested on v1.8.1 according to go.mod

$ go version
go version go1.15.8 linux/amd64

Here is a little go-snippet that I have used to reproduce the issue:

package main

import (
	"fmt"
	"os"

	goipam "github.com/metal-stack/go-ipam"
)

const (
	cidr = "fd9c:fd74:6b8d:1020::/64"
)

func main() {
	ipam := goipam.New()
	ipam.NewPrefix(cidr)

	ip1, err := ipam.AcquireSpecificIP(cidr, "fd9c:fd74:6b8d:1020::1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("got static: ", ip1)

	ip2, err := ipam.AcquireSpecificIP(cidr, "fd9c:fd74:6b8d:1020::1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("got static: ", ip2)
}

will return:

$ go run ./ipv6/main.go 
got static:  &{fd9c:fd74:6b8d:1020::1 fd9c:fd74:6b8d:1020::/64}
< blocks forever >

The same code using IPv4 is working fine:

package main

import (
	"fmt"
	"os"

	goipam "github.com/metal-stack/go-ipam"
)

const (
	cidr = "172.20.0.0/24"
)

func main() {
	ipam := goipam.New()
	ipam.NewPrefix(cidr)

	ip1, err := ipam.AcquireSpecificIP(cidr, "172.20.0.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("got static: ", ip1)

	ip2, err := ipam.AcquireSpecificIP(cidr, "172.20.0.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("got static: ", ip2)
}
$ go run ./ipv4/main.go
got static:  &{172.20.0.1 172.20.0.0/24}
NoIPAvailableError: no more ips in prefix: 172.20.0.0/24 left, length of prefix.ips: 3
exit status 1

Method in the `Ipamer` interface to fetch existing Prefixes

From what I could tell, the Ipamer interface is there to abstract out the Storage backend.

Wouldn't it make sense to have a method in the Ipamer interface to fetch existing CIDRs?

Only way I can see how to do it right now is via the ReadAllPrefixCidrs() method in the Storage interface.

Am I missing something?

Calling Prefix.Usage() on a child prefix with mask size 31 or 32 panics with "negative shift amount"

Version: go-ipam v1.11.6

Here's the exact line that panics: https://github.com/metal-stack/go-ipam/blob/master/prefix.go#L642

The code to reproduce the issue:

package main

import (
	"context"
	"fmt"

	goipam "github.com/metal-stack/go-ipam"
)

type Subnet struct {
	cidr       string
	parentCidr string
}

var parentSubnets = []Subnet{
	{
		cidr:       "192.168.0.0/24",
		parentCidr: "",
	},
}

var subnets = []Subnet{
	{
		cidr:       "192.168.0.0/30",
		parentCidr: "192.168.0.0/24",
	},
}

var storage = map[string][]Subnet{
	"parents":  parentSubnets,
	"children": subnets,
}

func main() {
	ipam := goipam.New()

	ctx := context.Background()

	for _, parent := range storage["parents"] {
		parPrefix, err := ipam.NewPrefix(ctx, parent.cidr)
		if err != nil {
			fmt.Println(err)
		}

		for _, child := range storage["children"] {
			if child.parentCidr == parent.cidr {
				chPrefix, err := ipam.AcquireSpecificChildPrefix(ctx, parent.cidr, child.cidr)
				if err != nil {
					fmt.Println(err)
				}

				// This call will panic
				u := chPrefix.Usage()
				_ = u
			}
		}
	}
}

Example/Usage of IP acquire and IP release with etcd or mongodb backend

I have deployed etcd and mongodb as a containerized service. I could not see any examples for etcd or mongodb backed IP acquire and IP release operations.

The current examples show how to do IP acquire and release using the in-memory db, not an external db (see the sketch below).

Need your help for any pointers.
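The Ipamer calls themselves are backend-agnostic: whatever Storage you construct is handed to NewWithStorage, and the acquire/release calls stay the same as in the in-memory example. A minimal sketch using the local file storage that appears elsewhere on this page; the etcd or mongodb storage constructors provided by the package would slot into the same place:

package main

import (
	"context"
	"fmt"

	goipam "github.com/metal-stack/go-ipam"
)

func main() {
	ctx := context.Background()

	// Any Storage implementation works here; swap in the etcd or mongodb
	// constructor from the package instead of the local file storage.
	storage := goipam.NewLocalFile(ctx, "ipam.json")
	ipam := goipam.NewWithStorage(storage)

	prefix, err := ipam.NewPrefix(ctx, "10.0.0.0/24")
	if err != nil {
		panic(err)
	}
	ip, err := ipam.AcquireIP(ctx, prefix.Cidr)
	if err != nil {
		panic(err)
	}
	fmt.Printf("acquired %s\n", ip.IP)

	if _, err := ipam.ReleaseIP(ctx, ip); err != nil {
		panic(err)
	}
	fmt.Printf("released %s\n", ip.IP)
}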

Performance degrades severely under prefix load

When a prefix is "loaded" - meaning that we have acquired a large number of ips from it - performance degrades as O(number of acquired IPs)

The benchmark code misses this because it immediately releases IPs after acquiring them. But with a slight change, you can see this:

--- a/prefix_benchmark_test.go
+++ b/prefix_benchmark_test.go
@@ -33,6 +33,7 @@ func BenchmarkAcquireIP(b *testing.B) {
                if err != nil {
                        panic(err)
                }
+               ips := make([]*IP, b.N)
                for n := 0; n < b.N; n++ {
                        ip, err := ipam.AcquireIP(ctx, p.Cidr)
                        if err != nil {
@@ -41,7 +42,10 @@ func BenchmarkAcquireIP(b *testing.B) {
                        if ip == nil {
                                panic("IP nil")
                        }
-                       p, err = ipam.ReleaseIP(ctx, ip)
+                       ips[n] = ip
+               }
+               for _, i := range ips {
+                       _, err := ipam.ReleaseIP(ctx, i)
                        if err != nil {
                                panic(err)
                        }

You should see performance degrade significantly.

On my machine, if I measure the time it takes to acquire the last IP with in-memory storage, the 2nd acquired IP takes ~1us, the 101st takes 13us, and the 10001st takes 1ms.

I believe this is due to this loop: https://github.com/metal-stack/go-ipam/blob/master/prefix.go#L378, which will scan the entire range.

mux in Prefix only works for memory storage

the Prefix struct contains a mutex named mux which is used for locking. This will only work when using the memory storage as you are using a central map with pointers to prefixes.

when using the database storage, every call to PrefixFrom returns a new prefix so every prefix will contain a new mutex. The locking will not work here.

I'd propose to completely remove this locking. If you really want to lock, it must be an implementation detail of the concrete storage mechanism.

Namespace is ignored when using ReadAllPrefixCidrs

Using file.json (other storage types untested), creating new prefixes in separate namespaces works fine. However, when using ReadAllPrefixCidrs, it always returns values from the root namespace, completely ignoring the ones in the namespace it should query.

Code sample:

package main

import (
	context "context"
	"log"
	"time"

	"github.com/metal-stack/go-ipam"
)

var (
	rootNamespace = "root"
	ns            = "tenant-a"
	rootPrfx      = "10.76.0.0/25"
	contextPrfx   = "10.76.0.128/25"
)

func main() {

	rootCtx := context.Background()
	rootCtx, cancel := context.WithTimeout(rootCtx, 5*time.Second)
	defer cancel()

	storage := ipam.NewLocalFile(rootCtx, "file.json")

	ipamer := ipam.NewWithStorage(storage)

	ipamer.CreateNamespace(rootCtx, ns)
	log.Printf("\nCreated namespace %v", ns)

	log.Printf("\nCreating CIDR %v in %v namespace", rootPrfx, rootNamespace)
	_, err := ipamer.NewPrefix(rootCtx, rootPrfx)

	if err != nil {
		panic(err.Error())
	}

	tenantCtx := ipam.NewContextWithNamespace(rootCtx, ns)

	log.Printf("\nCreating CIDR %v in %v namespace", contextPrfx, ns)
	_, err = ipamer.NewPrefix(tenantCtx, contextPrfx)

	if err != nil {
		panic(err.Error())
	}

	rootCidrs, err := ipamer.ReadAllPrefixCidrs(rootCtx)
	if err != nil {
		panic(err.Error())
	}

	log.Printf("Discovered context cidr %v for %v", rootCidrs, rootNamespace)

	contextCidrs, err := ipamer.ReadAllPrefixCidrs(tenantCtx)
	if err != nil {
		panic(err.Error())
	}

	log.Printf("Discovered context cidr %v for %v", contextCidrs, ns)
}

Current result:

2023/11/21 16:41:56
Created namespace tenant-a
2023/11/21 16:41:56
Creating CIDR 10.76.0.0/25 in root namespace
2023/11/21 16:41:56
Creating CIDR 10.76.0.128/25 in tenant-a namespace
2023/11/21 16:41:56 Discovered context cidr [10.76.0.0/25] for root
2023/11/21 16:41:56 Discovered context cidr [10.76.0.0/25] for tenant-a

Expected result:

2023/11/21 16:41:56
Created namespace tenant-a
2023/11/21 16:41:56
Creating CIDR 10.76.0.0/25 in root namespace
2023/11/21 16:41:56
Creating CIDR 10.76.0.128/25 in tenant-a namespace
2023/11/21 16:41:56 Discovered context cidr [10.76.0.0/25] for root
2023/11/21 16:41:56 Discovered context cidr [10.76.0.128/25] for tenant-a

For reference, the json file that is being created looks like this:

{
  "root": {
    "10.76.0.0/25": {
      "Cidr": "10.76.0.0/25",
      "ParentCidr": "",
      "Namespace": "",
      "AvailableChildPrefixes": {},
      "ChildPrefixLength": 0,
      "IsParent": false,
      "IPs": {
        "10.76.0.0": true,
        "10.76.0.127": true
      },
      "Version": 0
    }
  },
  "tenant-a": {
    "10.76.0.128/25": {
      "Cidr": "10.76.0.128/25",
      "ParentCidr": "",
      "Namespace": "",
      "AvailableChildPrefixes": {},
      "ChildPrefixLength": 0,
      "IsParent": false,
      "IPs": {
        "10.76.0.128": true,
        "10.76.0.255": true
      },
      "Version": 0
    }
  }
}

Redis should use Scan instead of Keys

As per official Redis guidelines

Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.

https://redis.io/commands/keys/
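For illustration, a minimal sketch of cursor-based iteration with the go-redis client's SCAN support; the key pattern and page size are arbitrary, and whether and where this fits go-ipam's redis backend is up to the maintainers:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Walk matching keys in pages instead of loading them all at once with KEYS.
	iter := rdb.Scan(ctx, 0, "*", 100).Iterator()
	for iter.Next(ctx) {
		fmt.Println("key:", iter.Val())
	}
	if err := iter.Err(); err != nil {
		panic(err)
	}
}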
