karlseguin / ccache
A golang LRU Cache for high concurrency
License: MIT License
I might be missing something, but it seems that the OnDelete callback is not called when an item is removed because the size limit is reached. I think we'd just need to add these lines to the gc function:
if c.onDelete != nil {
	c.onDelete(item)
}
Does that sound right?
Hi. Would it be possible to add a method like GetWithoutPromote that only returns the element? I use the cache, but when an item expires I refresh it with a new version of the item.
Thanks!
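A rough sketch of what such a method could look like, reusing the bucket internals quoted elsewhere in this thread (GetWithoutPromote and bucket.get are assumptions, not existing API):

func (c *Cache) GetWithoutPromote(key string) *Item {
	// return the item without sending it to the promotables channel,
	// leaving its LRU position untouched
	return c.bucket(key).get(key)
}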
The goroutine associated with the cache worker prevents the cache from being garbage collected.
In my opinion there should be a way to stop it.
Alternatively, this fact should be documented.
Keep up the good work,
Enrico
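If it helps, combining creation with a deferred stop keeps tests and short-lived caches from leaking the goroutine (assuming the Stop method discussed in the next issue):

cache := ccache.New(ccache.Configure())
defer cache.Stop() // terminates the worker goroutine so the cache can be collected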
The unit tests for our project are normally run with the -race option, because we do a lot of concurrency stuff and want to avoid subtle unsafe usages. When I integrated ccache into the project, I started getting race detector errors in a test scenario where the cache is shut down with Stop() at the end of the test.
It seems that this is due to what the race detector considers to be unsafe usage of the promotables channel, where there is the potential for a race between close and a previous channel send, as documented here. The race detector isn't saying that a race really did happen during the test run, but it can tell, based on the pattern of accesses to the channel, that one could happen, so it considers that to be an automatic fail.
I wondered why such an issue wouldn't have shown up in ccache's own unit tests, but that's because those tests never call Stop at all. There's no defer cache.Stop() after creating a store (so I imagine there are a lot of orphaned goroutines being created during test runs), and there also doesn't seem to be any test coverage of Stop itself. When I added a single defer cache.Stop() to a test, and then ran go test -race ./... instead of go test ./..., I immediately got the same kind of error.
In any codebase where concurrency is very important, like this one, this is a bit concerning. Even if this particular kind of race condition might not have significant consequences in itself, the fact that it's not possible to run tests with race detection means we can't use that tool to detect other kinds of concurrency problems.
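One conventional way to avoid a close-vs-send race is to never close the data channel at all, and instead signal shutdown on a dedicated channel that only the worker selects on. A generic sketch, not ccache's actual code:

type item struct{}

type worker struct {
	promotables chan *item
	done        chan struct{}
}

func (w *worker) run() {
	for {
		select {
		case it := <-w.promotables:
			_ = it // promote it...
		case <-w.done:
			return
		}
	}
}

func (w *worker) stop() {
	close(w.done) // only the signal channel is ever closed, so no send can race with the close
}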
Go modules expect version tags to start with a v character.
The latest releases, 2.0.4 and 2.0.5, do not contain it, so the command go get github.com/karlseguin/ccache@latest resolves to version v2.0.3 (which does not contain a go.mod file).
The fix would be as simple as creating the corresponding tags v2.0.4 and v2.0.5.
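Assuming the existing 2.0.4 and 2.0.5 tags point at the intended commits, that should just be git tag v2.0.4 2.0.4, git tag v2.0.5 2.0.5, and then git push origin v2.0.4 v2.0.5.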
Thanks!
Hi,
Is there any way to get all keys the cache contains? I need to do some forced cache eviction based on a prefix of the key, and I'm having a hard time making it work by using another structure to keep track of the keys. Just being able to list all registered keys from ccache would make my life much easier.
Thanks for the nice piece of software btw
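In the meantime, one workaround is to maintain a small key index next to the cache. A sketch (note the index is not cleaned up on TTL-based eviction, so it would need its own pruning):

import (
	"strings"
	"sync"
	"time"

	"github.com/karlseguin/ccache"
)

type indexedCache struct {
	mu    sync.RWMutex
	keys  map[string]struct{}
	cache *ccache.Cache
}

func (ic *indexedCache) Set(key string, value interface{}, ttl time.Duration) {
	ic.mu.Lock()
	ic.keys[key] = struct{}{}
	ic.mu.Unlock()
	ic.cache.Set(key, value, ttl)
}

// DeleteByPrefix force-evicts every known key with the given prefix
func (ic *indexedCache) DeleteByPrefix(prefix string) {
	ic.mu.Lock()
	defer ic.mu.Unlock()
	for k := range ic.keys {
		if strings.HasPrefix(k, prefix) {
			ic.cache.Delete(k)
			delete(ic.keys, k)
		}
	}
}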
Thanks for this great software! In my usage scenario, I observed that the cache size keeps going up while dropped stays equal to 0 for a long time, which eventually results in OOM. I think there may be a bug here.
Here is a test code:
package ccache_test

import (
	"fmt"
	"math/rand"
	"strconv"
	"testing"
	"time"

	ccache "github.com/karlseguin/ccache/v3"
)

func TestPrune(t *testing.T) {
	maxSize := int64(5000)
	cache := ccache.New(ccache.Configure[string]().MaxSize(maxSize))
	epoch := 0
	for {
		epoch += 1
		expired := make([]string, 0)
		for i := 0; i < 50; i += 1 {
			key := strconv.FormatInt(rand.Int63n(maxSize*20), 10)
			item := cache.Get(key)
			if item == nil || item.TTL() > 1*time.Minute {
				expired = append(expired, key)
			}
		}
		for _, key := range expired {
			cache.Set(key, key, 5*time.Minute)
		}
		if epoch%5000 == 0 {
			size := cache.GetSize()
			dropped := cache.GetDropped()
			fmt.Printf("size=%d dropped=%d\n", size, dropped)
			time.Sleep(100 * time.Millisecond)
		}
	}
}
When running this code, the size grows well beyond 5000, and dropped stays equal to 0.
=== RUN TestPrune
size=30270 dropped=171439
size=48587 dropped=149851
size=7654 dropped=225531
size=42967 dropped=146521
size=28343 dropped=162295
size=93191 dropped=195
size=98497 dropped=0
size=93155 dropped=15731
size=98476 dropped=0
size=98913 dropped=0
size=98936 dropped=0
size=98965 dropped=0
This may be the reason:
1. oldItem is sent to deletables but not yet consumed.
2. gc() is triggered; oldItem is removed by List.Remove(), and oldItem.node.Prev is set to nil.
3. deletables is consumed; oldItem is removed again by List.Remove(), and l.Tail is set to nil (oldItem.node.Prev).
4. gc() is then skipped, because node = c.list.Tail = nil; size keeps going up and dropped stays equal to 0.
5. If items in deletables have not yet been gc'd, l.Tail may be set back to non-nil and gc() may recover, but it will fail again in the future for the same reason.
func (c *Cache[T]) gc() int {
	dropped := 0
	node := c.list.Tail
	itemsToPrune := int64(c.itemsToPrune)
	if min := c.size - c.maxSize; min > itemsToPrune {
		itemsToPrune = min
	}
	for i := int64(0); i < itemsToPrune; i++ {
		if node == nil { // gc is skipped if c.list.Tail = nil
			return dropped
		}
		prev := node.Prev
		item := node.Value
		if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
			c.bucket(item.key).delete(item.key)
			c.size -= item.size
			c.list.Remove(node)
			if c.onDelete != nil {
				c.onDelete(item)
			}
			dropped += 1
			item.promotions = -2
		}
		node = prev
	}
	return dropped
}
func (l *List[T]) Remove(node *Node[T]) {
	next := node.Next
	prev := node.Prev
	if next == nil {
		l.Tail = node.Prev // second Remove will make Tail = nil
	} else {
		next.Prev = prev
	}
	if prev == nil {
		l.Head = node.Next
	} else {
		prev.Next = next
	}
	node.Next = nil
	node.Prev = nil
}
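If the diagnosis above is right, one defensive fix would be to make Remove idempotent, so the second removal of an already-detached node can't clobber Tail. A sketch against the structures quoted above:

func (l *List[T]) Remove(node *Node[T]) {
	// an already-removed node has both links nil and is not the single
	// remaining head; removing it a second time must be a no-op
	if node.Next == nil && node.Prev == nil && l.Head != node {
		return
	}
	next := node.Next
	prev := node.Prev
	if next == nil {
		l.Tail = prev
	} else {
		next.Prev = prev
	}
	if prev == nil {
		l.Head = next
	} else {
		prev.Next = next
	}
	node.Next = nil
	node.Prev = nil
}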
Hi there! I'm doing some tests with this package, but it seems there's an issue when dealing with race conditions; I'm not sure. Any advice?
func (storage *Storage) GetTokenValue(key string, t interface{}) error {
	var (
		data []interface{}
		err  error
	)
	if item := Cache.Get(key); item != nil {
		if !item.Expired() {
			data = item.Value().([]interface{})
		}
	}
	if len(data) == 0 {
		data, err = getCacheFromRedis(key)
		if err != nil {
			return err
		}
	}
	if err = redis.ScanStruct(data, t); err != nil {
		return err
	}
	Cache.Set(key, data, time.Duration(10)*time.Minute)
	return nil
}
And, when I run go test -race -i ./...
I get this warning:
==================
WARNING: DATA RACE
Read by goroutine 20:
sync/atomic.AddUint32()
/usr/local/Cellar/go/1.3.3/libexec/src/pkg/sync/atomic/race.go:147 +0x4e
sync/atomic.AddInt32()
/usr/local/Cellar/go/1.3.3/libexec/src/pkg/sync/atomic/race.go:140 +0x3c
github.com/karlseguin/ccache.(*Item).shouldPromote()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/item.go:70 +0x48
github.com/karlseguin/ccache.(*Cache).conditionalPromote()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:138 +0x67
github.com/karlseguin/ccache.(*Cache).Get()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:52 +0xfc
github.com/backstage/backstage/db.(*Storage).GetTokenValue()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/db/storage.go:77 +0xc0
github.com/backstage/backstage/auth.get()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token.go:130 +0x174
github.com/backstage/backstage/auth.RevokeTokensFor()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token.go:93 +0x146
github.com/backstage/backstage/auth.(*S).TestRevokeTokensFor()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token_test.go:54 +0x20b
runtime.call16()
/usr/local/Cellar/go/1.3.3/libexec/src/pkg/runtime/asm_amd64.s:360 +0x31
reflect.Value.Call()
/usr/local/Cellar/go/1.3.3/libexec/src/pkg/reflect/value.go:411 +0xed
gopkg.in/check%2ev1.func·003()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:763 +0x56b
gopkg.in/check%2ev1.func·001()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:657 +0xf4
Previous write by goroutine 7:
github.com/karlseguin/ccache.(*Cache).doPromote()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:171 +0x64
github.com/karlseguin/ccache.(*Cache).worker()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:152 +0xae
Goroutine 20 (running) created at:
gopkg.in/check%2ev1.(*suiteRunner).forkCall()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:658 +0x523
gopkg.in/check%2ev1.(*suiteRunner).forkTest()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:795 +0x168
gopkg.in/check%2ev1.(*suiteRunner).runTest()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:800 +0x3e
gopkg.in/check%2ev1.(*suiteRunner).run()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:606 +0x4e8
gopkg.in/check%2ev1.Run()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:92 +0x56
gopkg.in/check%2ev1.RunAll()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:84 +0x12d
gopkg.in/check%2ev1.TestingT()
/Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:72 +0x4f1
github.com/backstage/backstage/auth.Test()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/suite_test.go:10 +0x34
testing.tRunner()
/usr/local/Cellar/go/1.3.3/libexec/src/pkg/testing/testing.go:422 +0x10f
Goroutine 7 (running) created at:
github.com/karlseguin/ccache.New()
/Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:37 +0x38e
github.com/backstage/backstage/db.init()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/db/cache.go:5 +0xc3
github.com/backstage/backstage/auth.init()
/Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token_test.go:57 +0xac
main.init()
github.com/backstage/backstage/auth/_test/_testmain.go:48 +0x93
==================
We submitted #52 and would like to fetch the latest version. Could you please create a new release? Thank you.
I have some items whose values are very expensive to initialize. I want to initialize each of them on the first read of the corresponding key and then cache indefinitely (unless evicted due to cache size). I also want to do this atomically in such way that reads and writes to other keys in the cache can proceed while the expensive initialization occurs, but each value is initialized at most once.
I can work around the fact that the initialization is slow by setting a placeholder value with a mutex that will be released once the initialization is complete.
What I can't figure out how to do with this API is the atomic Get/Set (without using additional locks). Would you be open to adding a TrackingGetOrSet(key string, defaultValue interface{}, duration time.Duration) (item TrackingItem, didSet bool) method which atomically gets the key if found, or sets it to the new value if not? Looking at the code, it seems fairly straightforward to implement, since it can happen inside a bucket's RWMutex.
That said I can see it's a pretty specific request. I'm happy to make a PR.
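For concreteness, a sketch of the requested semantics. GetOrSet is not an existing method, the bucket internals (RWMutex, lookup map, newItem) are assumptions from reading the code, and a real version would also need to promote the newly created item:

func (c *Cache) GetOrSet(key string, defaultValue interface{}, duration time.Duration) (*Item, bool) {
	b := c.bucket(key)
	b.Lock() // one write lock covers the whole get-or-set
	defer b.Unlock()
	if item, ok := b.lookup[key]; ok {
		return item, false
	}
	item := newItem(key, defaultValue, time.Now().Add(duration).UnixNano())
	b.lookup[key] = item
	return item, true
}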
PS: Is there a race condition in TrackingGet? It seems like the item could be removed from the cache between the get() and the track() calls... But maybe I'm missing something?
Thanks!
Hope it can be maintained and optimized for speed improvements (I mean garbage collection in this case; it's actually already very good when used as a cache). Overall, the feature set is good enough. Thanks.
Hi dude,
I need high availability, and I'm thinking about promoting items as soon as they are saved. Is that possible? I don't want to wait for the first Get calls to promote them.
Unless I'm missing something, the current implementation requires that all items have a finite TTL. That's fine for many use cases, but if I'm caching the results of some expensive computation that won't change over time for any given key, then I really wouldn't ever want values to be evicted (or recomputed by Fetch) simply because they're old; I would only want LRU entries to be evicted due to the cache being too full. I'd like to be able to specify a zero or negative time.Duration to mean "this doesn't expire."
Hi. I think the cache size should be a public field. This would be useful for troubleshooting and monitoring.
Flaky test seen in https://github.com/karlseguin/ccache/actions/runs/6953182798/job/18917940507#step:4:7
go test -race -count=1 ./...
--- FAIL: Test_ConcurrentClearAndSet (0.09s)
cache_test.go:438: expected true, got false
FAIL
FAIL    github.com/karlseguin/ccache/v3    7.077s
? github.com/karlseguin/ccache/v3/assert [no test files]
FAIL
make: *** [Makefile:17: t] Error 1
Error: Process completed with exit code 2.
Go makes it really easy to write synthetic benchmarks. It would be nice if we added some to ccache, since right now it's hard to know the perf impact (in terms of both CPU and allocations) of an arbitrary PR change.
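A minimal sketch of the kind of benchmarks that would help (the file would sit next to cache_test.go; names are illustrative):

import (
	"testing"
	"time"

	"github.com/karlseguin/ccache"
)

func BenchmarkGet(b *testing.B) {
	cache := ccache.New(ccache.Configure())
	cache.Set("key", "value", time.Minute)
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		cache.Get("key")
	}
}

func BenchmarkSetParallel(b *testing.B) {
	cache := ccache.New(ccache.Configure())
	b.ReportAllocs()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			cache.Set("key", "value", time.Minute)
		}
	})
}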
When setting a new key, Cache uses c.promote to add the new item to c.list. But when c.promotables is full, c.promote does nothing, which means the new item is not added to c.list.
That causes an item leak until some later Get happens while c.promotables is not full. If there is no operation on this key before c.promote successfully takes effect, the item's memory is never released, because of the reference from the map.
// https://github.com/karlseguin/ccache/blob/master/cache.go
// Set the value in the cache for the specified duration
func (c *Cache) Set(key string, value interface{}, duration time.Duration) {
	c.set(key, value, duration, false)
}

func (c *Cache) set(key string, value interface{}, duration time.Duration, track bool) *Item {
	item, existing := c.bucket(key).set(key, value, duration, track)
	if existing != nil {
		c.deletables <- existing
	}
	c.promote(item)
	return item
}

func (c *Cache) promote(item *Item) {
	select {
	case c.promotables <- item:
	default:
	}
}
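If that analysis holds, a minimal sketch of a fix is to make the promotion from set blocking, since a brand-new item has nowhere else to live until it reaches the list (sketch, with the obvious caveat that Set would then stall if the worker has been stopped):

func (c *Cache) set(key string, value interface{}, duration time.Duration, track bool) *Item {
	item, existing := c.bucket(key).set(key, value, duration, track)
	if existing != nil {
		c.deletables <- existing
	}
	c.promotables <- item // block rather than silently dropping the new item
	return item
}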
LayeredCache.Fetch has return types of (interface{}, error). Why is interface{} being used instead of *Item, when Cache.Fetch returns *Item?
Thanks.
When I import "github.com/karlseguin/ccache/v3", create a cache which max size is 3. After I add 4 element into cache, cache still remain all of the four element, does set() do not run lru? So in lru we should delete the first one when add the No.4 element, right?
Is there any limit on this cache's minimum size?
Hi dude, what is the best way to get maximum performance out of ccache, with or without concurrent transactions?
Thanks.
Currently, expired items are only evicted when the cache fills to its maximum configured size.
Add an API method for clearing expired items, possibly with a grace period (e.g. allow 30 minutes for the item to be .Extend()ed).
For longer-running caches it is critically important to be able to obtain operational metrics: the number of cached items, the rate of eviction, possibly some internal statistics. All decent caches therefore provide a method to obtain these unintrusively at run time.
ccache should have something like that to be trusted in most production settings.
If I call Fetch() simultaneously from multiple goroutines and the fetch func is rather slow, then the func keeps being called until one of the calls returns.
So it looks like Fetch() is not thread-safe in the sense that concurrent calls for the same key don't share a single fetch.
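Until that changes in the library, a common workaround is to deduplicate loads with golang.org/x/sync/singleflight. A sketch:

import (
	"time"

	"github.com/karlseguin/ccache"
	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// fetchOnce deduplicates concurrent loads: while one fn call for a key is
// in flight, other callers for that key wait and share its result
func fetchOnce(cache *ccache.Cache, key string, ttl time.Duration, fn func() (interface{}, error)) (interface{}, error) {
	if item := cache.Get(key); item != nil && !item.Expired() {
		return item.Value(), nil
	}
	v, err, _ := group.Do(key, fn)
	if err != nil {
		return nil, err
	}
	cache.Set(key, v, ttl)
	return v, nil
}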
The program is good and runs fairly well, but I looked at the code and noticed the ~350 bytes of overhead associated with it.
Maybe you can look into https://github.com/allegro/bigcache and see if what you have written can be made less GC-intensive (you implemented your own gc() inside the code, which seems to be highly CPU-intensive given a 16GB RAM data cache of around 1 million entries; it is very "slow" at times, with huge GC processing, I think).
Is there any way to look into the code and speed it up, taking GC into consideration and reducing the 350 bytes further?
Is item == nil true for an expired cache.Get("user:4") too?
item := cache.Get("user:4")
if item == nil {
	//handle
} else {
	user := item.Value().(*User)
}
I would like to check whether the item is empty. How do I check that? Will an expired item be emptied?
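For what it's worth, Get can return an expired item (the Fetch discussion below shows the same behavior: only a missing item triggers a refetch), so the usual check combines both conditions:

item := cache.Get("user:4")
if item == nil || item.Expired() {
	// miss or stale: rebuild/fetch the value
} else {
	user := item.Value().(*User)
	_ = user
}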
I understand that the problem I'm describing isn't a problem for LayeredCache's intended use (as described in the readme).
But I had the idea that I could use ccache's LayeredCache to partition my cache into functional parts (which I could wipe out with cache.DeleteAll("somepartition") etc.). With that setup I only have a handful of primary keys, so each partition ends up in its own bucket, and I, not surprisingly, see lots of lock contention in the profiler.
I can certainly simplify my setup to not use the LayeredCache, but it would be convenient, and in my head this would be fixed if the bucket method below considered both keys in the hash:
func (c *LayeredCache) set(primary, secondary string, value interface{}, duration time.Duration, track bool) *Item {
	item, existing := c.bucket(primary).set(primary, secondary, value, duration, track)
	if existing != nil {
		c.deletables <- existing
	}
	c.promote(item)
	return item
}

func (c *LayeredCache) bucket(key string) *layeredBucket {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.buckets[h.Sum32()&c.bucketMask]
}
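The suggested change could look roughly like this (a sketch; the open question is whether operations such as DeleteAll, which only know the primary key, can still locate the right bucket):

func (c *LayeredCache) bucket(primary, secondary string) *layeredBucket {
	h := fnv.New32a()
	h.Write([]byte(primary))
	h.Write([]byte(secondary)) // mix in the secondary key to spread a few partitions across buckets
	return c.buckets[h.Sum32()&c.bucketMask]
}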
//this isn't thread safe. It's meant to be called from non-concurrent tests
But even in non-concurrent tests, go test -race reports a race:
WARNING: DATA RACE
Write at 0x00c000162800 by goroutine 23:
github.com/karlseguin/ccache/v2.(*LayeredCache).Clear()
/Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:172 +0xb4
github.com/gohugoio/hugo/cache/memcache.(*Cache).Clear()
/Users/bep/dev/go/gohugoio/hugo/cache/memcache/memcache.go:211 +0x63c
github.com/gohugoio/hugo/cache/memcache.TestCache()
/Users/bep/dev/go/gohugoio/hugo/cache/memcache/memcache_test.go:47 +0x52b
testing.tRunner()
/Users/bep/dev/go/dump/go/src/testing/testing.go:1109 +0x202
Previous write at 0x00c000162800 by goroutine 31:
github.com/karlseguin/ccache/v2.(*LayeredCache).doPromote()
/Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:269 +0x50e
github.com/karlseguin/ccache/v2.(*LayeredCache).worker()
/Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:229 +0x8d3
item, err := cache.Fetch("user:4", time.Minute*10, func() (interface{}, error) {
	//code to fetch the data in case of a miss
	//should return the data to cache and the error, if any
})
The function inside Fetch currently can't accept parameters. Could we let the function take the key as a parameter, so we can rebuild the cache entry when no value is found for the key? Something like this:
item, err := cache.Fetch("user:4", time.Minute*10, func(key string) (interface{}, error) {
	//get the value from the db by id = key ("user:4")
	//return the value from the db
	return nil, nil
})
It would be nice to have this function.
I'm using it, but can you help save memory / garbage-collection time by looking into the cache below?
https://github.com/coocood/freecache
I hope it can be used optimally like freecache, without a lot of GC happening. How does ccache compare with it? I will be using ccache frequently.
I imagine that it's safe to issue Get/Set/Fetch operations on the same cache from multiple threads, given the package is "aimed at high concurrency", and that the Clear method documents the opposite explicitly.
But if that's correct, it might be good to call it out explicitly. Something like:
Unless otherwise noted (e.g., for Clear), methods on caches are thread-safe.
https://github.com/karlseguin/ccache/blob/master/item.go#L42
I ended up getting numbers in the log output when I used the gerb library... turns out it was this line :)
Similar to #21
I can create a PR for this.
First, thanks for this library, I tested this here and it works great.
I have one challenge (which I may just skip in its first version) though, and that is how to control the size of the cache.
I understand that you can somehow control this by implementing Size() (the same strategy as used by Ristretto), but implementing that in a general way for structs/maps seems to be non-trivial. For my use case I can probably do some approximations.
Which is why I'm raising the idea of a SetMaxSize method that could be adjusted at run time to handle "low on memory" situations.
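A sketch of the shape SetMaxSize could take, assuming the worker goroutine owns maxSize (the control channel and message type here are assumptions, not existing API):

type setMaxSize struct{ size int64 }

// SetMaxSize asks the worker to adjust the budget; the worker applies it
// and runs a gc pass if the cache is now over the new limit (sketch)
func (c *Cache) SetMaxSize(size int64) {
	c.control <- setMaxSize{size: size}
}

// in the worker loop:
//   case msg := <-c.control:
//       if m, ok := msg.(setMaxSize); ok {
//           c.maxSize = m.size
//           if c.size > c.maxSize {
//               c.gc()
//           }
//       }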
When running TrackingGet, the gc func could interleave like this:
// Used when the cache was created with the Track() configuration option.
// Avoid otherwise
func (c *Cache) TrackingGet(key string) TrackedItem {
	item := c.Get(key)
	if item == nil {
		return NilTracked
	}
	// switch to gc goroutine
	...
	if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
		c.bucket(item.key).delete(item.key)
		c.size -= item.size
		c.list.Remove(element)
		if c.onDelete != nil {
			c.onDelete(item)
		}
		dropped += 1
		item.promotions = -2
	}
	...
	// switch back
	item.track()
	return item
}
That means a caller can get back an item that has already been processed by the onDelete func.
Hi, at the moment doing go get github.com/karlseguin/ccache does not give us the OnDelete method, because the latest tag is quite old.
Could you consider tagging a new version, if the current master is stable enough?
I would expect Fetch() to behave a little differently and not return stale items. Instead, it seems like the fetch function argument is invoked only if the item is missing entirely from the cache.
Would you be open to PRs changing that behavior, or to providing a FetchAndRefresh() that runs the fetch function both when the item is expired and when it is missing?
Essentially this:
func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error) {
	item := c.Get(key)
	if item != nil && !item.Expired() {
		return item, nil
	}
	value, err := fetch()
	if err != nil {
		return nil, err
	}
	return c.set(key, value, duration), nil
}
Commented in #76 (comment)
Why doesn't the code below work? Also, am I doing it right? I've tried multiple different ways. There's no error shown... please help, thanks.
I realised it seems to get stuck at displaying the value if it's not initialized. How do I resolve this? I think the (int) assertion on a nil value will jam without an error being displayed.
package httpcachetesting

import (
	"fmt"

	"github.com/karlseguin/ccache"
)

var (
	HttpContentCache = ccache.Layered(ccache.Configure())
)

func HTTPCacheGet(urlhost string, urlreq string) (int, []byte, []byte, int, bool) {
	fmt.Printf("*** Start\n")
	httpcaches := HttpContentCache.Get(urlhost, "s"+urlreq) //same as stored in redis
	if httpcaches == nil {
		// a cache miss returns nil; calling Value() on a nil item panics, which is why nothing printed
		return 0, nil, nil, 0, false
	}
	value, ok := httpcaches.Value().(int) // checked assertion: safe even if the stored value isn't an int
	if !ok {
		return 0, nil, nil, 0, false
	}
	fmt.Printf("*** V : %v\n", value)
	return value, nil, nil, 0, true
}
I was wondering why the value was made generic with the introduction of Go generics, but not the key. A quick check of the code revealed that some parts of the API rely on a string key, but overall it seems possible to support other types.
The motivation behind this is that we try to avoid unneeded GC overhead at work, and we often have an integer as the key. By making the key generic, we could avoid the extra conversion to a string just to be able to look up values in the cache.
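A sketch of the type shape being proposed; everything here is hypothetical, since today's API fixes the key to string:

// hypothetical: the key type is constrained to comparable so it can be
// used as a map key directly, avoiding an int-to-string conversion per lookup
type Cache[K comparable, V any] struct {
	items map[K]V
}

func (c *Cache[K, V]) Get(key K) (V, bool) {
	v, ok := c.items[key]
	return v, ok
}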
The size of the cache is not an exposed variable and therefore can't be accessed. A method that returns the size would be helpful for detecting overflow or cache misses.
Sometimes the cache gives an old result even though the TTL has expired.
How can we solve this situation? Could you give me some help? We use the default cache configuration. @karlseguin
I have had some strange experiences using the layered cache; maybe it's my code, maybe it's already fixed here...?
Does the layered cache come with a write lock?
It looks like the latest GitHub release is from 2019, and there have been 12 commits since then, including changes like GetDropped and the migration to modules. Could you consider releasing a new version?
Is this production-ready? Any issues I should be looking out for?
This is sort of nitpicky performance-tuning stuff and I could be wrong on some of the details, but here's what I'm talking about:
Fetch currently takes a func() (interface{}, error). In other words, it doesn't pass the key (or any other parameters) to the function that computes new values. That means the function can't be defined globally, or at any scope other than one that knows what the key is. It has to be a closure, or a method of some object that knows the key.
Due to how the Go runtime works, that has some undesirable implications if you're trying to optimize for minimal heap allocations: a closure (or method value) that captures the key generally escapes to the heap, so every call that might fetch pays an extra allocation.
This is a kind of thing that wouldn't come up in a language like Java where you don't really have any choice because almost everything is an object on the heap. But in high-throughput services implemented in Go, since you can control that stuff to some degree as long as you're careful, it can be a bit annoying to have one's control limited by API choices like this... when as far as I can tell there's no great reason for the API to work that way. (That is, everywhere that you are calling the fetch function, you already know what the key is, so it would be perfectly easy to pass the key to the function. And that's how cache-loading functions work in pretty much every other caching API I've seen that has such a concept.)
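Concretely, the request is for a key-taking callback, so the loader can be a plain top-level function instead of a fresh closure per call. A hypothetical variant (loadUser and lookupUserFromDB are illustrative names):

// hypothetical signature: Fetch passes the key through to the callback
// func (c *Cache) Fetch(key string, duration time.Duration, fetch func(key string) (interface{}, error)) (*Item, error)

func loadUser(key string) (interface{}, error) {
	// defined at package scope: no per-call closure allocation
	return lookupUserFromDB(key)
}

item, err := cache.Fetch("user:4", time.Minute*10, loadUser)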
There is a problem here. Bucket and list cleanup must be synchronized. Otherwise, if we call Set concurrently, we may end up with an empty list and non-empty buckets. In that case we get a memory leak. It is a small leak, but it's still there: at the next cleanup these objects will be removed, but new leaking objects will appear again.
I see there's an upcoming release that leverages generics.
But no accompanying issue, so thought I'd create one.
This way I can subscribe to it and get notified when it's released. 😄
cannot use cvalue.Value() (type interface {}) as type int in assignment: need type assertion
userid := 1
intvalue := 1 //initial value
cvalue := cache.Get(userid)
if cvalue != nil {
	intvalue = cvalue.Value() //what is wrong with this line?
} else {
	cache.Set(userid, 2, time.Second*1)
}
Please help. Thanks.
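For reference, the assertion error happens because Value() returns interface{}, which needs a checked type assertion before it can be assigned to an int. A corrected sketch (note that ccache keys are strings, so the int key needs converting too):

userid := "1" // ccache keys are strings
intvalue := 1 // initial value
cvalue := cache.Get(userid)
if cvalue != nil && !cvalue.Expired() {
	if v, ok := cvalue.Value().(int); ok {
		intvalue = v // assert the interface{} back to int
	}
} else {
	cache.Set(userid, 2, time.Second*1)
}
_ = intvalue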