asdine / storm
Simple and powerful toolkit for BoltDB
License: MIT License
I had a look at the generated GoDoc, and it doesn't look bad, but the Index types clutter the API (Index, ListIndex, UniqueIndex).
I may be missing something, but shouldn't these be unexported? Are they of real value outside of the storm package?
BoltDB supports nested buckets, but this isn't implemented in Storm.
As to "why do we need this?":
Many database applications have natural partitions, where typically one partition is in use at a time. With GitHub as an example: a user has many repositories, but looks mostly at a single repository at a time, with its:
In the world of relational databases this is solved with WHERE clauses and joins. Translated to the BoltDB world, we could use nested buckets.
Storm currently stores everything below the root bucket, with a bucket name derived from the struct name and package. This is perfectly fine for many applications.
Suggestion: New bucket tag that points to the parent bucket:
type Issue struct {
ParentBucket [][]string `storm:"container"`
}
The parent bucket would also need to be known for Save operations, and for Find, One, etc. This could be solved either by adding parentBucket ...string to these methods:

func (s *DB) Find(fieldName string, value interface{}, to interface{}, parentBucket ...string) error {
	// ...
}

or by introducing a Container type:

func (c *Container) Find(fieldName string, value interface{}, to interface{}) error {
	// ...
}
Not sure what is best/simplest, but it should be good enough as a foundation for discussion.
Some additional integration/helpers would be nice, but the above is a start.
I started using the Drop func today, and the API felt a little weird.
func (n *Node) Save(data interface{}) error
func (n *Node) Drop(bucketName string) error
The first one creates a bucket if it doesn't exist. I don't really know what the bucket name is, yet I need it if I want to drop it; I don't really care, either. I guess the struct User will end up in the bucket "User", but now I have loosely typed code that isn't very refactor-friendly.
Batch writes can be very important when dealing with concurrency.
Passing an option to Save could be a good solution to activate Batch instead of Update for specific writes.
The option must be ignored if already in a transaction, though.
@alessandrousseglioviretta still interested?
For better performance, generated code would work better than reflection.
Something like a goa/gorma DSL could be a potential approach.
https://github.com/goadesign/gorma
If a bucket does not yet exist, Node.One() currently fails with:

fmt.Errorf("bucket %s doesn't exist", info.Name)

This happens if, for example, you try to look up an object by ID but no objects of that type have been saved yet. It would be much more usable to simply return ErrNotFound. Alternatively, fail with an error that can be easily checked, rather than the generic fmt.Errorf(...).
Hi,
is there any way to speed this up? https://play.golang.org/p/LYVmsy3ysD
I need to insert 700,000 rows.
Mindaugass-MacBook-Pro:epg_bolt mindaugas$ time go run main.go --config config.ini
parsing xml
storm begins
num: 0, time: 2.85197ms
num: 1000, time: 2.026768464s
num: 2000, time: 6.523655515s
num: 3000, time: 9.663367105s
num: 4000, time: 11.105321635s
num: 5000, time: 11.937392684s
num: 6000, time: 14.066639588s
num: 7000, time: 17.177086791s
num: 8000, time: 21.325859315s
num: 9000, time: 20.814317701s
num: 10000, time: 25.686991882s
num: 11000, time: 25.98401264s
num: 12000, time: 29.643677727s
num: 13000, time: 38.521215154s
It may be confusing to some (including me) that the keys are encoded with some other encoder (gob), especially if you want to interact directly with BoltDB.
This would potentially be a breaking change, though.
I don't know how you implement an update on an existing structure, but I think we can all agree that an update function for an existing structure would be awesome.
For now, for me it's: fetch, copy, delete in DB, edit the copy, save.
How do you do updates on existing structures?
Hmm, you wrote that an indexed field is sorted, but in the result I get unsorted records:

db.From("tv3").Select(q.Gte("Timestamp", 1472981431)).Find(&videos)

I get:
15 {ID:25 Timestamp:1472982298 Channel:tv3 Filename:stream-1472982298.ts Dirname:2016/09/04/12 Duration:10}
16 {ID:26 Timestamp:1472982356 Channel:tv3 Filename:stream-1472982356.ts Dirname:2016/09/04/12 Duration:10}
17 {ID:27 Timestamp:1472982365 Channel:tv3 Filename:stream-1472982365.ts Dirname:2016/09/04/12 Duration:10}
18 {ID:9 Timestamp:1472981432 Channel:tv3 Filename:stream-1472981432.ts Dirname:2016/09/04/12 Duration:10}
Implement OrderBy(fieldName string) for Find, All, AllByIndex, Range and Select, using a tree before inserting the values in the slice.
If I'm not mistaken, don't you have to call ".Batch" instead of ".Update" if you want to batch calls together? This would be pretty easy to implement, but I'm unsure if it is necessary (Do update calls already get batched together?)
Just wondering, I like this code though, I was half way through writing some of these features myself, but this saved me some work for sure.
I'd like to auto increment but not use that field as the index. Quick glance indicated it was not yet possible. Would you be interested in this change?
Just an idea, but when using AutoIncrement() it would be useful to have a function similar to LastInsertId() in the sql package:
https://golang.org/pkg/database/sql/#Result

// LastInsertId returns the integer generated by the database
// in response to a command. Typically this will be from an
// "auto increment" column when inserting a new row. Not all
// databases support this feature, and the syntax of such
// statements varies.

In a transaction you can get the new ID and use it for other stuff. Right now it's not straightforward to get the ID of an item that has just been inserted. I assume this could be returned from Bolt's NextSequence() or similar?
I notice Sereal versions their codec, currently at version 3.
I suggest we make sure that the current /sereal package goes against version 3 (use NewEncoderV3 etc.), and if there is any interesting development in this area in the future, add /sereal4 or whatever.
We currently have:
type EncodeDecoder interface {
Encode(v interface{}) ([]byte, error)
Decode(b []byte, v interface{}) error
}
Which in the different implementations delegates to Marshal/Unmarshal or Encode/Decode (gob).
Go's stdlib has a distinction between Marshal (convert to []byte) and Encode (stream to an io.Writer). It is all put below the encoding package, so I think Storm's Codec is a good name for it. The interface, however, could maybe be renamed to:
type Marshaler interface {
Marshal(v interface{}) ([]byte, error)
Unmarshal(b []byte, v interface{}) error
}
This isn't really of world-class importance, but would maybe make the API a little bit more familiar to the common Gopher.
Hi, my application runs on a MIPS system, and boltdb doesn't support it. I hope this ORM could be made to work with goleveldb! Thanks.
An index for the Primary Key is redundant, it just adds a read op and some storage space, as the PKs are readily available in the top level bucket.
Storm should be able to find non-indexed fields.
Indexes are there to speed up requests, but when you want to query by a non-indexed field you are stuck by the limitations of Storm.
Natural keys tend to be composites. Not sure how hard this is to implement, but it would be nice.
See transaction.go line 4: Begin() takes a value receiver, but all other methods take a pointer receiver.
https://github.com/asdine/storm/blob/master/transaction.go#L4
The reason is this:
http://dave.cheney.net/2015/11/18/wednesday-pop-quiz-spot-the-race
The solution is simple, once you have a pointer receiver, make all methods on a type pointer receivers.
Not sure whether there's other code in storm that does the same thing; Node.Begin() is what I've run into so far.
BoltDB provides a very useful NextSequence method on every bucket that returns an autoincrementing integer for the bucket. Since Storm does not provide direct access to the bucket (Its encapsulated in a Node), how does one access the NextSequence?
One way to do it would be to add a NextSequence method on the Node. Would you be open to that?
Just toying with the example from the README, I noticed that as you save (update) the same user but with a different CreatedAt timestamp, multiple buckets inside __storm_index_CreatedAt are created, but (n-1) of them are empty. Not sure if it matters for boltdb in terms of performance, but it would be nice to delete an index bucket once it's empty.
View from boltbrowser:
- User | Path: User/__storm_index_CreatedAt/ �� Time �
- __storm_index_CreatedAt | � �� ��d�)��`�\
- �� Time �� �� ��c� �Q��\ | Buckets: 0
- �� Time �� �� ��c� �I��\ | Pairs: 1
- �� Time �� �� ��d� ��1�\ |
- �� Time �� �� ��d�$����\ |
- �� Time �� �� ��d�)��`�\ |
10: 10 |
- storm__ids |
10: �� Time �� �� ��d�)��`�\ |
- __storm_index_Email |
[email protected]: 10 |
- __storm_index_Group |
- staff |
10: 10 |
- storm__ids |
10: staff |
10: |
E�� User �� ID Group Email |
Name CreatedAt �� �� Time �� |
8�� 10 staff [email protected] John |
��d�)��`�\
Instead of saving an object with a set ID over an old object with that same ID, it copies it (but leaves the ID in the json alone) So I'm getting two objects with the same ID. After I generate the second object, I can update that one just fine, but the first one is effectively permanent.
I've also tried setting the ID field to unique and a couple other options. All I get is an "Already Exists" error.
package main

import (
	"log"
	"time"

	"github.com/asdine/storm"
)

type Base struct {
	Ident string `storm:"id"`
}

type User struct {
	Base      `storm:"inline"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	CreatedAt time.Time `storm:"index"`
}

func main() {
	db, err := storm.Open("my.db")
	if err != nil {
		log.Fatal("db open:", err)
	}
	defer db.Close()

	user := User{
		Base:      Base{Ident: "10"},
		Group:     "staff",
		Email:     "[email protected]",
		Name:      "John",
		CreatedAt: time.Now(),
	}

	if err = db.Save(user); err != nil {
		log.Fatal("save user:", err)
	}
}
panic: reflect.Value.Addr of unaddressable value
goroutine 1 [running]:
panic(0x1275e0, 0xc8200687c0)
/usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:481 +0x3e6
reflect.Value.Addr(0x15cd40, 0xc8200a2120, 0x99, 0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.6.2/libexec/src/reflect/value.go:240 +0x8c
github.com/asdine/storm.extractField(0xc82008ea80, 0xc820094070, 0xc820093140, 0x0, 0x0, 0x0)
/Users/x/code/go/src/github.com/asdine/storm/extract.go:150 +0xa99
github.com/asdine/storm.extract(0x1724e0, 0xc8200a2120, 0x0, 0x0, 0x0, 0xc81fffaef9, 0x0, 0x0)
/Users/x/code/go/src/github.com/asdine/storm/extract.go:102 +0xa37
github.com/asdine/storm.(*Node).Save(0xc82006c8a0, 0x1724e0, 0xc8200a2120, 0x0, 0x0)
/Users/x/code/go/src/github.com/asdine/storm/save.go:12 +0x63
github.com/asdine/storm.(*DB).Save(0xc8200a2000, 0x1724e0, 0xc8200a2120, 0x0, 0x0)
/Users/x/code/go/src/github.com/asdine/storm/save.go:108 +0x45
Note db.Save(user) rather than db.Save(&user) above, an easy mistake to make, if it's even considered a mistake.
DB.Save() should probably check its argument and only accept pointers to structs to avoid these cases. Alternatively, extractField() should check CanAddr() first and fail with a more descriptive error.
Storm's primary goal is to be simple to use. It has never been about achieving top performance; that's why reflection is heavily used.
I am not a fan of reflection, it just happened naturally when designing the API. I sacrificed raw speed for ease of use, and I am glad I did.
But I believe we can do something to avoid reflection when it isn't needed, and without a lot of work.
Basically, getting one or several records goes like this:
The reflection boilerplate is essentially done at the first step; it is necessary so we can collect the following information:
I think that if we provide a set of methods that allow users to supply this information manually, we could achieve excellent performance. These methods could also be used internally to simplify some parts of the code.
But the most interesting part is that we may be able to transform current struct declarations that use reflection into calls to these new methods at compile time, using go generate or whatever.
If adding a new index to an existing struct, any already persisted data would have to be re-indexed by re-saving each item again. It would be nice to have a simple way to do it without each user writing the same loop.
Ideally storm would detect it and just do it automatically, but for that it would probably need to persist some info about indexes on each struct.
That should be enough for most people.
See #18
Nice project!
It would be even nicer if one could define ID uint64 and have it populated by the bucket's NextSequence for new entities.
Using something like this would make the DB creation more consistent:
db := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))
With Bolt you can use a cursor and go from the end -> start/start -> end. Storm seems to always return everything in ascending order. Is there any way to reverse the order All() etc return their records in? This would make stuff like record paging super easy ;D
To clarify, I also mean changing the order while utilizing Limit() and Skip()
One of the edge nodes crashed with a SEGFAULT after a few million requests. I've attached stack traces below.
It fails with a SEGFAULT in Tx.Commit() -> Bucket.spill() -> Node.write()
1.7rc2
v1.2.1-5-g05e441d
- 05e441d7b3ded9164c5b912521504e7711dd0ba2
1: running [Created by edge.(*Refresher).loop @ .:0]
runtime panic.go:566 throw(0xe94a1f, 0x5)
runtime sigpanic_unix.go:27 sigpanic()
bolt node.go:205 (*node).write(0xc427ed4c40, 0xc42c5ffff0)
bolt bucket.go:598 (*Bucket).write(0xc427c3a700, 0xc425f01101, 0xc428b43158, 0x80)
bolt bucket.go:506 (*Bucket).spill(0xc427c3a640, 0xc425f01000, 0xc428b433c8)
bolt bucket.go:508 (*Bucket).spill(0xc427c3a600, 0xc425f00f00, 0xc428b43638)
bolt bucket.go:508 (*Bucket).spill(0xc42a6dba58, 0x99a82de, 0x147ce80)
bolt tx.go:163 (*Tx).Commit(0xc42a6dba40, 0, 0)
bolt db.go:602 (*DB).Update(0xc4200de3c0, 0xc428b438e0, 0, 0)
storm save.go:51 (*Node).Save(0xc4201a3620, 0xcda920, #9, 0x1, #9)
storm save.go:113 (*DB).Save(#4, 0xcda920, #9, 0x3, #8)
macrophage index.go:108 thunderbolt.Set(0x134e4e0, #4, #8, 0x71, 0, 0, 0x60f, #2, #2, 0, ...)
macrophage macro.go:132 (*Macrophage).MetaSet(#3, #7, 0x1f, #10, 0x51, 0x60f, #2, #2, #6, 0x24, ...)
cache write.go:67 CacheWriter.WriteMeta(#7, 0x1f, #10, 0x51, 0x134d760, #3, 0xc42c5ffd80, 0x60f, #1, #1, ...)
edge refresher.go:109 (*Refresher).refresh(#5, #7, 0x1f, #10, 0x51, 0x60f, #1, #1, #6, 0x24, ...)
edge refresher.go:179 (*Refresher).loop.func1(0xc4205fafc0, 0xc4205fafd0, 0xc42840e880, 0x71, #5, 0xc420054320)
runtime asm_amd64.s:2086 goexit()
unexpected fault address 0xc42c600000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc42c600000 pc=0x877d46]
goroutine 57198547 [running]:
runtime.throw(0xe94a1f, 0x5)
/usr/local/go/src/runtime/panic.go:566 +0x95 fp=0xc428b42d60 sp=0xc428b42d40
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:27 +0x288 fp=0xc428b42db8 sp=0xc428b42d60
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*node).write(0xc427ed4c40, 0xc42c5ffff0)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/node.go:205 +0x86 fp=0xc428b42ef8 sp=0xc428b42db8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).write(0xc427c3a700, 0xc425f01101, 0xc428b43158, 0x80)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:598 +0xb1 fp=0xc428b42f58 sp=0xc428b42ef8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc427c3a640, 0xc425f01000, 0xc428b433c8)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:506 +0x101 fp=0xc428b431c8 sp=0xc428b42f58
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc427c3a600, 0xc425f00f00, 0xc428b43638)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:508 +0x937 fp=0xc428b43438 sp=0xc428b431c8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc42a6dba58, 0x99a82de, 0x147ce80)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:508 +0x937 fp=0xc428b436a8 sp=0xc428b43438
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Tx).Commit(0xc42a6dba40, 0x0, 0x0)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/tx.go:163 +0x125 fp=0xc428b437f8 sp=0xc428b436a8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*DB).Update(0xc4200de3c0, 0xc428b438e0, 0x0, 0x0)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/db.go:602 +0x10d fp=0xc428b43848 sp=0xc428b437f8
github.com/GitbookIO/cdn/vendor/github.com/asdine/storm.(*Node).Save(0xc4201a3620, 0xcda920, 0xc42a17a190, 0x1, 0xc42a17a190)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/asdine/storm/save.go:51 +0x271 fp=0xc428b43938 sp=0xc428b43848
github.com/GitbookIO/cdn/vendor/github.com/asdine/storm.(*DB).Save(0xc420375560, 0xcda920, 0xc42a17a190, 0x3, 0xc428a4aa00)
/go/src/github.com/GitbookIO/cdn/vendor/github.com/asdine/storm/save.go:113 +0x43 fp=0xc428b43970 sp=0xc428b43938
github.com/GitbookIO/cdn/cache/macrophage.thunderbolt.Set(0x134e4e0, 0xc420375560, 0xc428a4aa00, 0x71, 0x0, 0x0, 0x60f, 0x57defcd8, 0x57defcd8, 0x0, ...)
/go/src/github.com/GitbookIO/cdn/cache/macrophage/index.go:108 +0xaf fp=0xc428b439b0 sp=0xc428b43970
github.com/GitbookIO/cdn/cache/macrophage.(*Macrophage).MetaSet(0xc4201a4940, 0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x60f, 0x57defcd8, 0x57defcd8, 0xc42558b440, 0x24, ...)
/go/src/github.com/GitbookIO/cdn/cache/macrophage/macro.go:132 +0x15b fp=0xc428b43a80 sp=0xc428b439b0
github.com/GitbookIO/cdn/cache.CacheWriter.WriteMeta(0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x134d760, 0xc4201a4940, 0xc42c5ffd80, 0x60f, 0x57def749, 0x57def749, ...)
/go/src/github.com/GitbookIO/cdn/cache/write.go:67 +0x17c fp=0xc428b43b20 sp=0xc428b43a80
github.com/GitbookIO/cdn/edge.(*Refresher).refresh(0xc4203815c0, 0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x60f, 0x57def749, 0x57def749, 0xc42558b440, 0x24, ...)
/go/src/github.com/GitbookIO/cdn/edge/refresher.go:109 +0x7ff fp=0xc428b43ec0 sp=0xc428b43b20
github.com/GitbookIO/cdn/edge.(*Refresher).loop.func1(0xc4205fafc0, 0xc4205fafd0, 0xc42840e880, 0x71, 0xc4203815c0, 0xc420054320)
/go/src/github.com/GitbookIO/cdn/edge/refresher.go:179 +0x12d fp=0xc428b43f70 sp=0xc428b43ec0
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc428b43f78 sp=0xc428b43f70
created by github.com/GitbookIO/cdn/edge.(*Refresher).loop
/go/src/github.com/GitbookIO/cdn/edge/refresher.go:182 +0x2dd
This fails with an error:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"time"

	"github.com/asdine/storm"
)

type Base struct {
	ID int `storm:"id"`
}

type User struct {
	Base      `storm:"inline"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int
	CreatedAt time.Time `storm:"index"`
}

func main() {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	defer os.RemoveAll(dir)

	// Open takes a list of optional options as last argument.
	// AutoIncrement enables auto-incrementing ids.
	db, err := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())
	fmt.Println(err)
	// Output: <nil>

	user := User{
		Group:     "staff",
		Email:     "[email protected]",
		Name:      "John",
		Age:       21,
		CreatedAt: time.Now(),
	}

	err = db.Save(&user)
	fmt.Println(err)
	// Output: <nil>

	fmt.Println(user.ID)
	// Output: 1
}
There should be a low level engine that executes the queries to BoltDB.
This engine would be able to fetch one or more values based on some options:
This engine would also be used by indexes to fetch indexed values and could also be exported for those who want to make custom queries.
When possible, every call to BoltDB would be made via this engine.
Example with One:
db.One
Index
This engine would be able to find records without indexes (see #42)
Hi,
is there any way to set the bucket used by a save?
E.g. I want to save &Users{} to both users_china and users_russia.
See http://blog.golang.org/examples
I didn't know about this until tonight, but it looks really good in the GoDoc.
AutoIncrement doesn't set the ID field of the variable, which makes it absurdly difficult to update with the .Save() function. Couldn't we call AutoIncrement before marshaling the data and save the value into the field, so I don't end up with a bunch of |Key:124, Value{ID=0}| entries in the database?
If I have a struct field defined as:
SubURL string storm:"unique"
Then I can safely try to create the same entry in the DB multiple times, knowing that only the first one will succeed.
But if I define it as:
SubURL string storm:"index" storm:"unique"
Then "unique" is ignored and I get duplicate entries in the DB for the same contents.
Ah, just as I was writing this I tried
SubURL string storm:"unique" storm:"index"
And that worked. So the order of the tags seems to be important. Is this a feature or a bug?
To get it in line with the other(s).
See #18
Now it looks hardcoded to JSON:
Values are encoded using the codec. But they are encoded differently when saved to an index. The same goes for the IDs.
BoltDB sorts the keys within a bucket, so the indexes used in Storm are naturally sorted. This way, a function like AllByIndex returns everything already sorted without doing anything.
The problem with the default codec (json) is that it doesn't keep the natural order for some types. If we have the three numbers 98, 99 and 100, json will encode them as the three strings "98", "99", "100", which get sorted in this order: 100, 98, 99 (1 comes before 9, like A before B).
That's the main reason toBytes exists: it encodes using gob. But not using the selected codec doesn't feel right.
Here are some possible solutions:
- keep gob, and remove toBytes
Hi,
could we return the auto-incremented ID after calling Save? Or could you point out how to get the ID?
Thanks.
It would be nice to do something like the following as part of a single transaction:

accountA.Amount -= 100
accountB.Amount += 100
db.Save(accountA)
db.Save(accountB)
From briefly looking at the code, it seems that every time an object is saved, the tags are re-parsed via reflection. It may not be a big deal in the grand scheme of things, but it may also help performance if metadata about each struct were cached for the duration of the process.
Also, see issue #50, which would make this issue moot.
See https://github.com/asdine/storm/blob/master/scan_test.go#L89
There are some natural interfaces inside Storm (node vs db, etc.). I'm not sure yet exactly what they are, but they can help simplify some code, if only to DRY up some tests. scan_test has 100% line coverage; if not for the interface I would probably have stopped around 80%.