lnopenmetrics / go-lnmetrics.reporter
:bar_chart: Reference implementation, written in Go, to collect and report Lightning node metrics :bar_chart:
License: GNU General Public License v2.0
Add a migration procedure that deprecates old functionality: it is useful when a change in the plugin requires the database to be migrated.
We can extend the plugin RPC method to expose more runtime information, such as the log file path and the db path.
In addition, the payload currently reports a very old version of the plugin.
➜ ~ lightning-cli lnmetrics-reporter
{
"Name": "go-lnmetrics-reporter",
"Version": "0.1",
"LangVersion": "Go lang 1.15.8",
"Architecture": "amd64"
}
I think these debug messages are pretty confusing:
➜ VincentSSD cat .lightning/bitcoin/metrics.log
time="2021-10-26T09:12:26+02:00" level=debug msg="Home dir /media/vincent/VincentSSD/.lightning/bitcoin"
time="2021-10-26T09:12:26+02:00" level=info msg="Created and Connected to the database at /media/vincent/VincentSSD/.lightning/bitcoin/metrics/db"
time="2021-10-26T09:12:26+02:00" level=info msg="Loading metrics with id 1 end name metric_one"
time="2021-10-26T09:12:26+02:00" level=info msg="Metrics available on DB, loading them."
time="2021-10-26T09:12:26+02:00" level=error msg="leveldb: not found"
time="2021-10-26T09:12:26+02:00" level=info msg="No metric with empity key"
time="2021-10-26T09:12:36+02:00" level=debug msg="Calling on time function function"
Having null properties in the JSON payload is very bad; replace them with an empty array:
➜ ~ clightning diagnostic 1 | grep "null"
"forwards": null,
"forwards": null,
Sometimes it is useful to disable the plugin and stop collecting data, so we should provide a plugin option such as --disable-lnmetrics.
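The collection loop could then be gated on that option. A minimal sketch, leaving out the actual wiring to the plugin framework (the `MetricsCollector` type and its fields are hypothetical):

```go
package main

import "fmt"

// MetricsCollector sketches gating collection behind a plugin option
// such as --disable-lnmetrics. How the flag reaches this struct is
// framework-specific and omitted here.
type MetricsCollector struct {
	Disabled bool
}

// Collect runs one collection round unless the plugin was disabled.
// It reports whether collection actually ran.
func (c *MetricsCollector) Collect() (bool, error) {
	if c.Disabled {
		// Skip silently: the plugin stays loaded but inert.
		return false, nil
	}
	// ... real metric collection would happen here ...
	return true, nil
}

func main() {
	c := &MetricsCollector{Disabled: true}
	ran, _ := c.Collect()
	fmt.Println("collected:", ran)
}
```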
Maybe we need to call another type of onEvent to collect the data?
I would like to have something like this:
"ln_impl": {
  "name": "c-lightning",
  "version": "0.10.1",
  "on_experimental": ["offers"]
}
Maybe this library could be a good choice: https://stackoverflow.com/a/34150696/10854225
We can discover whether the user uses the server from the list of URLs they provide.
If the user doesn't want to use the server, we can offer the possibility to persist the data and keep memory empty.
It is useful, when a node starts, for the plugin to be able to load the old metric information into memory.
However, this could become a problem at some point, because it could require a lot of memory if there are many metrics.
At this point, the question is whether the in-memory db is a good fit.
As @openoms suggested, we need the client to be able to talk over the Tor network; we can configure the client with the JSON payload passed by c-lightning to the plugin.
We should avoid payloads like the following channels_info status:
"channels_info": [
{
"channel_id": ""
},
{
"channel_id": ""
},
{
"channel_id": ""
},
{
"channel_id": ""
},
{
"channel_id": ""
},
{
"channel_id": ""
}
]
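These placeholder objects could be dropped before marshaling by filtering out entries with an empty channel id. A sketch, assuming a minimal stand-in for the plugin's channel record:

```go
package main

import "fmt"

// ChannelInfo is a minimal stand-in for the plugin's channel record.
type ChannelInfo struct {
	ChannelID string `json:"channel_id"`
}

// filterChannels drops entries whose channel_id is empty, so the
// payload never carries useless placeholder objects like the ones
// shown above.
func filterChannels(chans []ChannelInfo) []ChannelInfo {
	res := make([]ChannelInfo, 0, len(chans))
	for _, c := range chans {
		if c.ChannelID != "" {
			res = append(res, c)
		}
	}
	return res
}

func main() {
	chans := []ChannelInfo{{ChannelID: ""}, {ChannelID: "903a0a08"}, {ChannelID: ""}}
	fmt.Println(len(filterChannels(chans))) // only the non-empty entry survives
}
```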
While running the online check on cln, I started noticing the following error logs:
time="2022-08-20T17:03:57Z" level=error msg="Error during ping node 02b78caed0f45120acc48efe867aa506e8ea60f0712a23303178471da0ca2213f5: -1:Ping already pending"
time="2022-08-20T17:04:00Z" level=error msg="Error: No channel found for short channel id 580612x1826x0"
time="2022-08-20T17:04:00Z" level=error msg="Error: No channel found for short channel id 580612x1826x0"
time="2022-08-20T17:04:00Z" level=error msg="Error during ping node 03841f794d1380ee3003f223a3c3b83f740fa71793583a1ab4fb2647a21188fd6b: -1:Ping already pending"
time="2022-08-20T17:04:00Z" level=error msg="Error: No channel found for short channel id 588324x277x0"
time="2022-08-20T17:04:01Z" level=error msg="Error: No channel found for short channel id 588324x277x0"
time="2022-08-20T17:04:06Z" level=error msg="Error during ping node 03f2a0d82e057ed46017bc270c8cb0a926af8f46807df23ca7d821783b9cd718f4: -1:Ping already pending"
time="2022-08-20T17:04:06Z" level=error msg="Error: No channel found for short channel id 617793x3130x0"
time="2022-08-20T17:04:06Z" level=error msg="Error: No channel found for short channel id 617793x3130x0"
time="2022-08-20T17:04:06Z" level=error msg="Error during ping node 027d0de66d08f956a8d606c0d1c34e59bda38c05a3b1cc738fdd6378716c644997: -1:Ping already pending"
time="2022-08-20T17:04:06Z" level=error msg="Error: No channel found for short channel id 624949x429x0"
time="2022-08-20T17:04:06Z" level=error msg="Error: No channel found for short channel id 624949x429x0"
time="2022-08-20T17:04:28Z" level=error msg="Error during ping node 035d61eb24f59334d7318061cafd57a481d11d2fc597a873270514f7cdcfb1a12c: -1:Ping already pending"
time="2022-08-20T17:04:38Z" level=error msg="Error during ping node 03baa70886d9200af0ffbd3f9e18d96008331c858456b16e3a9b41e735c6208fef: -1:Ping already pending"
time="2022-08-20T17:04:38Z" level=error msg="Error: No channel found for short channel id 662212x672x1"
time="2022-08-20T17:04:38Z" level=error msg="Error: No channel found for short channel id 662212x672x1"
time="2022-08-20T17:05:24Z" level=error msg="Error during ping node 02b37a7de2b3a6b4b04485a2a6eba03feff0d5d34813a0401d09fbcbfedfbb1a1f: -1:Ping already pending"
time="2022-08-20T17:05:56Z" level=error msg="Error during ping node 039d746c7e5dadaf9e5afb79c9d4a89aba6eb426ddaaa833a811e20fe47428ffd1: -1:Ping already pending"
time="2022-08-20T17:05:56Z" level=error msg="Error: No channel found for short channel id 710182x479x0"
time="2022-08-20T17:05:56Z" level=error msg="Error: No channel found for short channel id 710182x479x0"
time="2022-08-20T17:05:56Z" level=error msg="Error during ping node 03e2408a49f07d2f4083a47344138ef89e7617e63919202c92aa8d49b574a560ae: -1:Ping already pending"
time="2022-08-20T17:06:19Z" level=error msg="Error during ping node 0219c2f8818bd2124dcc41827b726fd486c13cdfb6edf4e1458194663fb07891c7: -1:Ping already pending"
time="2022-08-20T17:06:19Z" level=error msg="Error: No channel found for short channel id 721589x547x1"
time="2022-08-20T17:06:19Z" level=error msg="Error: No channel found for short channel id 721589x547x1"
time="2022-08-20T17:06:31Z" level=error msg="Error during ping node 03f3c108ccd536b8526841f0a5c58212bb9e6584a1eb493080e7c1cc34f82dad71: -1:Ping already pending"
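The repeated "Ping already pending" errors suggest the check fires a new ping before the previous one to the same node has resolved. One way to avoid that is to track in-flight pings and skip nodes that already have one pending. A sketch under that assumption (the actual ping RPC call is omitted):

```go
package main

import (
	"fmt"
	"sync"
)

// pingGuard tracks nodes with an in-flight ping so the online check
// can skip them instead of triggering "Ping already pending" errors.
type pingGuard struct {
	mu      sync.Mutex
	pending map[string]bool
}

func newPingGuard() *pingGuard {
	return &pingGuard{pending: make(map[string]bool)}
}

// tryPing returns false when a ping to nodeID is already in flight;
// otherwise it marks the node as pending and returns true.
func (g *pingGuard) tryPing(nodeID string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.pending[nodeID] {
		return false
	}
	g.pending[nodeID] = true
	return true
}

// done marks the ping to nodeID as completed.
func (g *pingGuard) done(nodeID string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.pending, nodeID)
}

func main() {
	g := newPingGuard()
	fmt.Println(g.tryPing("02b78c")) // first attempt proceeds
	fmt.Println(g.tryPing("02b78c")) // skipped: ping still pending
	g.done("02b78c")
	fmt.Println(g.tryPing("02b78c")) // proceeds again after completion
}
```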
At the moment we only have payment-forward information in metric one; it would be useful to also have the payments made and received by the node.
Check whether the cause is this library: https://github.com/niftynei/glightning/blob/927c0b8cf00364cc23afd0056d643cb656df1f44/glightning/lightning.go#L2085
The JSON property "channels" in the up_time payload is unclear: it reports only the number of channels and not their status. I would like to move it into a JSON object with node_id and state, like:
{
  "node_id": "0000000",
  "state": "CHANNELD_AWAITING_LOCKIN"
}
I receive the following crash at plugin startup:
➜ ~ lightningd --disable-plugin bcli --daemon
➜ ~ panic: unaligned 64-bit atomic operation
goroutine 29 [running]:
runtime/internal/atomic.panicUnaligned()
/home/pi/compilers/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
runtime/internal/atomic.Xadd64(0x1cb637c, 0x1)
/home/pi/compilers/go/src/runtime/internal/atomic/atomic_arm.s:256 +0x14
github.com/niftynei/glightning/jrpc2.(*Client).NextId(...)
/home/pi/github/go-metrics-reported/vendor/github.com/niftynei/glightning/jrpc2/client.go:229
github.com/niftynei/glightning/jrpc2.(*Client).Request(0x1cb6360, {0x279d6c, 0x3c2db4}, {0x207010, 0x1e0e080})
/home/pi/github/go-metrics-reported/vendor/github.com/niftynei/glightning/jrpc2/client.go:166 +0x40
github.com/niftynei/glightning/glightning.(*Lightning).GetInfo(...)
/home/pi/github/go-metrics-reported/vendor/github.com/niftynei/glightning/glightning/lightning.go:1060
github.com/OpenLNMetrics/go-metrics-reported/internal/plugin.(*MetricOne).OnInit(0x1c6a050, 0x1c988d8)
/home/pi/github/go-metrics-reported/internal/plugin/metrics_one.go:307 +0x50
github.com/OpenLNMetrics/go-metrics-reported/internal/plugin.(*MetricsPlugin).RegisterOneTimeEvt.func1.1(0x3b1350, {0x27d870, 0x1c6a050})
/home/pi/github/go-metrics-reported/internal/plugin/plugin.go:118 +0x30
created by github.com/OpenLNMetrics/go-metrics-reported/internal/plugin.(*MetricsPlugin).RegisterOneTimeEvt.func1
/home/pi/github/go-metrics-reported/internal/plugin/plugin.go:117 +0xcc
See this library to improve the quality of system info https://github.com/zcalusic/sysinfo
➜ ~ panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x6c4d26]
goroutine 5473 [running]:
github.com/OpenLNMetrics/go-lnmetrics.reporter/pkg/graphql.(*Client).MakeRequest.func1(0x0)
/home/vincent/github/go-metrics-reported/pkg/graphql/client.go:49 +0x26
github.com/OpenLNMetrics/go-lnmetrics.reporter/pkg/graphql.(*Client).MakeRequest(0xc0000f8450, 0xc000110030, 0x81f540, 0xc002d12300)
/home/vincent/github/go-metrics-reported/pkg/graphql/client.go:69 +0x4b4
github.com/OpenLNMetrics/go-lnmetrics.reporter/pkg/graphql.(*Client).UploadMetrics(0xc0000f8450, 0xc001c52000, 0x42, 0xc000189f00, 0x0, 0xc00011c8d0)
/home/vincent/github/go-metrics-reported/pkg/graphql/client.go:96 +0x23b
github.com/OpenLNMetrics/go-lnmetrics.reporter/internal/plugin.(*MetricOne).Upload(0xc0000c4000, 0xc0000f8450, 0xc0000c4000, 0x1)
/home/vincent/github/go-metrics-reported/internal/plugin/metrics_one.go:379 +0xa5
github.com/OpenLNMetrics/go-lnmetrics.reporter/internal/plugin.(*MetricsPlugin).updateAndUploadMetric(0x9f3920, 0x827850, 0xc0000c4000)
/home/vincent/github/go-metrics-reported/internal/plugin/plugin.go:98 +0xc5
created by github.com/OpenLNMetrics/go-lnmetrics.reporter/internal/plugin.(*MetricsPlugin).RegisterRecurrentEvt.func1
/home/vincent/github/go-metrics-reported/internal/plugin/plugin.go:110 +0x10e
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x6cfe46]
After #88 the following errors happen:
time="2022-04-02T08:13:25Z" level=info msg="Register one time event after 10s"
time="2022-04-02T08:13:35Z" level=debug msg="Calling on time function function"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error: channel not exist for direction INCOOMING"
time="2022-04-02T08:13:37Z" level=error msg="Error: Error: channel not exist for direction INCOOMING"
time="2022-04-02T08:13:37Z" level=error msg="Error: Error: channel not exist for direction INCOOMING"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=error msg="Error json: Unmarshal(nil *cache.NodeInfoCache)"
time="2022-04-02T08:13:37Z" level=info msg="Plugin initialized with OnInit event"
Change to up_times
Inside the JSON payload there is the following mistake:
When the plugin takes a snapshot, we are not able to tell whether the new payments forwarded by the node failed or not. I would like to turn the "forwards" record into an accumulator object.
From: "forwards": 184,
to:
"forwards": {
  "by_channel": "1x2x3",
  "success": 0,
  "failure": 0,
  "pending": 0
}
In addition, it would be nice to also know which channel the payment came from, and some information about the payment id.
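The proposed accumulator can be sketched as a small struct that classifies each forward by its listforwards status string ("offered", "settled", "failed", "local_failed"); field names follow the JSON above, and this is an illustration rather than the plugin's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ForwardsStats accumulates forward outcomes per channel, replacing
// the single "forwards" counter proposed above.
type ForwardsStats struct {
	ByChannel string `json:"by_channel"`
	Success   uint64 `json:"success"`
	Failure   uint64 `json:"failure"`
	Pending   uint64 `json:"pending"`
}

// Record classifies one forward event into the accumulator, using
// the status values that listforwards reports.
func (s *ForwardsStats) Record(status string) {
	switch status {
	case "settled":
		s.Success++
	case "failed", "local_failed":
		s.Failure++
	default: // "offered" and anything still in flight
		s.Pending++
	}
}

func main() {
	stats := &ForwardsStats{ByChannel: "1x2x3"}
	for _, st := range []string{"settled", "failed", "offered"} {
		stats.Record(st)
	}
	out, _ := json.Marshal(stats)
	fmt.Println(string(out))
}
```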
With listpeers we can get information about all the nodes, which means we could make better use of listpeers and avoid needless pinging throughout the code.
A JSON example:
{
"id": "0242a4ae0c5bef18048fbecf995094b74bfb0f7391418d71ed394784373f41e4f3",
"connected": true,
"netaddr": [
"3.124.63.44:9735"
],
"features": "0252a1",
"channels": [
{
"state": "CHANNELD_NORMAL",
"scratch_txid": "eb5d7ba111d58a7296e83a873e743ec9e9d105b02d9f370490fa0a0862f0411a",
"last_tx_fee_msat": "184000msat",
"feerate": {
"perkw": 253,
"perkb": 1012
},
"owner": "channeld",
"short_channel_id": "697850x2450x0",
"direction": 1,
"channel_id": "903a0a085bb5e0d94f64517d21cdcc5564cbde882e38d62c5af88dfb8eedeb6b",
"funding_txid": "6bebed8efb8df85a2cd6382e88decb6455cccd217d51644fd9e0b55b080a3a90",
"close_to_addr": "bc1qgmt7mrjpeqrn4f67chfyxtsvgtsxrpjeh7f326",
"close_to": "001446d7ed8e41c8073aa75ec5d2432e0c42e0618659",
"private": false,
"opener": "local",
"features": [
"option_static_remotekey"
],
"funding": {
"local_msat": "234857000msat",
"remote_msat": "0msat"
},
"msatoshi_to_us": 184849898,
"to_us_msat": "184849898msat",
"msatoshi_to_us_min": 184849898,
"min_to_us_msat": "184849898msat",
"msatoshi_to_us_max": 234857000,
"max_to_us_msat": "234857000msat",
"msatoshi_total": 234857000,
"total_msat": "234857000msat",
"fee_base_msat": "0msat",
"fee_proportional_millionths": 2000,
"dust_limit_satoshis": 546,
"dust_limit_msat": "546000msat",
"max_htlc_value_in_flight_msat": 18446744073709551615,
"max_total_htlc_in_msat": "18446744073709551615msat",
"their_channel_reserve_satoshis": 2348,
"their_reserve_msat": "2348000msat",
"our_channel_reserve_satoshis": 2348,
"our_reserve_msat": "2348000msat",
"spendable_msatoshi": 181961898,
"spendable_msat": "181961898msat",
"receivable_msatoshi": 47659102,
"receivable_msat": "47659102msat",
"htlc_minimum_msat": 0,
"minimum_htlc_in_msat": "0msat",
"their_to_self_delay": 144,
"our_to_self_delay": 144,
"max_accepted_htlcs": 30,
"state_changes": [
{
"timestamp": "2021-08-27T16:15:58.497Z",
"old_state": "CHANNELD_AWAITING_LOCKIN",
"new_state": "CHANNELD_NORMAL",
"cause": "user",
"message": "Lockin complete"
}
],
"status": [
"CHANNELD_NORMAL:Reconnected, and reestablished.",
"CHANNELD_NORMAL:Funding transaction locked. Channel announced."
],
"in_payments_offered": 0,
"in_msatoshi_offered": 0,
"in_offered_msat": "0msat",
"in_payments_fulfilled": 0,
"in_msatoshi_fulfilled": 0,
"in_fulfilled_msat": "0msat",
"out_payments_offered": 1,
"out_msatoshi_offered": 50007102,
"out_offered_msat": "50007102msat",
"out_payments_fulfilled": 1,
"out_msatoshi_fulfilled": 50007102,
"out_fulfilled_msat": "50007102msat",
"htlcs": []
}
]
},
It is more correct to report how many ready channels the node has each time we dump the information.
It would be nice to have the metrics plugin in the lightning network folder.
In the uptime we maintain the count of channels present in the response; in this case we need to remove the uptime of these invalid channels and maintain the channels-info map/set.
We maintain an array of channels info https://github.com/OpenLNMetrics/go-metrics-reported/blob/0bd0b59c4931c58773019e339516a086870cf967/internal/plugin/metrics_one.go#L81 but this can duplicate some information.
Maybe it is better to maintain a map of channels (keyed by short channel id) and collect the information inside it?
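The map-based alternative can be sketched like this, assuming a minimal stand-in for the plugin's channel record:

```go
package main

import "fmt"

// ChannelInfo is a minimal stand-in for the plugin's channel record.
type ChannelInfo struct {
	ShortChannelID string
	State          string
}

// channelsByID rebuilds the channels array as a map keyed by short
// channel id, so repeated entries collapse instead of duplicating
// information in the metric payload.
func channelsByID(chans []ChannelInfo) map[string]ChannelInfo {
	m := make(map[string]ChannelInfo, len(chans))
	for _, c := range chans {
		m[c.ShortChannelID] = c // last entry for an id wins
	}
	return m
}

func main() {
	chans := []ChannelInfo{
		{ShortChannelID: "697850x2450x0", State: "CHANNELD_NORMAL"},
		{ShortChannelID: "697850x2450x0", State: "CHANNELD_NORMAL"},
		{ShortChannelID: "580612x1826x0", State: "CHANNELD_AWAITING_LOCKIN"},
	}
	fmt.Println(len(channelsByID(chans))) // two unique ids
}
```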
For now, when a node we have a channel with is down, we identify it with timestamp = 0, which is invalid.
So we are missing the timestamp information, and we require the client to keep track of it.
This is not critical for the first release at the end of the month, but it would make life easier for the user.
time="2021-12-15T02:45:14Z" level=error msg="Error Status offered unexpected"
While the repository is so young, I would recommend rewriting the history and getting rid of the ~30MB blobs in there.
To find the blobs (and their filenames), save the following script as git-find-large.sh:
#!/bin/sh
git rev-list --objects --all |
git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
sed -n 's/^blob //p' |
sort --numeric-sort --key=2 |
cut -c 1-12,41- |
$(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest
Now run it and grep for big (MiB) files:
$ sh git-find-large.sh | grep MiB
e18cca20fe54 2.9MiB go-lnmetrics
93d8fe172466 2.9MiB go-lnmetrics
96f8db4416d4 3.1MiB go-lnmetrics
085389b7d797 4.0MiB go-lnmetrics
a08d2baee46a 4.0MiB go-lnmetrics
67fdfa256be6 4.0MiB go-lnmetrics
809698f74644 4.6MiB go-lnmetrics
f42c5dcb17bb 4.6MiB go-lnmetrics
c9260359fd64 4.6MiB go-lnmetrics
ea783e785c3b 4.6MiB go-lnmetrics
8ab6dbfdca16 4.7MiB go-lnmetrics
bb87c77ee906 4.7MiB go-lnmetrics
de3b9e8d1a16 4.7MiB go-lnmetrics
152c6a8bb728 4.8MiB go-lnmetrics
39be3e928398 4.8MiB go-lnmetrics
c6d0b5bd5755 4.8MiB go-lnmetrics
eae5dd9b709b 4.8MiB go-lnmetrics
9a0f78a060b6 4.8MiB go-lnmetrics
d4b142804efe 4.8MiB go-lnmetrics
7e82588f9d75 4.8MiB go-lnmetrics
Now, to remove those from history (rewriting all the following commits), install [git-filter-repo] first and then run:
$ git-filter-repo --invert-paths --path go-lnmetrics --force
You will then be left with a rewritten history where your current latest commit c04afd1 becomes 0ad5f11 and the repository ends up 333 KiB (instead of almost 30 MiB).
For a reference, see samhocevar/wincompose#16
INFO plugin-paytest.py: RPC method 'testinvoice' does not have a docstring.
INFO plugin-paytest.py: RPC method 'paytest' does not have a docstring.
INFO connectd: Static Tor service onion address: "beatifulnewonionaddresskindofrandombutrequiresfantasyyes.onion:9735,127.0.0.1:9735" bound from extern port 9735
INFO plugin-bcli: bitcoin-cli initialized and connected to bitcoind.
INFO plugin-autoclean: autocleaning every 3600 seconds
INFO lightningd: --------------------------------------------------
INFO lightningd: Server started with public key 755746d3291fae3fa45461c990d4882470a04c0cf730ae8032c19fdf363d787e05, alias HAPPYGIRAFFE-v0.10.2-94-g5a5cf8c (color #a5cf8c) and lightningd v0.10.2-94-g5a5cf8c
panic: leveldb: not found
goroutine 23 [running]:
main.onInit(0xc000114930, 0xc00009b770, 0xc0000c4370)
/root/go-lnmetrics.reporter/cmd/go-lnmetrics.reporter/main.go:90 +0x813
github.com/vincenzopalazzo/glightning/glightning.InitMethod.Call(0xc0000dc240, 0x35, 0x40, 0xc0000c4370, 0xc000114930, 0x40eb5b, 0xc0000a8318, 0x18, 0x18)
/root/go/pkg/mod/github.com/vincenzopalazzo/[email protected]/glightning/plugin.go:1127 +0x2a7
github.com/vincenzopalazzo/glightning/jrpc2.Execute(0xc0000a8330, 0x83e680, 0xc00009b5c0, 0x83e680)
/root/go/pkg/mod/github.com/vincenzopalazzo/[email protected]/jrpc2/server.go:196 +0x35
github.com/vincenzopalazzo/glightning/jrpc2.processMsg(0xc0000e4440, 0xc00012a000, 0x1b7, 0x1b7)
/root/go/pkg/mod/github.com/vincenzopalazzo/[email protected]/jrpc2/server.go:192 +0x191
created by github.com/vincenzopalazzo/glightning/jrpc2.(*Server).listen
/root/go/pkg/mod/github.com/vincenzopalazzo/[email protected]/jrpc2/server.go:120 +0x18a
INFO plugin-go-lnmetrics: Killing plugin: exited before replying to init
INFO plugin-summary.py: Plugin summary.py initialized
Reported by @jsarenik
If we have an error during the on-init phase, we do not store the data. There are also some errors being caught that we don't need to catch.
I'm running a node with ~180 channels. Normally my clightning process uses about 1% CPU. If I start running the metrics collector, things change to ~150% (equally split between the plugin and lightningd). Also (obviously?) interacting with lightningd becomes horribly slow, listpeers etc takes ~7s with the plugin running and 0.03s without.
Sadly that makes it unusable for me.
We can filter the forwards list and include only the latest payments https://lightning.readthedocs.io/lightning-listforwards.7.html instead of putting in every payment that passed through the channel.
If we disable the deprecated API, the new integer msat field breaks everything:
time="2022-08-12T16:24:46Z" level=info msg="Register one time event after 10s"
time="2022-08-12T16:24:56Z" level=debug msg="Calling on time function function"
time="2022-08-12T16:24:56Z" level=error msg="Error during the OnInit method; json: cannot unmarshal number into Go struct field NodeInfo.fees_collected_msat of type string"
time="2022-08-12T16:24:56Z" level=error msg="Error during on init call: json: cannot unmarshal number into Go struct field NodeInfo.fees_collected_msat of type string"
Now we need to find a solution that supports both formats.