airbnb / nerve
A service registration daemon that performs health checks; companion to airbnb/synapse.
License: MIT License
The nerve gem currently does not build on the arm64 platform (used by EC2 Graviton, among others) because of a dependency on an older version of the zk gem.
We've run into a few situations where hitless nerve reloads aren't so hitless: we have 5 registrations that each take 5 seconds (the zk timeout), and then our 30s sleep isn't enough.
Perhaps we should make registration async.
Context: https://github.com/Yelp/hacheck/pull/4
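A minimal sketch of the async idea, under stated assumptions: `register_all` and the per-service block are hypothetical stand-ins, not nerve's actual API. Running each registration in its own thread means N slow ZK calls cost roughly one timeout instead of N.

```ruby
# Hypothetical sketch: run registrations concurrently so slow ZK calls
# overlap instead of serializing. The block stands in for whatever
# per-service registration nerve performs; it is not nerve's real API.
def register_all(services, &register)
  services.map { |s| Thread.new { register.call(s) } }.each(&:join)
  services
end
```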
We discovered one of our services (mistakenly) returning 3xx status codes; hacheck was following the redirects.
We want to fix that, but Nerve considers 3xx to be health check failures.
HAProxy, nginx, and Marathon all consider 3xx status codes healthy. IMO nerve should be consistent with these other health check consumers (particularly HAProxy, which nerve is designed to work with).
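A minimal sketch of the proposed semantics, assuming we want nerve's HTTP check to match HAProxy's behavior of passing on 2xx and 3xx. `healthy_status?` is a hypothetical helper, not an existing nerve method.

```ruby
# Hypothetical helper: treat any 2xx or 3xx response as a passing health
# check, matching HAProxy/nginx/Marathon, instead of failing on redirects.
def healthy_status?(code)
  (200..399).cover?(code.to_i)
end
```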
Hi Igor,
Hope all is well. I am trying to configure Nerve and get the following error:
/usr/local/lib/ruby/gems/2.1.0/gems/nerve-0.5.2/bin/nerve:48:in `rescue in parseconfig': uninitialized constant Psych::ParseError (NameError)
from /usr/local/lib/ruby/gems/2.1.0/gems/nerve-0.5.2/bin/nerve:42:in `parseconfig'
from /usr/local/lib/ruby/gems/2.1.0/gems/nerve-0.5.2/bin/nerve:63:in `block in <top (required)>'
from /usr/local/lib/ruby/gems/2.1.0/gems/nerve-0.5.2/bin/nerve:63:in `each'
from /usr/local/lib/ruby/gems/2.1.0/gems/nerve-0.5.2/bin/nerve:63:in `<top (required)>'
from /usr/local/bin/nerve:23:in `load'
from /usr/local/bin/nerve:23:in `<main>'
I have read about some issues with Psych, so I tried installing 0.5.3 from GitHub (since gem install nerve installs 0.5.2), but I'm not quite sure how to do that, since I have no Ruby experience whatsoever. I did git clone git://github.com/airbnb/nerve.git, gem install bundler, and bundle install, but then got /bin/bash: nerve: command not found.
I would really appreciate pointers.
Thanks!
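For context on the error itself: older Psych releases do not define Psych::ParseError, which is why the rescue clause in bin/nerve raises a NameError. A version-tolerant sketch (parse_config is illustrative, not nerve's actual bin/nerve code) would rescue Psych::SyntaxError, the exception malformed YAML actually raises:

```ruby
require 'psych'

# Sketch of a version-tolerant YAML parse guard. Rescuing
# Psych::SyntaxError works across Psych versions, unlike the missing
# Psych::ParseError constant that triggers the NameError above.
def parse_config(yaml_text)
  Psych.safe_load(yaml_text)
rescue Psych::SyntaxError => e
  raise ArgumentError, "could not parse config: #{e.message}"
end
```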
The latest nerve version (0.3.0) is unavailable on rubygems.org. Could you publish it there?
Does Airbnb still use Synapse and Nerve for service discovery? There doesn't seem to be much activity on either project.
Is there any support for nerve getting port information dynamically from Marathon?
We are thinking of a config item that points to a list of Marathon masters (or pulls this info from Marathon's zk), plus a starting list of Marathon job ids to monitor.
Nerve would then query marathon/v1/endpoints and update zk as the jobs move around the Mesos cluster. Or it could watch zk marathon/tasks to pick up tasks and ports as they are created/deleted.
The use case is using marathon to launch docker containers in mesos, using nerve + synapse for proxy configuration.
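A rough sketch of the polling side of this idea. The /v2/apps/<id>/tasks endpoint shape follows Marathon's documented REST API, and the helper names are hypothetical; the part nerve would need is turning the task list into host/port pairs to register:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Hypothetical sketch: pull dynamically assigned host/port pairs for a
# Marathon app. fetch_tasks queries a Marathon master; extract_endpoints
# turns the JSON body into [host, ports] pairs a nerve watcher could use.
def fetch_tasks(master, app_id)
  Net::HTTP.get(URI("http://#{master}/v2/apps/#{app_id}/tasks"))
end

def extract_endpoints(tasks_json)
  JSON.parse(tasks_json).fetch('tasks', []).map { |t| [t['host'], t['ports']] }
end
```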
On Ruby 1.8, I had to add the following to nerve.gemspec in order to satisfy the JSON dependency:
gem.add_runtime_dependency "json", "~> 1.8.1"
I see that you pin Bunny to 1.1.0 exactly. I'm not sure why that's necessary, but that version is dozens of releases behind 1.6.3. There should be no major breaking changes (for some apps, no breaking changes at all), and I highly recommend you upgrade.
The Bunny change log is fairly detailed. I'm happy to answer questions if you have any.
I'm upgrading my stack from Ubuntu 18 to Ubuntu 20.
Nerve:
ruby 2.7
gem -v => 3.0.3
Bundler version 2.3.14
zk (1.10.0)
Log
2022-05-20_15:35:37.36719 I, [2022-05-20T17:35:37.367135 #104748] INFO -- Nerve::Nerve: nerve: starting up!
2022-05-20_15:35:37.36720 bundler: failed to load command: ./bin/nerve (./bin/nerve)
Has anybody installed nerve on Ubuntu 20?
Hi,
We have been using SmartStack for over a year and we love it, but we use the getyourguide fork that uses Serf, and then we forked that and are using our own branch.
I would like to avoid this fork nightmare. Are you open to a Serf reporter PR?
First, apologies for raising this question/request here.
I am looking at a Nerve implementation that reports services to the Kubernetes API server. What kinds of challenges and issues should I keep in mind while doing such an implementation? Any pointers on this topic are appreciated.
Thank You
Hi, in nerve, inappropriate dependency versioning constraints can cause risks.
Below are the dependencies and version constraints that the project is using:
alabaster==0.7.12
aniso8601==8.0.0
Babel==2.8.0
bcrypt==3.1.7
beautifulsoup4==4.9.1
bs4==0.0.1
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cryptography==3.0
decorator==4.4.2
dnspython==2.0.0
docutils==0.16
Flask==1.1.2
Flask-HTTPAuth==4.1.0
Flask-RESTful==0.3.8
html5lib==1.1
idna==2.10
imagesize==1.2.0
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
mysql-connector==2.2.9
packaging==20.4
paramiko==2.7.1
Pillow==7.2.0
psutil==5.7.2
psycopg2-binary==2.8.5
pycparser==2.20
Pygments==2.6.1
pymongo==3.11.0
PyNaCl==1.4.0
pyparsing==2.4.7
PyPDF2==1.26.0
python-nmap==0.6.1
pytz==2020.1
redis==3.5.3
reportlab==3.5.46
requests==2.24.0
simplejson==3.17.2
six==1.15.0
snowballstemmer==2.0.0
soupsieve==2.0.1
Sphinx==3.1.2
sphinx-rtd-theme==0.5.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==1.0.3
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.4
urllib3==1.25.9
validators==0.18.1
webencodings==0.5.1
Werkzeug==1.0.1
The == version constraint introduces a risk of dependency conflicts because the dependency scope is too strict.
The no-upper-bound and * constraints introduce a risk of missing-API errors because the latest version of a dependency may remove some APIs.
After further analysis, in this project,
The version constraint of dependency beautifulsoup4 can be changed to >=4.10.0,<=4.11.1.
The version constraint of dependency Jinja2 can be changed to >=2.7,<=3.1.2.
The version constraint of dependency paramiko can be changed to >=1.13.0,<=2.11.0.
The version constraint of dependency psutil can be changed to >=3.0.0,<=5.9.1.
The version constraint of dependency pymongo can be changed to >=2.4,<=4.1.1.
The version constraint of dependency python-nmap can be changed to >=0.3.4,<=0.7.1.
The version constraint of dependency redis can be changed to >=2.0.0,<=4.3.3.
The version constraint of dependency requests can be changed to >=2.4.0,<=2.15.1.
The version constraint of dependency urllib3 can be changed to >=1.9,<=1.26.9.
The version constraint of dependency validators can be changed to >=0.9,<=0.20.0.
The version constraint of dependency Werkzeug can be changed to >=0.6.1,<=2.1.2.
The above suggestions can reduce dependency conflicts as much as possible while introducing the latest versions without causing call errors in the project.
The invocation of the current project includes all the following methods.
bs4.BeautifulSoup
jinja2.Environment.get_template
paramiko.SSHClient.close paramiko.AutoAddPolicy paramiko.SSHClient paramiko.SSHClient.set_missing_host_key_policy paramiko.SSHClient.connect
psutil.net_if_addrs
pymongo.MongoClient
nmap.PortScanner
redis.ConnectionPool redis.Redis
requests.head requests.auth.HTTPBasicAuth urllib3.disable_warnings requests.post requests.put requests.get requests.options requests.delete
urllib3.disable_warnings
validators.domain
werkzeug.security.generate_password_hash werkzeug.security.check_password_hash
core.logging.logger.debug email.mime.multipart.MIMEMultipart.attach logging.getLogger.setLevel core.reports.generate_txt isinstance self.http_request paramiko.AutoAddPolicy core.parser.ConfParser.get_cfg_webhook socket.gethostname core.parser.Helper email.mime.multipart.MIMEMultipart self.contains_password_form flask.Flask.register_blueprint validators.domain ftplib.FTP.login run_rules core.port_scanner.Scanner core.redis.rds.store_json core.redis.rds.store core.security.verify_password flask_restful.Api threading.Thread.start socket.socket.close paramiko.SSHClient.connect self.r.sadd self.nmap.scan.items core.redis.rds.start_session urllib3.disable_warnings flask.request.get_json core.manager.rule_manager version.VERSION.replace.replace core.utils.Utils.is_string_email core.parser.ConfParser.get_cfg_networks core.redis.rds.store_topology core.utils.Charts.make_radar host.data.add jinja2.FileSystemLoader u_settings.get.route open core.redis.rds.create_session werkzeug.security.generate_password_hash self.netutils.is_valid_port ipaddress.ip_address threading.enumerate db.db_ports.database_ports.items core.reports.generate_csv pickle.loads resp.text.startswith self.is_file_ds_store self.utils.hash_sha1 resp.headers.get core.utils.Utils.generate_uuid core.parser.ConfParser.get_cfg_custom_ports core.utils.Utils.sev_to_human core.parser.ConfParser core.parser.ConfParser.get_cfg_usernames psycopg2.connect.close bs4.BeautifulSoup.find_all logging.StreamHandler.setLevel self.randomize_origin core.redis.rds.is_ip_blocked type smtplib.SMTP.sendmail core.redis.rds.get_vuln_data app.config.update core.parser.ConfParser.get_cfg_allow_bf core.redis.rds.store_vuln core.redis.rds.clear_session dns.resolver.query core.utils.Integration k.decode.decode self.r.dbsize core.parser.ConfParser.get_cfg_exc_networks flask.stream_with_context core.parser.ScanParser.get_module self.get_scan_progress os.remove core.parser.ConfParser.get_cfg_scan_threads os.environ.get smtplib.SMTP.login 
os.urandom self.rds.clear_session core.redis.rds.get_last_scan core.utils.Network requests.put copy.deepcopy.append flask.render_template core.utils.Integration.submit_webhook __import__ logging.StreamHandler.setFormatter core.triage.Triage.run_cmd f.write join __import__.Rule os.geteuid scanner.scan.items open.write flask.request.get_json.get format.decode smtplib.SMTP datetime.datetime.now.strftime self.r.get flask.Blueprint self.mongodb_attack k.decode.split socket.socket.connect_ex function_to_protect self.generate_str pickle.dumps format.encode resp.headers.get.lower sys.path.insert socket.socket.settimeout redis.ConnectionPool self.utils.is_string_url core.parser.ConfParser.get_cfg_max_ports requests.get requests.options self.rds.store_json core.redis.rds.get_exclusions core.parser.ScanParser self.utils.generate_uuid f.read dict core.triage.Triage.http_request paramiko.SSHClient.close core.utils.Utils.is_string_url len vulns.items flask.session.get self.r.scan_iter core.redis.rds.end_session json.dumps core.redis.rds.initialize core.utils.Charts requests.post self.is_scan_active core.redis.rds.store_sch conf.get_cfg_exc_networks.append core.utils.Network.get_primary_ip email.mime.multipart.MIMEMultipart.as_string str core.redis.rds.get_topology xml.etree.ElementTree.SubElement core.parser.ConfParser.get_cfg_netinterface core.redis.rds.get_inventory_data all redis.Redis p.get_module.lower re.findall core.register.Register flask_httpauth.HTTPBasicAuth self.store pymongo.MongoClient self.r.flushdb core.utils.Utils core.utils.Network.get_nics core.workers.start_workers requests.head core.reports.generate_xml logging.getLevelName join.keys ssl.create_default_context self.is_attack_active glob.glob mysql.connector.connect RedisManager copy.deepcopy socket.socket.sendall flask.make_response s.recv.decode csv.writer.writerow core.parser.ConfParser.get_cfg_passwords bs4.BeautifulSoup.findAll self.clear_session self.netutils.is_dns ipaddress.ip_network 
core.manager.rule_manager.values flask_restful.Api.add_resource text.encode struct.unpack_from sys.exit shlex.split core.parser.ConfParser.get_cfg_allow_inet xml.etree.ElementTree.Element any mysql.connector.connect.is_connected paramiko.SSHClient f.close flask.flash self.ssh_attack email.mime.text.MIMEText.add_header core.redis.rds.get_scan_data.items requests.delete logging.FileHandler.setLevel self.r.smembers sorted.items core.parser.ScanParser.get_cpe network.startswith self.r.delete re.match functools.wraps resp.url.startswith xml.etree.ElementTree.tostring.items schedule_domains i.attrs.get header.lower hashlib.sha1 os.path.basename subprocess.Popen flask.Flask core.utils.Integration.submit_slack datetime.datetime.now uuid.uuid4 psycopg2.connect self.mysql_attack self.generate_filename fields.append version.VERSION.replace core.parser.ConfParser.get_cfg_frequency core.parser.ConfParser.get_cfg_domains nmap.PortScanner flask.send_from_directory flask.Flask.run socket.socket.recv p.get_product.lower resp.headers.startswith flask.make_response.set_cookie core.redis.rds.store_inv core.redis.rds.get_slack_settings socket.socket self.utils.is_string_safe float flask.redirect rules.append core.redis.rds.get_ips_to_scan core.parser.ScanParser.get_product core.redis.rds.get_scan_config logging.Formatter core.redis.rds.get_scan_count self.psql_attack flask.request.form.get core.redis.rds.get_scan_progress socket.socket.connect psutil.net_if_addrs xml.etree.ElementTree.tostring generate core.parser.ScanParser.get_domain requests.auth.HTTPBasicAuth self.utils.get_datetime email.header.Header resp.headers.items core.reports.generate_html data.items flask.session.pop flask.request.values.get header.startswith smtplib.SMTP.starttls a.has_attr socket.socket.getsockname self.netutils.is_network_in_denylist core.mailer.send_email ftplib.FTP char.isdigit core.parser.SchemaParser.verify core.redis.rds.store_sca core.redis.rds.delete random.choices value.items 
core.utils.Charts.make_doughnut time.sleep threading.Thread jinja2.Environment format self.rule_match_string.items core.register.Register.scan core.redis.rds.log_attempt self.r.incr key.decode.split sorted resp.text.split.replace a.contents.split port.ip.MongoClient.list_database_names logging.getLogger.addHandler logging.StreamHandler i.name.startswith core.redis.rds.get_session_state core.redis.rds.get_email_settings logging.FileHandler flask.Blueprint.route core.parser.ConfParser.get_raw_cfg email.mime.text.MIMEText core.parser.SchemaParser core.parser.Helper.portTranslate csv.writer core.logging.logger.error mysql.connector.connect.close logging.getLogger core.triage.Triage self.utils.clear_log core.triage.Triage.string_in_headers flask.request.get_json.route self.netutils.is_network text.encode.hashlib.sha1.hexdigest get_rules re.search set bs4.BeautifulSoup self.rds.create_session threading.active_count self.ftp_attack k.decode schedule_ips templateEnv.get_template.render werkzeug.security.check_password_hash self.r.exists core.redis.rds.get_scan_data paramiko.SSHClient.set_missing_host_key_policy self.r.get.decode urllib.parse.urlparse subprocess.Popen.communicate resp.text.split smtplib.SMTP_SSL open.close os.path.exists core.logging.logger.info self.redis_attack xml.etree.ElementTree.Element.append core.redis.rds.get_vuln_by_id xml.etree.ElementTree.tostring.decode int core.triage.Triage.has_cves uuid.uuid4.str.split req.headers.get socket.getservbyport logging.FileHandler.setFormatter flask.Response email.utils.formataddr smtplib.SMTP.quit core.port_scanner.Scanner.scan r.split config.WEB_LOG.open.close self.utils.is_user_root res.items jinja2.Environment.get_template core.utils.Utils.is_version_latest core.utils.Utils.get_date self.r.set self.store_json self.nmap.scan core.redis.rds.is_session_active
@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
Setup:
6 nerve service watchers on the same instance connected to the same ZK pool
How to reproduce:
Nerve::Nerve: nerve: watcher service1 not alive; reaping and relaunching
Nerve::ServiceWatcher: nerve: stopping service watch service1
Nerve::Nerve: nerve: could not reap service1, got #<Zookeeper::Exceptions::NotConnected: Zookeeper::Exceptions::NotConnected>
Actual problem:
The problem is that start() in zookeeper.rb has no check that the ZK connection is alive before re-using it.
I saw a similar topic somewhere, but since that fix has apparently been merged, I decided to open a new issue.
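In Ruby terms, the missing guard could look something like this. This is a sketch: `ensure_connection` and the `connect` lambda are hypothetical, not nerve's actual code, though the zk gem's client does expose a connected? predicate.

```ruby
# Sketch of the missing guard: only reuse a cached connection if it still
# reports itself connected; otherwise build a fresh one. `connect` stands
# in for ZK.new(...) in start().
def ensure_connection(current, connect)
  if current && current.respond_to?(:connected?) && current.connected?
    current
  else
    connect.call
  end
end
```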
I am using a simple mysql check to register the members of a Galera Cluster.
I, [2014-08-30T09:47:26.757807 #40790] INFO -- Nerve::Reporter::Zookeeper: nerve: successfully created zk connection to x.example.com:2181,x2.example.com:2181,x3.example.com:2181/services/database
I, [2014-08-30T09:47:26.776437 #40790] INFO -- Nerve::ServiceCheck::MySQLServiceCheck: nerve: service check [email protected] initial check returned true
I, [2014-08-30T09:47:26.803240 #40790] INFO -- Nerve::ServiceWatcher: nerve: service db is now up
I, [2014-08-30T13:58:51.491719 #40790] INFO -- Nerve::ServiceCheck::MySQLServiceCheck: nerve: service check [email protected] got error #<RuntimeError: failed to connect with mysql: ERROR 1047 (08S01) at line 1: WSREP has not yet prepared node for application use
>
I, [2014-08-30T14:00:08.381207 #40790] INFO -- Nerve::ServiceCheck::MySQLServiceCheck: nerve: service check [email protected] got error #<RuntimeError: failed to connect with mysql: ERROR 1047 (08S01) at line 1: WSREP has not yet prepared node for application use
>
I, [2014-08-30T17:22:00.684380 #40790] INFO -- Nerve::ServiceCheck::MySQLServiceCheck: nerve: service [email protected] got error #<RuntimeError: failed to connect with mysql: ERROR 1047 (08S01) at line 1: WSREP has not yet prepared node for application use
>
After this the checks stop, and although the node has since recovered, it never re-registers in Zookeeper.
The installation paragraph in your README describes two ways to install nerve: via gem install nerve or via bundler. Both use rubygems.org as their backend, but it only has a 0.0.1 version available.
The tags in the repository go up to 0.2.1, so something seems to be missing.
I got the following error when running Nerve under Ruby 1.8:
nerve/lib/nerve/service_watcher/tcp.rb:53: uninitialized constant Nerve::ServiceCheck::CHECKS (NameError)
from nerve/lib/nerve/service_watcher.rb:1:in `require'
from nerve/lib/nerve/service_watcher.rb:1
from nerve/lib/nerve.rb:10:in `require'
from nerve/lib/nerve.rb:10
from bin/nerve:6:in `require'
from bin/nerve:6
I hacked around it by defining CHECKS = {} in lib/nerve/service_watcher/base.rb, but there might be a cleaner way?
I am noticing that the service_watcher sleep loop is not configured to check for $EXIT during long sleep loops, as per:
https://github.com/airbnb/nerve/blob/master/lib/nerve/service_watcher.rb#L72
This is causing me to experience timeouts when stopping the application. I currently have a single listener with a check_interval set to 10. I am able to reproduce this with the following method:
root@i-ef5077bc:~# for i in seq{1..10}; do (sv stop nerve > /dev/null && sv start nerve > /dev/null ) ; echo status is $?; sleep 1; done
status is 0
status is 1
status is 0
status is 1
status is 0
status is 1
status is 0
status is 1
status is 0
status is 1
This patch in this PR fixes the issue for me:
#53
root@i-ef5077bc:/opt/smartstack/nerve/src# for i in seq{1..10}; do (sv stop nerve > /dev/null && sv start nerve > /dev/null ) ; echo status is $?; sleep 1; done
status is 0
status is 0
status is 0
status is 0
status is 0
status is 0
status is 0
status is 0
status is 0
status is 0
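For reference, the interruptible-sleep idea behind that fix can be sketched as follows. This is illustrative, not the PR's actual code: sleep in short slices and bail out as soon as the exit flag flips, so stopping the process never has to wait out a full check_interval. $EXIT mirrors the flag nerve sets on shutdown; the slice size is an arbitrary choice.

```ruby
# Sleep up to `total` seconds, but return early as soon as $EXIT is set.
def responsive_sleep(total, slice = 0.1)
  slept = 0.0
  until slept >= total || $EXIT
    sleep(slice)
    slept += slice
  end
  slept
end
```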
Are there any plans for a more dynamic configuration? I have looked at PR #16 and its discussion, and it doesn't seem like there is movement in this direction. I've been contemplating storing the service configuration in the ZK node itself; nerve (and synapse) could watch the list of services and add new ones as they appear.
Does this sound reasonable? Is there another direction you guys are going? Currently I have a hacked-up version of your chef recipe modifying the config based on info in ZK.
Discovered while doing a general browse of some logfiles:
I, [2014-01-03T20:29:35.954755 #2900] INFO -- Nerve::Nerve: nerve: starting up!
I, [2014-01-03T20:29:35.954903 #2900] INFO -- Nerve::Nerve: nerve: starting run
I, [2014-01-03T20:29:35.955280 #2900] INFO -- Nerve::ServiceWatcher: nerve: starting service watch ministry-worker
I, [2014-01-03T20:29:35.955376 #2900] INFO -- Nerve::Reporter: nerve: waiting to connect to zookeeper at 1.zookeeper.qa1.climate.net:2181,0.zookeeper.qa1.climate.net:2181,2.zookeeper.qa1.climate.net:2181/nerve/services/qa1/ministry-worker/default
I, [2014-01-03T20:29:36.015104 #2900] INFO -- Nerve::Reporter: nerve: successfully created zk connection to 1.zookeeper.qa1.climate.net:2181,0.zookeeper.qa1.climate.net:2181,2.zookeeper.qa1.climate.net:2181/nerve/services/qa1/ministry-worker/default
I, [2014-01-03T20:29:36.020927 #2900] INFO -- Nerve::ServiceCheck::HttpServiceCheck: nerve: service check http-i-f76498d9.us-east-1.qa1.climate.net:13050/ initial check returned false
I, [2014-01-03T20:30:08.431404 #2900] INFO -- Nerve::ServiceCheck::HttpServiceCheck: nerve: service check http-i-f76498d9.us-east-1.qa1.climate.net:13050/ transitions to up after 3 successes
I, [2014-01-03T20:30:08.439403 #2900] INFO -- Nerve::ServiceWatcher: nerve: service ministry-worker is now up
E, [2014-01-03T20:44:13.056572 #2900] ERROR -- Nerve::ServiceWatcher: nerve: error in service watcher ministry-worker: response for meth: :exists, args: [425, "/", nil, nil], not received within 30 seconds
I, [2014-01-03T20:44:13.056680 #2900] INFO -- Nerve::ServiceWatcher: nerve: ending service watch ministry-worker
...and then nerve apparently stops probing the service state for as long as the nerve process remains up. Digging into this, the issue seems to be a hard-coded timeout in the zookeeper gem, in continuation.rb:
@mutex.synchronize do
  while true
    now = Time.now
    break if @rval or @error or (now > time_to_stop)

    deadline = time_to_stop.to_f - now.to_f
    @cond.wait(deadline)
  end

  if (now > time_to_stop) and !@rval and !@error
    raise Exceptions::ContinuationTimeoutError, "response for meth: #{meth.inspect}, args: #{@args.inspect}, not received within #{OPERATION_TIMEOUT} seconds"
  end

  case @error
  when nil
    # ok, nothing to see here, carry on
  when :shutdown
    raise Exceptions::NotConnected, "the connection is shutting down"
  when ZOO_EXPIRED_SESSION_STATE
    raise Exceptions::SessionExpired, "connection has expired"
  else
    raise Exceptions::NotConnected, "connection state is #{STATE_NAMES[@error]}"
  end

  case @rval.length
  when 1
    return @rval.first
  else
    return @rval
  end
end
The good news, such as it is, is that if this happens while the node is healthy, nerve "fails open": the node will continue to be registered in ZK, and synapse clients will still see it and route traffic to it if it passes /ping. The bad news is that if this happens while the node is unhealthy, it will stay that way forever from synapse's perspective, until the nerve process is manually restarted.
I am playing with a code change that simply has the nerve process abort if any of the service watchers die in this fashion (we run nerve under supervise here, so it would automatically be restarted in that case), but I would like some feedback on what you think the proper way to handle this is before submitting a pull request.
Hi,
I am thinking of using SmartStack for service discovery. Nowadays I use monit to keep my services alive and do health checks.
Since I would be using nerve to do the same health checks, I wondered whether it makes sense to configure the same health check in two places and run each one twice, or whether nerve could report both to my alarm system and to zk for the same failed health check.
So, does it make sense to have more than one reporter registered at the same time?
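A hypothetical sketch of what fanning one check result out to several reporters could look like, say zookeeper plus an alerting hook. It assumes each reporter responds to report_up/report_down, in the spirit of nerve's Reporter subclasses; MultiReporter itself is not part of nerve.

```ruby
# Fan a single health-check result out to multiple reporters.
class MultiReporter
  def initialize(reporters)
    @reporters = reporters
  end

  def report_up
    @reporters.each(&:report_up)
  end

  def report_down
    @reporters.each(&:report_down)
  end
end
```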
Wanted to hear your thoughts on this: we have two DCs running, and in an ideal world they both run as individual pods, which means nerve/synapse are local to a DC; that works out well.
Now on top of this we are wondering whether we can use SmartStack for cross-DC discovery when no available services are found in your own DC.
Did you guys think about this? Do you face this issue at Yelp or Airbnb? @jolynch @igor47