microsoft / applicationinsights-python
Azure Monitor Distro for OpenTelemetry Python
License: MIT License
I have an RSS feed in my Django app that uses django.contrib.syndication.views.Feed, but when Application Insights is running in the app I get the following error when visiting the feed URL. Django version is 1.11.13, Python 2.7.12.
If I remove the middleware line for insights the feed works again.
Request Method: GET
Request URL: http://localhost:8000/latest/news/
Django Version: 1.11.13
Exception Type: AttributeError
Exception Value: 'LatestNewsFeed' object has no attribute '__name__'
Exception Location: /usr/local/lib/python2.7/dist-packages/applicationinsights/django/middleware.py in process_view, line 175
Python Executable: /usr/bin/python
Python Version: 2.7.12
Python Path: ['/code', '/usr/local/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
There seems to be no documentation for setting up operation id. It would be really helpful to have some with examples for setting up correlation.
tc = TelemetryClient(instrumentation_key=instrumentation_key)
tc.context.operation.id = <operation_id>
tc.track_trace('Test trace')
tc.flush()
In TelemetryChannel.py, line 104:

for key, value in local_context.properties:
    if key not in properties:
        properties[key] = value

Python complains 'too many values to unpack', because the loop iterates the dict directly instead of over local_context.properties.items().
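A minimal stdlib reproduction of the bug's likely fix (the values here are illustrative, not the SDK's code): iterating a dict directly yields only keys, so unpacking two names raises the error above, while iterating .items() yields (key, value) pairs.

```python
# Sketch of the corrected merge loop. Iterating a dict directly yields keys
# only, so "for key, value in d" raises "too many values to unpack";
# d.items() yields (key, value) pairs instead.
local_properties = {'region': 'westus', 'role': 'api'}
properties = {'role': 'worker'}

for key, value in local_properties.items():  # .items() is the fix
    if key not in properties:
        properties[key] = value  # only fill in keys not already set

print(properties)  # {'role': 'worker', 'region': 'westus'}
```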
To help diagnose server performance issues, it would be great to add support for Application Insights' dependency tracking feature.
General:
Django Middleware:
These changes are all prototyped here: twschiller@fdff3d8.
For those wondering how to track their SQL/cache dependencies when using Django:
wrap the CursorWrapper.execute and CursorWrapper.executemany methods; for django_redis, wrap the RedisCache.get method, which includes both the network call and the deserialization.
I'm trying to set up a LoggingHandler with a few context properties, but this is impossible currently:
import logging
from applicationinsights import channel
from applicationinsights.logging import LoggingHandler
channel = channel.TelemetryChannel()
channel.context.properties['foo'] = 'bar'
handler = LoggingHandler(key, telemetry_channel=channel)
logging.basicConfig(handlers=[handler], format='%(message)s', level=logging.DEBUG)
logging.info("foo") # trace does not include foo property
logging.shutdown()
The reason this is not working is that LoggingHandler constructs a new TelemetryClient instance which then ignores the existing context from the passed channel object:
class TelemetryClient(object):
    def __init__(self, instrumentation_key, telemetry_channel=None):
        ...
        self._context = channel.TelemetryContext()
        self._context.instrumentation_key = instrumentation_key
        self._channel = telemetry_channel or channel.TelemetryChannel()

    @property
    def context(self):
        return self._context
Shouldn't it instead be:

class TelemetryClient(object):
    def __init__(self, instrumentation_key, telemetry_channel=None):
        self._channel = telemetry_channel or channel.TelemetryChannel()
        self.context.instrumentation_key = instrumentation_key

    @property
    def context(self):
        return self._channel.context

?
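A quick stand-in sketch (simplified stub classes, not the real applicationinsights types) showing that the proposed change would let channel-level properties survive:

```python
# Simplified stubs sketching the proposed fix: the client exposes the
# channel's context instead of creating its own, so properties set on the
# channel before the handler is constructed are not lost.
class TelemetryContext:
    def __init__(self):
        self.instrumentation_key = None
        self.properties = {}

class TelemetryChannel:
    def __init__(self):
        self.context = TelemetryContext()

class TelemetryClient:
    def __init__(self, instrumentation_key, telemetry_channel=None):
        self._channel = telemetry_channel or TelemetryChannel()
        self.context.instrumentation_key = instrumentation_key

    @property
    def context(self):
        return self._channel.context  # shared with the channel

chan = TelemetryChannel()
chan.context.properties['foo'] = 'bar'
client = TelemetryClient('key', telemetry_channel=chan)
print(client.context.properties)  # {'foo': 'bar'} — channel properties survive
```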
Add support for synchronous mode for batch processing [1] in the Django middleware.
Right now, a Django management command can't push logs when its processing time is shorter than send_interval.
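Until such support exists, one stdlib-only workaround sketch for short-lived commands (ClientStub here is a stand-in for the SDK's TelemetryClient; with the real client the registration would simply be atexit.register(tc.flush)):

```python
import atexit

# Flush explicitly at interpreter exit so queued telemetry is not lost when
# the process ends before send_interval elapses. ClientStub is illustrative.
class ClientStub:
    def __init__(self):
        self.queue = ['management-command-log']
        self.sent = []

    def flush(self):
        self.sent.extend(self.queue)
        self.queue.clear()

tc = ClientStub()
atexit.register(tc.flush)  # runs even if the command finishes immediately

tc.flush()                 # called eagerly here so the effect is visible
print(tc.sent)             # ['management-command-log']
```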
Hi,
When my response has a render() method (Django's TemplateResponse), it throws the following exception:
ApplicationInsightsMiddleware.process_template_response didn't return an HttpResponse object. It returned None instead.
If I return the response object in the process_template_response method of the middleware, it works fine.
Thanks
When I add the django middleware for application insights I get the following error:
django.core.exceptions.ImproperlyConfigured: WSGI application 'projectile.wsgi.application' could not be loaded; Error importing module: 'No module named django'
As soon as I comment out the middleware the exception goes away. Any ideas?
The module test.test_support exists only in Python 2; it was renamed to test.support in Python 3. I am shipping the following patch in the Debian package for this module:
Description: Use test.support if Python 3 to enable test suite on Python 3
Author: Iain R. Learmonth <[email protected]>
Last-Update: 2016-09-23
---
--- a/tests/applicationinsights_tests/TestTelemetryClient.py
+++ b/tests/applicationinsights_tests/TestTelemetryClient.py
@@ -1,10 +1,15 @@
+import sys
+import os
+import os.path
import unittest
import inspect
import json
-from test import test_support
+if sys.version_info < (3, 0):
+ from test import test_support
+else:
+ from test import support as test_support
-import sys, os, os.path
rootDirectory = os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', '..')
if rootDirectory not in sys.path:
sys.path.append(rootDirectory)
On PyPI, the following sentence appears:
Python 2.7 and Python 3.4 are currently supported by this module.
https://pypi.python.org/pypi/applicationinsights/0.8.0
whereas the README says
Python >=2.7 and Python >=3.4 are currently supported by this module.
https://github.com/Microsoft/ApplicationInsights-Python
The former sentence is rather misleading; I initially interpreted it as implying that ApplicationInsights-Python is very out of date!
I've been investigating Azure/azure-cli#4649
The summary is that our outgoing internet proxy is blocking telemetry requests, and this is causing the upload processes to hang around in an infinite loop every time the az CLI is executed.
I've traced this problem down to https://github.com/Microsoft/ApplicationInsights-Python/blob/master/applicationinsights/channel/SenderBase.py#L123-L146
Any time a request fails, it is re-queued for another attempt. There is no code which ever causes this to give up, which is especially problematic when a request will never succeed.
Any thoughts on how this could/should be resolved?
Perhaps a counter could be put on the queue items, so an item gives up after a certain number of failures?
This looks like it could be tricky to implement, as I can't see a field on the queue item envelope where this metadata could live.
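A stdlib-only sketch of that counter idea (QueueItem and the loop below are illustrative stand-ins, not SenderBase's real types): each payload carries a retry count, and the sender abandons it once it has failed a bounded number of times instead of re-queueing forever.

```python
import collections

# Each payload is wrapped in a record carrying its retry count.
QueueItem = collections.namedtuple('QueueItem', ['payload', 'retries'])

MAX_RETRIES = 3
dropped = []

def requeue_or_drop(queue, item):
    """Re-queue a failed item, or abandon it once its retries are exhausted."""
    if item.retries + 1 >= MAX_RETRIES:
        dropped.append(item.payload)  # give up instead of retrying forever
    else:
        queue.append(QueueItem(item.payload, item.retries + 1))

queue = collections.deque([QueueItem('envelope-1', 0)])
while queue:  # simulate a sender whose requests always fail
    requeue_or_drop(queue, queue.popleft())

print(dropped)  # ['envelope-1'] — the item is abandoned after MAX_RETRIES attempts
```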
The documentation for channel configuration is incorrect:
from applicationinsights import TelemetryClient
tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
# flush telemetry every 30 seconds (assuming we don't hit max_queue_item_count first)
tc.channel.sender.send_interval_in_milliseconds = 30 * 1000
The client created by TelemetryClient is a synchronous client, so there's no notion of send_interval_in_milliseconds. Additionally, send_interval_in_milliseconds is not a property of SynchronousClient.
Version: 0.11.3
Is it possible to pass custom data to Application Insights along with logging events? It seems like the parameters that get passed are fixed at the moment?
I'd be interested to get this functionality added and can try to submit a pull request if it is not already possible somehow.
The Django insights middleware crashes in process_response when another middleware short-circuits while processing the request, because process_request hasn't been run.
For example, CommonMiddleware will short-circuit to affix a missing trailing slash, responding with 301 Moved Permanently, if the client accesses /admin when /admin/ is the registered URL:
MIDDLEWARE_CLASSES = [
    # ...
    'django.middleware.common.CommonMiddleware',
    # ...
    'applicationinsights.django.ApplicationInsightsMiddleware',
]
The reasonable solution is probably to check for the presence of the appinsights attribute in the process_response method. If the attribute is missing, the method can submit the event without the duration field set.
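A sketch of that guard, with a stub class standing in for Django's HttpRequest (this is illustrative, not the middleware's actual code):

```python
import time

class RequestStub:
    """Stand-in for Django's HttpRequest; no 'appinsights' attribute set."""
    pass

def process_response(request, response, events):
    data = getattr(request, 'appinsights', None)
    if data is None:
        # A prior middleware short-circuited, so process_request never ran:
        # submit the event without a duration field instead of crashing.
        events.append({'response': response, 'duration': None})
    else:
        events.append({'response': response,
                       'duration': time.time() - data['start_time']})
    return response

events = []
short_circuited = RequestStub()          # CommonMiddleware's 301 redirect case
process_response(short_circuited, 301, events)
print(events[0]['duration'])             # None — no crash, duration omitted
```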
Running something fairly basic on Python 2.7 which works on Windows but fails on Linux. Any ideas?
Basically, "Done" is never printed.
from applicationinsights import TelemetryClient
telemetry = ''
tc = TelemetryClient(telemetry)
tc.context.device.model = 'TestModel'
tc.context.application.id = 'SimpleTelemetryTest'
tc.channel.sender.send_interval_in_milliseconds = 30 * 1000
tc.channel.sender.max_queue_item_count = 10
r = tc.track_metric('linesSent', 1)
print r
tc.flush()
print 'Done'
In the documentation you set up the telemetry context in the following way and pass an id to the application object:
from applicationinsights import TelemetryClient
tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
tc.context.application.id = 'My application'
tc.context.application.ver = '1.2.3'
But since only keys defined in _defaults are written by the Utils._write_complex_object(...) method, and the id property is not defined there, it will not be transmitted:
class Application(object):
    """Data contract class for type Application.
    """
    _defaults = collections.OrderedDict([
        ('ai.application.ver', None)
    ])
    ...

    def write(self):
        """Writes the contents of this object and returns the content as a dict object.

        Returns:
            (dict). the object that represents the same data as the current instance.
        """
        return _write_complex_object(self._defaults, self._values)

...

def _write_complex_object(defaults, values):
    output = collections.OrderedDict()
    for key in defaults.keys():
        default = defaults[key]
        if key in values:
Is the documentation wrong or is the id property missing?
The function process_template_response must return an HttpResponse; the code below returns None.
Error Point : https://github.com/Microsoft/ApplicationInsights-Python/blob/master/applicationinsights/django/middleware.py#L215
Ref django code : https://github.com/django/django/blob/master/django/core/handlers/base.py#L150
Here's what you can change:
def process_template_response(self, request, response):
    if hasattr(request, 'appinsights') and hasattr(response, 'template_name'):
        data = request.appinsights.request
        data.properties['template_name'] = response.template_name
    return response  # FIX POINT
Thanks.
My understanding is that the following is all that is required to cause requests, traces (exceptions) and custom events (logs) to be sent to Application Insights:
applicationinsights.flask.ext.AppInsights(flask.Flask("foo"))
I'm not completely clear on whether I need to be manually calling .flush() or not - please advise.
Either way, I've tried:
app = flask.Flask("foo")
app_insights = applicationinsights.flask.ext.AppInsights(app)
app.logger.info("test")
app_insights.flush()
Requests are logged but no custom events appear in app insights. No errors.
Proposed:
The following examples from the README do not seem to work:
There are no errors but no data is being logged on Application Insights.
The Django middleware uses get_short_name for the user id, which is generally the first name of the user:
Since first names aren't unique, this breaks user tracking.
Here is the documentation guidance for selecting user IDs: https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-send-user-context#choosing-user-ids
By default, it probably makes sense to use the primary key for the user and expose a configuration option that allows a different ID-generation function, i.e., a function with signature user -> string. The user could then configure it based on their needs (e.g., username, a hash of the user id, etc.).
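A sketch of that configuration hook (names are illustrative; UserStub stands in for Django's user model):

```python
class UserStub:
    """Minimal stand-in for a Django user model."""
    def __init__(self, pk, first_name):
        self.pk = pk
        self.first_name = first_name

def default_user_id(user):
    # Primary key: stable and unique, unlike get_short_name().
    return str(user.pk)

def resolve_user_id(user, id_func=default_user_id):
    """Apply the configured user -> string function."""
    return id_func(user)

alice = UserStub(pk=42, first_name='Alice')
print(resolve_user_id(alice))                          # '42'
print(resolve_user_id(alice, lambda u: u.first_name))  # 'Alice' (opt-in override)
```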
With the latest release it is possible to set context.properties for the logging handler, but it still does not allow setting other context attributes (user, operation, etc.) passed via channel.context.
This is because the channel write code does:

local_context = context or self._context
for key, value in self._write_tags(local_context):
    tags[key] = value

This ignores the attributes set on the channel itself. It should be:

for prop_context in [self._context, context]:
    for key, value in self._write_tags(prop_context):
        tags[key] = value
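A runnable sketch of that merge, with plain dicts standing in for TelemetryContext and a trivial _write_tags (not the SDK's implementation): the channel's own tags are written first, then overlaid by the per-call context.

```python
def _write_tags(context):
    # Stand-in: the real method serializes context attributes to tag pairs.
    return context.items()

channel_context = {'ai.user.id': 'u-42'}     # set on the channel itself
call_context = {'ai.operation.id': 'op-1'}   # passed per telemetry item

tags = {}
for prop_context in [channel_context, call_context]:
    if prop_context:  # guard for a None per-call context
        for key, value in _write_tags(prop_context):
            tags[key] = value

print(tags)  # both channel-level and per-call tags survive
```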
We would like to set alerts on exception thresholds for a Python application with Application Insights.
On Application Insights, exceptions get logged as "browser exception". Is this correct?
Also, the data gets logged as "synthetic" or "bot" data and is therefore ignored by the alerts. I think this is a bug.
When the connection to App Insights is down, the client gets a stack overflow exception because of recursive calls. Easily reproduced by calling track_trace more than 500 (max_queue_size) times.
Happens in applicationinsights==0.11.1
See below the exception:
Fatal Python error: Cannot recover from stack overflow.
Current thread 0x00007fc3839fb700 (most recent call first):
File "/home/miniconda3/env/myenv/lib/python3.5/socket.py", line 93 in _intenum_converter
File "/home/miniconda3/env/myenv/lib/python3.5/socket.py", line 423 in family
File "/home/miniconda3/env/myenv/lib/python3.5/ssl.py", line 725 in init
File "/home/miniconda3/env/myenv/lib/python3.5/ssl.py", line 385 in wrap_socket
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 1261 in connect
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 877 in send
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 934 in _send_output
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 1103 in endheaders
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 1152 in _send_request
File "/home/miniconda3/env/myenv/lib/python3.5/http/client.py", line 1107 in request
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 1254 in do_open
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 1297 in https_open
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 444 in _call_chain
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 484 in _open
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 466 in open
File "/home/miniconda3/env/myenv/lib/python3.5/urllib/request.py", line 163 in urlopen
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/SenderBase.py", line 134 in send
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/SynchronousQueue.py", line 39 in flush
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/QueueBase.py", line 74 in put
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/SenderBase.py", line 146 in send
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/SynchronousQueue.py", line 39 in flush
File "/home/miniconda3/env/myenv/lib/python3.5/site-packages/applicationinsights/channel/QueueBase.py", line 74 in put
... (the SenderBase.send -> SynchronousQueue.flush -> QueueBase.put frames repeat until the stack overflows)
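One way to break the mutual recursion (a sketch with stand-in classes, not the SDK's code) is to drain the queue in a flat loop instead of having put() call flush() re-entrantly, so a failing sender never grows the call stack:

```python
import collections

class Sender:
    """Stand-in sender whose requests always fail, like a downed connection."""
    def __init__(self):
        self.attempts = 0

    def send(self, batch):
        self.attempts += 1
        raise IOError('connection down')

queue = collections.deque(['a', 'b', 'c'])
sender = Sender()
dead_letters = []

# Flat loop instead of send -> flush -> put recursion: each failure is
# handled in place (here dead-lettered; a bounded re-queue would also work).
while queue:
    batch = [queue.popleft()]
    try:
        sender.send(batch)
    except IOError:
        dead_letters.extend(batch)

print(len(dead_letters))  # 3 — all items handled without deep recursion
```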
Python 3.x does not have the urllib2 package (its functionality was merged into urllib), so the check used to choose between urllib and urllib2 does not work as expected.
Application Insights defaults the device type to "Bot" since the library does not provide a value. For servers, the value should be "PC".
We're still working on trying to get the events recognized as server events.
References:
[1] Schema https://github.com/Microsoft/ApplicationInsights-dotnet-server/blob/75373e57dcf8d646c54ee188461c373f2cc98939/Schema/PublicSchema/ContextTagKeys.bond
[2] Device configuration for the official windows server library: https://github.com/Microsoft/ApplicationInsights-dotnet-server/blob/master/Src/WindowsServer/WindowsServer.Shared/Implementation/DeviceContextReader.cs
Context objects (like Device, Operation, etc.) should be revamped somewhat to read/write from a shared dictionary rather than being merged in from the TelemetryChannel. There are two reasons why we want this:
1. TelemetryContext objects need to be easily copied. This is important to properly support operations, where we'll want a copy of the TelemetryClient per operation, accessible via thread-local storage (which then allows actual dependency tracking).
2. dict.update is far more performant than the current copy loop that occurs for each telemetry item.
Hi, I'm trying to use the library to track an event.
I just tried the following code with my instrumentation key:
from applicationinsights import TelemetryClient
tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
tc.track_event('Test event')
tc.flush()
I don't see anything in the Azure portal.
I'm not sure whether the data was sent to the portal.
All official AI SDKs allow setting arbitrary properties on the context, either through TelemetryInitializers or through the TelemetryClient/TelemetryContext API.
E.g. in Node:
appInsights.defaultClient.commonProperties = {
    "Scope": process.env.APPINSIGHTS_APP_CONTEXT_SCOPE
};
Additionally, I'd suggest following the .NET SDK approach (whatever is decided in microsoft/ApplicationInsights-dotnet#630).
It would be nice to be able to construct a TelemetryClient with just a key, rather than having to remember to set it via the context.
tc = TelemetryClient('12345678-1234-1234-1234567890')
Since the first parameter can currently only be a TelemetryChannel, it seems easy enough to test for that type.
Maybe there's some reason why it makes sense to disallow this, but I can't think of it myself.
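A sketch of that isinstance dispatch with stub classes (illustrative, not the SDK's constructor): accept either a key string or a channel as the first argument.

```python
class TelemetryChannel:
    """Stand-in for the SDK's channel type."""
    pass

class TelemetryClient:
    def __init__(self, key_or_channel):
        # Dispatch on type, since a key is a string and a channel is not.
        if isinstance(key_or_channel, TelemetryChannel):
            self.channel = key_or_channel
            self.instrumentation_key = None
        elif isinstance(key_or_channel, str):
            self.channel = TelemetryChannel()
            self.instrumentation_key = key_or_channel
        else:
            raise TypeError('expected an instrumentation key or a channel')

tc = TelemetryClient('12345678-1234-1234-1234567890')
print(tc.instrumentation_key)  # the key, no context juggling needed
```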
Hi,
I have a Flask app on Azure, and I opened a new generic Application Insights resource (on a different subscription).
I'm trying to send exceptions with this method:
import logging
from applicationinsights.logging import enable

enable(<YOUR INSTRUMENTATION KEY GOES HERE>)
raise Exception('Some exception')
logging.error('This is a message')
but I can't see anything on the dashboard.
I also tried the other approach:
from applicationinsights import TelemetryClient

tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
try:
    raise Exception('blah')
except:
    tc.track_exception()
but still nothing.
I don't get any errors from my Python script either.
I'm using the vanilla defaults with Django, installed as shown in the docs. And every half hour, I get this trace in App Insight:
AI: Server telemetry channel was not initialized. So persistent storage is turned off. You need to call ServerTelemetryChannel.Initialize(). Currently monitoring will continue but if telemetry cannot be sent it will be dropped.
As well, this trace also shows up occasionally:
AI: Performance counter is not available in the web app supported list. Counter is \Memory\Available Bytes
I've poked around the portal but haven't seen anything that would address this. Getting these messages makes it seem like something is improperly configured, but like I said, I'm just using the vanilla Django defaults installed per instruction. Please advise. Thanks.
Hi, I tried using this library to send logging, but it never shows up in the Azure portal.
However, if I invoke LoggingHandler.flush() explicitly, the logging does show up in the portal.
The question is: how do I make logging send to the server without explicitly invoking flush()?
From the code, it looks like there is a 1 minute timer in AsynchronousSender.py, but I tried waiting for 1 minute too, and it still doesn't work.
Thanks
The PyPI page is not rendering the long description correctly. This normally happens because of a markup error somewhere in the description, and PyPI does not help.
The distutils docs suggest a way to check for markup errors.
The idea is to replace and port all data collectors to https://github.com/census-instrumentation/opencensus-python
The OpenCensus repo will have an exporter that calls into the API developed in this repository.
This SDK should write log data around telemetry submission so that we can debug issues like #67 and other customer complaints about data not showing up in the portal.
Currently there doesn't seem to be an easy way to use applicationinsights.exceptions.enable() while also sending context properties for those unhandled exceptions. Looking at the code, it seems that one currently has to do something like the following (not tested):
from applicationinsights import channel, TelemetryClient
from applicationinsights.exceptions import enable
channel = channel.TelemetryChannel()
client = TelemetryClient("my-key", channel)
client.context.properties["foo"] = "bar"
enable("my-key", telemetry_channel=channel) # must be kw arg
I think #56 would help for this case as well so that the user doesn't have to deal with TelemetryChannel objects.
Given bug #102, I think the above code currently wouldn't work anyway, since the track_exception call that happens in the background doesn't get properties, which triggers the bug where the context properties are ignored as well.
Is there a way to set the timestamp when tracking a custom event?
This is not in the context of a request, so start_time is not being set in that context.
AI supports this; setting a property named "timestamp" doesn't do it.
In the constructor of TelemetryClient we have:

def __init__(self, instrumentation_key, telemetry_channel=None):
    if instrumentation_key:
        if isinstance(instrumentation_key, channel.TelemetryChannel):
            telemetry_channel = instrumentation_key
            instrumentation_key = None
    else:
        raise Exception('Instrumentation key was required but not provided')
This doesn't seem right. If the instrumentation_key argument is set to a TelemetryChannel, then the instrumentation key is missing and we should error out, shouldn't we? Is it ever valid to set instrumentation_key to None? What's going on here?
I've been trying out WSGIApplication and I'm not seeing any request messages being tracked. If I replace the call to track_request with track_event and make a string out of all the contents, then it comes through fine.
It may just be that AppInsights isn't running well right now (neither is Azure storage...), but simple events seem to be coming through eventually. Is there possibly something failing during serialization, or not matching what the service expects and being rejected?
I would like to install all of my dependencies through conda.
(base) >conda install applicationinsights
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- applicationinsights
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/win-64
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/pro/win-64
- https://repo.anaconda.com/pkgs/pro/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Currently the only conda package I found was this one, but it wasn't published by Microsoft.
https://anaconda.org/clinicalgraphics/applicationinsights/files
Right now, only the tar.gz file is being distributed on PyPI for applicationinsights, which isn't that much of an issue, but it does add some time to installing the package.
This package should be compatible with the wheel format, considering it doesn't appear to have any C dependencies and it is compatible with both Python 2 and 3. As a result, you should only need to generate a universal wheel, and then everyone (on all systems) will be able to install applicationinsights from the wheel without any extra work.
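If the maintainers take this route, the standard setuptools/wheel mechanism (an assumption about the project's packaging, not something in its repo today) is a setup.cfg flag marking the wheel as universal:

```ini
; setup.cfg — marks the wheel as universal (pure Python, 2 and 3 compatible)
[bdist_wheel]
universal = 1
```

Building then becomes `python setup.py sdist bdist_wheel` (with the wheel package installed), which produces both the tar.gz and a py2.py3-none-any wheel to upload.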
Scenario
I am writing several Django based APIs that utilize Application Insights for logging. These APIs do not have Django's built-in authentication, so the authentication middleware is not used.
The problem
This package does not take this into account, so an exception is thrown when processing the request because request.user is not set.
Solution
I made the necessary code change to allow for this behavior. Here is the pull request: #58
Add support for Correlation headers [1] in the Django Middleware.
Basic support would involve reading the Request-Id header on the incoming request and outputting it as operation_Id in the telemetry.
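A sketch of that basic support (the function and tag dict are illustrative; Django exposes request headers in request.META with an HTTP_ prefix, and the ai.operation.id tag key comes from the schema in [1]):

```python
def apply_correlation(meta, tags):
    """Copy an incoming Request-Id header into the telemetry's operation id."""
    request_id = meta.get('HTTP_REQUEST_ID')  # Django's META key for Request-Id
    if request_id:
        tags['ai.operation.id'] = request_id
    return tags

tags = apply_correlation({'HTTP_REQUEST_ID': '|abc123.'}, {})
print(tags)  # {'ai.operation.id': '|abc123.'}
```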
The API docs are at https://microsoft.github.io/ApplicationInsights-Python/ but they are not discoverable, I had to craft the URL myself. It would be good if a link to that page is added somewhere, e.g. in the README.
I've been trying the unhandled exception filter and it doesn't seem to be sending events.
My code is roughly:
>>> from applicationinsights.exceptions import enable
>>> enable("my-key-goes-here")
>>> 1 / 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
There's a visible pause before the exception message appears, which I assume is the flush() call in the handler, but the messages never appear in the portal. (Adding trace messages also shows that the handler is being called, but I didn't look any deeper than that.)
By contrast, if I create a client with the same key and call tc.track_exception()
within an except block, the message appears in the portal in less than a minute.
(Using Python 3.4.2)
Hi Expert,
We encountered Application Insights SDK problems in our environments.
We run our product against the Application Insights service through the ApplicationInsights-Python SDK.
I discovered three types of errors from this SDK which we cannot handle.
Could you help clarify these messages? Thank you.
1.)
app_insights.py:134 - ERROR - sending to app insights https://dc.services.visualstudio.com/v2/track err: <class 'socket.timeout'> The read operation timed out
2.)
app_insights.py:134 - ERROR - sending to app insights https://dc.services.visualstudio.com/v2/track err: <class 'urllib.error.URLError'> <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)>
3.)
app_insights.py:134 - ERROR - sending to app insights https://dc.services.visualstudio.com/v2/track err: <class 'ConnectionResetError'> [WinError 10054] An existing connection was forcibly closed by the remote host
If you have any concern, please let me know. Thank you.
Using applicationinsights==0.11.1
Docker container running on Windows Nano.
When my python code runs outside of a docker container, I can use tc.flush() and everything works fine and messages are received by AI.
Take the same Python app and run it inside a Windows docker container, tc.flush() causes the app to hang. No messages are received by AI.
Commenting out tc.flush() causes the Python app to run fine inside the docker container, but no messages are received by AI. E.g.
tc.track_event('Solver start', { 'jobid': jobid })
#tc.flush()