ibm / pixiedust-facebook-analysis

A Jupyter notebook that uses the Watson Visual Recognition and Natural Language Understanding services to enrich Facebook Analytics and uses Cognos Dashboard Embedded to explore and visualize the results in Watson Studio

Home Page: https://developer.ibm.com/patterns/discover-hidden-facebook-usage-insights/

License: Apache License 2.0

Languages: Jupyter Notebook 100.00%

Topics: watson-visual-recognition watson-natural-language jupyter-notebook data-science ibmcode ibm-developer-technology-cognitive notebook watson-studio enriched-data watson-services

pixiedust-facebook-analysis's Introduction

Uncover insights from Facebook data with Watson services

WARNING: This repository is no longer maintained.

This repository will not be updated. The repository will be kept available in read-only mode.

In this code pattern, we will use a Jupyter notebook with Watson Studio to glean insights from a vast body of unstructured data. We'll start with data exported from Facebook Analytics. We'll use Watson’s Natural Language Understanding and Visual Recognition to enrich the data.

We'll use the enriched data to answer questions like:

What emotion is most prevalent in the posts with the highest engagement?

Which sentiment has the higher average engagement score?

What are the top keywords, entities, or images, measured by total reach?

These types of insights are especially beneficial for marketing analysts who are interested in understanding and improving brand perception, product performance, customer satisfaction, and ways to engage their audiences.

It is important to note that this code pattern is meant to be used as a guided experiment, rather than an application with one set output. The standard Facebook Analytics export features text from posts, articles, and thumbnails, along with standard Facebook performance metrics such as likes, shares, and impressions. This unstructured content is then enriched with Watson APIs to extract keywords, entities, sentiment, and emotion.
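For reference, the enrichment call looks roughly like the following. This is a minimal sketch using the ibm-watson Python SDK; the API key, URL, and sample text are placeholders, and the exact features the notebook requests may differ.

from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, KeywordsOptions, EntitiesOptions, SentimentOptions, EmotionOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials -- use the ones from your own NLU service instance.
nlu = NaturalLanguageUnderstandingV1(
    version='2019-07-12',
    authenticator=IAMAuthenticator('YOUR_NLU_APIKEY'))
nlu.set_service_url('YOUR_NLU_URL')

post_text = "Watson Studio makes it easy to collaborate on data science projects."

# Ask NLU for keywords, entities, sentiment, and emotion in one call.
result = nlu.analyze(
    text=post_text,
    features=Features(
        keywords=KeywordsOptions(limit=5),
        entities=EntitiesOptions(limit=5),
        sentiment=SentimentOptions(),
        emotion=EmotionOptions())).get_result()

print(result['sentiment']['document']['label'])  # e.g. 'positive'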

After the data is enriched with Watson APIs, we'll use the Cognos Dashboard Embedded service to add a dashboard to the project. Using the dashboard, you can explore the results and build your own sophisticated visualizations to communicate the insights you've discovered.

This code pattern provides mock Facebook data, a notebook, and several pre-built visualizations to jump-start your discovery of hidden insights.

When the reader has completed this code pattern, they will understand how to:

  • Read external data into a Jupyter notebook via Object Storage and pandas DataFrames.
  • Use a Jupyter notebook and Watson APIs to enrich unstructured data.
  • Write data from a pandas DataFrame in a Jupyter Notebook out to a file in Object Storage.
  • Visualize and explore the enriched data.

Flow

architecture

  1. A CSV file exported from Facebook Analytics is added to Object Storage.
  2. Generated code makes the file accessible as a pandas DataFrame.
  3. The data is enriched with Watson Natural Language Understanding.
  4. The data is enriched with Watson Visual Recognition.
  5. Use a dashboard to visualize the enriched data and uncover hidden insights.

Included components

  • IBM Watson Studio: Analyze data using RStudio, Jupyter, and Python in a configured, collaborative environment that includes IBM value-adds, such as managed Spark.
  • IBM Watson Natural Language Understanding: Natural language processing for advanced text analysis.
  • IBM Watson Visual Recognition: Understand image content.
  • IBM Cognos Dashboard Embedded: Lets you, the developer, painlessly add end-to-end data visualization capabilities to your application.
  • IBM Cloud Object Storage: An IBM Cloud service that provides an unstructured cloud data store to build and deliver cost-effective apps and services with high reliability and fast speed to market.
  • Jupyter Notebooks: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.
  • pandas: A Python library providing high-performance, easy-to-use data structures.
  • Beautiful Soup: A Python library for pulling data out of HTML and XML files.

Steps

Follow these steps to set up and run this code pattern. The steps are described in detail below.

  1. Clone the repo
  2. Create a new Watson Studio project
  3. Add services to the project
  4. Create the notebook in Watson Studio
  5. Add credentials
  6. Add the CSV file
  7. Run the notebook
  8. Add a dashboard to the project
  9. Analyze the results

1. Clone the repo

Clone the pixiedust-facebook-analysis repo locally. In a terminal, run the following command:

git clone https://github.com/IBM/pixiedust-facebook-analysis.git

2. Create a new Watson Studio project

  • Log into IBM's Watson Studio. Once in, you'll land on the dashboard.

  • Create a new project by clicking New project + and then click on Create an empty project.

  • Enter a project name.

  • Choose an existing Object Storage instance or create a new one.

  • Click Create.

  • Upon successful project creation, you are taken to the project Overview tab. Take note of the Assets and Settings tabs; we'll be using them to associate our project with any external assets (datasets and notebooks) and any IBM Cloud services.

    studio-project-overview

3. Add services to the project

  • Associate the project with Watson services. To create an instance of each service, go to the Settings tab in the new project and scroll down to Associated Services. Click Add service and select Watson from the drop-down menu. Add the service using the free Lite plan. Repeat for each of the services used in this pattern:

    • Natural Language Understanding
    • Visual Recognition (optional)
  • Once your services are created, copy the credentials and save them for later. You will use them in your Jupyter notebook.

    • Use the upper-left menu, and select Services > My Services.
    • Use the 3-dot actions menu to select Manage in IBM Cloud for each service.
    • Copy each API key and URL to use in the notebook.

4. Create the notebook in Watson Studio

  • Go back to your Watson Studio project by using your browser's back button, or use the upper-left menu, select Projects, and open your project.

  • Select the Overview tab, click Add to project + on the top right and choose the Notebook asset type.

    add_notebook.png

  • Fill in the following information:

    • Select the From URL tab. [1]
    • Enter a Name for the notebook and optionally a description. [2]
    • For Select runtime, select the Default Python 3.6 Free option. [3]
    • Under Notebook URL, provide the following URL [4]:
    https://raw.githubusercontent.com/IBM/pixiedust-facebook-analysis/master/notebooks/pixiedust_facebook_analysis.ipynb

    new_notebook

  • Click the Create notebook button.

    TIP: Your notebook will appear in the Notebooks section of the Assets tab.

5. Add credentials

Find the notebook cell after 1.5. Add Service Credentials From IBM Cloud for Watson Services.

Set the API key and URL for each service.

add_credentials

Note: This cell is marked as a hidden_cell because it will contain sensitive credentials.
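A minimal sketch of what that credentials cell might contain. The variable names and region URLs here are illustrative; use whatever names the notebook's cell defines and paste in the values you copied in step 3.

# @hidden_cell
# Illustrative placeholder values -- replace with your own service credentials.
nlu_apikey = 'YOUR_NLU_APIKEY'
nlu_url = 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com'
vr_apikey = 'YOUR_VISUAL_RECOGNITION_APIKEY'  # optional
vr_url = 'https://api.us-south.visual-recognition.watson.cloud.ibm.com'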

6. Add the CSV file

Add the CSV file to the notebook

Use Find and Add Data (look for the 01/00 icon) and its Files tab. From there you can click browse and add a .csv file from your computer.

add_file

Note: If you don't have your own data, you can use our example by cloning this git repo. Look in the data directory.

Insert to code

Find the notebook cell after 2.1 Load data from Object Storage. Place your cursor after # **Insert to code > Insert pandas DataFrame**. Make sure this cell is selected before inserting code.

For the file that you added above (under the 01/00 Files tab), open the Insert to code drop-down menu and select pandas DataFrame.

insert_to_code

Note: This cell is marked as a hidden_cell because it contains sensitive credentials.

inserted_pandas

Fix-up df variable name

The inserted code includes a generated method with credentials and then calls the generated method to set a variable with a name like df_data_1. If you do additional inserts, the method can be re-used and the variable will change (e.g. df_data_2).

Later in the notebook, we set df = df_data_1. So you might need to change the variable name df_data_1 to match your inserted code, or vice versa, as sketched below.
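For example, if the insert generated df_data_2, one quick fix is an alias (a sketch; adjust to whatever name was actually generated):

# The insert created df_data_2, but later cells expect df_data_1:
df_data_1 = df_data_2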

Add file credentials

We want to write the enriched file to the same container that we used above. So now we'll use the same file drop-down to insert credentials. We'll use them later when we write out the enriched CSV file.

After the df setup, there is a cell to enter the file credentials. Place your cursor after the # insert credentials for file - Change to credentials_1 line. Make sure this cell is selected before inserting credentials.

Use the CSV file's drop-down menu again. This time select Insert Credentials.

insert_file_credentials

Note: This cell is marked as a hidden_cell because it contains sensitive credentials.

Fix-up credentials variable name

The inserted code includes a dictionary of credentials assigned to a variable with a name like credentials_1. It may have a different name (e.g. credentials_2). Rename it or reassign it if needed, since the notebook code assumes it will be credentials_1.
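For example, if the insert generated credentials_2 (again, a sketch; use the name that was actually generated):

# Reassign so later cells that reference credentials_1 still work:
credentials_1 = credentials_2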

7. Run the notebook

When a notebook is executed, each code cell in the notebook is executed, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  • Blank: the cell has never been executed.
  • A number: the relative order in which this code step was executed.
  • A *: the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  • One cell at a time.
    • Select the cell, and then press the Play button in the toolbar.
  • Batch mode, in sequential order.
    • From the Cell menu bar, there are several options available. For example, you can Run All cells in your notebook, or you can Run All Below, which will start executing from the first cell under the currently selected cell and then continue executing all cells that follow.
  • At a scheduled time.
    • Press the Schedule button located in the top right section of your notebook panel. Here you can schedule your notebook to be executed once at some future time, or repeatedly at your specified interval.

8. Add a dashboard to the project

Add the enriched data as a project data asset

  • Go to the Assets tab in your Watson Studio project and click the 01/00 (Find and add data) icon.
  • Select the enriched_example_facebook_data.csv file and use the 3-dot pull-down to select Add as data asset.

Associate the project with a Dashboard service

  • Go to the Settings tab in the new project and scroll down to Associated Services.
  • Click Add service and select Dashboard from the drop-down menu.
  • Create the service using the free Lite plan.

Load the provided dashboard.json file

  • Click the Add to project + button and select Dashboard.
  • Select the From file tab and use the Select file button to open the file dashboards/dashboard.json from your local repo.
  • Select your Cognos Dashboard Embedded service from the list.
  • Click Create.
  • If you are asked to re-link the data set, select your enriched_example_facebook_data.csv asset.

9. Analyze the results

If you walk through the cells, you will see that we demonstrated how to do the following:

  • Install external libraries from PyPI
  • Create clients to connect to Watson cognitive services
  • Load data from a local CSV file to a pandas DataFrame (via Object Storage)
  • Do some data manipulation with pandas
  • Use Beautiful Soup (a sketch follows this list)
  • Use Natural Language Understanding
  • Use Visual Recognition
  • Save the enriched data in a CSV file in Object Storage
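The Beautiful Soup step can be pictured with a sketch like this. It is not the notebook's exact code; the helper name and the og:description choice are illustrative.

import requests
from bs4 import BeautifulSoup

def fetch_description(url):
    """Fetch a linked page and pull out a short description for NLU to enrich."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return None  # unreachable or bad-certificate links get skipped
    soup = BeautifulSoup(html, 'html.parser')
    meta = soup.find('meta', attrs={'property': 'og:description'})
    if meta and meta.get('content'):
        return meta['content']
    return soup.title.string if soup.title else None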

When you try the dashboard, you will see:

  • How to add a dashboard to a Watson Studio project
  • How to import a dashboard JSON file
  • Linking a dashboard to data saved in Cloud Object Storage
  • An example with tabs and a variety of charts
  • A dashboard tool that you can use to explore your data and create new visualizations to share

Sample output

The provided dashboard uses four tabs to show four simple charts:

  • Emotion
  • Sentiment
  • Entities
  • Keywords

The enriched data contains emotions, sentiment, entities, and keywords that were added using Natural Language Understanding to process the posts, links, and thumbnails. Combining the enrichment with the metrics from Facebook gives us a huge number of options for what we could show on the dashboard. The dashboard editor also gives you great flexibility in how you arrange your dashboard and visualize your data. The example demonstrates the following:

  • A word-cloud showing the keywords sized by total impressions and using color to show the sentiment

    keywords.png

  • A pie chart showing total reach by emotion

    emotion.png

  • A stacked bar chart showing likes, shares, and comments by post sentiment

    sentiment.png

  • A bar chart with a line overlay, showing total impressions and paid impressions by mentioned entity

    entities.png

License

This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.

Apache License FAQ

pixiedust-facebook-analysis's People

Contributors

dolph, imgbot[bot], jamaya2001, jessietheace, kant, kyokonishito, ljbennett62, markstur, rhagarty, sanjeevghimire, scottdangelo, stevemar, stevemart


pixiedust-facebook-analysis's Issues

thumbnail descriptions and image URLs fail with certificate errors

During the steps in the cell preceded by this description:

Pull thumbnail descriptions and image URLs using requests and beautiful soup.

We get an exception due to certificate errors:

Skipping url http://ibm.co/1VdSQDU: HTTPSConnectionPool(host='www.ibmchefwatson.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
Skipping url http://ibm.co/1VdSQDU: HTTPSConnectionPool(host='www.ibmchefwatson.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
Skipping url http://ibm.co/1FdAhfn: HTTPSConnectionPool(host='asmarterplanet.com', port=443): Max retries exceeded with url: /blog/2015/09/watson-developer-cloud-new-platform-startup-innovation.html (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
Skipping url http://ibm.co/1h2b0Jr: HTTPSConnectionPool(host='asmarterplanet.com', port=443): Max retries exceeded with url: /blog/2015/09/watson-developer-cloud-new-platform-startup-innovation.html (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))

Ethics?

Does this tool allow one to analyze and understand things that can be traced back to an individual Facebook user?

Change storage to IBM Cloud Object Storage

Looks like notebooks on DSX are being pushed to use COS over Object Storage (Swift API).

To comply, we'd have to change the cells under Enrichment is now COMPLETE! and replace the put_file function cell with:

from botocore.client import Config
import ibm_boto3

cos = ibm_boto3.client(service_name='s3',
    ibm_api_key_id=credentials['IBM_API_KEY_ID'],
    ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
    ibm_auth_endpoint=credentials['IBM_AUTH_ENDPOINT'],
    config=Config(signature_version='oauth'),
    endpoint_url=credentials['ENDPOINT'])

and the cell below to:

# Build the enriched file name from the original filename.
localfilename = 'enriched_' + credentials['FILE']

# Write a CSV file from the enriched pandas DataFrame.
df.to_csv(localfilename, index=False)

# Use the COS client with the credentials to put the file in Object Storage.
cos.upload_file(localfilename, Bucket=credentials['BUCKET'],Key=localfilename)

We could keep both the COS and OS (Swift API) versions and let the user comment out either one.

The skipping messages are not pretty

The "skipping" messages might concern people. Maybe do a one-liner to show how many were skipped and hide the details in a commented out print for maintainers (might need some day).

Insert pandas DataFrame code - df_data_2

When inserting the pandas DataFrame code for the example Facebook data, the DataFrame is given the name df_data_2, while the next cell tries to set df = df_data_1.

The same happens with the inserted credentials code: sometimes it is inserted with a number other than 1.

Unicode Decode error in pd.read_csv(body)

In cell 2.1 we load a pandas DataFrame from our object store.
Using the data/example_input/example_facebook_data.csv file with the new IBM Cloud Object Storage and Python 3.5, we get an error [1]:

UnicodeDecodeError
...<snip>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 46: invalid continuation byte

The problem does not occur with Python 2.
The error is fixed with:

df_data_1 = pd.read_csv(body, encoding='latin-1')

I'll file a bug with the Cloud Storage team, since this is generated code; it should be tested so that a common CSV file in latin-1 encoding does not break things.
Meanwhile, we can document the workaround.
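A sketch of the workaround with a UTF-8-first fallback. Here client stands for the generated COS client, the bucket and key are placeholders, and the streaming body can't be rewound, so it is re-fetched for the retry.

try:
    df_data_1 = pd.read_csv(body)
except UnicodeDecodeError:
    # Re-fetch the object and retry; latin-1 decodes any byte sequence.
    body = client.get_object(Bucket='my-bucket', Key='example_facebook_data.csv')['Body']
    df_data_1 = pd.read_csv(body, encoding='latin-1')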

[1]

---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_with_dtype()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._string_convert()

pandas/_libs/parsers.pyx in pandas._libs.parsers._string_box_utf8()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 46: invalid continuation byte

During handling of the above exception, another exception occurred:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-12-76d663a0cce8> in <module>()
     21 if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
     22 
---> 23 df_data_1 = pd.read_csv(body)
     24 df_data_1.head()
     25 

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    703                     skip_blank_lines=skip_blank_lines)
    704 
--> 705         return _read(filepath_or_buffer, kwds)
    706 
    707     parser_f.__name__ = name

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    449 
    450     try:
--> 451         data = parser.read(nrows)
    452     finally:
    453         parser.close()

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
   1063                 raise ValueError('skipfooter not supported for iteration')
   1064 
-> 1065         ret = self._engine.read(nrows)
   1066 
   1067         if self.options.get('as_recarray'):

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
   1826     def read(self, nrows=None):
   1827         try:
-> 1828             data = self._reader.read(nrows)
   1829         except StopIteration:
   1830             if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_column_data()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_with_dtype()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._string_convert()

pandas/_libs/parsers.pyx in pandas._libs.parsers._string_box_utf8()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 46: invalid continuation byte

Failed to read CSV file

When executing the generated code to read the CSV file, it fails.

The code is:

# insert pandas DataFrame

import sys
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3

def __iter__(self): return 0

# @hidden_cell
# The following code accesses a file in your IBM Cloud Object Storage. It includes your credentials.
# You might want to remove those credentials before you share your notebook.
client_c0fe4b0610144a049d60e22 = ibm_boto3.client(service_name='s3',
    ibm_api_key_id='',
    ibm_auth_endpoint="https://iam.ng.bluemix.net/oidc/token",
    config=Config(signature_version='oauth'),
    endpoint_url='https://s3-api.us-geo.objectstorage.service.networklayer.com')

body = client_c0fe4b0610144a049d60e22.get_object(Bucket='leee5ac7151a0774a31aae95eada44af3e0', Key='example_facebook_data.csv')['Body']

# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType(__iter__, body)

df_data_2 = pd.read_csv(body)
df_data_2.head()

The error messages are:

ParserError Traceback (most recent call last)
in ()
21 if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
22
---> 23 df_data_2 = pd.read_csv(body)
24 df_data_2.head()
25

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
703 skip_blank_lines=skip_blank_lines)
704
--> 705 return _read(filepath_or_buffer, kwds)
706
707 parser_f.__name__ = name

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
449
450 try:
--> 451 data = parser.read(nrows)
452 finally:
453 parser.close()

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
1063 raise ValueError('skipfooter not supported for iteration')
1064
-> 1065 ret = self._engine.read(nrows)
1066
1067 if self.options.get('as_recarray'):

/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
1826 def read(self, nrows=None):
1827 try:
-> 1828 data = self._reader.read(nrows)
1829 except StopIteration:
1830 if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 36, saw 2

Add AutoAI/WML Component

  • Cluster data into high/medium/low engagement (a starting-point sketch follows this list)

  • Predict engagement class based on new features from NLU/WVR
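A possible starting point for the clustering piece. This is a sketch only; the engagement column name is an assumption about the enriched CSV, so substitute the actual metric.

import pandas as pd

df = pd.read_csv('enriched_example_facebook_data.csv')  # the notebook's enriched output

# 'Lifetime Engaged Users' is an assumed column name; bin it into three
# equal-sized classes that a downstream AutoAI/WML model could predict.
df['engagement_class'] = pd.qcut(
    df['Lifetime Engaged Users'], q=3, labels=['low', 'medium', 'high'])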

image links return 404

In the cell where Visual Recognition classifies thumbnail images, there are two bad links:


Skipping url http://www.ibm.com/watson/assets/img/cloud/hero_cloud.jpg: Error: Unknown error, Code: 400
Skipping url http://www-03.ibm.com/press/img/ibmpos_blu_feed.jpg: Error: Unknown error, Code: 400

These appear to be broken IBM web URLs, so I don't think there is much that can be done in the notebook.

Use Python visualization alternatives to pixiedust

It has been requested that we show other visualizations instead of relying on PixieDust for all the hard work. Perhaps Brunel, or possibly just whatever is most popular at present.

Note: This would be a significant change to the published pattern (notice the title of the repo), so it is more than just adding a few new notebook cells/charts.
