
kubernetes-cloud-mysql-backup is a Docker Image based on Alpine Linux that automatically performs backups of MySQL databases, before uploading them to Amazon S3 or Google Cloud Storage. It is designed to be run as a Cronjob in Kubernetes for scheduled database backups. It also features Slack integration.

Home Page: https://benjamin.maynard.io

License: Apache License 2.0


kubernetes-cloud-mysql-backup

kubernetes-cloud-mysql-backup is a container image based on Alpine Linux. This container is designed to run in Kubernetes as a cronjob to perform automatic backups of MySQL databases to Amazon S3 or Google Cloud Storage. It was created to meet my requirements for regular and automatic database backups. Having started with a relatively basic feature set, it is gradually growing to add more and more features.

Currently, kubernetes-cloud-mysql-backup supports backing up MySQL databases. It can back up multiple MySQL databases from a single database host. When triggered, a full dump of each configured database is performed using the mysqldump command. The backup(s) are then uploaded to an Amazon S3 or Google Cloud Storage bucket. kubernetes-cloud-mysql-backup features Slack integration, and can post messages into a channel detailing whether the backup(s) were successful.
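
The per-database flow described above can be sketched in shell. This is a simplified illustration, not the image's actual backup script; the variable names follow the environment variables documented below, and it assumes mysqldump and the AWS CLI are available with valid credentials.

```shell
set -euo pipefail

# Split the comma-separated database list and dump each one in turn.
for DB in $(echo "${TARGET_DATABASE_NAMES}" | tr ',' ' '); do
    TIMESTAMP=""
    if [ -n "${BACKUP_TIMESTAMP:-}" ]; then
        TIMESTAMP=$(date +"${BACKUP_TIMESTAMP}")
    fi
    DUMP_FILE="/tmp/${DB}${TIMESTAMP}.sql"

    # Full dump of one database via mysqldump.
    mysqldump -h "${TARGET_DATABASE_HOST}" -P "${TARGET_DATABASE_PORT:-3306}" \
        -u "${TARGET_DATABASE_USER}" -p"${TARGET_DATABASE_PASSWORD}" \
        "${DB}" > "${DUMP_FILE}"

    # Upload the dump to the configured S3 path.
    aws s3 cp "${DUMP_FILE}" \
        "s3://${AWS_BUCKET_NAME}${AWS_BUCKET_BACKUP_PATH}/${DB}${TIMESTAMP}.sql"
done
```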

Over time, kubernetes-cloud-mysql-backup will be updated to support more features and functionality.

All changes are captured in the changelog, which adheres to Semantic Versioning.

Environment Variables

The table below lists all of the environment variables that are configurable for kubernetes-cloud-mysql-backup.

| Environment Variable | Purpose |
| -------------------- | ------- |
| BACKUP_CREATE_DATABASE_STATEMENT | (Optional - default false) Adds the CREATE DATABASE and USE statements to the MySQL backup by explicitly specifying the --databases flag (see here). |
| BACKUP_ADDITIONAL_PARAMS | (Optional) Additional parameters to pass to the mysqldump command. |
| BACKUP_PROVIDER | (Optional) The backend to use for storing the MySQL backups. Supported options are aws (default) or gcp. |
| AWS_ACCESS_KEY_ID | (Required for AWS Backend) AWS IAM Access Key ID. |
| AWS_SECRET_ACCESS_KEY | (Required for AWS Backend) AWS IAM Secret Access Key. Should have very limited IAM permissions (see below for an example) and should be configured using a Secret in Kubernetes. |
| AWS_DEFAULT_REGION | (Required for AWS Backend) Region of the S3 Bucket (e.g. eu-west-2). |
| AWS_BUCKET_NAME | (Required for AWS Backend) The name of the S3 bucket. |
| AWS_BUCKET_BACKUP_PATH | (Required for AWS Backend) Path the backup file should be saved to in S3, e.g. /database/myblog/backups. Do not include a trailing / or the filename. |
| AWS_S3_ENDPOINT | (Optional) Endpoint URL of an S3-compatible storage service (e.g. MinIO). |
| GCP_GCLOUD_AUTH | (Required for GCP Backend) Base64-encoded service account key exported as JSON. Example of how to generate: base64 ~/service-key.json |
| GCP_BUCKET_NAME | (Required for GCP Backend) The name of the GCS bucket. |
| GCP_BUCKET_BACKUP_PATH | (Required for GCP Backend) Path the backup file should be saved to in GCS, e.g. /database/myblog/backups. Do not include a trailing / or the filename. |
| TARGET_DATABASE_HOST | (Required) Hostname or IP address of the MySQL host. |
| TARGET_DATABASE_PORT | (Optional - default 3306) Port MySQL is listening on. |
| TARGET_DATABASE_NAMES | (Required unless TARGET_ALL_DATABASES is true) Names of the databases to dump, comma separated (e.g. database1,database2). |
| TARGET_ALL_DATABASES | (Optional - default false) Set to true to ignore TARGET_DATABASE_NAMES and dump all non-system databases. |
| TARGET_DATABASE_USER | (Required) Username to authenticate to the database with. |
| TARGET_DATABASE_PASSWORD | (Required) Password to authenticate to the database with. Should be configured using a Secret in Kubernetes. |
| BACKUP_TIMESTAMP | (Optional) Date string to append to the backup filename (in date format). Leave unset if using S3 Versioning and a date stamp is not required. |
| BACKUP_COMPRESS | (Optional - default false) Enable or disable gzip compression of the backup (true/false). |
| BACKUP_COMPRESS_LEVEL | (Optional - default 9) The gzip level used for compression. |
| AGE_PUBLIC_KEY | (Optional) Public key used to encrypt the backup with FiloSottile/age. Leave blank to disable backup encryption. |
| SLACK_ENABLED | (Optional - default false) Enable or disable the Slack integration (true/false). |
| SLACK_USERNAME | (Optional - default kubernetes-cloud-mysql-backup) Username to use for the Slack integration. |
| SLACK_CHANNEL | (Required if Slack enabled) Slack channel the WebHook is configured for. |
| SLACK_PROXY | (Optional) Proxy URL if Slack is behind a proxy. |
| SLACK_WEBHOOK_URL | (Required if Slack enabled) The Slack WebHook URL to post to. Should be configured using a Secret in Kubernetes. |
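
To illustrate what BACKUP_COMPRESS and BACKUP_COMPRESS_LEVEL imply, the snippet below compresses a stand-in dump file with gzip at the configured level. This is a sketch of the concept, not the image's internal code; the file names are placeholders.

```shell
# Stand-in for a real dump file.
echo 'CREATE TABLE example (id INT);' > /tmp/example_dump.sql

# BACKUP_COMPRESS=true with BACKUP_COMPRESS_LEVEL=9 corresponds to
# compressing at gzip's maximum level before upload.
BACKUP_COMPRESS_LEVEL=9
gzip -c "-${BACKUP_COMPRESS_LEVEL}" /tmp/example_dump.sql > /tmp/example_dump.sql.gz

# The compressed file decompresses back to the original dump.
gunzip -c /tmp/example_dump.sql.gz
```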

Slack Integration

kubernetes-cloud-mysql-backup supports posting into Slack after each backup job completes. The message posted into the Slack Channel varies as detailed below:

  • If the backup job is SUCCESSFUL: A generic message will be posted into the Slack Channel detailing that all database backups successfully completed.
  • If the backup job is UNSUCCESSFUL: A message will be posted into the Slack Channel with a detailed error message for each database that failed.

In order to configure kubernetes-cloud-mysql-backup to post messages into Slack, you need to create an Incoming WebHook. Once generated, you can configure kubernetes-cloud-mysql-backup using the environment variables detailed above.
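
An Incoming WebHook accepts a simple JSON payload over HTTPS, so you can verify your WebHook works before deploying. This is a generic manual test with curl; the channel, username and message text are examples, and SLACK_WEBHOOK_URL must be set to your own WebHook URL.

```shell
# Post a test message to the Slack Incoming WebHook.
curl -X POST \
    -H 'Content-type: application/json' \
    --data '{"channel": "#chatops", "username": "kubernetes-cloud-mysql-backup", "text": "All database backups successfully completed."}' \
    "${SLACK_WEBHOOK_URL}"
```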

S3 Backend Configuration

The subheadings below detail how to configure kubernetes-cloud-mysql-backup to back up to an Amazon S3 backend.

S3 - Configuring the S3 Bucket & AWS IAM User

By default, kubernetes-cloud-mysql-backup performs a backup to the same path, with the same filename each time it runs. It therefore assumes that you have Versioning enabled on your S3 Bucket. A typical setup would involve S3 Versioning, with a Lifecycle Policy.
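
Such a setup could be created with the AWS CLI as follows. This is a one-off configuration run locally, not something the container does; the bucket name and the 30-day noncurrent-version retention are example values.

```shell
# Enable Versioning on the bucket.
aws s3api put-bucket-versioning \
    --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled

# Example Lifecycle Policy: delete superseded backup versions after 30 days.
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-old-backup-versions",
        "Status": "Enabled",
        "Filter": {},
        "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
      }]
    }'
```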

If a timestamp is required on the backup file name, the BACKUP_TIMESTAMP Environment Variable can be set.
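
BACKUP_TIMESTAMP is interpreted as a date(1) format string, so any strftime-style format works. For example, the value used in the manifests below produces a suffix such as _2024_01_31:

```shell
# The format string from the example CronJob manifests.
BACKUP_TIMESTAMP="_%Y_%m_%d"

# Prints something like: _2024_01_31
date +"${BACKUP_TIMESTAMP}"
```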

An IAM User should be created, with API Credentials. An example Policy to attach to the IAM User (for a minimal permissions set) is as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<BUCKET NAME>"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<BUCKET NAME>/*"
        }
    ]
}
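
Before deploying the CronJob, it can be worth sanity-checking that the IAM user's credentials actually allow listing the bucket and writing an object. The bucket name and path below are placeholders matching the policy above.

```shell
# Use the IAM user's API credentials.
export AWS_ACCESS_KEY_ID="<Your Access Key>"
export AWS_SECRET_ACCESS_KEY="<Your Secret Access Key>"

# s3:ListBucket permission check.
aws s3 ls "s3://<BUCKET NAME>"

# s3:PutObject permission check (writes a small test object from stdin).
echo "test" | aws s3 cp - "s3://<BUCKET NAME>/database/myblog/backups/permissions-check.txt"
```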

S3 - Example Kubernetes Cronjob

An example of how to schedule this container in Kubernetes as a CronJob is below. This would configure a database backup to run each day at 01:00. The AWS Secret Access Key, Target Database Password and Slack Webhook URL are stored in Kubernetes Secrets (note that Secret data values must be base64 encoded).

apiVersion: v1
kind: Secret
metadata:
  name: my-database-backup
type: Opaque
data:
  aws_secret_access_key: <Base64 encoded AWS Secret Access Key>
  database_password: <Base64 encoded Database Password>
  slack_webhook_url: <Base64 encoded Slack WebHook URL>
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-database-backup
spec:
  schedule: "0 01 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-database-backup
            image: ghcr.io/benjamin-maynard/kubernetes-cloud-mysql-backup:v2.6.0
            imagePullPolicy: Always
            env:
              - name: AWS_ACCESS_KEY_ID
                value: "<Your Access Key>"
              - name: AWS_SECRET_ACCESS_KEY
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: aws_secret_access_key
              - name: AWS_DEFAULT_REGION
                value: "<Your S3 Bucket Region>"
              - name: AWS_BUCKET_NAME
                value: "<Your S3 Bucket Name>"
              - name: AWS_BUCKET_BACKUP_PATH
                value: "<Your S3 Bucket Backup Path>"
              - name: TARGET_DATABASE_HOST
                value: "<Your Target Database Host>"
              - name: TARGET_DATABASE_PORT
                value: "<Your Target Database Port>"
              - name: TARGET_DATABASE_NAMES
                value: "<Your Target Database Name(s)>"
              - name: TARGET_DATABASE_USER
                value: "<Your Target Database Username>"
              - name: TARGET_DATABASE_PASSWORD
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: database_password
              - name: BACKUP_TIMESTAMP
                value: "_%Y_%m_%d"
              - name: SLACK_ENABLED
                value: "<true/false>"
              - name: SLACK_CHANNEL
                value: "#chatops"
              - name: SLACK_WEBHOOK_URL
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: slack_webhook_url
          restartPolicy: Never
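
Once the manifest above is applied, a one-off run can be triggered to verify the backup works without waiting for the schedule. The manifest filename is a placeholder; the resource names match the example above.

```shell
# Apply the Secret and CronJob.
kubectl apply -f my-database-backup.yaml

# Trigger a one-off Job from the CronJob and follow its logs.
kubectl create job --from=cronjob/my-database-backup my-database-backup-manual
kubectl logs -f job/my-database-backup-manual
```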

GCS Backend Configuration

The subheadings below detail how to configure kubernetes-cloud-mysql-backup to back up to a Google GCS backend.

GCS - Configuring the Service Account

By default, kubernetes-cloud-mysql-backup performs a backup to the same path, with the same filename each time it runs. It therefore assumes that you have Object Versioning enabled on your GCS Bucket. A typical setup would involve GCS Object Versioning, with Object Lifecycle Management configured.
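
Such a setup could be created with gsutil as follows. This is a one-off configuration run locally, not something the container does; the bucket name and the five-version retention are example values.

```shell
# Enable Object Versioning on the bucket.
gsutil versioning set on gs://my-backup-bucket

# Example lifecycle rule: keep only the 5 most recent noncurrent versions.
cat > lifecycle.json <<'EOF'
{
  "rule": [{
    "action": {"type": "Delete"},
    "condition": {"isLive": false, "numNewerVersions": 5}
  }]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-backup-bucket
```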

If a timestamp is required on the backup file name, the BACKUP_TIMESTAMP Environment Variable can be set.

In order to back up to a GCS Bucket, you must create a Service Account in Google Cloud Platform that has the necessary permissions to write to the destination bucket (for example the Storage Object Creator role).
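
One way to create such a Service Account is sketched below; the project ID, account name and bucket name are placeholders, and the role is scoped to the destination bucket only.

```shell
# Create a dedicated service account for the backup job.
gcloud iam service-accounts create db-backup-writer \
    --project my-project \
    --display-name "kubernetes-cloud-mysql-backup writer"

# Grant it Storage Object Creator on the destination bucket only.
gsutil iam ch \
    serviceAccount:db-backup-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://my-backup-bucket
```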

Once created, you must create a key for the Service Account in JSON format. This key should then be base64 encoded and set in the GCP_GCLOUD_AUTH environment variable. For example, to encode ~/service-key.json you would run base64 ~/service-key.json in your terminal and set the output as the GCP_GCLOUD_AUTH environment variable.
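
The value must decode back to the original JSON, which is the first thing to verify if the container reports that it cannot read the key. A quick sanity check of the encoding, using a throwaway stand-in file (the filename and contents are placeholders):

```shell
# Create a throwaway stand-in for the real service account key.
echo '{"type": "service_account"}' > /tmp/service-key.json

# Encode it; tr strips the line wrapping some base64 implementations add,
# which would otherwise corrupt the value when set as an env var.
GCP_GCLOUD_AUTH=$(base64 /tmp/service-key.json | tr -d '\n')

# Decoding must yield the original JSON.
echo "${GCP_GCLOUD_AUTH}" | base64 -d
```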

GCS - Example Kubernetes Cronjob

An example of how to schedule this container in Kubernetes as a CronJob is below. This would configure a database backup to run each day at 01:00. The GCP Service Account Key, Target Database Password and Slack Webhook URL are stored in Kubernetes Secrets (note that Secret data values must be base64 encoded).

apiVersion: v1
kind: Secret
metadata:
  name: my-database-backup
type: Opaque
data:
  gcp_gcloud_auth: <Base64 encoded Service Account Key>
  database_password: <Base64 encoded Database Password>
  slack_webhook_url: <Base64 encoded Slack WebHook URL>
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-database-backup
spec:
  schedule: "0 01 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-database-backup
            image: ghcr.io/benjamin-maynard/kubernetes-cloud-mysql-backup:v2.6.0
            imagePullPolicy: Always
            env:
              - name: GCP_GCLOUD_AUTH
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: gcp_gcloud_auth
              - name: BACKUP_PROVIDER
                value: "gcp"
              - name: GCP_BUCKET_NAME
                value: "<Your GCS Bucket Name>"                
              - name: GCP_BUCKET_BACKUP_PATH
                value: "<Your GCS Bucket Backup Path>"
              - name: TARGET_DATABASE_HOST
                value: "<Your Target Database Host>"
              - name: TARGET_DATABASE_PORT
                value: "<Your Target Database Port>"
              - name: TARGET_DATABASE_NAMES
                value: "<Your Target Database Name(s)>"
              - name: TARGET_DATABASE_USER
                value: "<Your Target Database Username>"
              - name: TARGET_DATABASE_PASSWORD
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: database_password
              - name: BACKUP_TIMESTAMP
                value: "_%Y_%m_%d"
              - name: SLACK_ENABLED
                value: "<true/false>"
              - name: SLACK_CHANNEL
                value: "#chatops"
              - name: SLACK_WEBHOOK_URL
                valueFrom:
                   secretKeyRef:
                     name: my-database-backup
                     key: slack_webhook_url
          restartPolicy: Never

kubernetes-cloud-mysql-backup's People

Contributors

adamdecaf, benjamin-maynard, cablespaghetti, jasondew, mwienk, rayl15, sreesanpd


kubernetes-cloud-mysql-backup's Issues

Library "six" not installed

I am getting the following error message when running the container:

Database backup failed to upload for *** at 26-11-2020 23:30:08. Error: Traceback (most recent call last):
  File "/usr/bin/aws", line 19, in <module>
    import awscli.clidriver
  File "/usr/lib/python3.8/site-packages/awscli/clidriver.py", line 17, in <module>
    import botocore.session
  File "/usr/lib/python3.8/site-packages/botocore/session.py", line 29, in <module>
    import botocore.configloader
  File "/usr/lib/python3.8/site-packages/botocore/configloader.py", line 19, in <module>
    from botocore.compat import six
  File "/usr/lib/python3.8/site-packages/botocore/compat.py", line 26, in <module>
    from dateutil.tz import tzlocal
  File "/usr/lib/python3.8/site-packages/dateutil/tz/__init__.py", line 2, in <module>
    from .tz import *
  File "/usr/lib/python3.8/site-packages/dateutil/tz/tz.py", line 19, in <module>
    import six
ModuleNotFoundError: No module named 'six'
kubernetes-cloud-mysql-backup encountered 1 or more errors. Exiting with status code 1.

Upload to GCS not working

Hi everyone, I wanted to try this tool and followed the instructions, but I have already tried everything and unfortunately I cannot find where the error is.

ERROR: (gcloud.auth.activate-service-account) Could not read json file /root/gcloud.json: Expecting value: line 1 column 1 (char 0)

According to the instructions, I downloaded the JSON key, encoded it with base64 and pasted it into the secret. But unfortunately it doesn't work.

Thank you for your help

BACKUP_COMPRESS fail

BACKUP_COMPRESS doesn't work; it still uploads an uncompressed file to GCS.

spec:
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2020-07-10T13:36:12Z"
      creationTimestamp: null
    spec:
      containers:
      - env:
        - name: BACKUP_COMPRESS
          value: "true"
        - name: BACKUP_PROVIDER
          value: gcp
        ...
        image: gcr.io/maynard-io-public/kubernetes-cloud-mysql-backup
        imagePullPolicy: Always

kubernetes yaml bad

error from server (Invalid): error when creating "backups.yaml": Secret "AWS_SECRET_ACCESS_KEY" is invalid: metadata.name: Invalid value: "AWS_SECRET_ACCESS_KEY": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end wit


You must separate each YAML document with "---"; in your example you put "--".

SignatureDoesNotMatch - when calling the PutObject operation

An error occurred (SignatureDoesNotMatch) when calling the PutObject operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

I am using minio; access_key and secret_key are correct.

Add option to add custom parameters to mysqldump

I'm working on moving from our in-house scripts to yours, as they looked much further developed. However we're missing the ability to add custom parameters like --single-transaction. I'll raise a PR for this in the next few days.

encrypt backups

Thanks for this project. It's handy! Are there any plans to offer encryption prior to upload? I know the buckets can be encrypted, but uploading encrypted files helps with our compliance requirements.

Is there any reason why retention is not considered?

Thank you for the great tooling here already, including the documentation. Worked right out of the box.

One thing I wonder is: why is there no retention configuration? Is this somehow not important to you?
Could you explain a bit how you deal with keeping only a handful of backups over time?

Are you using Lifecycle Management on the S3 storage?

Thanks!

Connection issue

I have set up cloud mysql backup in our environment, which consists of develop (on our own k8s) and staging/production on DigitalOcean.

On our k8s, it works like a treat!
On DigitalOcean, I am getting this error: "Database backup FAILED for at . Error: mysqldump: Got error: 2002: 'Can't connect to MySQL server on '' (115)' when trying to connect".

Have you run into an issue like this before?

Thank you!

Missing caching_sha2_password plugin

Hey! 👋 Thanks for this project. It looks like a really simple solution for backing up MySQL instances. I've run into a problem when trying to use it though.

Database backup FAILED for corteza at 03-01-2020 22:25:29. Error: mysqldump: Got error: 1045: 'Plugin caching_sha2_password could not be loaded: Error loading shared library /usr//usr/lib/mariadb/plugin/caching_sha2_password.so: No such file or directory' when trying to connect
kubectl describe
$ kubectl describe cronjobs -n sales mysql-backup 
Name:                          mysql-backup
Namespace:                     sales
Labels:                        <none>
Annotations:                   kubectl.kubernetes.io/last-applied-configuration:
                                 {"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"mysql-backup","namespace":"sales"},"spec":{"jobTemplat...
Schedule:                      * * * * *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
  Containers:
   mysql-backup:
    Image:      gcr.io/maynard-io-public/kubernetes-cloud-mysql-backup:v2.2.0
    Port:       <none>
    Host Port:  <none>
    Environment:
      GCP_GCLOUD_AUTH:           <set to the key 'gcp_gcloud_auth' in secret 'mysql-secrets'>  Optional: false
      BACKUP_PROVIDER:           gcp
      GCP_BUCKET_NAME:           moov-production-mysql-backups
      GCP_BUCKET_BACKUP_PATH:    sales/mysql/
      TARGET_DATABASE_HOST:      mysql.sales.svc.cluster.local
      TARGET_DATABASE_PORT:      3306
      TARGET_DATABASE_NAMES:     <set to the key 'database' in secret 'mysql-secrets'>  Optional: false
      TARGET_DATABASE_USER:      <set to the key 'username' in secret 'mysql-secrets'>  Optional: false
      TARGET_DATABASE_PASSWORD:  <set to the key 'password' in secret 'mysql-secrets'>  Optional: false
      BACKUP_TIMESTAMP:          _%Y_%m_%d
      SLACK_ENABLED:             true
      SLACK_CHANNEL:             #alerts
      SLACK_WEBHOOK_URL:         <set to the key 'slack_webhook_url' in secret 'mysql-secrets'>  Optional: false
    Mounts:                      <none>
  Volumes:                       <none>
Last Schedule Time:              <unset>
Active Jobs:                     <none>
Events:                          <none>

Support for PostgresQL

Hi Benjamin,

I like your tool and will use it on my gke-k8s to backup my gcp-mysql server to gcs.
If you hadn't written the script, I probably would have had to....
I use postgresql on gcp in addition to mysql.

Would you accept a patch that adds support for Postgres?

Best regards
Volker

Does not seem to work at all, as secrets and basically everything else are not getting processed

It seems I'm unable to upload the created backup using a non-AWS S3-compatible service like wasabi.com ... :( Can you help?
I'm always running into the following issue; as secrets and access keys are not processed within your script(s), this container is pretty much useless:

Database backup successfully completed for app at 21-11-2020 08:02:19.
Database backup failed to upload for app at 21-11-2020 08:02:20. Error: upload failed: tmp/app_2020_11_21.sql to s3://dbbackup//app_2020_11_21.sql An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
kubernetes-s3-mysql-backup encountered 1 or more errors. Exiting with status code 1.


apiVersion: v1
kind: Secret
metadata:
  name: database-backup
type: Opaque
data:
  aws_secret_access_key: $somehiddenvalue
  database_password: $somehiddenvalue
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: database-backup
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: wait-for-mysql
            image: busybox:latest
            command: [ 'sh', '-c', "until nslookup mysql.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mysql; sleep 2; done" ]
          containers:
          - name: database-backup
            image: gcr.io/maynard-io-public/kubernetes-cloud-mysql-backup
            imagePullPolicy: Always
            env:
              - name: AWS_ACCESS_KEY_ID
                value: "somehiddenvalue"
              - name: AWS_SECRET_ACCESS_KEY
                valueFrom:
                   secretKeyRef:
                     name: database-backup
                     key: aws_secret_access_key
              - name: AWS_S3_ENDPOINT
                value: "s3.eu-central-1.wasabisys.com"
              - name: AWS_DEFAULT_REGION
                value: "eu-central-1"
              - name: AWS_BUCKET_NAME
                value: "dbbackup"
              - name: AWS_BUCKET_BACKUP_PATH
                value: "/"
              - name: TARGET_DATABASE_HOST
                value: "mysql"
              - name: TARGET_DATABASE_PORT
                value: "3306"
              - name: TARGET_DATABASE_NAMES
                value: "app"
              - name: TARGET_DATABASE_USER
                value: "root"
              - name: TARGET_DATABASE_PASSWORD
                valueFrom:
                   secretKeyRef:
                     name: mysql-root-password
                     key: password
              - name: BACKUP_TIMESTAMP
                value: "_%Y_%m_%d"
          restartPolicy: OnFailure

Missing caching_sha2_password plugin

Hey! I have used this project for 2 years and it is very helpful.
I recently upgraded my database to MySQL 8 with the caching_sha2_password authentication plugin to improve security.

And now I get an error on backup:

Database backup FAILED for api_hobbies at 26-11-2022 10:54:46. Error: mysqldump: Got error: 1045: 'Plugin caching_sha2_password could not be loaded: Error loading shared library /usr/lib/mariadb/plugin/caching_sha2_password.so: No such file or directory' when trying to connect

Is there any plan to support the caching_sha2_password authentication plugin?

Thanks a lot! 🙏

any plans for housekeeping?

Hi there,
your tool sounds very promising! So far, though, I'm missing any kind of housekeeping, for example keeping only the last X backups. Are there any plans to integrate that soon?
