
OpenStreetMap chef cookbooks


This repository manages the configuration of all the servers run by the OpenStreetMap Foundation's Operations Working Group. We use Chef to automate the configuration of all of our servers.


Roles

We make extensive use of roles to configure the servers. In general we have:

Server-specific roles (e.g. faffy.rb)

These deal with the particular setup or quirks of a server, such as its IP address. They also include the roles for the services the server performs, its location, and any particular hardware it has that needs configuration. All our servers are named after dragons.

Hardware-specific roles (e.g. hp-g9.rb)

Covers anything specific to a certain piece of hardware, like a motherboard, that could apply to multiple machines.

Location-specific roles (e.g. equinix-dub.rb)

These form a hierarchy of datacentres, organisations, and countries where our servers are located.

Service-specific roles (e.g. web-frontend)

These cover the services that the server is running, and will include the recipes required for that service along with any specific configurations and other cascading roles.
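As a hypothetical illustration of how these layers combine, a server-specific role might look like the following (the role name, attribute values, and run_list entries are invented; only the layering pattern comes from the description above):

```ruby
# Hypothetical server-specific role showing the cascade described above:
# it pins machine-specific facts and pulls in hardware-, location-, and
# service-specific roles. All names and values are invented for illustration.
name "exampledragon"
description "Master role applied to exampledragon"

default_attributes(
  :networking => {
    :interfaces => {
      :external => { :address => "192.0.2.1" }
    }
  }
)

run_list(
  "role[hp-g9]",        # hardware-specific
  "role[equinix-dub]",  # location-specific
  "role[web-frontend]"  # service-specific
)
```

This is a Chef role definition rather than standalone Ruby, so it is only evaluated by Chef when loaded from the roles directory.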

Cookbooks

We use the 'Organization Repository' approach, where we have all our cookbooks in this repository (as opposed to one repository per cookbook). Additionally we don't make use of external cookbooks so every cookbook required is in this repository.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for more details. The guide also includes details on how to run the tests locally.



Issues

OpenStreetMap Carto changes

OpenStreetMap Carto's next release will be v3.0.0, which brings some changes to dependencies. The OSMF tile servers already meet the difficult ones (Mapnik 3), but some of the others might need minor changes:

  • Mapnik 3 is required
  • CartoCSS 0.16.0 is required and 0.16.3 is suggested
  • project.mml is no longer a generated file, but passed directly to CartoCSS.

I believe the CartoCSS version change is the only one which the OSMF servers might not already meet.

c.f. gravitystorm/openstreetmap-carto#2473

wiki per-site recaptcha keys

We currently use recaptcha on the wiki sites, but this leads to errors on non-wiki.openstreetmap.org sites. We should have per-site credential pairs via databags.
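A minimal sketch of what the per-site lookup could look like in the wiki recipe, assuming one data bag item per site (the data bag name, item layout, and `site_name` variable are invented; `data_bag_item` is the standard Chef API, and older Chef raises Net::HTTPServerException for a missing item):

```ruby
# Hypothetical per-site credential lookup. Data bag item ids cannot
# contain dots, so the hostname is mangled. Falls back to a shared
# default pair for sites without their own keys.
recaptcha = begin
  data_bag_item("recaptcha", site_name.tr(".", "_"))
rescue Net::HTTPServerException
  data_bag_item("recaptcha", "default")
end

public_key  = recaptcha["public_key"]
private_key = recaptcha["private_key"]
```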

New openstreetmap-carto release v3.3.0

A new version of openstreetmap-carto, v3.3.0, has been released.

The only deployment-related change is that the Hanazono font is now used for some characters outside the BMP. This can be obtained from the fonts-hanazono package on Ubuntu and Debian.

Improve tile cache logging

The request logging we do on the tile caches has a number of problems and could do with some improvement.

Each request is actually logged twice, or sometimes three times, which is wasteful of I/O time and disk space on the caches. On top of that, the logs that we recover to our central store are missing some important details.

The logs we currently generate are:

  • squid/access.log: the standard squid access log, with no UA or referer
  • squid/zere.log: added for @zerebubuth's analysis; it has the UA but no referer, and is recovered to ironbelly
  • nginx/access.log: for https requests only, and generally more detailed than the squid logs, with the UA and referer included

I would like to change the squid access log to include the UA, referer, and whatever else @zerebubuth needs, then drop the special zere log. We could potentially drop the nginx logs as well, so long as nginx passes through the real IP and squid can be made to log it.

Needs a licence

The repository needs a licence file. I'm not sure what licence actually applies though, are there any constraints? If not then Apache 2.0 is commonly used for chef cookbooks.

Clarify or close the private chef repo

There's a private OSMF Chef repository with various roles and/or cookbooks, for hysterical raisins. I don't have any access to this so I'm not sure about the details.

I'm interested in knowing whether we can shut this down yet. I assume there are still cookbooks or roles in there with secrets hard-coded; are there other reasons why some aspects can't be made public?

Can't install the correct version of squid on xenial

Our squid cookbook depends on squid 2.7, but that's no longer available in Ubuntu so we package our own version in a PPA:

https://launchpad.net/~osmadmins/+archive/ubuntu/ppa/+packages

The package claims to replace "squid3", which I guess is how the "squid" part of the recipe is supposed to work. However, I can't get it to install on a xenial image. I've got a WIP test-kitchen config at gravitystorm@40da20e. When it runs, it installs squid 3.5.12. Uninstalling, running apt-get update, and reinstalling leads to the same place. I can only get it working by uninstalling squid, then running:

sudo apt-get install squid=2.7.STABLE9-4ubuntu10 squid-common=2.7.STABLE9-4ubuntu10

Can anyone shine a light onto this problem? What steps need to be taken to get the correct version of squid installed?
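One thing worth checking is apt pinning: if the PPA's 2.7 package sorts lower than the distro's squid 3.x, or has equal priority, apt will prefer the 3.x build. A hedged sketch using Chef's apt_preference resource (available in Chef 13.3+, or via the apt cookbook; the glob and version pattern here are assumptions):

```ruby
# Hypothetical pin raising the PPA's squid 2.7 build above the distro's
# squid 3.x, so that `package "squid"` resolves to the intended version.
apt_preference "squid" do
  glob "squid*"
  pin "version 2.7.*"
  pin_priority "1001"
end
```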

New openstreetmap-carto release, v2.41.0

A new version of openstreetmap-carto, v2.41.0, has been released

The list of font packages has changed.

On Ubuntu 16.04 the list is

fonts-dejavu-core fonts-droid-fallback ttf-unifont \
fonts-sipa-arundina fonts-sil-padauk fonts-khmeros \
fonts-beng-extra fonts-gargi fonts-taml-tscu fonts-tibetan-machine

On Ubuntu 14.04 the list is

fonts-dejavu-core fonts-droid ttf-unifont \
fonts-sipa-arundina fonts-sil-padauk fonts-khmeros \
fonts-beng-extra fonts-gargi fonts-taml-tscu fonts-tibetan-machine

Decouple munin_plugin from munin service definition

Many cookbooks use the munin_plugin provider, but this has a compile-time dependency on the service[munin-node] declaration. This makes it hard to test cookbooks (e.g. squid) independently.

The basic workaround is to add include_recipe "munin::default" to every cookbook that uses the munin_plugin resource. But that's not ideal, since it slows down all the tests (and squid doesn't actually depend on munin being installed), and feels a bit icky.

Instead it would be better to allow cookbooks to call munin_plugin without having the whole of munin::default pulled in too. This could be achieved by inverting the notification, i.e. make service[munin] subscribe to the munin_plugin, but that's not straightforward due to the restart_munin attribute, and also subscribing to a particular munin_plugin invocation.

I propose using a dummy resource, to decouple the munin_plugin notifications from the service[munin] subscriptions, something like:

change the after_created in munin/resources/plugin.rb:

def after_created
  notifies :run, "execute[plugin-requires-munin-restart]" if restart_munin
end

somewhere:

# This is a dummy resource for other resources to subscribe to
execute 'plugin-requires-munin-restart' do
  command 'date'
  action :nothing
end

in munin/recipes/default.rb:

service "munin" do
   [...]
   subscribes :restart, "execute[plugin-requires-munin-restart]", :delayed
end

Thoughts? Is there an easier way to decouple the plugins from the service definition?

New openstreetmap-carto release, v3.0.0

A new version of openstreetmap-carto, v3.0.0, has been released.

The deployment-related changes in this release are:

  • Mapnik 3 is required
  • The shapefile download script is now a python script
  • CartoCSS >= 0.16.0 is required, and the Mapnik version needs to be specified, e.g. carto -a "3.0.0"

New openstreetmap-carto release, v2.44.1

A new version of openstreetmap-carto, v2.44.1, has been released.

Deployment-related changes are a new recommended minimum freetype version, and listing font packages separately rather than relying on a metapackage and its recommends. Both are already done on the OSMF servers.

tile logs are being served as text/plain

$ curl -I http://planet.openstreetmap.org/tile_logs/tiles-2015-02-03.txt.xz
HTTP/1.1 200 OK
Date: Thu, 05 Feb 2015 19:44:58 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Thu, 05 Feb 2015 07:30:10 GMT
ETag: "4d74e0-50e52461df99e"
Accept-Ranges: bytes
Content-Length: 5076192
Vary: Accept-Encoding
Access-Control-Allow-Origin: *
Content-Type: text/plain; charset=utf-8

should be application/x-xz

Reduce duplication of rendering effort

From #78 (comment)

The rendering machines are, currently, completely independent. This is great for redundancy and fail-over, as they are effectively the same. However, it means duplication of tiles stored on disk and tiles rendered. Duplication of tiles on disk is somewhat desirable in the case of fail-over, but duplicating the renders is entirely pointless.

Adding a 3rd server, therefore, is unlikely to reduce load by 1/3rd on the existing servers from rendering. However, a lot of the load comes from serving still-fresh tiles off disk to "back-stop" the CDN, which would be split amongst the servers (sort of evenly).

What would be great, as @pnorman and I were discussing the other day, is a way to "broadcast" rendered tiles in a PUB-SUB fashion amongst the rendering servers so that they can opportunistically fill their own caches with work from other machines. At the moment, it's no more than an idea, but it seems like a feasible change to renderd.

Currently the two servers are independent, and clients go to one based on geoip. This means that the rendering workload is not fully duplicated between the two servers, as users in the US tend to view tiles in the US and users in Germany tend to view tiles in Germany. This has been tested by swapping locations and seeing an increase in load.

Unfortunately, this doesn't scale well to higher numbers of servers.

Postgres server started before all configuration available

For setting up the nominatim DB slave, I have to remove the data in postgres' data dir including server.crt/.key and recovery.conf. It would be nice to be able to recover these files with chef after the base backup is done. However, starting chef at this point goes horribly wrong because chef starts up the postgres server before these files are copied back, generally destroying the database replica in the process.

Anything we can do about this?

Upgrade tileservers to mapnik3

On behalf of the openstreetmap-carto project, we'd like to see the OSMF tileservers upgraded to using mapnik3. We are confident that our stylesheet works on both mapnik2 and mapnik3 and are happy to make any tweaks that are uncovered.

Most importantly, there's a step-change improvement in text rendering quality for non-latin scripts just by upgrading.

Set a default for nameserver attributes?

<% node[:networking][:nameservers].each do |nameserver| -%>
nameserver <%= nameserver %>
<% end -%>


This means that by default, any recipe that depends on networking gets no nameservers and generally fails on the first attempt to apt-get. This makes testing the cookbooks tedious since you need to explicitly set nameserver attributes for any cookbook that has networking::default somewhere in its dependencies.

There are (at least) a couple of options:

  • Default to something sensible (e.g. 8.8.8.8) if no nameservers are specified. I'm not sure what the downsides of this would be though.
  • Unpick the dependencies so that e.g. munin::default doesn't depend on networking::default.
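The first option amounts to a one-line fallback wherever the attribute is read. A minimal pure-Ruby sketch (the attribute layout is taken from the template excerpt above; the helper name and default resolver are invented):

```ruby
# Sketch of option 1: fall back to a public resolver when the node has
# no nameserver attributes. Attribute layout assumed from the
# resolv.conf template excerpt above.
DEFAULT_NAMESERVERS = ["8.8.8.8"].freeze

def nameservers_for(node)
  configured = node.dig(:networking, :nameservers)
  configured.nil? || configured.empty? ? DEFAULT_NAMESERVERS : configured
end

puts nameservers_for({})  # an unconfigured node gets the default resolver
```

The downside noted above still applies: a hard-coded default hides misconfiguration rather than surfacing it.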

Increase track_activity_query_size on rendering databases

The PostgreSQL GUC track_activity_query_size specifies the number of bytes reserved to track the currently executing command for each active session for pg_stat_activity, with a default of 1024.

Queries in most Mapnik stylesheets exceed this, with about a quarter of OpenStreetMap Carto's being over this limit. In gravitystorm/openstreetmap-carto#2316 I'm looking at ways to better identify the layer and zoom, but to get the full query this GUC needs to be increased.

I recommend increasing it to 16384, so if we have a slow query or stuck query we can see what it is to EXPLAIN it or debug it locally. The longest current query is 9818 bytes before Mapnik inserts additional text.

This would cost about 15 kB more memory per connection slot.
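The memory figure is easy to sanity-check (the connection-slot count below is an assumed example, not a measured value):

```ruby
# Back-of-envelope check of the per-slot memory cost quoted above.
default_size  = 1024    # PostgreSQL default track_activity_query_size, bytes
proposed_size = 16_384  # proposed value, bytes
longest_query = 9818    # longest current query, bytes

extra_per_slot = proposed_size - default_size  # 15360 bytes, i.e. 15 kB
fits = longest_query < proposed_size           # true: the full query is visible
total_extra = extra_per_slot * 300             # ~4.4 MiB at an assumed 300 slots

puts extra_per_slot  # 15360
```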

Recluster rendering servers

According to @cquest the OSM FR servers gain 25%-50% rendering throughput when they recluster them after about a year, by reducing table and index bloat. This can be done without a full outage, only stopping updates, but we probably want to wait until #79 is done to increase capacity. The reclustering depends on database IO and CPU, which are not maxed out.

The overall plan is to create a new copy of the tables, build new indexes, then replace the old tables. Because update frequency is more important for osm.org than for other hosts, I'd recommend doing it slightly differently. Instead of reclustering all the tables at once, do one table, resume updates and let them catch up, do another, etc.

Starting with the points table and progressing by table size minimizes the disk usage. I think there's enough free space that it doesn't matter, but this is a best practice.


Process

My recommendation is that the following be done on both servers, starting with whichever has gone longest since the initial import:

  1. Record the results of \dt+ and \di+. It would also be useful to have the results of the following SQL for future planning purposes

    SELECT CORR(page,geohash)
      FROM (
        SELECT 
            (ctid::text::point)[0] AS page,
            rank() OVER (ORDER BY St_GeoHash(st_transform(way,4326))) AS geohash
          FROM planet_osm_point
        ) AS s; -- area server result .93, takes 461s
    SELECT CORR(page,geohash)
      FROM (
        SELECT
            (ctid::text::point)[0] AS page,
            rank() OVER (ORDER BY St_GeoHash(st_transform(way,4326))) AS geohash
          FROM planet_osm_roads
        ) AS s; -- area server result .58, takes 119s
    SELECT CORR(page,geohash)
      FROM (
        SELECT
            (ctid::text::point)[0] AS page,
            rank() OVER (ORDER BY St_GeoHash(st_transform(way,4326))) AS geohash
          FROM planet_osm_line
        ) AS s;
    SELECT CORR(page,geohash)
      FROM (
        SELECT 
            (ctid::text::point)[0] AS page,
            rank() OVER (ORDER BY St_GeoHash(st_transform(way,4326))) AS geohash
          FROM planet_osm_polygon
        ) AS s;
  2. Stop updates and make a backup of the state file.

  3. Start by creating a schema to do work in

    CREATE SCHEMA IF NOT EXISTS recluster;
  4. Starting with the smallest table, recluster it into the new schema.

    \timing
    SET search_path TO recluster,"$user",public;
    CREATE TABLE planet_osm_point AS
      SELECT * FROM public.planet_osm_point
        ORDER BY ST_GeoHash(ST_Transform(ST_Envelope(way),4326),10) COLLATE "C";
  5. Create indexes. The indexes here are the recommended ones for OpenStreetMap Carto. If you want to use others you can.

    \timing
    SET search_path TO recluster,"$user",public;
    CREATE INDEX planet_osm_point_place
      ON planet_osm_point USING GIST (way)
      WHERE place IS NOT NULL AND name IS NOT NULL;
    CREATE INDEX planet_osm_point_index
      ON planet_osm_point USING GIST (way);
    
    CREATE INDEX planet_osm_point_pkey
      ON planet_osm_point (osm_id);
  6. Replace the table in the public schema in a transaction, keeping the old one

    CREATE SCHEMA IF NOT EXISTS backup;
    BEGIN;
    ALTER TABLE public.planet_osm_point
      SET SCHEMA backup;
    ALTER TABLE recluster.planet_osm_point
      SET SCHEMA public;
    COMMIT;
  7. Verify that tiles are still rendering

  8. Drop the old table

    DROP TABLE backup.planet_osm_point;
  9. Resume updates. When updates are done, repeat for the other three rendering tables

  10. For planet_osm_roads

    \timing
    SET search_path TO recluster,"$user",public;
    CREATE TABLE planet_osm_roads AS
      SELECT * FROM public.planet_osm_roads
        ORDER BY ST_GeoHash(ST_Transform(ST_Envelope(way),4326),10) COLLATE "C";
    
    CREATE INDEX planet_osm_roads_admin
      ON planet_osm_roads USING GIST (way)
      WHERE boundary = 'administrative';
    CREATE INDEX planet_osm_roads_roads_ref
      ON planet_osm_roads USING GIST (way)
      WHERE highway IS NOT NULL AND ref IS NOT NULL;
    CREATE INDEX planet_osm_roads_admin_low
      ON planet_osm_roads USING GIST (way)
      WHERE boundary = 'administrative' AND admin_level IN ('0', '1', '2', '3', '4');
    CREATE INDEX planet_osm_roads_index
      ON planet_osm_roads USING GIST (way);
    
    CREATE INDEX planet_osm_roads_pkey
      ON planet_osm_roads (osm_id);
    
    BEGIN;
    ALTER TABLE public.planet_osm_roads
      SET SCHEMA backup;
    ALTER TABLE recluster.planet_osm_roads
      SET SCHEMA public;
    COMMIT;

    Test, then

    \timing
    DROP TABLE backup.planet_osm_roads;
  11. For planet_osm_line, resume updates, wait for updates to catch up, then

    \timing
    SET search_path TO recluster,"$user",public;
    CREATE TABLE planet_osm_line AS
      SELECT * FROM public.planet_osm_line
        ORDER BY ST_GeoHash(ST_Transform(ST_Envelope(way),4326),10) COLLATE "C";
    
    CREATE INDEX planet_osm_line_ferry
      ON planet_osm_line USING GIST (way)
      WHERE route = 'ferry';
    CREATE INDEX planet_osm_line_river
      ON planet_osm_line USING GIST (way)
      WHERE waterway = 'river';
    CREATE INDEX planet_osm_line_name
      ON planet_osm_line USING GIST (way)
      WHERE name IS NOT NULL;
    CREATE INDEX planet_osm_line_index
      ON planet_osm_line USING GIST (way);
    
    CREATE INDEX planet_osm_line_pkey
      ON planet_osm_line (osm_id);
    
    BEGIN;
    ALTER TABLE public.planet_osm_line
      SET SCHEMA backup;
    ALTER TABLE recluster.planet_osm_line
      SET SCHEMA public;
    COMMIT;

    Test then

    DROP TABLE backup.planet_osm_line;
  12. Polygons will take the longest. Resume updates and let them catch up, then stop them and

    \timing
    SET search_path TO recluster,"$user",public;
    CREATE TABLE planet_osm_polygon AS
      SELECT * FROM public.planet_osm_polygon
        ORDER BY ST_GeoHash(ST_Transform(ST_Envelope(way),4326),10) COLLATE "C";
    
    CREATE INDEX planet_osm_polygon_military
      ON planet_osm_polygon USING GIST (way)
      WHERE landuse = 'military';
    CREATE INDEX planet_osm_polygon_nobuilding
      ON planet_osm_polygon USING GIST (way)
      WHERE building IS NULL;
    CREATE INDEX planet_osm_polygon_name
      ON planet_osm_polygon USING GIST (way)
      WHERE name IS NOT NULL;
    CREATE INDEX planet_osm_polygon_way_area_z6
      ON planet_osm_polygon USING GIST (way)
      WHERE way_area > 59750;
    
    CREATE INDEX planet_osm_polygon_index
      ON planet_osm_polygon USING GIST (way);
    
    CREATE INDEX planet_osm_polygon_pkey
      ON planet_osm_polygon (osm_id);
    
    BEGIN;
    ALTER TABLE public.planet_osm_polygon
      SET SCHEMA backup;
    ALTER TABLE recluster.planet_osm_polygon
      SET SCHEMA public;
    COMMIT;

    Test then

    \timing
    DROP TABLE backup.planet_osm_polygon;
    
  13. Resume rendering and clean up with

  DROP SCHEMA recluster;
  DROP SCHEMA backup;
  14. Record the results of \dt+ and \di+ again.
  15. Verify that there is a speed increase, then do the other server.

Ref: http://paulnorman.ca/blog/2016/06/improving-speed-with-reclustering/

Other options

  • All the tables could be done in parallel. This increases IO load, and the maximum time without updates is longer, but the overall time is shorter.
  • Slim tables could be reindexed. I'd hold off on that as it only impacts update performance and load, and it's possible to do that by stopping updates and issuing a REINDEX statement which is much simpler.
  • maintenance_work_mem should probably be increased

Rollback

In case of a problem a rollback can be done by restoring the table from the backup schema.

If diffs are mistakenly restarted early the state file needs to be reset and diffs re-run.

Why not wait for a reimport?

The OpenStreetMap Carto Lua branch, which will require a reimport with hstore, is still in development. We have a few open issues to resolve before we can merge, and are lacking developer time for them. Once the Lua branch is merged we will still be releasing 2.x releases which work with the old database, to allow time to change over.

Doing a reimport with the current settings is in some ways better, but requires either a full outage of the server, a fair amount of database disk space, or the possibility of updates being down for an extended time[1], and the certainty that updates will be stopped for about a day.

[1] If the old DB slim tables are dropped this saves room, but stops any updates on the old DB

Time required

I'm running a test on the server for testing old-style multipolygons. It's got faster single-threaded performance and absurdly faster drives, but it should give an indication. I'll add times when it's done.

Cookbook testing

It's easier to contribute when you're confident that your PR won't fail spectacularly.

From experience elsewhere I can recommend the following:

  • Rubocop for linting the ruby code
  • Foodcritic for checking for cookbook antipatterns
  • chefspec for testing logic in recipes without actually executing them
  • Test Kitchen for actually running the recipes in vms/containers

They all cover different aspects of the cookbooks so I'd suggest using all of them.

To start, I'd suggest making rubocop and foodcritic config files to turn off checks that we're currently relying on violating. The next step would be to get test-kitchen working for various cookbooks.

Use openstreetmap.style config file for tileservers

We've noticed that the tileservers use the default .style file from osm2pgsql when processing updates (by not specifying one, osm2pgsql falls back to its built-in default.style):

https://github.com/openstreetmap/chef/blob/master/cookbooks/tile/templates/default/replicate.erb#L69-L74

The openstreetmap-carto project includes a style file for use with osm2pgsql. This ensures that the database layout matches what the stylesheets expect, as well as allowing use of arbitrary osm2pgsql versions instead of forcing people to upgrade to the latest if columns get changed.

I believe there are no substantial differences between the master version in osm2pgsql and that in the latest openstreetmap-carto release, so this is not currently a major issue. But it might be worth changing the update script (and the import script, if there is one).

sources.list template fails unless node.country is set

The apt::default recipe expects all nodes to have a country attribute set, and will fail without it.

deb http://<%= node.country %>.archive.ubuntu.com/ubuntu/ <%= node.lsb.codename %> main restricted

It would be better if either:

  • The cookbook set a default country attribute, in case none are defined elsewhere (e.g. gb)
  • The cookbook used the global ubuntu servers (i.e. http://archive.ubuntu.com/ ) if node.country isn't set
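The second option is a small conditional wherever the mirror hostname is built. A pure-Ruby sketch (the function name is invented; the URL scheme comes from the template line above):

```ruby
# Sketch of option 2: use the global Ubuntu archive when node.country
# is unset, otherwise the country-specific mirror.
def ubuntu_mirror(country)
  host = country.nil? || country.empty? ? "archive.ubuntu.com" : "#{country}.archive.ubuntu.com"
  "http://#{host}/ubuntu/"
end

puts ubuntu_mirror("gb")  # country mirror
puts ubuntu_mirror(nil)   # global fallback
```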

New openstreetmap-carto release v2.36.0

A new version of openstreetmap-carto, v2.36.0, has been released. It contains significant changes to the road colours, among the 135 changes since v2.35.0.

downloading populated_places not needed anymore for the tiles

I am quite sure that after
gravitystorm/openstreetmap-carto#1461
downloading ne_10m_populated_places.zip is no longer needed.
But it is still downloaded:

chef/roles/tile.rb

Lines 65 to 70 in 650d244

:populated_places => {
:url => "http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_populated_places.zip",
:directory => "ne_10m_populated_places",
:original => "ne_10m_populated_places.shp",
:processed => "ne_10m_populated_places_fixed.shp"
},

This change has been live for a few months.
@gravitystorm can you confirm this?

Generalise Nominatim recipe

The chef script which installs Nominatim currently has hard-coded URLs and a few other parameters:
https://github.com/openstreetmap/chef/blob/master/cookbooks/nominatim/recipes/base.rb

It would be excellent if this could be generalised so that someone wanting to install their own Nominatim could set the relevant values for their installation (e.g. domain name) and then run the recipe.

We currently have a bash script for installing our Nominatim:
https://github.com/cyclestreets/nominatim-install/blob/master/run.sh
but if a standard chef recipe were available we would probably be able to deprecate that.
