Packaging Websites

Not to be confused with webpack, this repository holds a collection of specifications aimed at packaging websites. These specifications replace the W3C TAG's Web Packaging Draft and will allow people to bundle together the resources that make up a website, so they can be shared offline, either with or without a proof that they came from the original website. A full list of use cases and resulting requirements is available in draft-yasskin-wpack-use-cases (IETF draft).

Explainers

The explainers walk through how to use these specs to achieve the use cases:

  • Use cases

  • Maintaining security and privacy constraints

Specifications

The specifications come in several layers:

  1. Signed HTTP exchanges (a.k.a. SXG) (IETF draft): These allow a browser to trust that a single HTTP request/response pair was generated by the origin it claims.

  2. Web Bundles (previously called Bundled HTTP exchanges): A collection of HTTP resources, each of which could be signed or unsigned, with some metadata describing how to interpret the bundle as a whole. This specification has an initial draft in a PR, but isn't finished yet. This work may proceed through either the IETF or the W3C/WHATWG.

    Update: This work was moved to the wpack-wg/bundled-responses repository (Web Bundles, IETF draft).

  3. Loading: A description of how browsers load signed exchanges. This is initially specified here, and will eventually merge into the appropriate specs, e.g. Fetch, that live in either the W3C or WHATWG. Currently this only covers signed exchanges.

  4. Subresource Loading (Explainer): A description of how browsers load a large number of resources efficiently with Web Bundles. This is initially specified here, and will eventually merge into the appropriate specs.

A previous draft of the format combined layers 1 and 2 into a single format for signed packages: draft-yasskin-dispatch-web-packaging (IETF draft). The DISPATCH WG at IETF99 recommended the current split.

Building this repository

Building the Draft

Formatted text and HTML versions of the draft can be built using make.

$ make

This requires that you have software installed as described in https://github.com/martinthomson/i-d-template/blob/main/doc/SETUP.md.

Packaging tools

Signed HTTP Exchanges

Install this with go install github.com/WICG/webpackage/go/signedexchange/cmd/... (requires Go 1.18+).

See go/signedexchange for the usage of the tool.
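
For orientation, a typical invocation looks roughly like the following. The flag names follow the go/signedexchange README at the time of writing; treat this as an illustrative sketch rather than authoritative usage, and see that README for the full set of options:

$ gen-signedexchange \
    -uri https://example.com/index.html \
    -content ./index.html \
    -certificate cert.pem \
    -privateKey priv.key \
    -certUrl https://example.com/cert.cbor \
    -validityUrl https://example.com/resource.validity.msg \
    -o example.com.index.html.sxg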

Web Bundles

There are several tools.

  • Go (Reference Implementation)

    Install this with go install github.com/WICG/webpackage/go/bundle/cmd/....

    See go/bundle for the usage of the tool; a sample invocation appears after this list.

  • Node

    There is an npm package, wbn.

  • Plugin for bundlers (Experimental)

  • Rust (Experimental)
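
As referenced in the Go entry above, a sample gen-bundle invocation looks roughly like this. The flags follow the go/bundle README at the time of writing; treat this as a sketch, not authoritative usage:

$ gen-bundle \
    -dir static/ \
    -baseURL https://example.com/ \
    -primaryURL https://example.com/index.html \
    -o example.wbn

This packs every file under static/ into example.wbn, keyed by URLs under https://example.com/.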

Isolated Web Apps (signing with integrity block)

Issues

Explain why we're not using ZIP

We have some hints in https://w3ctag.github.io/packaging-on-the-web/#intro, but that list isn't complete, and the explanation needs to appear in the local explainer, not somewhere remote.

Other considerations against zip:

  • We've seen vulnerabilities caused by validating one copy of a resource but using a different one. #41 will avoid this for the CBOR-based format.
  • Zip resources are identified by filename, which isn't the primary key of web resources.
  • Zip resources don't include response headers.
  • Lots of details in the format are archaic and wouldn't be used.

We should probably also list reasons in favor of re-using zip so that proponents know we've considered their arguments:

  • A huge number of other formats are based on zip, so we're unlikely to run into something we can't express.
  • Existing tools would be able to extract packages.

Noting open W3C TAG legacy issues

Hello web packaging fans -

We're just cleaning up issues in possibly obsolete repos under our organization (W3CTAG). One of these documents was the original Packaging on the Web, which has since been obsoleted by your work. However, there are many open issues on our document, and before we summarily close them we wanted to alert you to them in case you wanted to bring any of them over into your issues list. Can you please have a look through these issues and leave a note on each one to let us know that it's OK to close it? If you do take one of them on as an issue, please leave a note including a link to your new issue.

Thanks,
Dan Appelquist
Co-Chair, W3CTAG

Figure out how to identify signing algorithms

Options:

  1. TLS 1.3 SignatureScheme codes.
  2. JWS algorithm names. (I have advice to avoid these.)
  3. RFC 5280's AlgorithmIdentifier, as used in certificates.
  4. Define a mapping from the certificate's public key to a signature algorithm. This is easy for ECDSA and EdDSA keys, but would need work for RSA keys. Straw man: <=3072 bits gets rsa_pss_sha256, and >=3073 gets rsa_pss_sha384.

In some cases, like TLS' ecdsa_secp256r1_sha256 vs JWS' ES256, even though the algorithm is the same, the signature appears in different formats.

Any of these options is sufficient to let us sign with any TLS certificate, but TLS doesn't yet have identifiers for post-quantum signature schemes that counter-signatures might want to use.
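
For concreteness, option 1's identifiers are 16-bit code points from the TLS SignatureScheme registry. A few relevant values, written as Go constants (taken from RFC 8446; note that the draft-era name rsa_pss_sha256 became rsa_pss_rsae_sha256 in the final RFC):

package sigscheme

// TLS 1.3 SignatureScheme code points (RFC 8446, section 4.2.3).
const (
    ECDSASecp256r1SHA256 uint16 = 0x0403
    ECDSASecp384r1SHA384 uint16 = 0x0503
    RSAPSSRSAESHA256     uint16 = 0x0804
    RSAPSSRSAESHA384     uint16 = 0x0805
    Ed25519              uint16 = 0x0807
)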

Consider switching to DER-encoded ASN.1

I received feedback from some of our security people that DER is easier to parse securely than CBOR. I'm not 100% sure that's the right choice yet, so here are some pros and cons.

DER pros:

  1. Much older, with security-hardened implementations in many languages; e.g. BoringSSL includes a library for memory-safe DER parsing. CBOR has many implementations, but they aren't hardened.
  2. Prefixing with the length in bytes makes it easier to skip unneeded items. CBOR can work around this by embedding things into bytestrings.
  3. Generic parsing has only 2 cases: Primitive, where the interpretation of the bytes depends on the tag, and Constructed, where the content bytes encode a sequence of items. CBOR has 8 cases: integer, string, array, map, tag, simple value, simple value in next byte, and float. This is primarily used to skip unknown fields in structures.

CBOR pros:

  1. ~8 primitive types instead of 30+. We can subset ASN.1 to only use INTEGER, OCTET STRING, UTF8String, and SEQUENCE, but we can also subset CBOR to use integers, byte strings, text strings, arrays, and maps.
  2. Extensibility based on maps from strings to values is easier to understand and possibly to manage than ASN.1's extensibility based on appending to sequences. However, it's possible I've missed a better way to extend structures in ASN.1.
  3. It's easy to find the latest specification.
  4. Combining the length into the type saves a byte for smaller items.
  5. Serializing canonical CBOR, which is needed for signature checking, is significantly easier to implement.
  6. All DER integers and floats have to deal with the complexity of encoding bignums. For our purposes, where bignums aren't needed, CBOR's 8-, 16-, 32-, and 64-bit integers are simpler. Even if bignums were needed, it's likely that a fixed length would be simpler to specify and implement than DER's variable-length encoding.

If we stick with CBOR, we'll mandate Canonical CBOR, which includes minimal integer encodings. That means offsets will need to measure a range that doesn't include the offset itself, but that's doable for both the section list and the resource index.

I'll update this list as more considerations come up.
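
To make the "minimal integer encodings" requirement concrete, here is a sketch in Go of how a canonical encoder chooses the shortest form for an integer head (RFC 7049 major types; this is illustrative, not part of any spec):

package cborenc

import "encoding/binary"

// appendHead appends the canonical (shortest) CBOR head for an unsigned
// value with the given major type (0-7). Canonical CBOR forbids, e.g.,
// encoding 10 as 0x18 0x0a when the one-byte form 0x0a exists.
func appendHead(b []byte, major byte, v uint64) []byte {
    mt := major << 5
    switch {
    case v < 24:
        return append(b, mt|byte(v))
    case v <= 0xff:
        return append(b, mt|24, byte(v))
    case v <= 0xffff:
        return binary.BigEndian.AppendUint16(append(b, mt|25), uint16(v))
    case v <= 0xffffffff:
        return binary.BigEndian.AppendUint32(append(b, mt|26), uint32(v))
    default:
        return binary.BigEndian.AppendUint64(append(b, mt|27), v)
    }
}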

Describe how to check for revocation and downgrades when loading a package

The sketch, which I've run by @sleevi, is:

  1. Check the manifest's signature and that the signing certificate is for the package's claimed origin and that it chains to a trusted root, using intermediate certificates included in the package.
  2. Each package also has an accompanying set of OCSP responses and anti-downgrade information.
  3. The UA checks that the OCSP responses are signed by the certificate's signer and were generated less than 7 days ago.
  4. The UA checks that the anti-downgrade information is signed by a certificate trusted for the package's origin (potentially taking advantage of the package's intermediate certs), hasn't expired yet, was signed less than 7 days ago, and says that the package's date is sufficiently recent.
  5. If any of this isn't recent enough, the package identifies a URL from which to fetch newer validity information. This validity information is small (<1kb?), so it should be cheap enough to fetch even when full packages are too expensive.

There's some evidence that we'll still have a significant number of UAs that are fully offline and won't be able to fetch new validity information every week. Depending on the existing HTTP cache's behavior, maybe we don't need to revalidate once we've opened a package once. Maybe it's ok to trust older validations as long as the package doesn't make any network requests, and have revalidation block the package's first network request when the UA gets back online. We'll need to decide what to do about local storage if that validation fails.

I need to add this to the main explainer.
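
A minimal sketch of the step-3 freshness check, in Go using golang.org/x/crypto/ocsp (the 7-day window is the number proposed above, not a settled spec value):

package validity

import (
    "crypto/x509"
    "errors"
    "time"

    "golang.org/x/crypto/ocsp"
)

const maxAge = 7 * 24 * time.Hour // freshness window proposed in step 3

// checkOCSP verifies a stapled OCSP response against the issuing
// certificate and rejects it if it is stale or non-good.
func checkOCSP(der []byte, issuer *x509.Certificate, now time.Time) error {
    resp, err := ocsp.ParseResponse(der, issuer)
    if err != nil {
        return err
    }
    if resp.Status != ocsp.Good {
        return errors.New("certificate revoked or status unknown")
    }
    if now.Sub(resp.ThisUpdate) > maxAge {
        return errors.New("OCSP response is older than 7 days")
    }
    return nil
}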

Figure out unsigned sub-packages

From @dimich-g's #36 (comment):

Without a signed manifest, there's no structure to identify sub-packages.

However, it might just work to lump all the resources together at the top level. Do we need to identify the sub-package structure?

In links to packages, consider srcset-type mechanism

In whatever mechanism we use to offer a package to a browser, we should think about allowing browsers to choose optimized packages for their situations. There could be a fully-general package that includes many sizes of images that'll appear optimally on any device, but also a small-screen version of the same package that omits the high-resolution images entirely. There could also be other axes than screen size, along the lines of https://github.com/spanicker/device-ram.

This introduces some complications around having multiple different resources signed for the same URL. Normally, we could resolve such conflicts by just using the latest such resource, but it's possible that's one optimized for smaller screens than the current device.

This is not a v1 consideration.

Test each kind of private key

example.com.key is currently a secp256r1 key, but we should also test creating packages for each of:

  • RSA <= 3072 bits
  • RSA with 3073-7680 bits (probably 4096)
  • secp384r1
  • secp521r1

I'll probably create the keys for rsa2048.example.com, secp521r1.example.com, etc.
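
As a sketch, stock OpenSSL can generate each of these (prime256v1 is OpenSSL's name for secp256r1):

$ openssl ecparam -name prime256v1 -genkey -noout -out example.com.key
$ openssl ecparam -name secp384r1 -genkey -noout -out secp384r1.example.com.key
$ openssl ecparam -name secp521r1 -genkey -noout -out secp521r1.example.com.key
$ openssl genrsa -out rsa2048.example.com.key 2048
$ openssl genrsa -out rsa4096.example.com.key 4096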

Remove references to urn:uuid URLs

With the switch to CBOR, we no longer need urn:uuid URLs for special parts. Some of the example sections still refer to that type of URL.

Links to resource within a Package

For some use cases, like source maps, a package may wish not to include the full asset within itself, but instead rely on another package containing debugging symbols. Fragment URLs are currently proposed for removal; however, I don't see a clear way to support a situation like:

minified.webpackage
\- index.min.js
source-maps.webpackage
\- index.min.js.map

And have index.min.js able to point to index.min.js.map.

Signatures not in compliance with EU law

As described in Mandate M/460, only certain signature formats and standards can legally be used in the EU, and this proposal does not comply with any of them. It would be better to align with at least one of them, most likely CAdES.

Loading in a secure or insecure context?

While there is a good discussion about the need to handle separate (and unique) origins here, it's not clear whether such content should be loaded in a secure (https-like) or insecure (http) context - and whether it being signed influences that choice.

Capture sources of non-determinism

Some of the suggested use cases seem to require that sources of non-determinism (such as RNG seeds and the current time) be captured to ensure that, for example, saving and sharing a web page can occur with greater fidelity.

Index should use a common structured format (eg. JSON)

Having the index file be in a text-like format that (a) doesn't specify or mandate an encoding, (b) makes no provisions for proper data typing, (c) requires writing a custom parser, and (d) will need a newly defined MIME type (see issue #9) doesn't seem like a good idea.

I recommend instead that the index be a well defined JSON grammar. That way standard parsers can be used and there is no question about encodings, data types, etc.
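
Purely as an illustration of the suggestion (this grammar is hypothetical and not part of any draft), an index entry might look like:

{
  "version": 1,
  "resources": [
    {
      "url": "https://example.com/index.html",
      "type": "text/html; charset=utf-8",
      "offset": 1024,
      "length": 8192
    }
  ]
}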

Look at using CMS or S/MIME for signatures

In response to https://tools.ietf.org/html/draft-yasskin-dispatch-web-packaging-00, Paul Hoffman suggested, if I'm understanding his email correctly, that we replace the signed-manifest item that wraps the manifest with a CMS or possibly S/MIME document that wraps the manifest.

Because CMS is an ASN.1 format, I think we'd want to accept #47, switching the overall format to DER-encoded ASN.1, first.

CMS is fairly complicated, so Adam Langley suggests that we pick a precise subset and mandate that in the packaging format, rather than allowing all the flexibility in the original standard. For example, instead of allowing arbitrary RevocationInfoChoices, we might specify that the SignedData.crls field be filled with exactly one OCSP response. The result could still be parsed with CMS tools, but it could also be parsed with much simpler package-specific tools.

I suspect we'll run into some trouble around the SignerInfo.*algorithm fields: the current format specifies exactly one digest+signature algorithm for each possible certificate type, and doesn't encode that algorithm into the serialized bytes. CMS appears to allow the attacker to select the algorithms, just like JOSE. A package-specific parser would enforce the restriction, but a generic CMS tool wouldn't, and so would be vulnerable to the same attacks as JOSE. That's a reason to use a non-CMS format, to disallow generic tools from parsing package signatures.

In nested packages, what are the offsets relative to?

It's not clear from the document whether offsets specified in the header or index of a nested package are to be taken from the start of the containing/primary package data or from the start of the nested package header.

Validation cannot necessarily take place offline

In the discussion on signing, it says

The inclusion of certificate makes it possible to validate the package offline (certificate revocation aside, this can be done out-of-band when device is actually online).

This is only true in very few cases, as you need to build an entire trust chain from that cert up to the root. Most processors will not have local access to every cert, in which case they will need to fetch them.

Will packaging only subresources be supported?

With ES modules, we need to fetch a lot of module files.
The increased number of requests causes a performance problem (10,000 loads of 1 kB each are roughly 10 times slower than a single 10 MB load).

HTTP/2 reduces connection handshake and establishment costs, but doesn't reduce the per-request/response work that the browser handles internally.

On the other hand, reducing the number of files by bundling (as webpack does) loses some of the benefits of ES modules (e.g. caching). The same thing happens with images, which is why we sometimes concatenate them into sprites.

The current spec seems to package a main resource plus subresources into one file.
If webpackage supported, or had a candidate for, packaging a lot of resources into one package without a main resource, that would seem to solve the fetching performance problem.
The expected behavior is fetching a single response, but unpacking it in the browser and handling/caching the contents as separate ES modules, image icons, style sheets, etc.

Inclusion of binary data into a text-based format

The section on the index says that compression may be permitted, but that may (or will) yield binary data. However, because the header of this document is text, most sniffers will consider this a text file and treat it accordingly (e.g. by mangling line endings).

Either require that all data be text-encoded (e.g. Base64) or use a binary format.

Switch to binary format and more.

Hi,

Capturing recent conversations with @jyasskin and @mrdewitt:

  1. We think it makes sense to switch to a 100% binary format, since the header compression and part-body compression make it mostly binary anyway. It also lets us replace MIME boundary strings with direct chunk-size-style encoding.
  2. The explainer and the samples in it will be gradually updated to reflect the switch to binary encoding. While the update is in progress, there may be some inconsistency in the doc.

Also discussed:

  • Binary alignment at 32 bit boundaries (decided not to)
  • Use Self-Delimiting Numeric Values (RFC 6256) (decided not to)
  • Split content index and offset table (so the resources may be rearranged)
  • Multiple hashes (to support deprecation of old ones)
  • Cross-signing

Optimize the storage of the resource URLs in the manifest, index, and main content

From @dimich-g's #36 (comment):

The resource-key structure (which holds a URL and possibly a set of request headers) is used as the key for the index, manifest, and main content. URLs can be fairly long, so we may want to optimize or compress that redundancy.

One idea for avoiding this duplication: Add a section of abbreviations that assigns each resource-key an index, and use those indices as the keys in the other three sections.

  1. We could re-use the index as that section of abbreviations.
  2. We probably don't want to re-use the manifest as that section, so that aggregating packages can build their own abbreviations.
  3. The abbreviations need to be expanded before hashing or checking signatures.
  4. We could shorten the abbreviation section even more by:
    1. Resolving each URL/resource-key relative to the previous one.
    2. Using the array index of the URL as its abbreviation instead of making the section name an abbreviation.

Another possibility would be to use a shared brotli dictionary to compress all three sections. The main downside here is that shared brotli dictionaries don't exist yet, and when they do, we may want to use a URL to identify which shared dictionary to use, which we'd again want to compress.
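
Idea 4.1 is ordinary relative-reference resolution applied pairwise; here is a sketch in Go of the decoding side (the function name is illustrative):

package urldelta

import "net/url"

// expand reconstructs absolute URLs from a list in which each entry is
// stored relative to the previous expanded URL (idea 4.1 above). The
// first entry must be an absolute URL.
func expand(rels []string) ([]string, error) {
    out := make([]string, 0, len(rels))
    base := &url.URL{}
    for _, r := range rels {
        ref, err := url.Parse(r)
        if err != nil {
            return nil, err
        }
        abs := base.ResolveReference(ref)
        out = append(out, abs.String())
        base = abs
    }
    return out, nil
}

For example, ["https://example.com/a/b.js", "c.css"] expands to ["https://example.com/a/b.js", "https://example.com/a/c.css"].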

Interfacing with Resource Hints and link rel=preload.

Something that's unclear reading through the current proposal is how a developer would hint that some resources in a web package have a higher priority to fetch (e.g. via resource hints) than others; the entire package would be fetched with a single priority. If I want to prefetch or preload certain assets, I'm unsure how those systems would interface with packages.

Are there any thoughts about this?

HTML integration

Hello, is it within the scope to define (or speculate) on how a person might load a webpackage in an HTML document, and what the browser should do when it unpacks a package?

Or is the idea that a document might request GET /app.js (from a script tag) and the server would reply with a webpackage, making any other resources that are later requested come from that package?

Thanks!

Optionally check hashes of externally included packages

Scenario: Developer A includes a package from Developer B (e.g. a specific version of jQuery, or a hypothetical wasm implementation of python3) to serve the purpose of a shared library. This is loaded from a version-specific URL, say packages.jquery.com/v3-2-1, to ensure that A's app is running with the right version of jQuery or python that it was developed for. However, A has to trust the integrity of B and B's server to actually keep the resource static. B or any attacker on B's server has the power to inject malicious code into A's web app. This makes shared libraries impractical for security-critical applications.

To solve this problem, A links B's package with both a URL and a hash, which is checked by the client as soon as B's package is loaded (libraries are unlikely to need streaming). So A can have absolute confidence that their app uses the exact same version of the resource that it has been developed for, while keeping the benefits of shared library use.
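
A sketch of the client-side check, analogous to Subresource Integrity, in Go (names are illustrative; how the expected hash would be declared in markup is the open design question):

package pkgintegrity

import (
    "crypto/sha256"
    "crypto/subtle"
    "errors"
)

// verifyPackage checks that the fetched package bytes match the hash
// declared by the embedding page before any contents are used.
func verifyPackage(pkg []byte, want [sha256.Size]byte) error {
    got := sha256.Sum256(pkg)
    if subtle.ConstantTimeCompare(got[:], want[:]) != 1 {
        return errors.New("package does not match declared hash")
    }
    return nil
}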

Self Extracting Binaries

For self-extracting executables, the common implementation is to produce a file that concatenates a binary executable head and an archive tail.

Node and Electron are looking at possibly reusing webpackage as things progress, and need a way to:

  • Have a signature for the concatenated binary+archive
  • Have a length pointer to let the runtime jump to the start of the archive

Self-extracting zip files, ASAR from Electron, and NODA are examples of this from the past. These all work by having a directory at the end of the file, but for all the use cases I know of, a pointer to the start of the archive at the tail would be sufficient. Doing so would most likely just require a series of magic bytes and a length at the end of the file. It could be optional, but it must be easy to add to an existing webpackage.

A signature over the entire binary + archive is important so people can't perform a secondary concatenation and load a different archive than expected; it most likely can just be a well-known header that optionally exists among the package headers.
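
A sketch of how a runtime could locate the archive from such a trailer, in Go (the magic value and 16-byte layout are invented for illustration; nothing here is specified):

package selfextract

import (
    "bytes"
    "encoding/binary"
    "errors"
    "os"
)

var magic = []byte("WPKTAIL!") // hypothetical 8-byte marker

// archiveOffset reads a 16-byte trailer (8 magic bytes followed by a
// big-endian uint64 archive length) and returns the archive's offset.
func archiveOffset(f *os.File) (int64, error) {
    fi, err := f.Stat()
    if err != nil {
        return 0, err
    }
    if fi.Size() < 16 {
        return 0, errors.New("file too small for a trailer")
    }
    var tail [16]byte
    if _, err := f.ReadAt(tail[:], fi.Size()-16); err != nil {
        return 0, err
    }
    if !bytes.Equal(tail[:8], magic) {
        return 0, errors.New("no self-extraction trailer")
    }
    length := int64(binary.BigEndian.Uint64(tail[8:]))
    if length < 0 || length > fi.Size()-16 {
        return 0, errors.New("corrupt trailer length")
    }
    return fi.Size() - 16 - length, nil
}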

Streaming not supported

One of the reasons that the Packaging on the Web spec chose to avoid ZIP in favor of something new and different was the (perceived) lack of streaming support.

However, this proposal suffers from the same problem. You cannot create it entirely as a stream, due to (a) the way that offsets are used and (b) the index file needing to list all other files.

If streaming is not a requirement for this format - that's fine. But then, that should also be called out in the spec.

Clarify relationship with Publishing WG’s Packaged Web Publications

Several days ago, the W3C Publishing WG published three working drafts, including Packaged Web Publications (WPub/PWP), which defines a packaging format for combining collections of resources into a single portable WPUB file.

This WPub standard seems to be quite similar to this WICG–IETF Web Packages (WPK) standard, but the relationship between the two is unclear.

I could find some references to the Publishing WG here. The Web Packaging Format's use-cases note has a “Packaged Web Publications” item in its “Nice-to-have” section; it refers to the Publishing WG and discusses several abstract use cases provided by the WG, without an actual reference to WPub as a specification. Several issues here (#3 “Relationship to DPUB”, #32 “Document Adobe’s use cases”, #37 “Add a list of goals and non-goals”, #71 “Start an internet-draft”) also refer to the Publishing WG. Some Publishing WG members (@lrosenthol, @iherman, @dauwhe) have participated in these issues.

None of these references talk about WPub as a separate specification, and none of them elaborate on its relationship with Web Packaging Format. Neither the Web Packaging specification itself nor the obsolete TAG Packaging on the Web draft make any mention of WPub either.

This is triply confusing. There is clear overlap between the two specifications’ use cases. For instance, both initiatives appear to desire addressing Google AMP’s use cases. Even their names are confusingly similar (WPK vs. PWP). Both are being actively developed—there were even announcements regarding both on this same week (the WPub public-draft announcement and the Google AMP Project’s announcement on its transition to WPK). And some of the same people have participated in both discussions on one standard and discussions on the other.

For readers of both specifications, it would be useful if their authors clarified this uncertainty. “What is the relationship between the two standards?” “Why are there two separate standards?” “How are their use cases and file formats similar and different?” Further active collaboration between the two specs’ authors might also be worth pursuing. Though it might be too late for unification, it would be a shame if there was fragmentation of the same goal into two formats. Browser vendors are probably less likely to implement two formats than just one. On the other hand, if their use cases are irreconcilable, then that should be made explicit in the specifications.

(Also cf. w3c/wpub#5 (comment), w3c/wpub#5 (comment) by @HadrienGardeur; as well as w3c/wpub#10, w3c/wpub#90, and w3c/wpub#111.)

Archived / immutable content

While some packages need to be kept up to date and expired when they become too old, there are also things that should never expire or change.

For example: the content of a webpage a user wants to keep, a report or invoice created by a website, etc. (this is instead of using a PDF or MS Word document).

So would it be possible to skip the Expires header, and have some way of marking the package as immutable?

That immutable flag could also instruct the browser to never download any resources, and for the JS Date object to return a date/time based on the Date header from the package (not the current date/time)... may be related to issue #104?

Index offset may not survive some processes

However, because the header of this document is text, most sniffers will consider this a text file and treat it accordingly (e.g. by mangling line endings). If this happens, then all byte offsets will be incorrect and the file will be unable to load.

Consider finding a way to fool the sniffer or move to a binary format.

Electron user stories

Hi all, I saw some of you at BlinkOn, and I promised to come up with a few user stories. Rather than trying to help shape the implementation, I thought I could just outline why Electron is looking at this format. Here's my first pass at some user stories.

App Installer Story

Developer Kim builds a new Electron app and wants to host it on her site. She wants to make the install as easy as possible, and a hot new installation tool some people have downloaded knows how to take a web package and install it like a desktop app. Right now, every Electron app comes with its own Chromium. A web-package-format distribution could link out to the required Chromium, which may already be installed on the user's desktop.

The end-user downloads the .electron bundle, double-clicks it, and the installer installs a new app.

Both the installer and the operating system should validate the package and check code signatures. (Open question: whether the dynamic installer messes up the signature story.)

A new icon is placed wherever the operating system installs applications, and double-clicking it opens the app.

Lifecycle with shared resources

Kim is at it again, but this time her users have demanded an even slimmer post-install experience. Kim builds the next web package bundle, but instead of including everything in node_modules, the web package simply links out to indicate that it requires certain npm modules at specific versions. To complicate matters, some of these modules require compilation.

Kim knows that not all of her users have developer tools installed, but she also doesn't want to set up CI on numerous different platforms. That's okay, we have her back: the top 100 Electron modules that require compilation have been precompiled on common platforms and packaged into the web package format.

User Angela downloads Kim's latest web package, which only includes the core app code. The installer understands the outbound links in the web package and first attempts to satisfy them with a local cache on disk. One of the compiled dependencies is missing, however, and the installer has to download it fresh. Instead of pulling from npm directly, the installer is able to resolve the location of a precompiled version and use that.

BigCorp

BigCorp wants control over every last bit of their installed application. They want to lock every last piece down. They want a self-contained packaging format, where everything gets shipped together.

This is the situation we have now, which is why it's worth putting down.

BigCorp Internal

Internally at BigCorp, there are a lot of internal-tools developers who wish to distribute quick one-off applications. Because everything runs behind their firewall, they only want vetted code. Developers love the hotlinking that the web package can do, but all links should resolve to an internal npm mirror and an internal blob store for precompiled modules.
