event-model's Issues

Add an InvalidDocuments exception

During the testing of databroker on NSLS2 data we found that occasionally we get a KeyError when filling an event because event['data'].keys() != descriptor['data_keys'].keys(). This is invalid.

I think we should add an InvalidDocuments exception, so that we have a more meaningful explanation of the problem.

def fill_event(self, doc, include=None, exclude=None, inplace=None):
        try:
            filled = doc['filled']
        except KeyError:
            # This document is not telling us which, if any, keys are filled.
            # Infer that none of the external data is filled.
            descriptor = self._descriptor_cache[doc['descriptor']]
            filled = {key: 'external' in val
                      for key, val in descriptor['data_keys'].items()}
        for key, is_filled in filled.items():
            if exclude is not None and key in exclude:
                continue
            if include is not None and key not in include:
                continue
            if not is_filled:
>               datum_id = doc['data'][key]
E               KeyError: 'Synced_saxs_image'
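
A minimal sketch of the proposed exception and the check that could raise it (the class name follows the proposal; placement inside Filler is illustrative):

class InvalidDocuments(Exception):
    """Raised when documents violate the event-model contract."""

def check_event_keys(event, descriptor):
    # The check that would have produced a clearer error above.
    missing = set(descriptor['data_keys']) - set(event['data'])
    if missing:
        raise InvalidDocuments(
            f"event['data'] is missing keys declared in "
            f"descriptor['data_keys']: {sorted(missing)}")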

RunRouter filling skips whole Event if any specs are unknown

The functionality of RunRouter's filling is confusing. By default, it tries to fill and silently gives up if it encounters a spec it doesn't know. You can opt in to stricter behavior. So far, so good.

fill_or_fail: boolean, optional
By default (False), if a document with a spec not in
``handler_registry`` is encountered, let it pass through unfilled. But
if set to True, fill everything and raise
``UndefinedAssetSpecification`` if some unknown spec is encountered.

But when it fails to resolve a spec, it bails on filling the entire document. That is, if an Event has fields from two detectors, and we know how to fill Detector A but we don't know how to fill Detector B, neither will be filled.

I haven't given thought yet to what we should do about this, just making sure it is documented.

def event_page(self, doc):
    descriptor_uid = doc['descriptor']
    start_uid = self._descriptor_to_start[descriptor_uid]
    try:
        doc = self._fillers[start_uid].event_page(doc)
    except UndefinedAssetSpecification:
        if self.fill_or_fail:
            raise

resource uids are not unique


In [2]:     oldclient = pymongo.MongoClient("mongodb://rsoxs-ca:27017/")             
   ...:     old_assets_db = oldclient["rsoxs-assets-store"]                              
   ...:     old_meta_db = oldclient["rsoxs-metadata-store"]                                                                                   

In [4]: list(old_assets_db.resource.find({'uid': '1c43af30-27db-437e-83c0-38b1cc528f06'}))                                                    
Out[4]: 
[{'_id': ObjectId('5d540d4cfb414042494a3a0e'),
  'spec': 'AD_TIFF',
  'resource_path': 'data/2019/08/14',
  'root': '/DATA/images',
  'resource_kwargs': {'template': '%s%s_%6.6d.tiff',
   'filename': 'e1f402f1-10bd-4685-a4d4',
   'frame_per_point': 1},
  'path_semantics': 'posix',
  'uid': '1c43af30-27db-437e-83c0-38b1cc528f06',
  'run_start': '4fa42813-0b56-4eee-a577-2125cd9f6c24'},
 {'_id': ObjectId('5d540d7bfb414042494a3a5e'),
  'spec': 'AD_TIFF',
  'resource_path': 'data/2019/08/14',
  'root': '/DATA/images',
  'resource_kwargs': {'template': '%s%s_%6.6d.tiff',
   'filename': 'e1f402f1-10bd-4685-a4d4',
   'frame_per_point': 1},
  'path_semantics': 'posix',
  'uid': '1c43af30-27db-437e-83c0-38b1cc528f06',
  'run_start': '4fa42813-0b56-4eee-a577-2125cd9f6c24'},
 {'_id': ObjectId('5d540db9fb414042494a3aa2'),
  'spec': 'AD_TIFF',
  'resource_path': 'data/2019/08/14',
  'root': '/DATA/images',
  'resource_kwargs': {'template': '%s%s_%6.6d.tiff',
   'filename': 'e1f402f1-10bd-4685-a4d4',
   'frame_per_point': 1},
  'path_semantics': 'posix',
  'uid': '1c43af30-27db-437e-83c0-38b1cc528f06',
  'run_start': 'd1fc0b67-0fcc-4239-b1c5-cdbc15a27e2b'}]

Move compare into event-model.

These functions introduced in bluesky/databroker#392 should be moved into event-model, and #97 should be incorporated.

The useful bit is compare(a, b) which takes in two document streams and compares the documents with gradually increasing specificity. Rather than getting a dump like "These two lists of many documents aren't the same---good luck spotting the difference!" you get much more actionable advice.

I suspect we can update 100+ tests across the projects, each of which individually implements some fraction of this functionality, to make them more robust, succinct, and useful.
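
A minimal sketch of the shape of compare (the real implementation is in bluesky/databroker#392; this is illustrative only):

def compare(a, b):
    # a and b are streams of (name, doc) pairs.
    a, b = list(a), list(b)
    # Compare coarsely first: same sequence of document names?
    names_a = [name for name, doc in a]
    names_b = [name for name, doc in b]
    assert names_a == names_b, f"document order differs: {names_a} != {names_b}"
    # Then compare each pair of documents individually, so a failure points
    # at one specific document rather than the whole stream.
    for (name, doc_a), (_, doc_b) in zip(a, b):
        assert doc_a == doc_b, f"{name} documents differ"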

Add 'dims' to data_key

Proposal

Add an optional dims property to the definition of data_key in Event Descriptors. The value of dims should be an array where each item is a string. For a very relevant example of how to specify this in jsonschema, see shape in the linked section of code.

Motivation

Numpy arrays have a shape with N entries, where N is ndim, the number of dimensions in the array. Event Descriptors have a corresponding shape property, declaring the shape of the data in some field of an Event.

Xarrays add string labels to the array axes, called dims ("dimensions"). It would be good if ophyd Devices that return array data could suggest names for the dimensions, like dims=('exposure', 'y', 'x') for AreaDetector. It would be natural to put this in the data_key next to shape.

Then databroker can use this information to label the dims in an xarray correspondingly and make the data easier for the scientist to interpret.
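
For illustration, a data_key carrying the proposed property might look like this (values are hypothetical):

data_keys = {
    'detector_image': {
        'source': 'PV:XF:11ID...',  # hypothetical source
        'dtype': 'array',
        'shape': [10, 2048, 2048],
        'dims': ['exposure', 'y', 'x'],  # the proposed optional property
    }
}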

Projection Schema could provide stronger validation

Several parts of the projection schema are mutually exclusive. For example, you can have either type=computed or type=linked. Currently, the schema allows linked types with computed attributes. It would be nice if the schema provided stronger validation for this case. There might be additional opportunities for stronger validation.
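
For example, jsonschema's oneOf can make the two cases mutually exclusive (a sketch; the property names are illustrative, not the actual projection schema):

projection_fragment = {
    "oneOf": [
        {
            # 'computed' projections must carry calculation details...
            "properties": {"type": {"const": "computed"},
                           "calculation": {"type": "object"}},
            "required": ["type", "calculation"],
        },
        {
            # ...while 'linked' projections carry a location instead.
            "properties": {"type": {"const": "linked"},
                           "location": {"type": "string"}},
            "required": ["type", "location"],
        },
    ]
}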

Revisit schemas to make validation stricter.

Our jsonschemas had never actually been tested on our real Documents, either on the ophyd side or the metadatastore side, and I took some liberties to get things going, because less-than-perfectly-strict validation is better than no validation.

Here's what I remember removing:

  • Validate the type of shape to be an array or null. If I understood correctly, it seems that tuples don't get interpreted as arrays. @tacaswell
  • Validate the details of some of the nested Dynamic Documents.

There might be others that I forgot, so to be thorough we should diff these against the documented ones. I also updated a bunch of names, so our docs should be updated accordingly (see #128).

Proposed Schema Changes

This is a long-term proposal, unrelated to databroker 1.0 or any of the upcoming releases.

The following changes have been previously proposed and discussed at various times. Many are mutually un-coupled and could be considered separately. At some point we should decide which ones we want to do and execute them all in one step, tagging event-model 2.0.0.

Datum

  1. Add a time key.
  2. Add an index key with a unique monotonically increasing integer.
  3. If (2) is accepted, remove datum_id which would no longer be needed because (resource_uid, index) would be a unique key. Event documents would still refer to a Datum via a construction like the current datum_id, i.e. a string like {resource_uid}/{index}
  4. Remove some generality in favor of simplicity and efficiency: assume Datums are 1D slices. (All the ones we currently have are, and it's hard to imagine a case that wouldn't be. Even if the external asset is "paragraphs in a Word document", you can slice on that.) Drop datum_kwargs and replace them with slice fields: start, stop, and step. All Datum documents would now have the same fields, and handlers could be simplified.

Change (4) might justify creating a new document (called "Partition", in view of its role as a 1-D slice?) and deprecating Datum rather than making major breaking modifications to Datum.
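
Under changes (1)-(4), a Datum (or "Partition") document might look like this sketch (field values are illustrative):

datum = {
    'resource': '1c43af30-27db-437e-83c0-38b1cc528f06',
    'time': 1565523698.1,              # (1)
    'index': 7,                        # (2); with (3), replaces datum_id
    'start': 7, 'stop': 8, 'step': 1,  # slice fields from (4)
}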

Resource

  1. Add version, referring to the version of the spec, with an associated schema maintained with the handler. This will get a lot of use if (4) is accepted because all the handlers will be simplified.

Event

  1. Similar to (2), add an index key with a unique monotonically increasing integer.
  2. Similar to (3), if (6) is accepted, remove uid which would no longer be needed because (descriptor, index) would be a unique key.

add interlace_event_pages functions

Planning on moving these functions over from intake-bluesky.
There are two functions:

  • interlace_event_pages: which interlaces event_page generators by yielding events in timestamp order.
  • interlace_event_page_chunks: which interlaces event_page generators by yielding event_pages of chunk_size in first-timestamp order.
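
A sketch of the first function, assuming each input generator is internally time-ordered (the actual implementations live in intake-bluesky); it leans on event_model.unpack_event_page and heapq.merge:

import heapq

from event_model import unpack_event_page

def interlace_event_pages(*event_page_gens):
    # One lazily-unpacked Event stream per input generator.
    streams = (
        (event for page in gen for event in unpack_event_page(page))
        for gen in event_page_gens
    )
    # heapq.merge lazily merges already-sorted iterables by key.
    yield from heapq.merge(*streams, key=lambda event: event['time'])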

Make RunRouter pass 'start' and 'descriptor' to callbacks

When constructing a callback in a factory function for the RunRouter, such as

from event_model import DocumentRouter, RunRouter

class SomeCallback(DocumentRouter):
    def start(self, doc):
        self._start_doc = doc

    def event(self, doc):
        # Do something with self._start_doc.
        ...

def factory(name, doc):
    cb = SomeCallback()
    cb(name, doc)
    return [cb], []

rr = RunRouter([factory])

it is easy to forget cb(name, doc), i.e.

def factory(name, doc):
    cb = SomeCallback()
    return [cb], []

which typically leads to a secondary error (AttributeError: 'SomeCallback' object has no attribute '_start_doc' in this example) that does not make it at all clear that the mistake lies in factory.

The same sort of problem can occur for descriptor in RunRouter sub-factories.

I have made this mistake myself many times. Should we try to provide better errors? We could flip a flag when the start doc goes through and keep a cache of descriptor uids we have seen to provide better errors when we are missing a start or descriptor doc. It's not obvious to me where would be the best place to do it. Options:

  • Put a check in DocumentRouter.__call__
  • Put a check in DocumentRouter._dispatch.
  • Put a check in the base methods of start and descriptor. This is complicated by the fact that for some methods (event, event_page, datum, datum_page) subclasses must not return super() because the base class returns NotImplemented. Thus, it would be strange to advise, "Be sure to call super() in some document methods but not in others."

Helpful warning makes development difficult

A helpful warning was added to RunRouter.start(...) when its behavior was changed to pass the start document to each factory. Exceptions resulting from the new calls to callback('start', doc) trigger the warning and are squashed.

As I work on a new DocumentRouter subclass I wish the legitimate exceptions I am generating in start(...) would propagate all the way out and whack the interpreter just like all the other exceptions I generate.

Desired Behavior

I would like the exceptions described above to be re-raised in addition to triggering the warning.

Add option to *not* fill in place, and later make it default.

In the early days of databroker, we made the design decision that filling should mutate Event documents in place, replacing each datum ID with the data it references. We now view this as a mistake. Consider the situation where we have three consumers subscribed to the RunEngine:

  1. Writes documents to MongoDB
  2. Fills documents (mutates in place)
  3. Writes image data to TIFF files

Note that (1) assumes unfilled documents and (3) assumes filled documents. If these are subscribed in any other order, these expectations will not be met, and things will fail badly. It would be safer for the filling process to return a copy that can be explicitly passed on to downstream consumers that expect filled data.

Filling in place does have an advantage: by avoiding the copying, it can be faster. But it would be safer to make this an opt-in behavior.

I propose adding an inplace argument to Filler.__init__. In the first release, the default value will be None, a signal that the user has not told us what they want. If it is found to be None, a warning will advise the user to explicitly specify True or False, but behavior will remain backward-compatible. (That is, it will still fill in place.) In a later release, we will change the default from None to False. This will break any existing code that has not been updated---hence the warning for a release or two.
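
A sketch of the proposed signature, with everything except the inplace handling elided:

import copy
import warnings

class Filler:
    def __init__(self, handler_registry, inplace=None):
        if inplace is None:
            warnings.warn(
                "In a future release Filler will no longer fill in place "
                "by default. Specify inplace=True or inplace=False.")
            inplace = True  # current (backward-compatible) behavior
        self._inplace = inplace
        self.handler_registry = handler_registry

    def event(self, doc):
        if not self._inplace:
            doc = copy.deepcopy(doc)  # fill a copy, leave the original alone
        # ... fill datum references in doc['data'] here ...
        return doc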

Forbid forward slash in names.

Rationale: Data keys with forward slashes would make exporting to HDF5 files difficult. As far as we know no user has actually tried to violate this constraint, but it seems worth putting it into the specification defensively. We already forbid dots so this should be a simple change. Can you tackle this, @licode?

Add key validation

We need to make sure that keys, when we allow them to be user-supplied, are valid keys for mongo (no '.'). This should be extended to also ban ',' for SQL / CSV-related reasons.

The json schema logic for this currently lives in event_descriptor and should be propagated out to event, event_page, and datum_page (and maybe bulk_event and bulk_datum).

attn @prjemian .
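
A sketch of the kind of property-name constraint involved, banning '.', '/', and ',' in one regex (the pattern and its placement are illustrative; the real rule lives in the event_descriptor schema):

data_keys_schema = {
    "type": "object",
    # Property names may not contain '.', '/', or ','.
    "patternProperties": {"^[^./,]+$": {"type": "object"}},
    "additionalProperties": False,
}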

Add convenience methods to DocumentRouter

I propose some convenience methods for DocumentRouter along these lines:

  • DocumentRouter.start_doc()
    return the start document or raise an Exception if the start document has not been seen yet
  • DocumentRouter.descriptor_for_event(event_doc)
    return the descriptor document associated with the given event document from a dictionary of uid --> descriptor
  • DocumentRouter.stream_name(event_doc)
    given an event document return its stream name by looking up the event's descriptor in a dictionary of uid --> descriptor

I find myself reimplementing this functionality virtually every time I subclass DocumentRouter.
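
A sketch of what these could look like, written here as a subclass for illustration:

from event_model import DocumentRouter

class CachingRouter(DocumentRouter):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._start_doc = None
        self._descriptors = {}  # uid -> descriptor document

    def start(self, doc):
        self._start_doc = doc

    def descriptor(self, doc):
        self._descriptors[doc['uid']] = doc

    def start_doc(self):
        if self._start_doc is None:
            raise RuntimeError("No 'start' document has been seen yet.")
        return self._start_doc

    def descriptor_for_event(self, event_doc):
        return self._descriptors[event_doc['descriptor']]

    def stream_name(self, event_doc):
        return self.descriptor_for_event(event_doc).get('name')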

Document design strategy behind Tomography docstream

At the coalition call today, we discussed creating a documentation issue and PR to lay out the proposed design that we came up with for tiling / mosaic use cases in tomography. (FWIW, there's no documentation label available, so marked as enhancement.)

Document assets

The assets documentation in databroker has become so out of date that it's borderline anti-helpful. It should be updated and moved here, now that minting and inserting Resource documents is a more general concern across the suite of libraries, not just a databroker thing.

How to wrap analyzed data back to event model

Hi,
I am trying to figure out a way to wrap analyzed data (let's say a new xarray, with its own attrs, that is based on data from an original event).
Currently I am unpacking msgpack (using databroker), manipulating the data, and saving a new msgpack / netCDF (xarray) file. It seems better to add to an existing event-model msgpack (also to maintain the format in which data moves around).

Thanks,
Yevgeny
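
One possible pattern (a sketch, not an answer from the original thread): compose a brand-new document stream for the analyzed data with event_model's compose_* helpers, recording provenance in the start metadata. Here corrected and original_start_uid stand in for your analysis output.

import time
import numpy as np
import event_model

original_start_uid = '4fa42813-0b56-4eee-a577-2125cd9f6c24'
corrected = np.zeros((1024, 1024))

run_bundle = event_model.compose_run(metadata={'analysis_of': original_start_uid})
desc_bundle = run_bundle.compose_descriptor(
    name='primary',
    data_keys={'corrected': {'source': 'analysis', 'dtype': 'array',
                             'shape': list(corrected.shape)}})
event_doc = desc_bundle.compose_event(
    data={'corrected': corrected},
    timestamps={'corrected': time.time()})
stop_doc = run_bundle.compose_stop()
# run_bundle.start_doc, the descriptor, the event, and stop_doc can then be
# serialized, e.g. with suitcase-msgpack.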

Add 'chunks' to data_key, beside 'shape' and 'dims'

Just as devices can now provide descriptive names for each of their dimensions (#26), it might make sense for them to be able to suggest a reasonable chunking of the data along each dimension. We'll need to inject a chunk size somewhere whenever data is returned in dask objects, and it seems plausible that detectors might be able to suggest a better default than some generic code in intake/databroker could.

Of course, the best chunk size depends on the algorithm being applied, and consumer code can always override this default by re-chunking before pulling the data.
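
Extending the hypothetical data_key from the dims proposal (#26) above:

data_keys = {
    'detector_image': {
        'source': 'PV:XF:11ID...',
        'dtype': 'array',
        'shape': [10, 2048, 2048],
        'dims': ['exposure', 'y', 'x'],
        'chunks': [1, 2048, 2048],  # proposed: one exposure per chunk
    }
}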

Discussion: Handlers that take datum pages?

Currently, if you say, "I want all the data for this vector of N datum_ids," we have to do N function calls to handler.__call__, which only knows how to process one datum at a time. (We do at least avoid N database hits by pre-fetching all the Datum documents for a given Resource when you ask for one of them.)

Do we need to think about extending the "Handler API" to deal with a vector of datum kwargs?
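
A hypothetical sketch of what a page-aware handler could look like; the method name call_page and the .npy-backed storage are illustrative, not a settled API:

import numpy as np

class VectorizedNpyHandler:
    def __init__(self, resource_path):
        # One .npy file holding a stack of frames, memory-mapped once.
        self._stack = np.load(resource_path, mmap_mode='r')

    def __call__(self, point_number):
        # Current API: one datum at a time.
        return self._stack[point_number]

    def call_page(self, point_number):
        # Batched: point_number is a list, as it appears column-wise in a
        # DatumPage's datum_kwargs; one fancy-index replaces N calls.
        return self._stack[list(point_number)]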

Use consistent terms: run_start -- there is no such document

During review of PR #94, I raised this point

When this was all new to me, the two terms "Event Descriptor" and "descriptor" as a document schema did not jump out as being the same thing. Run Start/start and Run Stop/stop were similar but somehow easier to grasp. Gradually, this struck home.

Is this the time to make this crystal clear for the neophytes? The [bluesky project applies new definitions to otherwise familiar terms and this] is daunting to new users.

The suggestion was to open a new PR after #94 is merged.

Expected Behavior

If there is a document schema such as start, it should be referred to consistently as such, not as run_start or Run Start. There is no run_start document.

Current Behavior

The documentation is inconsistent in how it refers to the various document types (schemas).

Possible Solution

Editing and revision through PR process. All the document types should be reviewed for consistent reference. The exact name of the document type should be used, such as run 'start' (rather than 'run_start').

Steps to Reproduce (for bugs)

n/a

Context

As stated, the inconsistent references to the same document schema created confusion and delay for me when this material was all new, when trying to write my first callback routine.

Seeing the revisions of PR #94 (especially when gathering the resource and datum descriptions with the others) gave me the idea this was the time to note the inconsistent handling.

Your Environment

n/a

Add validate_order function.

This work was already started in #97 but in keeping with our new process of opening issues first to nail down the design, I'd like to move discussion here for the moment.

Scope should include:

  • All foreign keys are preceded by the documents they reference. That means:

    • RunStart before the EventDescriptors that reference them
    • EventDescriptors before the Events that reference them
    • Datum before the Events that reference them
    • Resources before the Datum that reference them

    Note that this does not mean that a RunStart must be first. For example, Resource is allowed to precede a RunStart.

  • If a RunStop doc is present, it is the last doc. This is important because we sometimes use it as a signal to clean up resources (like Serializer.close()).

  • Event[Page]s within each stream are in time order.

  • Event[Page]s across streams are in time order up to the time resolution of a Page. That is, if we denote event_page['time'][0] as a_i ("a initial") and event_page['time'][-1] as a_f ("a final") for a given EventPage a, then if b follows a, b_f >= a_i. In English, each EventPage's highest time must be greater than or equal to the preceding EventPages' lowest times.

In terms of implementation, it would be good to avoid caching the entire run. It is possible to enforce these constraints without retaining every document in memory at once.
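
A rough sketch of the streaming shape this could take, covering the referential-integrity and ordering rules above without caching whole documents (EventPage handling is elided):

def validate_order(documents):
    # documents: iterable of (name, doc) pairs.
    seen = set()       # uids of start/descriptor/resource docs and datum_ids
    last_time = {}     # descriptor uid -> time of last Event seen
    stopped = False
    for name, doc in documents:
        if stopped:
            raise ValueError("Document encountered after the RunStop.")
        if name == 'start':
            seen.add(doc['uid'])
        elif name == 'descriptor':
            if doc['run_start'] not in seen:
                raise ValueError("EventDescriptor precedes its RunStart.")
            seen.add(doc['uid'])
        elif name == 'resource':
            seen.add(doc['uid'])
        elif name == 'datum':
            if doc['resource'] not in seen:
                raise ValueError("Datum precedes its Resource.")
            seen.add(doc['datum_id'])
        elif name == 'event':
            if doc['descriptor'] not in seen:
                raise ValueError("Event precedes its EventDescriptor.")
            if doc['time'] < last_time.get(doc['descriptor'], float('-inf')):
                raise ValueError("Events in a stream are out of time order.")
            last_time[doc['descriptor']] = doc['time']
        elif name == 'stop':
            stopped = True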

Mixed path styles when operating cross-platform (windows)

Expected Behavior

Event model should resolve joining paths of different styles when joining the root to the resource_path.

Current Behavior

Currently, mixed styles result where, for example, a Windows root appended to a POSIX resource_path produces a path containing both / and \\.

Possible Solution

Inspiration from ophyd (bluesky/ophyd#703) seems relevant; likely some usage of pathlib to compose paths better.
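
A sketch of one possible shape for this, assuming the Resource document's path_semantics field tells us how to parse resource_path (compose_path is a hypothetical helper name):

from pathlib import Path, PurePosixPath, PureWindowsPath

def compose_path(root, resource_path, path_semantics='posix'):
    # Parse resource_path with the semantics declared in the Resource
    # document, then re-join its parts onto the local root using the local
    # OS's conventions. Assumes resource_path is relative, per the spec.
    pure = {'posix': PurePosixPath, 'windows': PureWindowsPath}[path_semantics]
    return Path(root).joinpath(*pure(resource_path).parts)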

Steps to Reproduce (for bugs)

  1. Have data collected from EPICS AD on linux
  2. Access data from databroker on windows
  3. Check path composition by breaking at
    resource_path = os.path.join(root, resource_path)

Context

This issue may affect operation of databroker on Windows, depending on whether the handler accepts mixed path styles.

Your Environment

Windows; databroker prerelease 1.0.0b2

(discussed in coalition call on 11/8)

Identify mismatched keys in "event_model.EventModelValidationError: These sets of keys must match:"

It would be nice if this error message included a list of the mismatched keys:

File "/opt/bluesky_workers/suitcase_worker.py", line 107, in event
event = descriptor_bundle.compose_event(**new_doc)
File "/opt/conda_envs/analysis-2019-3.0-rsoxs/lib/python3.7/site-packages/event_model/init.py", line 1293, in compose_event
data.keys(), timestamps.keys(), descriptor['data_keys'].keys()))
event_model.EventModelValidationError: These sets of keys must match:
event['data'].keys(): dict_keys(['Wide Angle CCD Detector_cam_acquire_time', 'Wide Angle CCD Detector_cam_bin_x', 'Wide Angle CCD Detector_cam_bin_y', 'Wide Angle CCD Detector_cam_min_x', 'Wide Angle CCD Detector_cam_min_y', 'Wide Angle CCD Detector_cam_model', 'Wide Angle CCD Detector_cam_shutter_close_delay', 'Wide Angle CCD Detector_cam_shutter_open_delay', 'Wide Angle CCD Detector_cam_temperature', 'Wide Angle CCD Detector_cam_temperature_actual', 'Wide Angle CCD Detector_cam_trigger_mode', 'Wide Angle CCD Detector_cam_adc_speed', 'Wide Angle CCD Detector_cam_hot_side_temp', 'Wide Angle CCD Detector_cam_sync', 'Wide Angle CCD Detector_image', 'Wide Angle CCD Detector_stats1_total'])
event['timestamps'].keys(): dict_keys(['Wide Angle CCD Detector_cam_acquire_time', 'Wide Angle CCD Detector_cam_bin_x', 'Wide Angle CCD Detector_cam_bin_y', 'Wide Angle CCD Detector_cam_min_x', 'Wide Angle CCD Detector_cam_min_y', 'Wide Angle CCD Detector_cam_model', 'Wide Angle CCD Detector_cam_shutter_close_delay', 'Wide Angle CCD Detector_cam_shutter_open_delay', 'Wide Angle CCD Detector_cam_temperature', 'Wide Angle CCD Detector_cam_temperature_actual', 'Wide Angle CCD Detector_cam_trigger_mode', 'Wide Angle CCD Detector_cam_adc_speed', 'Wide Angle CCD Detector_cam_hot_side_temp', 'Wide Angle CCD Detector_cam_sync', 'Wide Angle CCD Detector_image', 'Wide Angle CCD Detector_stats1_total'])
descriptor['data_keys'].keys(): dict_keys(['Wide Angle CCD Detector_cam_acquire_time', 'Wide Angle CCD Detector_cam_bin_x', 'Wide Angle CCD Detector_cam_bin_y', 'Wide Angle CCD Detector_cam_min_x', 'Wide Angle CCD Detector_cam_min_y', 'Wide Angle CCD Detector_cam_model', 'Wide Angle CCD Detector_cam_shutter_close_delay', 'Wide Angle CCD Detector_cam_shutter_open_delay', 'Wide Angle CCD Detector_cam_temperature', 'Wide Angle CCD Detector_cam_temperature_actual', 'Wide Angle CCD Detector_cam_trigger_mode', 'Wide Angle CCD Detector_cam_adc_speed', 'Wide Angle CCD Detector_cam_hot_side_temp', 'Wide Angle CCD Detector_cam_sync', 'Wide Angle CCD Detector_image', 'Wide Angle CCD Detector_stats1_total', 'Wide Angle CCD Detector_image_is_background_subtracted'])
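
In this example the only offender is 'Wide Angle CCD Detector_image_is_background_subtracted', buried at the end of the third listing. A sketch of a helper that would surface exactly that:

def describe_key_mismatch(data_keys, timestamp_keys, descriptor_keys):
    # Report the symmetric differences between the three key sets instead
    # of dumping them whole.
    sets = {
        "event['data']": set(data_keys),
        "event['timestamps']": set(timestamp_keys),
        "descriptor['data_keys']": set(descriptor_keys),
    }
    union = set.union(*sets.values())
    lines = []
    for label, keys in sets.items():
        missing = union - keys
        if missing:
            lines.append(f"{label} is missing: {sorted(missing)}")
    return "\n".join(lines)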

Considering a 'fill_item' API

For some work over on intake-bluesky, it has been proposed to add something like Filler.fill_item(datum_id) to the Filler API. This is reminiscent of filestore.retrieve(datum_id) in the original "file store" implementation of filling, which proved problematic. I will summarize some thoughts on this from conversations with @gwbischof and @tacaswell:

Resource Management

One problem with filestore.retrieve(datum_id) is that the caches involved were global to the filestore module. The Filler is a class (and a context manager to boot). It would not have this problem.

Keeping Context Next to Data

Another problem with filestore.retrieve(datum_id) is that it returns a bare array, rather than a dict that keeps labels next to their values. This is a usability issue---it's not unsolvable by any means, but it puts some burden on the caller to keep track of what is what.

As noted by @gwbischof, in the context of a dask-based filler, dask would track this state for us. Filling is generally such a low-level thing that we probably shouldn't worry about this too much.

Laying a Path For Batched Filling

Because the handler API only accepts one Datum at a time, filling blocks of data unavoidably involves a hot loop over a function call handler(**datum_kwargs). At some point we will need to extend the handler API to enable more efficient filling, along the lines of handler.fill_a_bunch_of_stuff_in_one_call(???). There is not an obvious way to do that yet, but we'll need to figure it out eventually.

In @tacaswell's judgement, adding new API with a signature like fill_item(datum_id) is likely to make such future changes harder. We should focus on filling pages/chunks as a unit and then accessing the specific data we need by indexing into the page/chunk. I find that argument convincing.

Alternatives to fill_item

Add fill_event and fill_event_page.

This makes it possible to use a Filler to specifically fill certain columns in certain events, without breaking its utility as a normal DocumentRouter-based callback.

class Filler(...):
    def fill_event(self, doc, include=None, exclude=None):
        # Move the guts of `Filler.event` into here.

    def fill_event_page(self, doc, include=None, exclude=None):
        # Loop over fill_event. In the future, when the handler API is extended,
        # we may be able to actually fill event pages efficiently without unpacking.

We can then rewrite event and event_page to use these methods.

Clear handler_cache when the handler class for a spec is changed.

While we were debugging a beamline issue together, @mrakitin pointed out that registering a handler for some spec should cause any entries in the Filler._handler_cache associated with that spec to be cleared. Otherwise, if the user tried to update the Filler.handler_registry and then re-access data, they will confusingly be handed instances of the old class.

One way to implement this would be to make handler_registry, which is currently just a dict, a special dict subclass that can do extra things on __setitem__.
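
A sketch of that idea; it assumes (hypothetically) that Filler._handler_cache is keyed by (spec, resource_uid) tuples:

class HandlerRegistry(dict):
    # A dict that evicts stale handler instances when a spec's handler
    # class is re-registered.
    def __init__(self, filler):
        super().__init__()
        self._filler = filler

    def __setitem__(self, spec, handler_class):
        stale = [key for key in self._filler._handler_cache if key[0] == spec]
        for key in stale:
            del self._filler._handler_cache[key]
        super().__setitem__(spec, handler_class)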

Root slashes in resource data prevent complete file handling

This is a little bit of pilot error, but it was easy to do and took a long time to debug.

I'm writing an ingestor to import rsoxs scattering data into databroker. When I composed the resource, I accidentally placed a forward slash in front of my resource_path below:

        ccd_resource = run_bundle.compose_resource(
            spec='FITS',
            root=images_root,
            resource_path="/foo/bar.fits",
            resource_kwargs={})

This prevents the file from being found, because os.path.join ignores the first argument when it sees the leading forward slash in the second:

resource_path = os.path.join(root, resource_path)
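
The behavior is easy to demonstrate (paths taken from the example above):

>>> import os
>>> os.path.join('/DATA/images', '/foo/bar.fits')
'/foo/bar.fits'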

Should this be documented? Should there be a check for a leading forward slash in resource_path that removes it? I dunno. But even if we don't do anything, this issue might help someone in the future.

Reinstate compatibility with jsonschema 2.x

In #79 we made bluesky compatible with the new jsonschema 3.x API. We also chose to require that API and become incompatible with 2.x.

Unfortunately, pip's dependency solver (unlike conda's) does not guarantee satisfying all the dependencies across steps; it just warns when it is going to ignore a pin. It often happens that our jsonschema >=3 pin gets ignored.

It would be simple enough to do a version check on jsonschema at import time and use old-style validation code for jsonschema 2.x. It adds some complexity to our codebase, but given that this issue has come up a couple times and can be puzzling if you don't know what the problem is, I think it's worth considering.
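
A sketch of that version check (the helper name is illustrative):

import jsonschema

JSONSCHEMA_2 = int(jsonschema.__version__.split('.')[0]) < 3

def validate_doc(doc, schema):
    if JSONSCHEMA_2:
        # jsonschema 2.x: old-style validation call.
        jsonschema.validate(doc, schema)
    else:
        jsonschema.validate(doc, schema, cls=jsonschema.Draft7Validator)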
