
GeoCombine

CI | Coverage Status | Gem Version

A Ruby toolkit for managing geospatial metadata, including:

  • tasks for cloning, updating, and indexing OpenGeoMetadata metadata
  • library for converting metadata between standards

Installation

Add this line to your application's Gemfile:

gem 'geo_combine'

And then execute:

$ bundle install

Or install it yourself as:

$ gem install geo_combine

Usage

Converting metadata

# Create a new ISO19139 object
> iso_metadata = GeoCombine::Iso19139.new('./tmp/opengeometadata/edu.stanford.purl/bb/338/jh/0716/iso19139.xml')

# Convert ISO to GeoBlacklight
> iso_metadata.to_geoblacklight

# Convert that to JSON
> iso_metadata.to_geoblacklight.to_json

# Convert ISO (or FGDC) to HTML
> iso_metadata.to_html
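
FGDC metadata can be handled the same way. A minimal sketch, assuming the gem's GeoCombine::Fgdc class and a hypothetical local file path:

# Create a new FGDC object
> fgdc_metadata = GeoCombine::Fgdc.new('./tmp/opengeometadata/edu.example/fgdc.xml')

# Convert FGDC to HTML
> fgdc_metadata.to_html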

Migrating metadata

You can use the GeoCombine::Migrators to migrate metadata from one schema to another.

Currently, the only migrator is GeoCombine::Migrators::V1AardvarkMigrator which migrates from the GeoBlacklight v1 schema to the Aardvark schema

# Load a record in geoblacklight v1 schema
record = JSON.parse(File.read('.spec/fixtures/docs/full_geoblacklight.json'))

# Migrate it to Aardvark schema
GeoCombine::Migrators::V1AardvarkMigrator.new(v1_hash: record).run

Some fields cannot be migrated automatically. To handle the migration of collection names to IDs when migrating from v1 to Aardvark, you can provide a mapping of collection names to IDs to the migrator:

# You can store this mapping as a JSON or CSV file and load it into a hash
id_map = {
  'My Collection 1' => 'institution:my-collection-1',
  'My Collection 2' => 'institution:my-collection-2'
}

GeoCombine::Migrators::V1AardvarkMigrator.new(v1_hash: record, collection_id_map: id_map).run
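
If you keep the mapping in a file, here is a minimal sketch of loading it from a two-column CSV (the filename collection_id_map.csv and the column order are assumptions, not part of the gem):

require 'csv'

# Rows look like: My Collection 1,institution:my-collection-1
id_map = CSV.read('collection_id_map.csv').to_h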

OpenGeoMetadata

Logging

Some of the tools and scripts in this gem use Ruby's Logger class to print information to $stderr. By default, the log level is set to Logger::INFO. For more verbose information, you can set the LOG_LEVEL environment variable to DEBUG:

$ LOG_LEVEL=DEBUG bundle exec rake geocombine:clone

Clone OpenGeoMetadata repositories locally

$ bundle exec rake geocombine:clone

This will clone all edu.*, org.*, and uk.* OpenGeoMetadata repositories into ./tmp/opengeometadata. The location of the cloned repositories can be configured using the OGM_PATH environment variable.

$ OGM_PATH='my/custom/location' bundle exec rake geocombine:clone

You can also specify a single repository:

$ bundle exec rake geocombine:clone[edu.stanford.purl]

Note: If you are using zsh, you will need to use escape characters in front of the brackets:

$ bundle exec rake geocombine:clone\[edu.stanford.purl\]

Update local OpenGeoMetadata repositories

$ bundle exec rake geocombine:pull

Runs git pull origin master on all cloned repositories in ./tmp/opengeometadata (or the custom path configured with the OGM_PATH environment variable).

You can also specify a single repository:

$ bundle exec rake geocombine:pull[edu.stanford.purl]

Note: If you are using zsh, you will need to use escape characters in front of the brackets:

$ bundle exec rake geocombine:pull\[edu.stanford.purl\]

Index GeoBlacklight documents

To index into Solr, GeoCombine requires a Solr instance that is running the GeoBlacklight schema:

$ bundle exec rake geocombine:index

If Blacklight is installed in the Ruby environment and a Solr index is configured, the rake task will use the Solr index configured in the Blacklight application (this is the case when invoking GeoCombine from your GeoBlacklight installation). If Blacklight is unavailable, the rake task will try to find a Solr instance running at http://localhost:8983/solr/blacklight-core.

You can also set the Solr instance URL using SOLR_URL:

$ SOLR_URL=http://www.example.com:1234/solr/collection bundle exec rake geocombine:index

Harvesting and indexing documents from GeoBlacklight sites

GeoCombine provides a Harvester class and rake task to harvest and index content from GeoBlacklight sites (or any site that follows the Blacklight API format). Given that the configurations can change from consumer to consumer and site to site, the class provides a relatively simple configuration API. This can be configured in an initializer, a wrapping rake task, or any other Ruby context where the rake task or class is invoked.

bundle exec rake geocombine:geoblacklight_harvester:index[YOUR_CONFIGURED_SITE_KEY]

Harvester configuration

Only the sites themselves are required to be configured, but various options can optionally be supplied to modify the harvester's behavior.

GeoCombine::GeoBlacklightHarvester.configure do
  {
    commit_within: '10000',
    crawl_delay: 1, # All sites
    debug: true,
    SITE1: {
      crawl_delay: 2, # SITE1 only
      host: 'https://geoblacklight.example.edu',
      params: {
        f: {
          dct_provenance_s: ['Institution']
        }
      }
    },
    SITE2: {
      host: 'https://geoportal.example.edu',
      params: {
        q: '*'
      }
    }
  }
end

Crawl Delays (default: none)

Crawl delays can be configured (in seconds) either globally for all sites or on a per-site basis. This causes a delay of that many seconds between each page of search results (note that Blacklight 7 requires many requests per results page; the delay is applied only once per page of results).

Solr's commitWithin (default: 5000 milliseconds)

Solr's commitWithin option can be configured (in milliseconds) by passing a value under the commit_within key.

Transforming Documents

You may need to transform documents that are harvested for various purposes (removing fields, adding fields, omitting a document altogether, etc.). You can configure some Ruby code (a proc) that will take the document in, transform it, and return the transformed document. By default the indexer will remove the score, timestamp, and _version_ fields from the harvested documents. If you provide your own transformer, you'll likely want to remove these fields in addition to the other transformations you provide.

GeoCombine::GeoBlacklightIndexer.document_transformer = -> (document) do
  # Removes "bogus_field" from the content we're harvesting
  # in addition to some other solr fields we don't want
  %w[_version_ score timestamp bogus_field].each do |field|
    document.delete(field)
  end

  document
end

Tests

To run the tests, use:

$ bundle exec rake spec

Contributing

  1. Fork it ( https://github.com/[my-github-username]/GeoCombine/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Contributors

akharris, cbeer, dl-maura, drhardy, eliotjordan, ewlarson, hackartisan, hudajkhan, jkeck, jrgriffiniii, karenmajewicz, kgjenkins, kimdurante, mejackreed, mnyrop, thatbudakguy, tpendragon

Issues

Support ingesting metadata using either geoblacklight.json or <ID>.json

Currently, OGM allows two styles of naming. One is to have all of your metadata in nested folders (usually some sort of pairtree), with each file simply named geoblacklight.json (see Stanford). However, a fair number of institutions don't use this convention and instead have a few directories of files named <ID>.json (see Harvard).

This involves allowing any '*.json' file, but explicitly excluding layers.json (which is a standard file under the Stanford model).
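
A minimal sketch of that file selection, assuming a hypothetical repo_path pointing at a cloned repository (not the gem's actual implementation):

# Pick up geoblacklight.json and <ID>.json alike, but skip layers.json
docs = Dir.glob(File.join(repo_path, '**', '*.json'))
          .reject { |path| File.basename(path) == 'layers.json' }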

GeoCombine should be able to convert from metadata schemas to a Solr schema

Ingest ISO19139 or FGDC and validate them.
Enable to_geoblacklight and validate for ISO19139 and FGDC

# Input an ISO 19139 record
iso_record = GeoCombine::Record.new(iso19139_metadata)
# Validate the record
iso_record.validate # returns true
# Convert the record
iso_record.to_geoblacklight # Converts a record to GeoBlacklight-Schema

# Input a FGDC record
fgdc_record = GeoCombine::Record.new(fgdc_metadata)
# Validate the record
fgdc_record.validate # returns true
# Convert the record
fgdc_record.to_geoblacklight # Returns an object with formatted fields, could do to_json or to_xml

Don't know how to build task 'geocombine:index'

Hello GeoCombine Dev

Summary

Sitting at the Geo4LibCamp 2017 working through a tutorial on getting bulk data from geocombine, I ran into the issue above.

  • Downloaded zip file from Stanford
  • Unzipped the file
  • Added gem geo_combine to my Gemfile
  • Ran bundle install
  • Tried running bundle exec rake geocombine:index and hit the Don't know how to build task 'geocombine:index' error.
  • grepped for geocombine rake tasks and did not find any
be rake -T| grep geo_combine
be rake -T| grep geocombine

Skip harvesting of archived repositories

Some OpenGeoMetadata repositories contain out-of-date records and have no current contact person, so they will be archived on GitHub to indicate they're no longer in use.

We want GeoCombine to avoid harvesting these repositories entirely, since the records should no longer be used.

Search dc_subject for geom_types

Should be implemented as a GeoBlacklight md enhancement

Pseudo code:

if !'layer_geom_type_s'
  search dc_subject for terms containing "point", "polygon" etc
end
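
A minimal Ruby sketch of that enhancement, assuming a hypothetical doc hash of GeoBlacklight v1 fields (dc_subject_sm and layer_geom_type_s per the v1 schema):

GEOM_TERMS = { 'point' => 'Point', 'line' => 'Line', 'polygon' => 'Polygon' }.freeze

unless doc['layer_geom_type_s']
  subjects = Array(doc['dc_subject_sm']).map(&:downcase)
  # Use the first geometry term that appears in any subject value
  match = GEOM_TERMS.find { |term, _| subjects.any? { |s| s.include?(term) } }
  doc['layer_geom_type_s'] = match.last if match
end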

Enhance the fgdc2HTML XSLT

  • There are a few errors in the fgdc2HTML file
  • Transform some additional elements
  • More completely test the output

GBL Harvest and Index - Fails due to solr_bboxtype__ fields

Since GeoBlacklight v2.0.0, and the introduction of our bbox overlap ratio feature, Solr will automatically generate fields and values for these four coordinates:

  • solr_bboxtype__minX
  • solr_bboxtype__minY
  • solr_bboxtype__maxX
  • solr_bboxtype__maxY

These fields are returned during the GeoBlacklightHarvester's harvesting. But, you cannot pass these fields/values back into Solr during the index, without seeing Solr errors like:

RSolr::Error::Http (RSolr::Error::Http - 400 Bad Request)
Error: {
  "responseHeader":{
    "status":400,
    "QTime":869},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","java.lang.IllegalArgumentException"],
    "msg":"Exception writing document id stanford-cg357zz0321 to the index; possible analysis error: DocValuesField \"solr_bboxtype__minX\" appears more than once in this document (only one value is allowed per field)",
    "code":400}}

The fix is simple. We just need to delete these fields via the document_transformer.

      def document_transformer
        @document_transformer || ->(document) do
          document.delete('_version_')
          document.delete('score')
          document.delete('timestamp')
          document.delete('solr_bboxtype__minX')
          document.delete('solr_bboxtype__minY')
          document.delete('solr_bboxtype__maxX')
          document.delete('solr_bboxtype__maxY')
          document
        end
      end

Invalid JSON output

JSON output generated by iso2geoBL.xsl is invalid as there is mixed XML content.

Specify arbitrary solr location for "geocombine index"

I've installed the "full" stack using the docker images provided here running in a single Amazon EC2 instance. I've found the address of the container running solr (listed in the geoblacklight container's hosts file) but don't know how to tell geocombine that I'm not running solr locally at 127.0.0.1:8983, and provide a different location instead. I can see that "geo_combine.rake" specifies 127.0.0.1, but I don't have the file locally after following the install and can't change the IP.

Apologies if the question is overly simplistic; I have little experience with ruby or docker & am just trying to get a feel for how the pieces fit together. Any guidance would be appreciated.

Better handling for records that fail to index

When a record fails to index, the cause could be one of a number of things:

  • the record is invalid, and solr rejects it for not matching the schema
  • the http request failed or was interrupted
  • we made too many requests to solr and got blocked or throttled

In all of these cases, it'd be better to log the failure and continue on; currently indexing just stops. In some of them, it might also be useful to retry the request (possibly after a delay) using faraday-retry or similar.
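
A rough sketch of the log-and-continue behavior, assuming solr is an RSolr client, docs is an array of document hashes, and logger is a Ruby Logger (retry handled inline here rather than with faraday-retry):

docs.each do |doc|
  attempts = 0
  begin
    solr.add(doc)
  rescue RSolr::Error::Http, Faraday::Error => e
    attempts += 1
    if attempts < 3
      sleep(2**attempts) # back off before retrying
      retry
    end
    # Log the failure and move on to the next document instead of aborting
    logger.error("Failed to index #{doc['id']}: #{e.message}")
  end
end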

Aardvark Support

Identify Aardvark implications for GeoCombine.

  • update geocombine rake task to account for aardvark (The geocombine:index rake task has a hard-coded layer_id_s field statement. That field is changed to gbl_wxsIdentifier_s in Aardvark.) (addressed by f05d344)
  • update GeoblacklightHarvester to account for aardvark (layer_slug_s and dc_source_s are present in spec/lib/geo_blacklight_harvester_spec.rb)
  • #121
  • #142
  • #156
  • #163

Harvester does not have correct parameters on paginated results

Some GeoBlacklight instances expect harvester requests to include a format=json parameter. This is added as a default param to the first request:

https://github.com/OpenGeoMetadata/GeoCombine/blob/master/lib/geo_combine/geo_blacklight_harvester.rb#L199

However, subsequent requests are taken from the next link of the previous document and the parameter has been removed:
https://earthworks.stanford.edu/?f%5Baccess%5D%5B%5D=public&f%5Bdct_provenance_s%5D%5B%5D=MassGIS&format=json&per_page=100&q=%2A&page=1

This breaks harvesting functionality on some sites.

Ensure that this parameter is appended to all requests.
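
A minimal sketch of re-appending the parameter before each paginated request (with_format_json is a hypothetical helper, not part of the gem):

require 'uri'

def with_format_json(url)
  uri = URI(url)
  params = URI.decode_www_form(uri.query.to_s)
  params << ['format', 'json'] unless params.any? { |key, _| key == 'format' }
  uri.query = URI.encode_www_form(params)
  uri.to_s
end

# e.g. next_url = with_format_json(next_link_from_previous_response)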

Possible XSL FGDC and ISO to HTML

So I found this XSL which actually seems to do a decent semantic job of converting both FGDC and ISO to HTML.

https://github.com/Esri/geoportal-server/blob/0b925c81ceb0d48474d0e78276c7e9546266e9fd/geoportal/src/gpt/search/profiles/metadata_to_html_full.xsl

See two sample conversions here:

https://gist.github.com/mejackreed/2f9d78df58c61bf5c392

For ISO metadata I would like some better <dt>'s, but it's not that terrible.

Thoughts @gravesm , @drh-stanford , @eliotjordan , @kimdurante ??

Update logic for 'all repos' cloning/indexing tasks

Currently, when no specific repo is specified, the cloning/indexing logic pulls all repositories matching edu.*, uk.*, or org.*, plus anything in the allowlist (currently only big10). If organizations have named their repositories differently, they are excluded. The proposal is to either:

  1. Default to indexing everything unless it is on a deny list (don't do the explicit allow check)
  2. Implement a tagging system where a repository whose owner would like it included uses the tag 'datasource'
    • Note that if we use tags, we'll need to ask everyone to tag their repositories to maintain existing functionality (a rough sketch of the topic-based filter follows this list)
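
A rough sketch of the tagging approach in option 2, filtering on a 'datasource' GitHub topic (field names follow the public GitHub repos API; this is not current GeoCombine behavior):

require 'json'
require 'net/http'

uri = URI('https://api.github.com/orgs/OpenGeoMetadata/repos?per_page=100')
repos = JSON.parse(Net::HTTP.get(uri))

# Keep only repositories whose owners opted in via the 'datasource' topic
datasource_repos = repos.select { |repo| Array(repo['topics']).include?('datasource') }
                        .map { |repo| repo['name'] }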

Create class to convert from schema version 1 to Aardvark

Invalid date string in iso2geoBL.xsl

Running the iso2geoBL.xsl transform on metadata from GeoNetwork produces a Solr doc that is rejected with the following response:

RSolr::Error::Http - 400 Bad Request
Error: {'responseHeader'=>{'status'=>400,'QTime'=>1},'error'=>{'msg'=>'Invalid Date String:\'2010-11-08\'','code'=>400}}

This appears to be in the layer_modified_dt field. I believe this requires a full timestamp.

https://github.com/geoblacklight/geoblacklight-schema/blob/master/conf/schema.xml#L72

https://cwiki.apache.org/confluence/display/solr/Working+with+Dates

May need to update https://github.com/OpenGeoMetadata/GeoCombine/blob/master/lib/xslt/iso2geoBL.xsl#L218-L229

So the fallback can generate a fake timestamp (maybe with xs:time?).
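
The real fix belongs in the XSLT, but the padding logic looks roughly like this (plain Ruby, hypothetical date value):

# Pad a date-only value into the full ISO 8601 timestamp Solr date fields expect
date = '2010-11-08'
solr_date = date.length == 10 ? "#{date}T00:00:00Z" : date
# => "2010-11-08T00:00:00Z"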

Bug - Repositories API call is missing several Big Ten members?

Maybe this is by design... but I'm not seeing several OGM repos when I clone all repositories:

ewlarson@beanburrito .internal_test_app % bundle exec rake geocombine:clone --trace
** Invoke geocombine:clone (first_time)
** Execute geocombine:clone
Cloned https://github.com/OpenGeoMetadata/shared-repository.git
Cloned https://github.com/OpenGeoMetadata/edu.stanford.purl.git
Cloned https://github.com/OpenGeoMetadata/edu.princeton.arks.git
Cloned https://github.com/OpenGeoMetadata/edu.virginia.git
Cloned https://github.com/OpenGeoMetadata/edu.nyu.git
Cloned https://github.com/OpenGeoMetadata/edu.harvard.git
Cloned https://github.com/OpenGeoMetadata/edu.umn.git
Cloned https://github.com/OpenGeoMetadata/edu.tufts.git
Cloned https://github.com/OpenGeoMetadata/edu.columbia.git
Cloned https://github.com/OpenGeoMetadata/edu.lclark.git
Cloned https://github.com/OpenGeoMetadata/gov.data.git
Cloned https://github.com/OpenGeoMetadata/geobtaa.git
Cloned https://github.com/OpenGeoMetadata/edu.uarizona.git
Cloned https://github.com/OpenGeoMetadata/edu.berkeley.git
Cloned https://github.com/OpenGeoMetadata/edu.cornell.git
Cloned https://github.com/OpenGeoMetadata/edu.vt.git
Cloned https://github.com/OpenGeoMetadata/edu.upenn.git
Cloned https://github.com/OpenGeoMetadata/edu.mit.git
Cloned https://github.com/OpenGeoMetadata/ca.frdr.geodisy.git
Cloned https://github.com/OpenGeoMetadata/edu.wisc.git
Cloned 20 repositories

Perhaps all of the missing repositories are Big Ten affiliates?

  • unl
  • uchicago
  • purdue
  • illinois
  • indiana
  • msu
  • osu
  • psu
  • uiowa
  • umd
  • umich
  • umn
  • rutgers

@karenmajewicz @thatbudakguy -- Does the main geobtaa repo supplant these? Or is the API call that lists all the repos missing them for some reason?

Method

https://github.com/OpenGeoMetadata/GeoCombine/blob/main/lib/geo_combine/harvester.rb#L97-L103

API Endpoint

https://api.github.com/orgs/opengeometadata/repos

Discrepancy in publisher element between iso2geoBL.xsl and GBL Metadata schema

The iso2geoBL.xsl transforms the publisher field to the multi-valued element dc_publisher_sm.

However, the official GBL Metadata Schema indicates the singular dc_publisher_s.

Since the Big Ten project uses both the XSLT with GIS records, and NYU's plugin for Omeka with maps, we discovered that we had a mix of dc_publisher_s and dc_publisher_sm in our Solr. We would prefer to use dc_publisher_sm. Any reason why we wouldn't want to enable having multiple publishers?

Update Coverage

  • Generate HTML Coverage docs
  • Show coverage with each rspec run
  • Set a minimum coverage percentage - currently the repo has 97% coverage
  • Remove Coveralls

Use GitHub's official ruby client

Octokit offers a nice Ruby wrapper around the GitHub API, which would let us handle things like pagination, filtered queries for repos, etc. with a little more flexibility. And ideally if the API changed, all we would need to do is update the gem.
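
A minimal sketch of listing the organization's repositories with Octokit (assumes the octokit gem is added to the Gemfile; not current GeoCombine code):

require 'octokit'

client = Octokit::Client.new
client.auto_paginate = true # let Octokit follow the pagination Link headers
repo_names = client.org_repos('OpenGeoMetadata').map(&:name)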

Support filtering records to index based on schema version

Related to #105.

For folks not using Aardvark, we want to support harvesting scoped to earlier schema versions. The "simple" way to do this is to look for /aardvark/ in the path and ask folks to put their records in an "aardvark" folder, but ideally we would do it the "smart" way and actually check the schema version in the record at the point we read it. If that isn't possible for performance reasons, we can fall back to the "simple" strategy.
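
A minimal sketch of the "smart" check, assuming Aardvark records carry a gbl_mdVersion_s field and v1 records carry geoblacklight_version (path is a hypothetical file path):

require 'json'

record = JSON.parse(File.read(path))
schema_version = record['gbl_mdVersion_s'] || record['geoblacklight_version']

# Only index records already in the Aardvark schema
index_this_record = schema_version == 'Aardvark'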

use rake rather than CLI

Currently, the CLI is only calling out to rake tasks without much logic. Can we just use the rake tasks directly? Then we won't have to maintain the CLI code. Or do we want to migrate the rake logic into the CLI class?

FGDC to HTML XSLT simplification

Trying to account for all the possibilities of the FGDC is a losing battle. At least, I've made my retreat, and we've been using a simplified XSL on our site.

Rather than relying on custom code for each element of the FGDC metadata structure, our stylesheet preserves the nesting structure of the XML elements, labeling each with the human-readable label found in a separate "fgdc_labels.xml" file. The result is a much-simplified XSL.

If you want to take a look, I cleaned out our institution-specific stuff, and put a copy in this gist:
https://gist.github.com/kgjenkins/fa8b46d619f1dd1d90befaadd1e793c3

What would you think about using an approach like that?

JSON object for Solr

As the iso2geoBL.xsl file stands, it generates the Solr doc content, but the default update handler (at least in the version of Solr we're using) expects the same add and doc wrapper as the XML format:

{
  "add": {
    "doc": {
      "uuid": "",
      ...
    }
  }
}

Support harvesting OGM records based on an allowed list of repositories

The current behavior is to download all OGM repositories that aren't on the configured denylist:

# Non-metadata repositories that shouldn't be harvested
def self.denylist
  [
    'GeoCombine',
    'aardvark',
    'metadata-issues',
    'ogm_utils-python',
    'opengeometadata.github.io',
    'opengeometadata-rails'
  ]
end

Not sure if other folks do it the same way, but at Stanford we have code to instead only harvest repositories on an allowlist. This avoids accidentally harvesting our own metadata from OGM (and possibly duplicating it) and also ensures that adding new institutional metadata is an intentional process via pull request.

It would be nice to have GeoCombine support this behavior without any additional logic.
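
A minimal sketch of that allowlist behavior (repository names are hypothetical, and all_repository_names stands in for whatever listing the harvester already builds; this is not what the gem currently does):

# Only harvest repositories that appear on an explicit allowlist
ALLOWLIST = %w[edu.stanford.purl edu.berkeley edu.virginia].freeze

repos_to_harvest = all_repository_names.select { |name| ALLOWLIST.include?(name) }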

add support for https in geocombine:clone

Right now in the geocombine:clone task, git clone works using git:// protocol URLs. It should support https:// for secure connections and for firewalls that block git protocol traffic.
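
A minimal sketch of cloning over https instead of git:// (hypothetical repository name and destination path):

repo = 'edu.stanford.purl'
system('git', 'clone', "https://github.com/OpenGeoMetadata/#{repo}.git", "tmp/opengeometadata/#{repo}")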

Bug - rake geocombine:clone errs out

I've been playing around with GeoCombine for Aardvark metadata harvesting.

Harvesting individual institutions has been working well:

bundle exec rake geocombine:clone\[edu.umn\]
=> 5480 docs

But cloning all the repos, fails:

First run

ewlarson@beanburrito GeoDiscovery % bundle exec rake geocombine:clone
rake aborted!
SystemStackError: stack level too deep
/Users/ewlarson/.rbenv/versions/3.2.1/bin/bundle:25:in `load'
/Users/ewlarson/.rbenv/versions/3.2.1/bin/bundle:25:in `<main>'
Tasks: TOP => geocombine:clone
(See full trace by running task with --trace)
ewlarson@beanburrito GeoDiscovery % cd tmp/opengeometadata 
ewlarson@beanburrito opengeometadata % ls -la
total 0
drwxr-xr-x  10 ewlarson  staff  320 Mar  9 08:19 .
drwxr-xr-x  15 ewlarson  staff  480 Mar  9 08:18 ..
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.harvard
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.nyu
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.princeton.arks
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:18 edu.stanford.purl
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.tufts
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.umn
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:19 edu.virginia
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:18 shared-repository

Second run

ewlarson@beanburrito GeoDiscovery % bundle exec rake geocombine:clone
rake aborted!
SystemStackError: stack level too deep
/Users/ewlarson/.rbenv/versions/3.2.1/bin/bundle:25:in `load'
/Users/ewlarson/.rbenv/versions/3.2.1/bin/bundle:25:in `<main>'
Tasks: TOP => geocombine:clone
(See full trace by running task with --trace)
ewlarson@beanburrito GeoDiscovery % cd tmp/opengeometadata 
ewlarson@beanburrito opengeometadata % ls -la
total 0
drwxr-xr-x  10 ewlarson  staff  320 Mar  9 08:43 .
drwxr-xr-x  15 ewlarson  staff  480 Mar  9 08:41 ..
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:43 edu.harvard
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:43 edu.nyu
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:42 edu.princeton.arks
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:41 edu.stanford.purl
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:43 edu.tufts
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:43 edu.umn
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:43 edu.virginia
drwxr-xr-x   3 ewlarson  staff   96 Mar  9 08:41 shared-repository

Can anyone else confirm? It seems to hit the SystemStackError at the same place on each clone run...
