
spark3d's Introduction


Latest News

  • [05/2018] GSoC 2018: spark3D has been selected for the Google Summer of Code (GSoC) 2018. Congratulations to @mayurdb, who will work on the project this year!
  • [06/2018] Release: version 0.1.0, 0.1.1
  • [07/2018] New location: spark3D is an official project of AstroLab Software!
  • [07/2018] Release: version 0.1.3, 0.1.4, 0.1.5
  • [08/2018] Release: version 0.2.0, 0.2.1 (pyspark3d)
  • [09/2018] Release: version 0.2.2
  • [11/2018] Release: version 0.3.0, 0.3.1 (new DataFrame API)

Rationale

spark3D should be viewed as an extension of the Apache Spark framework, and more specifically the Spark SQL module, focusing on the manipulation of three-dimensional data sets.

Why would you use spark3D? If you often need to repartition large spatial 3D data sets, or perform spatial queries (neighbour search, window queries, cross-match, clustering, ...), spark3D is for you. It contains optimised classes and methods to do so, and it spares you the implementation time! In addition, a big advantage of all these extensions is that they let you efficiently visualise large data sets by quickly building a representation of your data (see more here).

spark3D exposes two APIs: Scala (spark3D) and Python (pyspark3d). The core developments are done in Scala and interfaced with Python using the great py4j package. This means pyspark3d might not contain all the features present in spark3D. In addition, due to differences between Scala and Python, there might be subtle differences between the two APIs.

While we try to stick to the latest Apache Spark developments, spark3D started with the RDD API and slowly migrated to the DataFrame API. This process left a huge imprint on the code structure, and low-level layers in spark3D often still use RDDs to manipulate the data. Do not be surprised if things move around; the package is under active development, but we try to keep the user interface as stable as possible!

Last but not least: spark3D is by no means complete, and you are welcome to suggest changes, report bugs or inconsistent implementations, and contribute directly to the package!

Cheers, Julien

Why 3? Because there are already plenty of very good packages dealing with 2D data sets (e.g. geospark, geomesa, magellan, GeoTrellis, and others), but those were not suitable for many applications, such as in astronomy!

Installation and tutorials

Scala

You can link spark3D to your project (either spark-shell or spark-submit) by specifying the coordinates:

spark-submit --packages "com.github.astrolabsoftware:spark3d_2.11:0.3.1"
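If you build your project with sbt instead, the same Maven coordinates can be declared as a dependency. A minimal sketch (assuming Scala 2.11, to match the artifact name above):

// build.sbt -- same coordinates as the spark-submit example above
libraryDependencies += "com.github.astrolabsoftware" % "spark3d_2.11" % "0.3.1"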

Python

Just run

pip install pyspark3d

Note that we release the assembly JAR with it.

More information

See the website!

Contributors

  • Julien Peloton (peloton at lal.in2p3.fr)
  • Christian Arnault (arnault at lal.in2p3.fr)
  • Mayur Bhosale (mayurdb31 at gmail.com) -- GSoC 2018.

Contributing to spark3D: see CONTRIBUTING.


spark3d's Issues

Bug: Octree full data set envelope relies on initial value

Octree initialisation requires computing the envelope of the data (a BoxEnvelope covering all the points). However, the computation of the box starts at (0, -1, 0, -1, 0, -1) (the default null value), and data values are compared to it:

/**
* Try to estimate the enclosing box for the data set.
* This is currently used to initialise the octree partitioning.
*
* @return dataBoundary (instance of BoxEnvelope)
*/
def getDataEnvelope(): BoxEnvelope = {
  val seqOp: (BoxEnvelope, BoxEnvelope) => BoxEnvelope = {
    (x, y) => {
      BoxEnvelope.apply(
        min(x.minX, y.minX), max(x.maxX, y.maxX),
        min(x.minY, y.minY), max(x.maxY, y.maxY),
        min(x.minZ, y.minZ), max(x.maxZ, y.maxZ)
      )
    }
  }
  val combOp: (BoxEnvelope, Shape3D) => BoxEnvelope = {
    (obj1, obj2) => {
      var x = obj1
      val y = obj2.getEnvelope
      if (x.isNull) {
        x = y
      }
      seqOp(x, y)
    }
  }
  // create a dummy envelope
  val bx = BoxEnvelope.apply(0, 0, 0, 0, 0, 0)
  // set it to null
  bx.setToNull
  val dataBoundary = rawRDD.aggregate(bx)(combOp, seqOp)
  // expand the boundary to also include the elements at the border
  dataBoundary.expandOutwards(0.001)
  dataBoundary
}

Hence, for data far away from this null point, the box envelope becomes huge (and mostly empty), leading to all the data ending up in one or a few partitions.

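A minimal sketch of a possible fix (assuming the BoxEnvelope API shown above, i.e. isNull, the min*/max* accessors and BoxEnvelope.apply): make the merge function null-safe as well, so that an empty partition, or the dummy zero value itself, never drags the envelope towards the origin.

import scala.math.{max, min}

// Sketch only, not the project's actual patch: ignore null envelopes when merging.
val seqOpSafe: (BoxEnvelope, BoxEnvelope) => BoxEnvelope = {
  (x, y) =>
    if (x.isNull) y
    else if (y.isNull) x
    else BoxEnvelope.apply(
      min(x.minX, y.minX), max(x.maxX, y.maxX),
      min(x.minY, y.minY), max(x.maxY, y.maxY),
      min(x.minZ, y.minZ), max(x.maxZ, y.maxZ))
}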

Scala visualization tools

I came across a nice and useful Scala wrapper for plotly: https://github.com/facultyai/scala-plotly-client. The releases available on Maven are not up to date (they do not contain scatter plots for 3D data sets, which are however available in the latest code version), hence we need to compile it ourselves and include the jar when running jobs.

Action item: include the FAT jar for the wrapper (about 8 MB), and push examples. Here are some Scala examples using onion and octree partitioning on two different data sets:

(Attached plots: test-3d-onion-1 and test-3d-octree.)

Failing KNN test

Looking at Travis, there is something weird. Starting from commit cfc7a5f, the build sometimes fails and sometimes succeeds.
Since cfc7a5f I have only made documentation commits (no code change), so I am wondering where this behaviour comes from. It is the same on my laptop: sometimes it fails, sometimes it succeeds.

Looking at the failing test (SpatialQueryTest.scala:Can you find the K nearest neighbours correctly?), it seems that there is a little problem when looking at unique elements...

Return unique elements

scala> // Run many times the same query
scala> for (i <- 0 to 10) {
     |   val knn = SpatialQuery.KNN(queryObject, sphereRDD_part, 3, true)
     |   println(knn.map(x => x.center.getCoordinate))
     | }
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 1.0, 1.0))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(3.0, 2.0, 1.0))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 1.0, 1.0))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))
List(List(2.0, 2.0, 2.0), List(1.0, 3.0, 0.7), List(3.0, 2.0, 1.0))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))
List(List(2.0, 2.0, 2.0), List(1.0, 3.0, 0.7), List(3.0, 2.0, 1.0))
List(List(2.0, 2.0, 2.0), List(1.0, 1.0, 3.0), List(1.0, 3.0, 0.7))

The 2nd and 3rd elements are not always the same (and it is not just a matter of ordering)! That is why the test sometimes fails and sometimes passes. This looks like a bug to fix...
@mayurdb any ideas?

pyspark3d: improve the user interface

Currently, there are many JavaObjects coming from the JVM side living on the Python side, which makes life difficult. There are some attempts in converters.py to reduce that, but it is far from ideal. Conversions should happen internally so that the Python user never sees a JavaObject.

On the partitioning: Garbage collector is huge while re-partitioning (2/3 of the total time)

OS: CentOS Linux release 7.4.1708 (Core)
spark3D: 0.1.4
spark-fits: 0.6.0

#72 adds a script to benchmark the partitioning. The idea is the following:

  1. Load data using spark-fits (10 million points)
  2. Apply the partitioning (or not) to the RDD
  3. Trigger an action, and repeat this several times (the data is cached the first time)

Regardless of the partitioning (octree or onion), the GC time is large compared to the compute time:

Octree (mapPartitions at Shape3DRDD.scala:164):

Metric     Min    25th percentile   Median   75th percentile   Max
Duration   48 s   48 s              48 s     48 s              48 s
GC Time    33 s   33 s              33 s     33 s              33 s

Onion (mapPartitions at Shape3DRDD.scala:164):

Metric     Min    25th percentile   Median   75th percentile   Max
Duration   46 s   46 s              46 s     46 s              46 s
GC Time    28 s   28 s              28 s     28 s              28 s

The code responsible for this is (Shape3DRDD.scala:142):

/**
  * Repartition a RDD[T] according to a custom partitioner.
  *
  * @param partitioner : (SpatialPartitioner)
  *   Instance of SpatialPartitioner or any extension of it.
  * @return (RDD[T]) Repartitioned RDD[T].
  */
def partition(partitioner: SpatialPartitioner)(implicit c: ClassTag[T]) : RDD[T] = {
  // Go from RDD[V] to RDD[(K, V)] where K is specified by the partitioner.
  // Finally, return only RDD[V] with the new partitioning.

  def mapElements(iter: Iterator[T]) : Iterator[(Int, T)] = {
    var res = ListBuffer[(Int, T)]()
    while (iter.hasNext) {
      res ++= partitioner.placeObject(iter.next).toList
    }
    res.iterator
  }

  rawRDD.mapPartitions(mapElements).partitionBy(partitioner).mapPartitions(_.map(_._2), true)
}

We must investigate this.
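One possible direction to investigate (a hypothetical sketch, assuming placeObject returns a Scala collection or Iterator, as the .toList call above suggests): stream the keyed elements lazily instead of materialising them all in a ListBuffer, which should reduce allocation pressure and therefore GC time.

// Hypothetical alternative to mapElements, illustration only:
// no intermediate ListBuffer, keyed elements are emitted lazily.
def mapElementsLazy(iter: Iterator[T]): Iterator[(Int, T)] =
  iter.flatMap(obj => partitioner.placeObject(obj))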

Make unique class to load the data

Currently, this is chaotic. Ideally we should expose a single class to the user to load data (or one per object type), and the data format should be detected internally.

run_part_scala.sh is failing because of assembly syntax

The test script run_part_scala.sh is failing because there is no match between the jar name produced by sbt assembly and the jar name used to run the job.

Action item: either change the jar name used to run the job (plus remove explicit deps, as the assembly jar already contains them), OR use sbt package (names should then match).

Integer overflows for kNN search?

pyspark3d issue.

kNN search for data set sizes > 2G elements seems to go crazy :D
I was running kNN for k=1000, and data set size = 5,000,000,000 elements.

py4j.protocol.Py4JJavaError: An error occurred while calling z:com.astrolabsoftware.spark3d.spatialOperator.SpatialQuery.KNN.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 556 in stage 0.0 failed 4
times, most recent failure: Lost task 556.3 in stage 0.0 (TID 967, 134.158.75.162, executor 3): 
java.lang.IllegalArgumentException: Comparison method violates its general contract!
	at java.util.TimSort.mergeHi(TimSort.java:899)
	at java.util.TimSort.mergeAt(TimSort.java:516)
	at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
	at java.util.TimSort.sort(TimSort.java:254)
	at java.util.Arrays.sort(Arrays.java:1512)
	at com.google.common.collect.Ordering.leastOf(Ordering.java:708)
	at com.astrolabsoftware.spark3d.utils.Utils$.com$astrolabsoftware$spark3d$utils$Utils$$takeOrdered(Utils.scala:174)
	at com.astrolabsoftware.spark3d.utils.Utils$$anonfun$1.apply(Utils.scala:154)
	at com.astrolabsoftware.spark3d.utils.Utils$$anonfun$1.apply(Utils.scala:152)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Note, the same problem appears regardless of whether we want distinct objects or not.
My best guess is that we would need to trade integers for longs.
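For illustration only (this is not the actual Utils.takeOrdered code): a comparator that ranks elements by plain subtraction overflows once differences leave the Int range, which breaks transitivity and is a classic trigger of the TimSort error above; the overflow-safe pattern uses the dedicated compare helpers.

import java.util.Comparator

// Hypothetical comparators over Long distances, illustration only.
val broken = new Comparator[Long] {
  def compare(a: Long, b: Long): Int = (a - b).toInt // can overflow / truncate
}
val safe = new Comparator[Long] {
  def compare(a: Long, b: Long): Int = java.lang.Long.compare(a, b)
}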

ADDED:
Interestingly though, this does not happen in the pure Scala version.
The difference that comes to mind is the default storage level for the RDD (None in Scala, MEMORY_ONLY in Python).

Repo organisation

spark3D is organised as shown in the repository organisation diagram; from the user's point of view, it provides 2 main services (both sketched below):

  • creating a Shape3D RDD (Point3D, Sphere) from a raw RDD. A Shape3D RDD is an RDD whose elements are Shape3D objects; it keeps the same partitioning as the raw RDD.
  • re-partitioning a Shape3D RDD using one of the implemented partitioners. A partitioned Shape3D RDD is an RDD whose elements are Shape3D objects, with a specific spatial partitioning.
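For illustration, the two services map onto the RDD API roughly as follows (the signatures are taken from the benchmark snippet further down this page; treat them as indicative):

// 1. Build a Shape3D RDD (here Point3D) from raw data, keeping the raw partitioning.
val pRDD = new Point3DRDD(spark, fn_fits, columns, isSpherical, "fits", options)

// 2. Re-partition it spatially with one of the implemented partitioners.
val partitionedRDD = pRDD.spatialPartitioning(GridType.OCTREE)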

On the partitioning: Later iterations in Octree are slower than expected

OS: CentOS Linux release 7.4.1708 (Core)
spark3D: 0.1.4
spark-fits: 0.6.0

#72 adds a script to benchmark the partitioning. The idea is the following:

  1. Load data using spark-fits (10 million points)
  2. Apply the partitioning (or not) to the RDD
  3. Trigger an action, and repeat this several times (the data is cached the first time)

Once the data is repartitioned and cached, one expects the later iterations to be fast. Later iterations are indeed faster with both onion and octree, but they are slower with octree than with onion:

Onion:

Job Id   Description                      Duration
3        count at Partitioning.scala:81   0.2 s
2        count at Partitioning.scala:81   0.2 s
1        count at Partitioning.scala:81   1.0 min

Octree:

Job Id   Description                      Duration
3        count at Partitioning.scala:81   6 s
2        count at Partitioning.scala:81   5 s
1        count at Partitioning.scala:81   1.2 min

Is that expected?

Refactor the API

I feel it would be easier to use spark3D if we hid from the user classes such as Point3D, ShellEnvelope, etc., which would only be used internally to perform the computation. The user would provide an input (file, RDD, or DataFrame) and get back a standard RDD[primitive].
By doing so, it will be easier to (1) integrate spark3D with other tools and pipelines, and (2) interface the Scala functionalities in Python.

Tackling large shuffle?

For spatial partitioners, the current approach is to re-assign each element to a partition based on its key. This is quite inefficient when we have a large number of elements.
An idea would be to first assign a key to each element within the original partitions (no shuffle), then re-partition those groups of same-key elements, and finally flatten the result. This implies larger but fewer objects being sent over the network, hence reducing the amount of shuffle.
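A rough sketch of that idea in plain Spark (a hypothetical helper, not spark3D code; it assumes elements are already keyed by their target partition):

import scala.reflect.ClassTag
import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD

// Group keyed elements inside each partition (no shuffle), shuffle the fewer
// but larger groups, then flatten back to individual elements.
def partitionByGroups[T: ClassTag](keyed: RDD[(Int, T)], part: Partitioner): RDD[T] = {
  keyed
    .mapPartitions { iter =>
      // local grouping; note this materialises the partition in memory
      iter.toSeq.groupBy(_._1).iterator.map { case (k, kvs) => (k, kvs.map(_._2)) }
    }
    .partitionBy(part)
    .mapPartitions(_.flatMap(_._2), preservesPartitioning = true)
}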

I'm surprised this does not seem to exist by default in Spark. Am I missing something? Or is this already done internally by Spark (I doubt it)?

Build index to speed-up spatial queries

Would be good to have this feature... Syntax would be something like

// Load the data as Shape3DRDD[Point3D]
val objectRDD: Shape3DRDD[Point3D] = ...

// Build the indices based on TREE.
objectRDD.buildIndex(<indexTREE>)

// Run any spatial query faster! Envelope query for example
val usingIndex = true
val envelope = ...
val queryResult = RangeQuery.windowQuery(objectRDD, envelope, usingIndex)

Ideally, it should be combined with partitioning.

TBD!

Spherical coordinates confusion in ShellEnvelope

I noticed that ShellEnvelope can be instantiated using

// First case: center is made from (x, y, z) 
val shellFromCoordinates = new ShellEnvelope(x: Double, y: Double, z: Double, innerRadius: Double, outerRadius: Double)
// Second case: center is created directly from a Point3D
val shellFromPoint3D = new ShellEnvelope(center: Point3D, innerRadius: Double, outerRadius: Double)

In the first case, (x, y, z) can be cartesian or spherical but the shell doesn't know it.
In the second case, the Point3D carries the information with it.

More troublesome, the first case triggers another constructor based on (x, y, z):

this(new Point3D(x, y, z, true), innerRadius, outerRadius)

which means spherical coordinates are in fact assumed (the fourth argument of Point3D indicates whether the coordinate system is spherical) - but the user might have filled it with cartesian coordinates!

I will go over the constructors of this class and make sure the coordinate system of the center is well defined and can be chosen between cartesian and spherical :-)
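For illustration, the fix could look something like the following auxiliary constructor (a hypothetical signature, not necessarily the one finally adopted):

// Hypothetical constructor inside ShellEnvelope: the caller states the
// coordinate system of the center, and it is forwarded to Point3D instead
// of being hard-coded to spherical.
def this(x: Double, y: Double, z: Double, isSpherical: Boolean,
         innerRadius: Double, outerRadius: Double) =
  this(new Point3D(x, y, z, isSpherical), innerRadius, outerRadius)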

Octree partitioning not working from spatialPartitioning

I tried this simple script:

import com.spark3d.spatial3DRDD._
import org.apache.spark.sql.SparkSession
import com.spark3d.utils.GridType

val spark = SparkSession.builder().appName("OctreeSpace").getOrCreate()

val fn = "src/test/resources/cartesian_spheres.fits"
val hdu = 1
val columns = "x,y,z,radius"
val spherical = false

// Load the data
val sphereRDD = new SphereRDDFromFITS(spark, fn, hdu, columns, spherical)

// Re-partition the space using OCTREE
val sphereRDD_part = sphereRDD.spatialPartitioning(GridType.OCTREE)

and here is the log

org.apache.spark.SparkDriverExecutionException: Execution error
  at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1206)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1729)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2029)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2069)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
  at org.apache.spark.rdd.RDD$$anonfun$takeSample$1.apply(RDD.scala:578)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.takeSample(RDD.scala:557)
  at com.spark3d.spatial3DRDD.Shape3DRDD.spatialPartitioning(Shape3DRDD.scala:113)
  ... 50 elided
Caused by: java.lang.ArrayStoreException
lastException: Throwable = null

I tested it before and after the big change of #40, so the problem likely predates it. Any ideas @mayurdb before I dig into the code?

Multiple class constructors in Scala: no real Java equivalent.

In Java, one can use different constructors to instantiate a class

public class toto {
  /** First constructor */
  public toto(Integer arg1) {
    // some stuff
  }

  /** Second constructor */
  public toto(String arg2) {
    // some stuff
  }
}

This is what is used in GeoSpark to instantiate RDD[T] for example.
As far as I know, there is no direct equivalent in Scala... So the structure will differ.

Overlapping shells for OnionPartitioning

As of now, the shells used to define the OnionPartitioning are non-overlapping.
Cross-matching 2 data sets with this implementation will unavoidably lead to missed pairs at the borders.
One idea would be to make the shells overlap, e.g.:

// To be in OnionPartitioning.scala, LinearOnionPartitioning(...)
// Add overlapping shells -- e.g. 1% (to be defined)
val extension : Double = 0.01
for (pos <- 0 to numPartitions - 1) {
  // Inner radius must be greater than 0 (or equal to 0)
  val innerRadius = math.max(0.0, pos * dZ * (1 - extension))
  val outerRadius = (pos + 1) * dZ * (1 + extension)
  
  val shell = new ShellEnvelope(center, innerRadius, outerRadius)
  grids += shell
}

GSoC: First steps...

The project will be divided into 3 parts:

  • Find alternative to JTS (no 3D support): new dev or existing package?
  • Upgrade 2DRDD to 3DRDD (circle -> sphere, parallelogram -> parallelepiped, and so on).
  • Upgrade 2D methods for partitioning/join/query/indexing to 3D.

Multi-resolution search

Currently, the methods to perform the cross-match between 2 data sets work at a fixed grid resolution (the resolution of the grid within a partition; we would always keep the Spark partitioning the same). The idea would be to first perform a quick search for matches at a coarse resolution to reduce the number of candidates, and then use a finer resolution on the resulting small set of objects to identify true matches.

Extend Envelope to 3D

As in 2D, the envelope here would be a bounding box of any 3D object. This envelope can then be used to support custom operations such as intersection, containment, etc. on the 3D objects.
cc @JulienPeloton @ChristianArnault

test_scala fails because of python

test_scala uses a python call to retrieve SCALA_BINARY_VERSION. However:

  • this requires setting the PYTHONPATH
  • it requires having py4j installed

It would be better to avoid any python calls.

GSoC: going deeper!

First steps logbook: #6

Week 5:

  • Added Octree Partitioned RDD support #36
  • Add SphereRDD #38
  • Code refactoring: Replace Sphere by ShellEnvelope, Shape3D heritage, and move everything under geometryObjects #40
  • v0.1.1 #43
  • Remove confusion between cartesian and spherical coordinates for ShellEnvelope and BoxEnvelope #44

A few TODOs for week 6:

  • Fix bug while loading Octree #45
  • Treatment of objects at the border of partitions (see #25)
  • Multi-resolution algorithm for cross-match (see #26)
  • Tackling large shuffle? #32
  • Implement RangeQuery methods (to find neighbours).
  • spark3D website

DataFrame API

For historical reasons, we are using the RDD API of Spark. The idea would be to update everything with the DataFrame API.
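For the flavour of it, a purely illustrative sketch using only standard Spark SQL (none of the names below are actual spark3D API): a spatial key becomes a column, and the DataFrame machinery drives the shuffle.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Hypothetical onion-style repartitioning of a DataFrame with x, y, z columns.
def onionRepartition(df: DataFrame, shellWidth: Double, numShells: Int): DataFrame = {
  val radius = sqrt(col("x") * col("x") + col("y") * col("y") + col("z") * col("z"))
  df.withColumn("shell", least(floor(radius / lit(shellWidth)), lit(numShells - 1)))
    .repartition(numShells, col("shell"))
}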

Scala or Python?

All is in the title: should we develop the package in Scala or Python?

Scala:

  • Pros
    • Easy to work with Spark (native)
    • Geospark is in Java/Scala
  • Cons
    • Hard to develop visualisation tools

Python:

  • Pros
    • Easy to develop visualisation tools
    • Large community, plenty of third-party packages
  • Cons
    • Harder to develop with Spark (need interfacing, although pyspark is not so bad)

Others??

In the meantime, the dev will be done in Scala.

Make a partitioning based on the clustering of the data

Idea 1 (fixed clustering):

  • load raw data
  • perform a k-means where k = number of partitions (sketched after this list)
  • repartition accordingly

Idea 2 (dynamic clustering):

  • Load raw data
  • Look dynamically for clusters in the data. Maybe we can start with some guess, and increase/decrease it if the data suggests it.
  • repartition accordingly
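A hypothetical sketch of Idea 1 using the RDD-based MLlib KMeans (illustration only; the helper name and the use of a HashPartitioner are assumptions, not spark3D code):

import org.apache.spark.HashPartitioner
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.rdd.RDD

// Fit k-means with k = number of partitions, then send each point to the
// partition of its closest centroid.
def clusterPartition(points: RDD[(Double, Double, Double)],
                     numPartitions: Int): RDD[(Double, Double, Double)] = {
  val vectors = points.map { case (x, y, z) => Vectors.dense(x, y, z) }.cache()
  val model = new KMeans().setK(numPartitions).run(vectors)
  points
    .map(p => (model.predict(Vectors.dense(p._1, p._2, p._3)), p)) // key = cluster id
    .partitionBy(new HashPartitioner(numPartitions))
    .values
}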

KNN!

Would be good to have a 3D version of k-nearest neighbour. TBD asap!

On the partitioning: Octree partitioning does not keep all elements

OS: CentOS Linux release 7.4.1708 (Core)
spark3D: 0.1.4
spark-fits: 0.6.0

#72 adds a script to benchmark the partitioning. The idea is the following:

  1. Load data using spark-fits (10 million points)
  2. Apply the partitioning (or not) to the RDD
  3. Trigger an action, and repeat this several times (the data is cached the first time)

Just printing the number of elements of the repartitioned RDD:

    // Load the data
    val options = Map("hdu" -> hdu)
    val pRDD = new Point3DRDD(spark, fn_fits, columns, isSpherical, "fits", options)

    // Partition it
    val rdd = mode match {
        case "nopart" => pRDD.rawRDD.cache()
        case "octree" => pRDD.spatialPartitioning(GridType.OCTREE).cache()
        case "onion" => pRDD.spatialPartitioning(GridType.LINEARONIONGRID).cache()
        case _ => throw new AssertionError("Choose between nopart, onion, or octree for the partitioning.")
    }

    // MC it to minimize flukes
    for (i <- 0 to 2) {
      val number = rdd.count()
      println(s"Number of points ($mode) : $number")
    }

I obtain:

Number of points (nopart) : 10000000
Number of points (octree) : 9999995
Number of points (onion) : 10000000

Weird?
