
MMLSpark: Microsoft Machine Learning for Apache Spark

MMLSpark provides a number of deep learning and data science tools for Apache Spark, including seamless integration of Spark Machine Learning pipelines with Microsoft Cognitive Toolkit (CNTK) and OpenCV, enabling you to quickly create powerful, highly-scalable predictive and analytical models for large image and text datasets.

MMLSpark requires Scala 2.11, Spark 2.1+, and either Python 2.7 or Python 3.5+. See the API documentation for Scala and for PySpark.

Table of Contents

  • Notable features
  • A short example
  • Setup and installation
  • Contributing & feedback

Notable features

  • Easily ingest images from HDFS into a Spark DataFrame (example 301)
  • Pre-process image data using transforms from OpenCV (example 302)
  • Featurize images with pre-trained deep neural networks using CNTK (example 301)
  • Use pre-trained bidirectional LSTMs from Keras for medical entity extraction (example 304)
  • Train DNN-based image classification models on N-Series GPU VMs on Azure
  • Featurize free-form text data using convenient APIs on top of primitives in SparkML via a single transformer (example 201)
  • Train classification and regression models easily via implicit featurization of data (example 101; see the sketch below)
  • Compute a rich set of evaluation metrics, including per-instance metrics (example 102)

See our notebooks for all examples.
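
For instance, the implicit featurization and evaluation items above reduce to a couple of transformer calls. The sketch below is illustrative only: the train and test DataFrames and the "label" column are placeholders you supply yourself.

from pyspark.ml.classification import LogisticRegression
from mmlspark import TrainClassifier, ComputeModelStatistics

# Implicitly featurize the raw columns and train a classifier
model = TrainClassifier(model=LogisticRegression(), labelCol="label").fit(train)
# Score the held-out data and compute standard evaluation metrics
metrics = ComputeModelStatistics().transform(model.transform(test))
metrics.show()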

A short example

Below is an excerpt from a simple example of using a pre-trained CNN to classify images in the CIFAR-10 dataset. View the whole source code as an example notebook.

...
import mmlspark
# Initialize CNTKModel and define input and output columns
cntkModel = mmlspark.CNTKModel() \
                    .setInputCol("images").setOutputCol("output") \
                    .setModelLocation(modelFile)
# Score the dataset using Spark's internal pipeline machinery
scoredImages = cntkModel.transform(imagesWithLabels)
...
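
The scored DataFrame carries the raw network outputs in the "output" column defined above; turning them into class predictions is ordinary Spark work. The arg-max post-processing below is an illustrative sketch, not part of the MMLSpark API:

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# Illustrative post-processing: take the index of the largest network output as the predicted class
argmax = udf(lambda v: int(max(range(len(v)), key=lambda i: v[i])), IntegerType())
predictions = scoredImages.withColumn("prediction", argmax("output"))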

See other sample notebooks as well as the MMLSpark documentation for Scala and PySpark.

Setup and installation

Docker

The easiest way to evaluate MMLSpark is via our pre-built Docker container. To do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes microsoft/mmlspark

Navigate to http://localhost:8888 in your web browser to run the sample notebooks. See the documentation for more on Docker use.

To read the EULA for using the docker image, run
docker run -it -p 8888:8888 microsoft/mmlspark eula

GPU VM Setup

MMLSpark can be used to train deep learning models on a GPU node from a Spark application. See the instructions for setting up an Azure GPU VM.

Spark package

MMLSpark can be conveniently installed on existing Spark clusters via the --packages option, for example:

spark-shell --packages Azure:mmlspark:0.10
pyspark --packages Azure:mmlspark:0.10
spark-submit --packages Azure:mmlspark:0.10 MyApp.jar

This can be used in other Spark contexts too. For example, you can use MMLSpark in AZTK by adding it to the .aztk/spark-defaults.conf file, as sketched below.
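
For AZTK, that amounts to a single line in the defaults file (a sketch; spark.jars.packages is Spark's configuration equivalent of the --packages flag):

# .aztk/spark-defaults.conf
spark.jars.packages Azure:mmlspark:0.10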

Python

To try out MMLSpark on a Python (or Conda) installation, first install Spark via pip with pip install pyspark. You can then use pyspark as in the above example, or from Python:

import pyspark
# Create a Spark session that pulls in the MMLSpark package
sp = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "Azure:mmlspark:0.10") \
            .getOrCreate()
import mmlspark

HDInsight

To install MMLSpark on an existing HDInsight Spark Cluster, you can execute a script action on the cluster head and worker nodes. For instructions on running script actions, see this guide.

The script action URL is: https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh

If you're using the Azure Portal to run the script action, go to Script actions → Submit new in the Overview section of your cluster blade. In the Bash script URI field, enter the script action URL above, and apply the action to both the head and worker node roles.

Submit, and the cluster should finish configuring within 10 minutes or so.
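
If you prefer the command line to the portal, the Azure CLI can submit the same script action. The command below is a hedged sketch assuming the current az hdinsight script-action command group; the resource group and cluster name are placeholders.

az hdinsight script-action execute \
    --resource-group MyResourceGroup \
    --cluster-name MyCluster \
    --name install-mmlspark \
    --roles headnode workernode \
    --script-uri https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh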

Databricks cloud

To install MMLSpark on the Databricks cloud, create a new library from Maven coordinates in your workspace.

For the coordinates use: com.microsoft.ml.spark:mmlspark:0.10. Then, under Advanced Options, use https://mmlspark.azureedge.net/maven for the repository. Ensure this library is attached to all clusters you create.

Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.

You can use MMLSpark in both your Scala and PySpark notebooks.

SBT

If you are building a Spark application in Scala, add the following lines to your build.sbt:

resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.10"

Building from source

You can also create your own build by cloning this repo and using the main build script, ./runme. Run it once to install the needed dependencies, and again to do a build. See this guide for more information.

Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.


Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
