Deep Learning Pipelines for Apache Spark
Deep Learning Pipelines aims to enable everyone, from machine learning practitioners to business analysts, to easily integrate scalable deep learning into their workflows. It builds on Apache Spark's ML Pipelines for training, and on Spark DataFrames and SQL for deploying models. It includes high-level APIs for common aspects of deep learning so that they can be done efficiently in a few lines of code.
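As a sketch of what the high-level API looks like, the library's Python package (`sparkdl`) can featurize images with a pre-trained deep model inside a standard ML Pipeline. The example below uses `DeepImageFeaturizer` from this release; the data path is hypothetical, and running it requires a Spark cluster with the package and its dependencies installed:

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from sparkdl import DeepImageFeaturizer, readImages

# Load labeled images into a DataFrame (path is a placeholder)
image_df = readImages("/data/flowers")

# Featurize with a pre-trained InceptionV3 model, then train a simple classifier
featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                 modelName="InceptionV3")
lr = LogisticRegression(labelCol="label")
model = Pipeline(stages=[featurizer, lr]).fit(image_df)
```

This transfer-learning pattern reuses the pre-trained network's features, so a useful classifier can often be trained on modest amounts of labeled data.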
Include this package in your Spark applications using:

spark-shell, pyspark, or spark-submit
> $SPARK_HOME/bin/spark-shell --packages databricks:spark-deep-learning:0.1.0-spark2.1-s_2.11
sbt
If you use the sbt-spark-package plugin, in your sbt build file, add:

spDependencies += "databricks/spark-deep-learning:0.1.0-spark2.1-s_2.11"

Otherwise, add:

resolvers += "Spark Packages Repo" at "http://dl.bintray.com/spark-packages/maven"
libraryDependencies += "databricks" % "spark-deep-learning" % "0.1.0-spark2.1-s_2.11"
Maven
In your pom.xml, add:
<dependencies>
  <!-- list of dependencies -->
  <dependency>
    <groupId>databricks</groupId>
    <artifactId>spark-deep-learning</artifactId>
    <version>0.1.0-spark2.1-s_2.11</version>
  </dependency>
</dependencies>
<repositories>
  <!-- list of other repositories -->
  <repository>
    <id>SparkPackagesRepo</id>
    <url>http://dl.bintray.com/spark-packages/maven</url>
  </repository>
</repositories>