spark-kafka (homepage)
Low-level integration of Spark and Kafka
@tresata
spark-kafka is a library that facilitates batch loading of data from Kafka into Spark, and from Spark back into Kafka.
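The project's README documents the actual entry points; purely as an illustration of the batch round-trip described above, a sketch might look like the following. The load and write calls are left as commented placeholders because their names are assumptions, not the library's confirmed API.

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the commented calls use placeholder names (KafkaRDD.load, KafkaRDD.writeTo)
// that are NOT confirmed API of com.tresata:spark-kafka; see the project README for the
// real entry points.
object BatchRoundTripSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-kafka-batch"))

    // 1. Batch-load a bounded set of messages from a Kafka topic into an RDD.
    // val messages = KafkaRDD.load(sc, topic = "events", brokers = "broker1:9092")

    // 2. Transform with ordinary Spark batch operations.
    // val cleaned = messages.map { case (key, value) => (key, value.trim) }

    // 3. Write the resulting RDD back to another Kafka topic.
    // KafkaRDD.writeTo(cleaned, topic = "events-cleaned", brokers = "broker1:9092")

    sc.stop()
  }
}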
How to
Include this package in your Spark applications using:
spark-shell, pyspark, or spark-submit
> $SPARK_HOME/bin/spark-shell --packages com.tresata:spark-kafka_2.10:0.6.0
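The same --packages coordinate also works with spark-submit and pyspark; for example (the jar name and main class below are placeholders):

> $SPARK_HOME/bin/spark-submit --packages com.tresata:spark-kafka_2.10:0.6.0 --class com.example.MyBatchJob my-batch-job.jar
> $SPARK_HOME/bin/pyspark --packages com.tresata:spark-kafka_2.10:0.6.0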
sbt
In your sbt build file, add:
libraryDependencies += "com.tresata" % "spark-kafka_2.10" % "0.6.0"
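Since the artifact name already carries the Scala binary version, the cross-built %% form below is equivalent, assuming your project's scalaVersion is a matching 2.10.x (or 2.11.x, if the 2.11 artifact is published under the same 0.6.0 version):

libraryDependencies += "com.tresata" %% "spark-kafka" % "0.6.0"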
Maven
In your pom.xml, add:
<dependencies>
  <!-- list of dependencies -->
  <dependency>
    <groupId>com.tresata</groupId>
    <artifactId>spark-kafka_2.10</artifactId>
    <version>0.6.0</version>
  </dependency>
</dependencies>
Releases
Version: 0.6.0-s_2.10 ( 47f2ff | zip | jar ) / Date: 2015-11-13 / License: Apache-2.0 / Scala version: 2.10
Version: 0.6.0-s_2.11 ( 47f2ff | zip | jar ) / Date: 2015-11-13 / License: Apache-2.0 / Scala version: 2.11
Version: 0.5.0 ( bd80d9 | zip | jar ) / Date: 2015-08-29 / License: Apache-2.0 / Scala version: 2.10
Version: 0.4.0 ( 89f49d | zip | jar ) / Date: 2015-08-29 / License: Apache-2.0 / Scala version: 2.10
Version: 0.3.0 ( 33a34f | zip | jar ) / Date: 2015-03-30 / License: Apache-2.0 / Scala version: 2.10