Secondary sort and streaming reduce for Spark
Spark-sorted is a library that aims to make non-reduce-type operations on very large groups in Spark possible, including support for processing values in a defined order. To do so it relies on Spark's sort-based shuffle and never materializes the group for a given key; instead, a group is represented by consecutive rows within a partition, which are processed with a map-like (iterator-based, streaming) operation.
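To make the idea concrete, here is a hedged sketch of typical usage. The `groupSort` / `mapStreamByKey` method names and the `PairRDDFunctions` import follow the project's README; check them against the exact version you depend on before copying this.

```scala
import org.apache.spark.HashPartitioner
// Implicit conversion adding groupSort to pair RDDs (per the spark-sorted README)
import com.tresata.spark.sorted.PairRDDFunctions._

val rdd = sc.parallelize(Seq(("a", 3), ("a", 1), ("b", 2), ("a", 2)))

// groupSort shuffles so that all values for a key sit on consecutive rows
// within a partition, sorted by the supplied value ordering. mapStreamByKey
// then processes each key's values as a streaming iterator, so the full
// group is never materialized in memory.
val smallestTwo = rdd
  .groupSort(new HashPartitioner(2), Some(implicitly[Ordering[Int]]))
  .mapStreamByKey(values => values.take(2)) // e.g. keep the 2 smallest values per key
```

Because each group is consumed as an iterator, this pattern scales to groups that are far too large to fit in memory as a single collection, which is exactly what `groupByKey` cannot do.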
Include this package in your Spark Applications using:
spark-shell, pyspark, or spark-submit
> $SPARK_HOME/bin/spark-shell --packages com.tresata:spark-sorted_2.11:0.4.0
If you use the sbt-spark-package plugin, in your sbt build file, add:
spDependencies += "tresata/spark-sorted:0.4.0"
Otherwise, add the dependency directly:

libraryDependencies += "com.tresata" % "spark-sorted_2.11" % "0.4.0"
Maven
In your pom.xml, add:

<dependencies>
  <!-- list of dependencies -->
  <dependency>
    <groupId>com.tresata</groupId>
    <artifactId>spark-sorted_2.11</artifactId>
    <version>0.4.0</version>
  </dependency>
</dependencies>