Spark_Knapsack

A simple greedy parallel PySpark implementation of the 0-1 knapsack algorithm.

by: Darrell Ulm (@drulm)
Parameters
----------
knapsackDF : Spark DataFrame
    DataFrame with the knapsack data, e.g.
    sqlContext.createDataFrame(knapsackData, ['item', 'weights', 'values'])
W : float
    Total weight allowed for the knapsack.
knapTotals : list
    List of the resulting totals of knapsack values and weights.

Returns
-------
DataFrame
    DataFrame with the results.
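
As a rough sketch of the greedy approach (not necessarily this package's exact API; the function name knapsack_greedy and the window-based running total are assumptions), ranking items by value-to-weight ratio and keeping them while the running weight stays under W could look like this in PySpark:

    from pyspark.sql import SparkSession, functions as F, Window

    def knapsack_greedy(knapsackDF, W):
        # Classic greedy heuristic: rank items by value-to-weight ratio.
        ranked = knapsackDF.withColumn("ratio", F.col("values") / F.col("weights"))

        # Running weight total in ratio order (a global window is fine for a
        # sketch, though it pulls the ordering onto a single partition).
        win = (Window.orderBy(F.col("ratio").desc())
                     .rowsBetween(Window.unboundedPreceding, Window.currentRow))
        totals = ranked.withColumn("cum_weight", F.sum("weights").over(win))

        # Keep items while the cumulative weight stays within the capacity W.
        return totals.filter(F.col("cum_weight") <= W).select("item", "weights", "values")

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("knapsack-greedy-sketch").getOrCreate()
        data = [("a", 2.0, 3.0), ("b", 5.0, 8.0), ("c", 4.0, 4.0)]
        df = spark.createDataFrame(data, ["item", "weights", "values"])
        knapsack_greedy(df, W=7.0).show()  # selects items b and a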




How to

This package doesn't have any releases published in the Spark Packages repository, nor are Maven coordinates supplied, so you may have to build it from source, or it may simply be a script. To use this Spark Package, please follow the instructions in the README.
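
If the package does turn out to be a single script, one hypothetical way to ship it with a PySpark job is SparkContext.addPyFile (the file name knapsack.py below is a placeholder, not confirmed by this page; check the README for the real entry point):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("use-spark-knapsack").getOrCreate()

    # Distribute the script to the driver and executors, then import it.
    # "knapsack.py" is a placeholder name.
    spark.sparkContext.addPyFile("knapsack.py")
    import knapsack  # import after addPyFile on purpose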

Releases

No releases yet.