Binary compatibility report for the succinct-0.1.2 library between 1.3.0 and 1.4.0 versions (relating to the portability of client application succinct-0.1.2.jar)

Test Info


Library Name: succinct-0.1.2
Version #1: 1.3.0
Version #2: 1.4.0
Java Version: 1.7.0_75

Test Results


Total Java ARchives: 1
Total Methods / Classes: 2480 / 463
Verdict: Incompatible (13.1%)

Problem Summary


                              Severity   Count
Added Methods                 -          246
Removed Methods               High       198
Problems with Data Types      High       21
                              Medium     15
                              Low        44
Problems with Methods         High       3
                              Medium     1
                              Low        1
Other Changes in Data Types   -          4

Added Methods (246)


spark-sql_2.10-1.4.0.jar, Aggregate.class
package org.apache.spark.sql.execution
Aggregate.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Aggregate.requiredChildDistribution ( )  :  scala.collection.immutable.List<org.apache.spark.sql.catalyst.plans.physical.Distribution>

spark-sql_2.10-1.4.0.jar, AppendingParquetOutputFormat.class
package org.apache.spark.sql.parquet
AppendingParquetOutputFormat.getOutputCommitter ( org.apache.hadoop.mapreduce.TaskAttemptContext context )  :  org.apache.hadoop.mapreduce.OutputCommitter

spark-sql_2.10-1.4.0.jar, BaseRelation.class
package org.apache.spark.sql.sources
BaseRelation.needConversion ( )  :  boolean

spark-sql_2.10-1.4.0.jar, BatchPythonEvaluation.class
package org.apache.spark.sql.execution
BatchPythonEvaluation.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, BroadcastHashJoin.class
package org.apache.spark.sql.execution.joins
BroadcastHashJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, BroadcastLeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
BroadcastLeftSemiJoinHash.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, BroadcastNestedLoopJoin.class
package org.apache.spark.sql.execution.joins
BroadcastNestedLoopJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, CacheTableCommand.class
package org.apache.spark.sql.execution
CacheTableCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, CartesianProduct.class
package org.apache.spark.sql.execution.joins
CartesianProduct.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, CatalystConverter.class
package org.apache.spark.sql.parquet
CatalystConverter.readDecimal ( org.apache.spark.sql.types.Decimal dest, parquet.io.api.Binary value, org.apache.spark.sql.types.DecimalType ctype )  :  org.apache.spark.sql.types.Decimal
CatalystConverter.updateDate ( int fieldIndex, int value )  :  void
CatalystConverter.updateString ( int fieldIndex, byte[ ] value )  :  void

spark-sql_2.10-1.4.0.jar, Column.class
package org.apache.spark.sql
Column.alias ( String alias )  :  Column
Column.apply ( Object extraction )  :  Column
Column.as ( scala.collection.Seq<String> aliases )  :  Column
Column.as ( String alias, types.Metadata metadata )  :  Column
Column.as ( String[ ] aliases )  :  Column
Column.between ( Object lowerBound, Object upperBound )  :  Column
Column.bitwiseAND ( Object other )  :  Column
Column.bitwiseOR ( Object other )  :  Column
Column.bitwiseXOR ( Object other )  :  Column
Column.equals ( Object that )  :  boolean
Column.getItem ( Object key )  :  Column
Column.hashCode ( )  :  int
Column.isTraceEnabled ( )  :  boolean
Column.log ( )  :  org.slf4j.Logger
Column.logDebug ( scala.Function0<String> msg )  :  void
Column.logDebug ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logError ( scala.Function0<String> msg )  :  void
Column.logError ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logInfo ( scala.Function0<String> msg )  :  void
Column.logInfo ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logName ( )  :  String
Column.logTrace ( scala.Function0<String> msg )  :  void
Column.logTrace ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logWarning ( scala.Function0<String> msg )  :  void
Column.logWarning ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.org.apache.spark.Logging..log_ ( )  :  org.slf4j.Logger
Column.org.apache.spark.Logging..log__.eq ( org.slf4j.Logger p1 )  :  void
Column.otherwise ( Object value )  :  Column
Column.over ( expressions.WindowSpec window )  :  Column
Column.when ( Column condition, Object value )  :  Column
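
For orientation, a minimal client-side sketch (in Scala, not taken from succinct-0.1.2) exercising a few of the Column members listed above as new in 1.4.0; the DataFrame df and its column names are illustrative, and functions.when is the usual companion of Column.otherwise:

    import org.apache.spark.sql.{Column, DataFrame}
    import org.apache.spark.sql.functions.when

    def demoNewColumnOps(df: DataFrame): Column = {
      val inRange = df("age").between(18, 65)           // Column.between, added in 1.4.0
      val masked  = df("flags").bitwiseAND(df("mask"))  // Column.bitwiseAND, added in 1.4.0
      when(inRange, masked).otherwise(df("flags"))      // Column.otherwise, added in 1.4.0
    }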

spark-sql_2.10-1.4.0.jar, CreateTableUsing.class
package org.apache.spark.sql.sources
CreateTableUsing.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
CreateTableUsing.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.4.0.jar, CreateTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTableUsingAsSelect.copy ( String tableName, String provider, boolean temporary, String[ ] partitionColumns, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )  :  CreateTableUsingAsSelect
CreateTableUsingAsSelect.CreateTableUsingAsSelect ( String tableName, String provider, boolean temporary, String[ ] partitionColumns, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )
CreateTableUsingAsSelect.partitionColumns ( )  :  String[ ]

spark-sql_2.10-1.4.0.jar, CreateTempTableUsing.class
package org.apache.spark.sql.sources
CreateTempTableUsing.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
CreateTempTableUsing.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.4.0.jar, CreateTempTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTempTableUsingAsSelect.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
CreateTempTableUsingAsSelect.copy ( String tableName, String provider, String[ ] partitionColumns, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )  :  CreateTempTableUsingAsSelect
CreateTempTableUsingAsSelect.CreateTempTableUsingAsSelect ( String tableName, String provider, String[ ] partitionColumns, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )
CreateTempTableUsingAsSelect.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
CreateTempTableUsingAsSelect.partitionColumns ( )  :  String[ ]

spark-sql_2.10-1.4.0.jar, DataFrame.class
package org.apache.spark.sql
DataFrame.coalesce ( int numPartitions )  :  DataFrame
DataFrame.cube ( Column... cols )  :  GroupedData
DataFrame.cube ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.cube ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.cube ( String col1, String... cols )  :  GroupedData
DataFrame.describe ( scala.collection.Seq<String> cols )  :  DataFrame
DataFrame.describe ( String... cols )  :  DataFrame
DataFrame.drop ( String colName )  :  DataFrame
DataFrame.dropDuplicates ( )  :  DataFrame
DataFrame.dropDuplicates ( scala.collection.Seq<String> colNames )  :  DataFrame
DataFrame.dropDuplicates ( String[ ] colNames )  :  DataFrame
DataFrame.join ( DataFrame right, String usingColumn )  :  DataFrame
DataFrame.na ( )  :  DataFrameNaFunctions
DataFrame.DataFrame..logicalPlanToDataFrame ( catalyst.plans.logical.LogicalPlan logicalPlan )  :  DataFrame
DataFrame.randomSplit ( double[ ] weights )  :  DataFrame[ ]
DataFrame.randomSplit ( double[ ] weights, long seed )  :  DataFrame[ ]
DataFrame.randomSplit ( scala.collection.immutable.List<Object> weights, long seed )  :  DataFrame[ ]
DataFrame.rollup ( Column... cols )  :  GroupedData
DataFrame.rollup ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.rollup ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.rollup ( String col1, String... cols )  :  GroupedData
DataFrame.stat ( )  :  DataFrameStatFunctions
DataFrame.write ( )  :  DataFrameWriter
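
A hedged usage sketch of several of the DataFrame additions above (column names and split weights are illustrative only):

    import org.apache.spark.sql.DataFrame

    def demoNewDataFrameOps(df: DataFrame): Array[DataFrame] = {
      val cleaned = df.dropDuplicates()        // DataFrame.dropDuplicates, added in 1.4.0
                      .drop("debugCol")        // DataFrame.drop, added in 1.4.0
      val filled  = cleaned.na.fill(0.0)       // DataFrame.na -> DataFrameNaFunctions
      filled.randomSplit(Array(0.8, 0.2), 17L) // DataFrame.randomSplit(weights, seed), added in 1.4.0
    }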

spark-sql_2.10-1.4.0.jar, DataFrameNaFunctions.class
package org.apache.spark.sql
DataFrameNaFunctions.DataFrameNaFunctions ( DataFrame df )

spark-sql_2.10-1.4.0.jar, DataFrameWriter.class
package org.apache.spark.sql
DataFrameWriter.format ( String source )  :  DataFrameWriter
DataFrameWriter.save ( String path )  :  void
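
The writer interface added in 1.4.0 bundles format selection and output into a builder reached via DataFrame.write; a minimal sketch (output format and path are illustrative):

    import org.apache.spark.sql.DataFrame

    def persist(df: DataFrame, path: String): Unit =
      df.write.format("parquet").save(path)  // DataFrame.write, DataFrameWriter.format/save, all added in 1.4.0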

spark-sql_2.10-1.4.0.jar, DescribeCommand.class
package org.apache.spark.sql.execution
DescribeCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
package org.apache.spark.sql.sources
DescribeCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, Distinct.class
package org.apache.spark.sql.execution
Distinct.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Except.class
package org.apache.spark.sql.execution
Except.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Exchange.class
package org.apache.spark.sql.execution
Exchange.canSortWithShuffle ( org.apache.spark.sql.catalyst.plans.physical.Partitioning p1, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> p2 ) [static]  :  boolean
Exchange.copy ( org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> newOrdering, SparkPlan child )  :  Exchange
Exchange.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Exchange.Exchange ( org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> newOrdering, SparkPlan child )
Exchange.newOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>
Exchange.Exchange..getSerializer ( org.apache.spark.sql.types.DataType[ ] keySchema, org.apache.spark.sql.types.DataType[ ] valueSchema, boolean hasKeyOrdering, int numPartitions )  :  org.apache.spark.serializer.Serializer
Exchange.Exchange..keyOrdering ( )  :  org.apache.spark.sql.catalyst.expressions.RowOrdering
Exchange.Exchange..needToCopyObjectsBeforeShuffle ( org.apache.spark.Partitioner partitioner, org.apache.spark.serializer.Serializer serializer )  :  boolean
Exchange.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, ExecutedCommand.class
package org.apache.spark.sql.execution
ExecutedCommand.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Expand.class
package org.apache.spark.sql.execution
Expand.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, ExplainCommand.class
package org.apache.spark.sql.execution
ExplainCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, ExternalSort.class
package org.apache.spark.sql.execution
ExternalSort.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
ExternalSort.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, Filter.class
package org.apache.spark.sql.execution
Filter.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Filter.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, Generate.class
package org.apache.spark.sql.execution
Generate.copy ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, SparkPlan child )  :  Generate
Generate.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Generate.Generate ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, SparkPlan child )

spark-sql_2.10-1.4.0.jar, GeneratedAggregate.class
package org.apache.spark.sql.execution
GeneratedAggregate.copy ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, boolean unsafeEnabled, SparkPlan child )  :  GeneratedAggregate
GeneratedAggregate.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
GeneratedAggregate.GeneratedAggregate ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, boolean unsafeEnabled, SparkPlan child )
GeneratedAggregate.unsafeEnabled ( )  :  boolean

spark-sql_2.10-1.4.0.jar, HashedRelation.class
package org.apache.spark.sql.execution.joins
HashedRelation.readBytes ( java.io.ObjectInput p1 ) [abstract]  :  byte[ ]
HashedRelation.writeBytes ( java.io.ObjectOutput p1, byte[ ] p2 ) [abstract]  :  void

spark-sql_2.10-1.4.0.jar, HashOuterJoin.class
package org.apache.spark.sql.execution.joins
HashOuterJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, InMemoryColumnarTableScan.class
package org.apache.spark.sql.columnar
InMemoryColumnarTableScan.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
InMemoryColumnarTableScan.enableAccumulators ( )  :  boolean

spark-sql_2.10-1.4.0.jar, InMemoryRelation.class
package org.apache.spark.sql.columnar
InMemoryRelation.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics, org.apache.spark.Accumulable<scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.Row>,org.apache.spark.sql.Row> _batchStats )  :  InMemoryRelation
InMemoryRelation.InMemoryRelation ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics, org.apache.spark.Accumulable<scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.Row>,org.apache.spark.sql.Row> _batchStats )
InMemoryRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
InMemoryRelation.uncache ( boolean blocking )  :  void

spark-sql_2.10-1.4.0.jar, InsertIntoDataSource.class
package org.apache.spark.sql.sources
InsertIntoDataSource.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
InsertIntoDataSource.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.4.0.jar, InsertIntoParquetTable.class
package org.apache.spark.sql.parquet
InsertIntoParquetTable.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Intersect.class
package org.apache.spark.sql.execution
Intersect.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, JDBCRDD.class
package org.apache.spark.sql.jdbc
JDBCRDD.getConnector ( String p1, String p2, java.util.Properties p3 ) [static]  :  scala.Function0<java.sql.Connection>
JDBCRDD.JDBCRDD ( org.apache.spark.SparkContext sc, scala.Function0<java.sql.Connection> getConnection, org.apache.spark.sql.types.StructType schema, String fqTable, String[ ] columns, org.apache.spark.sql.sources.Filter[ ] filters, org.apache.spark.Partition[ ] partitions, java.util.Properties properties )
JDBCRDD.resolveTable ( String p1, String p2, java.util.Properties p3 ) [static]  :  org.apache.spark.sql.types.StructType
JDBCRDD.scanTable ( org.apache.spark.SparkContext p1, org.apache.spark.sql.types.StructType p2, String p3, String p4, java.util.Properties p5, String p6, String[ ] p7, org.apache.spark.sql.sources.Filter[ ] p8, org.apache.spark.Partition[ ] p9 ) [static]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, JDBCRelation.class
package org.apache.spark.sql.jdbc
JDBCRelation.copy ( String url, String table, org.apache.spark.Partition[ ] parts, java.util.Properties properties, org.apache.spark.sql.SQLContext sqlContext )  :  JDBCRelation
JDBCRelation.insert ( org.apache.spark.sql.DataFrame data, boolean overwrite )  :  void
JDBCRelation.JDBCRelation ( String url, String table, org.apache.spark.Partition[ ] parts, java.util.Properties properties, org.apache.spark.sql.SQLContext sqlContext )
JDBCRelation.needConversion ( )  :  boolean
JDBCRelation.properties ( )  :  java.util.Properties
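
The java.util.Properties parameter threaded through JDBCRDD and JDBCRelation above carries JDBC connection properties; a hedged client-side sketch using the matching 1.4.0 reader entry point (URL, table and credentials are illustrative):

    import java.util.Properties
    import org.apache.spark.sql.{DataFrame, SQLContext}

    def readEvents(sqlContext: SQLContext): DataFrame = {
      val props = new Properties()
      props.setProperty("user", "reader")      // illustrative connection properties
      props.setProperty("password", "secret")
      sqlContext.read.jdbc("jdbc:postgresql://localhost/db", "public.events", props)
    }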

spark-sql_2.10-1.4.0.jar, JSONRelation.class
package org.apache.spark.sql.json
JSONRelation.buildScan ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> requiredColumns, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> filters )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
JSONRelation.JSONRelation ( scala.Function0<org.apache.spark.rdd.RDD<String>> baseRDD, scala.Option<String> path, double samplingRatio, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, org.apache.spark.sql.SQLContext sqlContext )
JSONRelation.needConversion ( )  :  boolean
JSONRelation.JSONRelation..useJacksonStreamingAPI ( )  :  boolean
JSONRelation.path ( )  :  scala.Option<String>

spark-sql_2.10-1.4.0.jar, LeftSemiJoinBNL.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinBNL.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, LeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinHash.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Limit.class
package org.apache.spark.sql.execution
Limit.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, LocalTableScan.class
package org.apache.spark.sql.execution
LocalTableScan.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, LogicalLocalTable.class
package org.apache.spark.sql.execution
LogicalLocalTable.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.4.0.jar, LogicalRDD.class
package org.apache.spark.sql.execution
LogicalRDD.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.4.0.jar, LogicalRelation.class
package org.apache.spark.sql.sources
LogicalRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.4.0.jar, NativeColumnType<T>.class
package org.apache.spark.sql.columnar
NativeColumnType<T>.dataType ( )  :  T
NativeColumnType<T>.NativeColumnType ( T dataType, int typeId, int defaultSize )  :  public

spark-sql_2.10-1.4.0.jar, OutputFaker.class
package org.apache.spark.sql.execution
OutputFaker.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, ParquetRelation.class
package org.apache.spark.sql.parquet
ParquetRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.4.0.jar, ParquetRelation2.class
package org.apache.spark.sql.parquet
ParquetRelation2.buildScan ( String[ ] requiredColumns, org.apache.spark.sql.sources.Filter[ ] filters, org.apache.hadoop.fs.FileStatus[ ] inputFiles, org.apache.spark.broadcast.Broadcast<org.apache.spark.SerializableWritable<org.apache.hadoop.conf.Configuration>> broadcastedConf )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
ParquetRelation2.dataSchema ( )  :  org.apache.spark.sql.types.StructType
ParquetRelation2.needConversion ( )  :  boolean
ParquetRelation2.ParquetRelation2..maybeDataSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>
ParquetRelation2.ParquetRelation2 ( String[ ] paths, scala.Option<org.apache.spark.sql.types.StructType> maybeDataSchema, scala.Option<org.apache.spark.sql.sources.PartitionSpec> maybePartitionSpec, scala.collection.immutable.Map<String,String> parameters, org.apache.spark.sql.SQLContext sqlContext )
ParquetRelation2.ParquetRelation2 ( String[ ] paths, scala.Option<org.apache.spark.sql.types.StructType> maybeDataSchema, scala.Option<org.apache.spark.sql.sources.PartitionSpec> maybePartitionSpec, scala.Option<org.apache.spark.sql.types.StructType> userDefinedPartitionColumns, scala.collection.immutable.Map<String,String> parameters, org.apache.spark.sql.SQLContext sqlContext )
ParquetRelation2.paths ( )  :  String[ ]
ParquetRelation2.prepareJobForWrite ( org.apache.hadoop.mapreduce.Job job )  :  org.apache.spark.sql.sources.OutputWriterFactory
ParquetRelation2.refresh ( )  :  void
ParquetRelation2.userDefinedPartitionColumns ( )  :  scala.Option<org.apache.spark.sql.types.StructType>

spark-sql_2.10-1.4.0.jar, ParquetTableScan.class
package org.apache.spark.sql.parquet
ParquetTableScan.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, PhysicalRDD.class
package org.apache.spark.sql.execution
PhysicalRDD.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, PreWriteCheck.class
package org.apache.spark.sql.sources
PreWriteCheck.failAnalysis ( String msg )  :  void

spark-sql_2.10-1.4.0.jar, Project.class
package org.apache.spark.sql.execution
Project.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Project.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, PythonUDF.class
package org.apache.spark.sql.execution
PythonUDF.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )  :  PythonUDF
PythonUDF.PythonUDF ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )
PythonUDF.pythonVer ( )  :  String

spark-sql_2.10-1.4.0.jar, RefreshTable.class
package org.apache.spark.sql.sources
RefreshTable.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
RefreshTable.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.4.0.jar, ResolvedDataSource.class
package org.apache.spark.sql.sources
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, scala.Option<org.apache.spark.sql.types.StructType> p2, String[ ] p3, String p4, scala.collection.immutable.Map<String,String> p5 ) [static]  :  ResolvedDataSource
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, String p2, String[ ] p3, org.apache.spark.sql.SaveMode p4, scala.collection.immutable.Map<String,String> p5, org.apache.spark.sql.DataFrame p6 ) [static]  :  ResolvedDataSource

spark-sql_2.10-1.4.0.jar, RunnableCommand.class
package org.apache.spark.sql.execution
RunnableCommand.children ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
RunnableCommand.output ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.4.0.jar, Sample.class
package org.apache.spark.sql.execution
Sample.copy ( double lowerBound, double upperBound, boolean withReplacement, long seed, SparkPlan child )  :  Sample
Sample.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Sample.lowerBound ( )  :  double
Sample.Sample ( double lowerBound, double upperBound, boolean withReplacement, long seed, SparkPlan child )
Sample.upperBound ( )  :  double

spark-sql_2.10-1.4.0.jar, SetCommand.class
package org.apache.spark.sql.execution
SetCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, ShowTablesCommand.class
package org.apache.spark.sql.execution
ShowTablesCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, ShuffledHashJoin.class
package org.apache.spark.sql.execution.joins
ShuffledHashJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, Sort.class
package org.apache.spark.sql.execution
Sort.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Sort.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, SparkPlan.class
package org.apache.spark.sql.execution
SparkPlan.doExecute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
SparkPlan.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>
SparkPlan.requiredChildOrdering ( )  :  scala.collection.Seq<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>>
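
The doExecute entries above reflect the 1.4.0 refactoring in which SparkPlan.execute() delegates to a new protected abstract doExecute(), so operators that override execute() directly no longer satisfy the contract. A hedged sketch of that migration for a custom physical operator (class name and logic are illustrative, not from succinct-0.1.2):

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.expressions.Attribute
    import org.apache.spark.sql.execution.SparkPlan

    case class PassThrough(child: SparkPlan) extends SparkPlan {
      override def output: Seq[Attribute] = child.output
      override def children: Seq[SparkPlan] = child :: Nil
      // spark-sql 1.3.0 form: override def execute(): RDD[Row] = child.execute()
      protected override def doExecute(): RDD[Row] = child.execute()  // 1.4.0 form
    }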

spark-sql_2.10-1.4.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  execution.CacheManager
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<Row> rowRDD, types.StructType schema, boolean needsConversion )  :  DataFrame
SQLContext.createSession ( )  :  SQLContext.SQLSession
SQLContext.currentSession ( )  :  SQLContext.SQLSession
SQLContext.defaultSession ( )  :  SQLContext.SQLSession
SQLContext.detachSession ( )  :  void
SQLContext.dialectClassName ( )  :  String
SQLContext.getOrCreate ( org.apache.spark.SparkContext p1 ) [static]  :  SQLContext
SQLContext.getSQLDialect ( )  :  catalyst.ParserDialect
SQLContext.openSession ( )  :  SQLContext.SQLSession
SQLContext.range ( long start, long end )  :  DataFrame
SQLContext.range ( long start, long end, long step, int numPartitions )  :  DataFrame
SQLContext.read ( )  :  DataFrameReader
SQLContext.tlSession ( )  :  ThreadLocal<SQLContext.SQLSession>
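
A small hedged sketch combining several of the SQLContext additions above (the JSON path and join key are illustrative):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.{DataFrame, SQLContext}

    def loadJoined(sc: SparkContext): DataFrame = {
      val sqlContext = SQLContext.getOrCreate(sc)  // static getOrCreate, added in 1.4.0
      val ids = sqlContext.range(0L, 1000L)        // range(start, end), added in 1.4.0; single "id" column
      sqlContext.read.json("/data/events.json")    // read() -> DataFrameReader, added in 1.4.0
        .join(ids, "id")                           // DataFrame.join(right, usingColumn), added in 1.4.0
    }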

spark-sql_2.10-1.4.0.jar, StringContains.class
package org.apache.spark.sql.sources
StringContains.attribute ( )  :  String
StringContains.canEqual ( Object p1 )  :  boolean
StringContains.copy ( String attribute, String value )  :  StringContains
StringContains.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringContains>>
StringContains.equals ( Object p1 )  :  boolean
StringContains.hashCode ( )  :  int
StringContains.productArity ( )  :  int
StringContains.productElement ( int p1 )  :  Object
StringContains.productIterator ( )  :  scala.collection.Iterator<Object>
StringContains.productPrefix ( )  :  String
StringContains.StringContains ( String attribute, String value )
StringContains.toString ( )  :  String
StringContains.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringContains>
StringContains.value ( )  :  String

spark-sql_2.10-1.4.0.jar, StringEndsWith.class
package org.apache.spark.sql.sources
StringEndsWith.attribute ( )  :  String
StringEndsWith.canEqual ( Object p1 )  :  boolean
StringEndsWith.copy ( String attribute, String value )  :  StringEndsWith
StringEndsWith.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringEndsWith>>
StringEndsWith.equals ( Object p1 )  :  boolean
StringEndsWith.hashCode ( )  :  int
StringEndsWith.productArity ( )  :  int
StringEndsWith.productElement ( int p1 )  :  Object
StringEndsWith.productIterator ( )  :  scala.collection.Iterator<Object>
StringEndsWith.productPrefix ( )  :  String
StringEndsWith.StringEndsWith ( String attribute, String value )
StringEndsWith.toString ( )  :  String
StringEndsWith.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringEndsWith>
StringEndsWith.value ( )  :  String

spark-sql_2.10-1.4.0.jar, StringStartsWith.class
package org.apache.spark.sql.sources
StringStartsWith.attribute ( )  :  String
StringStartsWith.canEqual ( Object p1 )  :  boolean
StringStartsWith.copy ( String attribute, String value )  :  StringStartsWith
StringStartsWith.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringStartsWith>>
StringStartsWith.equals ( Object p1 )  :  boolean
StringStartsWith.hashCode ( )  :  int
StringStartsWith.productArity ( )  :  int
StringStartsWith.productElement ( int p1 )  :  Object
StringStartsWith.productIterator ( )  :  scala.collection.Iterator<Object>
StringStartsWith.productPrefix ( )  :  String
StringStartsWith.StringStartsWith ( String attribute, String value )
StringStartsWith.toString ( )  :  String
StringStartsWith.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringStartsWith>
StringStartsWith.value ( )  :  String
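
StringContains, StringEndsWith and StringStartsWith extend the public data source Filter hierarchy in 1.4.0, so a PrunedFilteredScan implementation can now receive them in buildScan. A hedged sketch of translating them to SQL LIKE fragments (helper name and translation are illustrative; value escaping is omitted):

    import org.apache.spark.sql.sources.{Filter, StringContains, StringEndsWith, StringStartsWith}

    def toLikeClause(f: Filter): Option[String] = f match {
      case StringStartsWith(attr, v) => Some(s"$attr LIKE '$v%'")
      case StringEndsWith(attr, v)   => Some(s"$attr LIKE '%$v'")
      case StringContains(attr, v)   => Some(s"$attr LIKE '%$v%'")
      case _                         => None   // not pushed down
    }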

spark-sql_2.10-1.4.0.jar, TakeOrdered.class
package org.apache.spark.sql.execution
TakeOrdered.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
TakeOrdered.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.4.0.jar, UncacheTableCommand.class
package org.apache.spark.sql.execution
UncacheTableCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.4.0.jar, Union.class
package org.apache.spark.sql.execution
Union.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.4.0.jar, UserDefinedPythonFunction.class
package org.apache.spark.sql
UserDefinedPythonFunction.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )  :  UserDefinedPythonFunction
UserDefinedPythonFunction.pythonVer ( )  :  String
UserDefinedPythonFunction.UserDefinedPythonFunction ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )


Removed Methods (198)


spark-sql_2.10-1.3.0.jar, AddExchange.class
package org.apache.spark.sql.execution
AddExchange.AddExchange ( org.apache.spark.sql.SQLContext sqlContext )
AddExchange.andThen ( scala.Function1<AddExchange,A> p1 ) [static]  :  scala.Function1<org.apache.spark.sql.SQLContext,A>
AddExchange.apply ( org.apache.spark.sql.catalyst.trees.TreeNode plan )  :  org.apache.spark.sql.catalyst.trees.TreeNode
AddExchange.apply ( SparkPlan plan )  :  SparkPlan
AddExchange.canEqual ( Object p1 )  :  boolean
AddExchange.compose ( scala.Function1<A,org.apache.spark.sql.SQLContext> p1 ) [static]  :  scala.Function1<A,AddExchange>
AddExchange.copy ( org.apache.spark.sql.SQLContext sqlContext )  :  AddExchange
AddExchange.equals ( Object p1 )  :  boolean
AddExchange.hashCode ( )  :  int
AddExchange.numPartitions ( )  :  int
AddExchange.productArity ( )  :  int
AddExchange.productElement ( int p1 )  :  Object
AddExchange.productIterator ( )  :  scala.collection.Iterator<Object>
AddExchange.productPrefix ( )  :  String
AddExchange.sqlContext ( )  :  org.apache.spark.sql.SQLContext
AddExchange.toString ( )  :  String

spark-sql_2.10-1.3.0.jar, Aggregate.class
package org.apache.spark.sql.execution
Aggregate.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, BatchPythonEvaluation.class
package org.apache.spark.sql.execution
BatchPythonEvaluation.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, BroadcastHashJoin.class
package org.apache.spark.sql.execution.joins
BroadcastHashJoin.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<package.BuildSide,scala.Function1<org.apache.spark.sql.execution.SparkPlan,scala.Function1<org.apache.spark.sql.execution.SparkPlan,BroadcastHashJoin>>>>>
BroadcastHashJoin.tupled ( ) [static]  :  scala.Function1<scala.Tuple5<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,package.BuildSide,org.apache.spark.sql.execution.SparkPlan,org.apache.spark.sql.execution.SparkPlan>,BroadcastHashJoin>

spark-sql_2.10-1.3.0.jar, BroadcastLeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
BroadcastLeftSemiJoinHash.buildSide ( )  :  package.BuildRight.

spark-sql_2.10-1.3.0.jar, CachedData.class
package org.apache.spark.sql
CachedData.CachedData ( catalyst.plans.logical.LogicalPlan plan, columnar.InMemoryRelation cachedRepresentation )
CachedData.cachedRepresentation ( )  :  columnar.InMemoryRelation
CachedData.canEqual ( Object p1 )  :  boolean
CachedData.copy ( catalyst.plans.logical.LogicalPlan plan, columnar.InMemoryRelation cachedRepresentation )  :  CachedData
CachedData.curried ( ) [static]  :  scala.Function1<catalyst.plans.logical.LogicalPlan,scala.Function1<columnar.InMemoryRelation,CachedData>>
CachedData.equals ( Object p1 )  :  boolean
CachedData.hashCode ( )  :  int
CachedData.plan ( )  :  catalyst.plans.logical.LogicalPlan
CachedData.productArity ( )  :  int
CachedData.productElement ( int p1 )  :  Object
CachedData.productIterator ( )  :  scala.collection.Iterator<Object>
CachedData.productPrefix ( )  :  String
CachedData.toString ( )  :  String
CachedData.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<catalyst.plans.logical.LogicalPlan,columnar.InMemoryRelation>,CachedData>

spark-sql_2.10-1.3.0.jar, CacheManager.class
package org.apache.spark.sql
CacheManager.CacheManager ( SQLContext sqlContext )
CacheManager.cacheQuery ( DataFrame query, scala.Option<String> tableName, org.apache.spark.storage.StorageLevel storageLevel )  :  void
CacheManager.cacheTable ( String tableName )  :  void
CacheManager.clearCache ( )  :  void
CacheManager.invalidateCache ( catalyst.plans.logical.LogicalPlan plan )  :  void
CacheManager.isCached ( String tableName )  :  boolean
CacheManager.tryUncacheQuery ( DataFrame query, boolean blocking )  :  boolean
CacheManager.uncacheTable ( String tableName )  :  void
CacheManager.useCachedData ( catalyst.plans.logical.LogicalPlan plan )  :  catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.3.0.jar, CatalystConverter.class
package org.apache.spark.sql.parquet
CatalystConverter.readDecimal ( org.apache.spark.sql.types.Decimal dest, parquet.io.api.Binary value, org.apache.spark.sql.types.DecimalType ctype )  :  void
CatalystConverter.updateString ( int fieldIndex, String value )  :  void

spark-sql_2.10-1.3.0.jar, CatalystNativeArrayConverter.class
package org.apache.spark.sql.parquet
CatalystNativeArrayConverter.CatalystNativeArrayConverter ( org.apache.spark.sql.types.NativeType elementType, int index, CatalystConverter parent, int capacity )

spark-sql_2.10-1.3.0.jar, Column.class
package org.apache.spark.sql
Column.apply ( catalyst.expressions.Expression p1 ) [static]  :  Column
Column.apply ( String p1 ) [static]  :  Column
Column.getItem ( int ordinal )  :  Column

spark-sql_2.10-1.3.0.jar, CreateTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTableUsingAsSelect.copy ( String tableName, String provider, boolean temporary, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )  :  CreateTableUsingAsSelect
CreateTableUsingAsSelect.CreateTableUsingAsSelect ( String tableName, String provider, boolean temporary, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )

spark-sql_2.10-1.3.0.jar, CreateTempTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTempTableUsingAsSelect.copy ( String tableName, String provider, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )  :  CreateTempTableUsingAsSelect
CreateTempTableUsingAsSelect.CreateTempTableUsingAsSelect ( String tableName, String provider, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )

spark-sql_2.10-1.3.0.jar, DDLParser.class
package org.apache.spark.sql.sources
DDLParser.apply ( String input, boolean exceptionOnError )  :  scala.Option<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.3.0.jar, Distinct.class
package org.apache.spark.sql.execution
Distinct.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, DriverQuirks.class
package org.apache.spark.sql.jdbc
DriverQuirks.DriverQuirks ( )
DriverQuirks.get ( String p1 ) [static]  :  DriverQuirks
DriverQuirks.getCatalystType ( int p1, String p2, int p3, org.apache.spark.sql.types.MetadataBuilder p4 ) [abstract]  :  org.apache.spark.sql.types.DataType
DriverQuirks.getJDBCType ( org.apache.spark.sql.types.DataType p1 ) [abstract]  :  scala.Tuple2<String,scala.Option<Object>>

spark-sql_2.10-1.3.0.jar, Exchange.class
package org.apache.spark.sql.execution
Exchange.children ( )  :  scala.collection.immutable.List<SparkPlan>
Exchange.copy ( org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, SparkPlan child )  :  Exchange
Exchange.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.catalyst.plans.physical.Partitioning,scala.Function1<SparkPlan,Exchange>>
Exchange.Exchange ( org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, SparkPlan child )
Exchange.Exchange..bypassMergeThreshold ( )  :  int
Exchange.sortBasedShuffleOn ( )  :  boolean
Exchange.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.catalyst.plans.physical.Partitioning,SparkPlan>,Exchange>

spark-sql_2.10-1.3.0.jar, ExecutedCommand.class
package org.apache.spark.sql.execution
ExecutedCommand.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, Expand.class
package org.apache.spark.sql.execution
Expand.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, ExternalSort.class
package org.apache.spark.sql.execution
ExternalSort.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, Filter.class
package org.apache.spark.sql.execution
Filter.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, Generate.class
package org.apache.spark.sql.execution
Generate.children ( )  :  scala.collection.immutable.List<SparkPlan>
Generate.copy ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, SparkPlan child )  :  Generate
Generate.Generate ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, SparkPlan child )
Generate.generatorOutput ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.3.0.jar, GeneratedAggregate.class
package org.apache.spark.sql.execution
GeneratedAggregate.children ( )  :  scala.collection.immutable.List<SparkPlan>
GeneratedAggregate.copy ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, SparkPlan child )  :  GeneratedAggregate
GeneratedAggregate.GeneratedAggregate ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, SparkPlan child )

spark-sql_2.10-1.3.0.jar, GroupedData.class
package org.apache.spark.sql
GroupedData.GroupedData ( DataFrame df, scala.collection.Seq<catalyst.expressions.Expression> groupingExprs )

spark-sql_2.10-1.3.0.jar, InMemoryColumnarTableScan.class
package org.apache.spark.sql.columnar
InMemoryColumnarTableScan.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, InMemoryRelation.class
package org.apache.spark.sql.columnar
InMemoryRelation.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics )  :  InMemoryRelation
InMemoryRelation.InMemoryRelation ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics )
InMemoryRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, InsertIntoParquetTable.class
package org.apache.spark.sql.parquet
InsertIntoParquetTable.children ( )  :  scala.collection.immutable.List<org.apache.spark.sql.execution.SparkPlan>

spark-sql_2.10-1.3.0.jar, JDBCRDD.class
package org.apache.spark.sql.jdbc
JDBCRDD.getConnector ( String p1, String p2 ) [static]  :  scala.Function0<java.sql.Connection>
JDBCRDD.JDBCRDD ( org.apache.spark.SparkContext sc, scala.Function0<java.sql.Connection> getConnection, org.apache.spark.sql.types.StructType schema, String fqTable, String[ ] columns, org.apache.spark.sql.sources.Filter[ ] filters, org.apache.spark.Partition[ ] partitions )
JDBCRDD.resolveTable ( String p1, String p2 ) [static]  :  org.apache.spark.sql.types.StructType
JDBCRDD.scanTable ( org.apache.spark.SparkContext p1, org.apache.spark.sql.types.StructType p2, String p3, String p4, String p5, String[ ] p6, org.apache.spark.sql.sources.Filter[ ] p7, org.apache.spark.Partition[ ] p8 ) [static]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>

spark-sql_2.10-1.3.0.jar, JDBCRelation.class
package org.apache.spark.sql.jdbc
JDBCRelation.copy ( String url, String table, org.apache.spark.Partition[ ] parts, org.apache.spark.sql.SQLContext sqlContext )  :  JDBCRelation
JDBCRelation.JDBCRelation ( String url, String table, org.apache.spark.Partition[ ] parts, org.apache.spark.sql.SQLContext sqlContext )

spark-sql_2.10-1.3.0.jar, JSONRelation.class
package org.apache.spark.sql.json
JSONRelation.canEqual ( Object p1 )  :  boolean
JSONRelation.copy ( String path, double samplingRatio, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, org.apache.spark.sql.SQLContext sqlContext )  :  JSONRelation
JSONRelation.JSONRelation..baseRDD ( )  :  org.apache.spark.rdd.RDD<String>
JSONRelation.path ( )  :  String
JSONRelation.productArity ( )  :  int
JSONRelation.productElement ( int p1 )  :  Object
JSONRelation.productIterator ( )  :  scala.collection.Iterator<Object>
JSONRelation.productPrefix ( )  :  String
JSONRelation.toString ( )  :  String
JSONRelation.userSpecifiedSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>

spark-sql_2.10-1.3.0.jar, LeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinHash.buildSide ( )  :  package.BuildRight.

spark-sql_2.10-1.3.0.jar, Limit.class
package org.apache.spark.sql.execution
Limit.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, LocalTableScan.class
package org.apache.spark.sql.execution
LocalTableScan.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, LogicalLocalTable.class
package org.apache.spark.sql.execution
LogicalLocalTable.children ( )  :  scala.collection.immutable.Nil.
LogicalLocalTable.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, LogicalRDD.class
package org.apache.spark.sql.execution
LogicalRDD.children ( )  :  scala.collection.immutable.Nil.
LogicalRDD.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, LogicalRelation.class
package org.apache.spark.sql.sources
LogicalRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, MySQLQuirks.class
package org.apache.spark.sql.jdbc
MySQLQuirks.MySQLQuirks ( )

spark-sql_2.10-1.3.0.jar, NativeColumnType<T>.class
package org.apache.spark.sql.columnar
NativeColumnType<T>.dataType ( )  :  T
NativeColumnType<T>.NativeColumnType ( T dataType, int typeId, int defaultSize )  :  public

spark-sql_2.10-1.3.0.jar, NoQuirks.class
package org.apache.spark.sql.jdbc
NoQuirks.NoQuirks ( )

spark-sql_2.10-1.3.0.jar, OutputFaker.class
package org.apache.spark.sql.execution
OutputFaker.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, ParquetRelation.class
package org.apache.spark.sql.parquet
ParquetRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, ParquetRelation2.class
package org.apache.spark.sql.parquet
ParquetRelation2.buildScan ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> predicates )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
ParquetRelation2.canEqual ( Object p1 )  :  boolean
ParquetRelation2.copy ( scala.collection.Seq<String> paths, scala.collection.immutable.Map<String,String> parameters, scala.Option<org.apache.spark.sql.types.StructType> maybeSchema, scala.Option<PartitionSpec> maybePartitionSpec, org.apache.spark.sql.SQLContext sqlContext )  :  ParquetRelation2
ParquetRelation2.DEFAULT_PARTITION_NAME ( ) [static]  :  String
ParquetRelation2.insert ( org.apache.spark.sql.DataFrame data, boolean overwrite )  :  void
ParquetRelation2.isPartitioned ( )  :  boolean
ParquetRelation2.maybeSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>
ParquetRelation2.MERGE_SCHEMA ( ) [static]  :  String
ParquetRelation2.newJobContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.JobID jobId )  :  org.apache.hadoop.mapreduce.JobContext
ParquetRelation2.newTaskAttemptContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.TaskAttemptID attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptContext
ParquetRelation2.newTaskAttemptID ( String jtIdentifier, int jobId, boolean isMap, int taskId, int attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptID
ParquetRelation2.ParquetRelation2..defaultPartitionName ( )  :  String
ParquetRelation2.ParquetRelation2..isSummaryFile ( org.apache.hadoop.fs.Path file )  :  boolean
ParquetRelation2.parameters ( )  :  scala.collection.immutable.Map<String,String>
ParquetRelation2.ParquetRelation2 ( scala.collection.Seq<String> paths, scala.collection.immutable.Map<String,String> parameters, scala.Option<org.apache.spark.sql.types.StructType> maybeSchema, scala.Option<PartitionSpec> maybePartitionSpec, org.apache.spark.sql.SQLContext sqlContext )
ParquetRelation2.partitions ( )  :  scala.collection.Seq<Partition>
ParquetRelation2.productArity ( )  :  int
ParquetRelation2.productElement ( int p1 )  :  Object
ParquetRelation2.productIterator ( )  :  scala.collection.Iterator<Object>
ParquetRelation2.productPrefix ( )  :  String
ParquetRelation2.sparkContext ( )  :  org.apache.spark.SparkContext
ParquetRelation2.toString ( )  :  String

spark-sql_2.10-1.3.0.jar, ParquetTableScan.class
package org.apache.spark.sql.parquet
ParquetTableScan.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, ParquetTest.class
package org.apache.spark.sql.parquet
ParquetTest.configuration ( ) [abstract]  :  org.apache.hadoop.conf.Configuration
ParquetTest.makeParquetFile ( org.apache.spark.sql.DataFrame p1, java.io.File p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.makeParquetFile ( scala.collection.Seq<T> p1, java.io.File p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.makePartitionDir ( java.io.File p1, String p2, scala.collection.Seq<scala.Tuple2<String,Object>> p3 ) [abstract]  :  java.io.File
ParquetTest.sqlContext ( ) [abstract]  :  org.apache.spark.sql.SQLContext
ParquetTest.withParquetDataFrame ( scala.collection.Seq<T> p1, scala.Function1<org.apache.spark.sql.DataFrame,scala.runtime.BoxedUnit> p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.withParquetFile ( scala.collection.Seq<T> p1, scala.Function1<String,scala.runtime.BoxedUnit> p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.withParquetTable ( scala.collection.Seq<T> p1, String p2, scala.Function0<scala.runtime.BoxedUnit> p3, scala.reflect.ClassTag<T> p4, scala.reflect.api.TypeTags.TypeTag<T> p5 ) [abstract]  :  void
ParquetTest.withSQLConf ( scala.collection.Seq<scala.Tuple2<String,String>> p1, scala.Function0<scala.runtime.BoxedUnit> p2 ) [abstract]  :  void
ParquetTest.withTempDir ( scala.Function1<java.io.File,scala.runtime.BoxedUnit> p1 ) [abstract]  :  void
ParquetTest.withTempPath ( scala.Function1<java.io.File,scala.runtime.BoxedUnit> p1 ) [abstract]  :  void
ParquetTest.withTempTable ( String p1, scala.Function0<scala.runtime.BoxedUnit> p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, Partition.class
package org.apache.spark.sql.parquet
Partition.canEqual ( Object p1 )  :  boolean
Partition.copy ( org.apache.spark.sql.Row values, String path )  :  Partition
Partition.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.Row,scala.Function1<String,Partition>>
Partition.equals ( Object p1 )  :  boolean
Partition.hashCode ( )  :  int
Partition.Partition ( org.apache.spark.sql.Row values, String path )
Partition.path ( )  :  String
Partition.productArity ( )  :  int
Partition.productElement ( int p1 )  :  Object
Partition.productIterator ( )  :  scala.collection.Iterator<Object>
Partition.productPrefix ( )  :  String
Partition.toString ( )  :  String
Partition.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.Row,String>,Partition>
Partition.values ( )  :  org.apache.spark.sql.Row

spark-sql_2.10-1.3.0.jar, PartitionSpec.class
package org.apache.spark.sql.parquet
PartitionSpec.canEqual ( Object p1 )  :  boolean
PartitionSpec.copy ( org.apache.spark.sql.types.StructType partitionColumns, scala.collection.Seq<Partition> partitions )  :  PartitionSpec
PartitionSpec.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.types.StructType,scala.Function1<scala.collection.Seq<Partition>,PartitionSpec>>
PartitionSpec.equals ( Object p1 )  :  boolean
PartitionSpec.hashCode ( )  :  int
PartitionSpec.partitionColumns ( )  :  org.apache.spark.sql.types.StructType
PartitionSpec.partitions ( )  :  scala.collection.Seq<Partition>
PartitionSpec.PartitionSpec ( org.apache.spark.sql.types.StructType partitionColumns, scala.collection.Seq<Partition> partitions )
PartitionSpec.productArity ( )  :  int
PartitionSpec.productElement ( int p1 )  :  Object
PartitionSpec.productIterator ( )  :  scala.collection.Iterator<Object>
PartitionSpec.productPrefix ( )  :  String
PartitionSpec.toString ( )  :  String
PartitionSpec.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.types.StructType,scala.collection.Seq<Partition>>,PartitionSpec>

spark-sql_2.10-1.3.0.jar, PhysicalRDD.class
package org.apache.spark.sql.execution
PhysicalRDD.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, PostgresQuirks.class
package org.apache.spark.sql.jdbc
PostgresQuirks.PostgresQuirks ( )

spark-sql_2.10-1.3.0.jar, PreWriteCheck.class
package org.apache.spark.sql.sources
PreWriteCheck.failAnalysis ( String msg )  :  scala.runtime.Nothing.

spark-sql_2.10-1.3.0.jar, Project.class
package org.apache.spark.sql.execution
Project.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, PythonUDF.class
package org.apache.spark.sql.execution
PythonUDF.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )  :  PythonUDF
PythonUDF.eval ( org.apache.spark.sql.Row input )  :  scala.runtime.Nothing.
PythonUDF.PythonUDF ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )

spark-sql_2.10-1.3.0.jar, ResolvedDataSource.class
package org.apache.spark.sql.sources
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, scala.Option<org.apache.spark.sql.types.StructType> p2, String p3, scala.collection.immutable.Map<String,String> p4 ) [static]  :  ResolvedDataSource
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, String p2, org.apache.spark.sql.SaveMode p3, scala.collection.immutable.Map<String,String> p4, org.apache.spark.sql.DataFrame p5 ) [static]  :  ResolvedDataSource

spark-sql_2.10-1.3.0.jar, Sample.class
package org.apache.spark.sql.execution
Sample.children ( )  :  scala.collection.immutable.List<SparkPlan>
Sample.copy ( double fraction, boolean withReplacement, long seed, SparkPlan child )  :  Sample
Sample.fraction ( )  :  double
Sample.Sample ( double fraction, boolean withReplacement, long seed, SparkPlan child )

spark-sql_2.10-1.3.0.jar, Sort.class
package org.apache.spark.sql.execution
Sort.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  CacheManager
SQLContext.checkAnalysis ( )  :  catalyst.analysis.CheckAnalysis
SQLContext.createDataFrame ( org.apache.spark.api.java.JavaRDD<Row> rowRDD, java.util.List<String> columns )  :  DataFrame

spark-sql_2.10-1.3.0.jar, TakeOrdered.class
package org.apache.spark.sql.execution
TakeOrdered.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, TestGroupWriteSupport.class
package org.apache.spark.sql.parquet
TestGroupWriteSupport.TestGroupWriteSupport ( parquet.schema.MessageType schema )

spark-sql_2.10-1.3.0.jar, UserDefinedPythonFunction.class
package org.apache.spark.sql
UserDefinedPythonFunction.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )  :  UserDefinedPythonFunction
UserDefinedPythonFunction.UserDefinedPythonFunction ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )


Problems with Data Types, High Severity (21)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql
[+] CachedData (1)
[+] CacheManager (1)

package org.apache.spark.sql.execution
[+] AddExchange (1)

package org.apache.spark.sql.execution.joins
[+] GeneralHashedRelation (1)
[+] UniqueKeyHashedRelation (1)

package org.apache.spark.sql.jdbc
[+] DriverQuirks (1)
[+] JDBCRDD.DecimalConversion. (1)
[+] MySQLQuirks (1)
[+] NoQuirks (1)
[+] PostgresQuirks (1)

package org.apache.spark.sql.json
[+] JSONRelation (2)

package org.apache.spark.sql.parquet
[+] ParquetRelation2 (5)
[+] ParquetTest (1)
[+] Partition (1)
[+] PartitionSpec (1)
[+] TestGroupWriteSupport (1)


Problems with Methods, High Severity (3)


spark-sql_2.10-1.3.0.jar, CatalystConverter
package org.apache.spark.sql.parquet
[+] CatalystConverter.readDecimal ( org.apache.spark.sql.types.Decimal dest, parquet.io.api.Binary value, org.apache.spark.sql.types.DecimalType ctype )  :  void (1)

spark-sql_2.10-1.3.0.jar, ParquetRelation2
package org.apache.spark.sql.parquet
[+] ParquetRelation2.maybePartitionSpec ( )  :  scala.Option<PartitionSpec> (1)

spark-sql_2.10-1.3.0.jar, TakeOrdered
package org.apache.spark.sql.execution
[+] TakeOrdered.ord ( )  :  org.apache.spark.sql.catalyst.expressions.RowOrdering (1)


Problems with Data Types, Medium Severity (15)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql.execution
[+] CacheTableCommand (1)
[+] DescribeCommand (1)
[+] ExplainCommand (1)
[+] RunnableCommand (1)
[+] SetCommand (1)
[+] ShowTablesCommand (1)
[+] UncacheTableCommand (1)

package org.apache.spark.sql.jdbc
[+] JDBCRDD.DecimalConversion. (1)

package org.apache.spark.sql.parquet
[+] ParquetRelation2 (1)

package org.apache.spark.sql.sources
[+] CreateTableUsing (1)
[+] CreateTempTableUsing (1)
[+] CreateTempTableUsingAsSelect (1)
[+] DescribeCommand (1)
[+] InsertIntoDataSource (1)
[+] RefreshTable (1)


Problems with Methods, Medium Severity (1)


spark-sql_2.10-1.3.0.jar, SparkPlan
package org.apache.spark.sql.execution
[+] SparkPlan.execute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> (1)


Problems with Data Types, Low Severity (44)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql.columnar
[+] InMemoryColumnarTableScan (1)

package org.apache.spark.sql.execution
[+] Aggregate (2)
[+] BatchPythonEvaluation (1)
[+] Distinct (1)
[+] Except (1)
[+] Exchange (1)
[+] ExecutedCommand (1)
[+] Expand (1)
[+] ExternalSort (1)
[+] Filter (1)
[+] Generate (1)
[+] GeneratedAggregate (1)
[+] Intersect (1)
[+] Limit (2)
[+] LocalTableScan (1)
[+] OutputFaker (1)
[+] PhysicalRDD (1)
[+] Project (1)
[+] Sample (1)
[+] Sort (1)
[+] SparkPlan (1)
[+] TakeOrdered (2)
[+] Union (1)

package org.apache.spark.sql.execution.joins
[+] BroadcastHashJoin (2)
[+] BroadcastLeftSemiJoinHash (1)
[+] BroadcastNestedLoopJoin (1)
[+] CartesianProduct (1)
[+] HashOuterJoin (2)
[+] LeftSemiJoinBNL (1)
[+] LeftSemiJoinHash (2)
[+] ShuffledHashJoin (2)

package org.apache.spark.sql.parquet
[+] InsertIntoParquetTable (1)
[+] ParquetRelation2 (4)
[+] ParquetTableScan (1)


Problems with Methods, Low Severity (1)


spark-sql_2.10-1.3.0.jar, SparkPlan
package org.apache.spark.sql.execution
[+] SparkPlan.execute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> (1)


Other Changes in Data Types (4)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql.execution
[+] RunnableCommand (1)
[+] SparkPlan (1)

package org.apache.spark.sql.execution.joins
[+] HashedRelation (2)


Java ARchives (1)


spark-sql_2.10-1.3.0.jar





Generated on Wed Oct 28 11:09:12 2015 for succinct-0.1.2 by Java API Compliance Checker 1.4.1  
A tool for checking backward compatibility of a Java library API