Binary compatibility report for the spark-cassandra-connector_2.11-1.3.1 library between versions 1.3.0 and 1.4.0 of its Spark dependencies (relating to the portability of the client application spark-cassandra-connector_2.11-1.3.1.jar)

Test Info


Library Name: spark-cassandra-connector_2.11-1.3.1
Version #1: 1.3.0
Version #2: 1.4.0
Java Version: 1.7.0_75

Test Results


Total Java ARchives: 5
Total Methods / Classes: 1324 / 3050
Verdict: Incompatible (2.5%)

Problem Summary


Category                      Severity   Count
Added Methods                 -          99
Removed Methods               High       21
Problems with Data Types      High       4
                              Medium     1
                              Low        1
Problems with Methods         High       0
                              Medium     1
                              Low        0
Other Changes in Data Types   -          4

Added Methods (99)


spark-catalyst_2.11-1.4.0.jar, Catalog.class
package org.apache.spark.sql.catalyst.analysis
Catalog.conf ( ) [abstract]  :  org.apache.spark.sql.catalyst.CatalystConf

spark-catalyst_2.11-1.4.0.jar, Expression.class
package org.apache.spark.sql.catalyst.expressions
Expression.deterministic ( )  :  boolean
Expression.semanticEquals ( Expression other )  :  boolean

spark-catalyst_2.11-1.4.0.jar, LogicalPlan.class
package org.apache.spark.sql.catalyst.plans.logical
LogicalPlan.LogicalPlan..resolveAsColumn ( scala.collection.Seq<String> nameParts, scala.Function2<String,String,Object> resolver, org.apache.spark.sql.catalyst.expressions.Attribute attribute )  :  scala.Option<scala.Tuple2<org.apache.spark.sql.catalyst.expressions.Attribute,scala.collection.immutable.List<String>>>
LogicalPlan.LogicalPlan..resolveAsTableColumn ( scala.collection.Seq<String> nameParts, scala.Function2<String,String,Object> resolver, org.apache.spark.sql.catalyst.expressions.Attribute attribute )  :  scala.Option<scala.Tuple2<org.apache.spark.sql.catalyst.expressions.Attribute,scala.collection.immutable.List<String>>>
LogicalPlan.resolve ( scala.collection.Seq<String> nameParts, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> input, scala.Function2<String,String,Object> resolver, boolean throwErrors )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
LogicalPlan.resolve ( scala.collection.Seq<String> nameParts, scala.Function2<String,String,Object> resolver, boolean throwErrors )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
LogicalPlan.resolveChildren ( scala.collection.Seq<String> nameParts, scala.Function2<String,String,Object> resolver, boolean throwErrors )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
LogicalPlan.resolveQuoted ( String name, scala.Function2<String,String,Object> resolver )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>

spark-catalyst_2.11-1.4.0.jar, Row.class
package org.apache.spark.sql
Row.fieldIndex ( String p1 ) [abstract]  :  int
Row.getAs ( String p1 ) [abstract]  :  T
Row.getValuesMap ( scala.collection.Seq<String> p1 ) [abstract]  :  scala.collection.immutable.Map<String,T>
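
The three new Row members allow name-based access in client code. A minimal sketch, assuming a row that carries a schema with hypothetical fields "name" and "age":

    import org.apache.spark.sql.Row

    def describe(row: Row): String = {
      val i = row.fieldIndex("age")          // ordinal of the "age" field
      val name = row.getAs[String]("name")   // typed access by field name
      s"$name is ${row.getInt(i)} years old"
    }

Note that these name-based accessors are only defined for rows that carry a schema, such as rows produced by a DataFrame.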

spark-catalyst_2.11-1.4.0.jar, StructType.class
package org.apache.spark.sql.types
StructType.fieldIndex ( String name )  :  int
StructType.getFieldIndex ( String name )  :  scala.Option<Object>
StructType.StructType ( )
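
StructType gains the matching name-to-ordinal lookups. A short sketch with a hypothetical two-field schema:

    import org.apache.spark.sql.types._

    val schema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)))

    schema.fieldIndex("age")        // 1; throws if the field is absent
    schema.getFieldIndex("ghost")   // None -- the non-throwing variant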

spark-core_2.11-1.4.0.jar, RDD<T>.class
package org.apache.spark.rdd
RDD<T>.RDD..doCheckpointCalled ( )  :  boolean
RDD<T>.RDD..doCheckpointCalled_.eq ( boolean p1 )  :  void
RDD<T>.RDD..sc ( )  :  org.apache.spark.SparkContext
RDD<T>.randomSampleWithRange ( double lb, double ub, long seed )  :  RDD<T>
RDD<T>.scope ( )  :  scala.Option<RDDOperationScope>
RDD<T>.withScope ( scala.Function0<U> body )  :  U

spark-core_2.11-1.4.0.jar, SparkConf.class
package org.apache.spark
SparkConf.getDeprecatedConfig ( String p1, SparkConf p2 ) [static]  :  scala.Option<String>
SparkConf.getSizeAsBytes ( String key )  :  long
SparkConf.getSizeAsBytes ( String key, String defaultValue )  :  long
SparkConf.getSizeAsGb ( String key )  :  long
SparkConf.getSizeAsGb ( String key, String defaultValue )  :  long
SparkConf.getSizeAsKb ( String key )  :  long
SparkConf.getSizeAsKb ( String key, String defaultValue )  :  long
SparkConf.getSizeAsMb ( String key )  :  long
SparkConf.getSizeAsMb ( String key, String defaultValue )  :  long
SparkConf.getTimeAsMs ( String key )  :  long
SparkConf.getTimeAsMs ( String key, String defaultValue )  :  long
SparkConf.getTimeAsSeconds ( String key )  :  long
SparkConf.getTimeAsSeconds ( String key, String defaultValue )  :  long
SparkConf.logDeprecationWarning ( String p1 ) [static]  :  void
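
The new getSizeAs* / getTimeAs* getters parse human-readable suffixes ("k", "m", "g", "s", "ms", ...), removing the need for hand-rolled unit parsing. A sketch using hypothetical configuration keys:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.example.buffer", "512m")    // hypothetical key
      .set("spark.example.timeout", "30s")    // hypothetical key

    conf.getSizeAsMb("spark.example.buffer")               // 512
    conf.getSizeAsBytes("spark.example.buffer")            // 536870912
    conf.getTimeAsMs("spark.example.timeout")              // 30000
    conf.getTimeAsSeconds("spark.example.missing", "60s")  // 60 (default applied)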

spark-core_2.11-1.4.0.jar, SparkContext.class
package org.apache.spark
SparkContext.applicationAttemptId ( )  :  scala.Option<String>
SparkContext.externalBlockStoreFolderName ( )  :  String
SparkContext.getOrCreate ( ) [static]  :  SparkContext
SparkContext.getOrCreate ( SparkConf p1 ) [static]  :  SparkContext
SparkContext.SparkContext.._conf ( )  :  SparkConf
SparkContext.SparkContext.._env ( )  :  SparkEnv
SparkContext.SparkContext..assertNotStopped ( )  :  void
SparkContext.range ( long start, long end, long step, int numSlices )  :  rdd.RDD<Object>
SparkContext.setLogLevel ( String logLevel )  :  void
SparkContext.supportDynamicAllocation ( )  :  boolean
SparkContext.withScope ( scala.Function0<U> body )  :  U
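
getOrCreate, setLogLevel and range are the main user-facing additions here. A brief sketch:

    import org.apache.spark.{SparkConf, SparkContext}

    // Returns the already-active context if one exists, otherwise creates one.
    val sc = SparkContext.getOrCreate(
      new SparkConf().setAppName("example").setMaster("local[2]"))

    sc.setLogLevel("WARN")                 // change log verbosity at runtime
    val evens = sc.range(0L, 10L, 2L, 2)   // RDD containing 0, 2, 4, 6, 8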

spark-core_2.11-1.4.0.jar, TaskContext.class
package org.apache.spark
TaskContext.taskMemoryManager ( ) [abstract]  :  unsafe.memory.TaskMemoryManager

spark-sql_2.11-1.4.0.jar, BaseRelation.class
package org.apache.spark.sql.sources
BaseRelation.needConversion ( )  :  boolean
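
needConversion controls whether Spark SQL converts the rows a source produces into its internal representation; the default is true, and a source may override it when its rows are already in internal form. A hedged sketch of a custom relation (MyRelation is hypothetical):

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.sources.BaseRelation
    import org.apache.spark.sql.types.StructType

    class MyRelation(val sqlContext: SQLContext, val schema: StructType)
        extends BaseRelation {
      // Keep the default unless this source emits rows that are
      // already converted to Spark SQL's internal format.
      override def needConversion: Boolean = true
    }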

spark-sql_2.11-1.4.0.jar, DataFrame.class
package org.apache.spark.sql
DataFrame.coalesce ( int numPartitions )  :  DataFrame
DataFrame.cube ( Column... cols )  :  GroupedData
DataFrame.cube ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.cube ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.cube ( String col1, String... cols )  :  GroupedData
DataFrame.describe ( scala.collection.Seq<String> cols )  :  DataFrame
DataFrame.describe ( String... cols )  :  DataFrame
DataFrame.drop ( String colName )  :  DataFrame
DataFrame.dropDuplicates ( )  :  DataFrame
DataFrame.dropDuplicates ( scala.collection.Seq<String> colNames )  :  DataFrame
DataFrame.dropDuplicates ( String[ ] colNames )  :  DataFrame
DataFrame.join ( DataFrame right, String usingColumn )  :  DataFrame
DataFrame.na ( )  :  DataFrameNaFunctions
DataFrame.DataFrame..logicalPlanToDataFrame ( catalyst.plans.logical.LogicalPlan logicalPlan )  :  DataFrame
DataFrame.randomSplit ( double[ ] weights )  :  DataFrame[ ]
DataFrame.randomSplit ( double[ ] weights, long seed )  :  DataFrame[ ]
DataFrame.randomSplit ( scala.collection.immutable.List<Object> weights, long seed )  :  DataFrame[ ]
DataFrame.rollup ( Column... cols )  :  GroupedData
DataFrame.rollup ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.rollup ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.rollup ( String col1, String... cols )  :  GroupedData
DataFrame.stat ( )  :  DataFrameStatFunctions
DataFrame.write ( )  :  DataFrameWriter
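
Most of these are user-facing conveniences new in Spark 1.4. A sketch chaining a few of them, assuming a hypothetical DataFrame with columns "dept", "city" and "salary":

    import org.apache.spark.sql.DataFrame

    def summarize(df: DataFrame): Unit = {
      val deduped = df.dropDuplicates(Seq("dept", "city"))
      deduped.rollup("dept", "city").count().show()   // hierarchical subtotals
      deduped.describe("salary").show()               // count, mean, stddev, min, max
      deduped.write.parquet("/tmp/summary.parquet")   // new write() entry point
    }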

spark-sql_2.11-1.4.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  execution.CacheManager
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<Row> rowRDD, types.StructType schema, boolean needsConversion )  :  DataFrame
SQLContext.createSession ( )  :  SQLContext.SQLSession
SQLContext.currentSession ( )  :  SQLContext.SQLSession
SQLContext.defaultSession ( )  :  SQLContext.SQLSession
SQLContext.detachSession ( )  :  void
SQLContext.dialectClassName ( )  :  String
SQLContext.getOrCreate ( org.apache.spark.SparkContext p1 ) [static]  :  SQLContext
SQLContext.getSQLDialect ( )  :  catalyst.ParserDialect
SQLContext.openSession ( )  :  SQLContext.SQLSession
SQLContext.range ( long start, long end )  :  DataFrame
SQLContext.range ( long start, long end, long step, int numPartitions )  :  DataFrame
SQLContext.read ( )  :  DataFrameReader
SQLContext.tlSession ( )  :  ThreadLocal<SQLContext.SQLSession>
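
read() introduces the DataFrameReader builder as the standard way to load data, and getOrCreate mirrors the new SparkContext helper. A sketch with a hypothetical input path:

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.{DataFrame, SQLContext}

    def loadEvents(sc: SparkContext): DataFrame = {
      val sqlContext = SQLContext.getOrCreate(sc)   // reuse a singleton instance
      sqlContext.read.json("/data/events.json")     // DataFrameReader entry point
    }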

spark-streaming_2.11-1.4.0.jar, DStream<T>.class
package org.apache.spark.streaming.dstream
DStream<T>.baseScope ( )  :  scala.Option<String>
DStream<T>.createRDDWithLocalProperties ( org.apache.spark.streaming.Time time, scala.Function0<U> body )  :  U
DStream<T>.validateAtStart ( )  :  void

spark-streaming_2.11-1.4.0.jar, StreamingContext.class
package org.apache.spark.streaming
StreamingContext.getActive ( ) [static]  :  scala.Option<StreamingContext>
StreamingContext.getActiveOrCreate ( scala.Function0<StreamingContext> p1 ) [static]  :  StreamingContext
StreamingContext.getActiveOrCreate ( String p1, scala.Function0<StreamingContext> p2, org.apache.hadoop.conf.Configuration p3, boolean p4 ) [static]  :  StreamingContext
StreamingContext.getNewInputStreamId ( )  :  int
StreamingContext.getState ( )  :  StreamingContextState
StreamingContext.isCheckpointingEnabled ( )  :  boolean
StreamingContext.StreamingContext..startSite ( )  :  java.util.concurrent.atomic.AtomicReference<org.apache.spark.util.CallSite>
StreamingContext.StreamingContext..stopOnShutdown ( )  :  void
StreamingContext.StreamingContext ( String path, org.apache.spark.SparkContext sparkContext )
StreamingContext.withNamedScope ( String name, scala.Function0<U> body )  :  U
StreamingContext.withScope ( scala.Function0<U> body )  :  U
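
getActiveOrCreate and getState formalize context lifecycle management. A sketch, assuming a hypothetical checkpoint directory and factory function:

    import org.apache.spark.streaming.StreamingContext

    def createContext(): StreamingContext = ???   // hypothetical factory

    // Recovers from the checkpoint directory if data exists there,
    // otherwise invokes the factory to build a fresh context.
    val ssc = StreamingContext.getActiveOrCreate("/chk", createContext _)
    ssc.start()
    ssc.awaitTermination()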


Removed Methods (21)


spark-catalyst_2.11-1.3.0.jar, Catalog.class
package org.apache.spark.sql.catalyst.analysis
Catalog.caseSensitive ( ) [abstract]  :  boolean

spark-catalyst_2.11-1.3.0.jar, LogicalPlan.class
package org.apache.spark.sql.catalyst.plans.logical
LogicalPlan.LogicalPlan..resolveAsColumn ( String[ ] nameParts, scala.Function2<String,String,Object> resolver, org.apache.spark.sql.catalyst.expressions.Attribute attribute )  :  scala.Option<scala.Tuple2<org.apache.spark.sql.catalyst.expressions.Attribute,scala.collection.immutable.List<String>>>
LogicalPlan.LogicalPlan..resolveAsTableColumn ( String[ ] nameParts, scala.Function2<String,String,Object> resolver, org.apache.spark.sql.catalyst.expressions.Attribute attribute )  :  scala.Option<scala.Tuple2<org.apache.spark.sql.catalyst.expressions.Attribute,scala.collection.immutable.List<String>>>
LogicalPlan.resolve ( String name, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> input, scala.Function2<String,String,Object> resolver )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
LogicalPlan.resolve ( String name, scala.Function2<String,String,Object> resolver )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
LogicalPlan.resolveChildren ( String name, scala.Function2<String,String,Object> resolver )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.NamedExpression>
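
The String-based resolve overloads above were superseded by the Seq[String] nameParts variants listed under the added methods; callers that passed a dotted name now split it into parts first. A hedged migration sketch (the case-insensitive resolver is illustrative):

    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

    def resolveDotted(plan: LogicalPlan) = {
      val resolver: (String, String) => Boolean = _.equalsIgnoreCase(_)
      // 1.3: plan.resolve("a.b", resolver)
      plan.resolve(Seq("a", "b"), resolver, throwErrors = false)   // 1.4
    }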

spark-catalyst_2.11-1.3.0.jar, NativeType.class
package org.apache.spark.sql.types
NativeType.all ( ) [static]  :  scala.collection.Seq<PrimitiveType>
NativeType.classTag ( )  :  scala.reflect.ClassTag<Object>
NativeType.NativeType ( )
NativeType.ordering ( ) [abstract]  :  scala.math.Ordering<Object>
NativeType.tag ( ) [abstract]  :  scala.reflect.api.TypeTags.TypeTag<Object>

spark-catalyst_2.11-1.3.0.jar, PrimitiveType.class
package org.apache.spark.sql.types
PrimitiveType.isPrimitive ( ) [abstract]  :  boolean

spark-core_2.11-1.3.0.jar, SparkConf.class
package org.apache.spark
SparkConf.translateConfKey ( String p1, boolean p2 ) [static]  :  String

spark-sql_2.11-1.3.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  CacheManager
SQLContext.checkAnalysis ( )  :  catalyst.analysis.CheckAnalysis
SQLContext.createDataFrame ( org.apache.spark.api.java.JavaRDD<Row> rowRDD, java.util.List<String> columns )  :  DataFrame

spark-streaming_2.11-1.3.0.jar, DStream<T>.class
package org.apache.spark.streaming.dstream
DStream<T>.validate ( )  :  void

spark-streaming_2.11-1.3.0.jar, StreamingContext.class
package org.apache.spark.streaming
StreamingContext.getNewReceiverStreamId ( )  :  int
StreamingContext.state ( )  :  scala.Enumeration.Value
StreamingContext.state_.eq ( scala.Enumeration.Value p1 )  :  void
StreamingContext.StreamingContextState ( )  :  StreamingContext.StreamingContextState.
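
The internal Enumeration-based state machinery was replaced by the public getState() / StreamingContextState API listed under the added methods. A minimal migration sketch:

    import org.apache.spark.streaming.{StreamingContext, StreamingContextState}

    // 1.3 compared the internal state() Enumeration value; 1.4 exposes getState().
    def isRunning(ssc: StreamingContext): Boolean =
      ssc.getState() == StreamingContextState.ACTIVE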


Problems with Data Types, High Severity (4)


spark-catalyst_2.11-1.3.0.jar
package org.apache.spark.sql.catalyst.analysis
[+] Catalog (1)

package org.apache.spark.sql.types
[+] NativeType (1)
[+] PrimitiveType (1)

spark-core_2.11-1.3.0.jar
package org.apache.spark.api.java
[+] JavaRDD<T> (1)


Problems with Data Types, Medium Severity (1)


spark-catalyst_2.11-1.3.0.jar
package org.apache.spark.sql.catalyst.analysis
[+] Catalog (1)


Problems with Methods, Medium Severity (1)


spark-streaming_2.11-1.3.0.jar, DStream
package org.apache.spark.streaming.dstream
[+] DStream<T>.getOrCompute ( org.apache.spark.streaming.Time time )  :  scala.Option<org.apache.spark.rdd.RDD<T>> (1)


Problems with Data Types, Low Severity (1)


spark-core_2.11-1.3.0.jar
package org.apache.spark.api.java
[+] JavaRDD<T> (1)


Other Changes in Data Types (4)


spark-catalyst_2.11-1.3.0.jar
package org.apache.spark.sql
[+] Row (3)

spark-core_2.11-1.3.0.jar
package org.apache.spark
[+] TaskContext (1)


Java ARchives (5)


spark-catalyst_2.11-1.3.0.jar
spark-core_2.11-1.3.0.jar
spark-hive_2.11-1.3.0.jar
spark-sql_2.11-1.3.0.jar
spark-streaming_2.11-1.3.0.jar





Generated on Mon Oct 5 11:47:39 2015 for spark-cassandra-connector_2.11-1.3.1 by Java API Compliance Checker 1.4.1  
A tool for checking backward compatibility of a Java library API