Binary compatibility report for the spark-avro_2.10-2.0.1 library between Spark 1.4.0 and Spark 1.0.0 (portability of the client application spark-avro_2.10-2.0.1.jar)
Test Info

| Library Name | spark-avro_2.10-2.0.1 |
| Version #1 | 1.4.0 |
| Version #2 | 1.0.0 |
| Java Version | 1.7.0_75 |
Test Results

| Total Java ARchives | 2 |
| Total Methods / Classes | 186 / 2470 |
| Verdict | Incompatible (85.5%) |
Problem Summary

| | Severity | Count |
| Added Methods | - | 27 |
| Removed Methods | High | 132 |
| Problems with Data Types | High | 10 |
| | Medium | 0 |
| | Low | 0 |
| Problems with Methods | High | 0 |
| | Medium | 0 |
| | Low | 0 |
Added Methods (27)
spark-sql_2.10-1.0.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.binaryToLiteral ( byte[ ] a ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.binaryToLiteral:([B)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.booleanToLiteral ( boolean b ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.booleanToLiteral:(Z)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.byteToLiteral ( byte b ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.byteToLiteral:(B)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.createParquetFile ( String path, boolean allowExisting, org.apache.hadoop.conf.Configuration conf, scala.reflect.api.TypeTags.TypeTag<A> p4 ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.createParquetFile:(Ljava/lang/String;ZLorg/apache/hadoop/conf/Configuration;Lscala/reflect/api/TypeTags$TypeTag;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.createSchemaRDD ( org.apache.spark.rdd.RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> p2 ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.createSchemaRDD:(Lorg/apache/spark/rdd/RDD;Lscala/reflect/api/TypeTags$TypeTag;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.decimalToLiteral ( scala.math.BigDecimal d ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.decimalToLiteral:(Lscala/math/BigDecimal;)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.doubleToLiteral ( double d ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.doubleToLiteral:(D)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.DslAttribute ( catalyst.expressions.AttributeReference a ) : catalyst.dsl.package.ExpressionConversions.DslAttribute
[mangled: org/apache/spark/sql/SQLContext.DslAttribute:(Lorg/apache/spark/sql/catalyst/expressions/AttributeReference;)Lorg/apache/spark/sql/catalyst/dsl/package$ExpressionConversions$DslAttribute;]
SQLContext.DslExpression ( catalyst.expressions.Expression e ) : catalyst.dsl.package.ExpressionConversions.DslExpression
[mangled: org/apache/spark/sql/SQLContext.DslExpression:(Lorg/apache/spark/sql/catalyst/expressions/Expression;)Lorg/apache/spark/sql/catalyst/dsl/package$ExpressionConversions$DslExpression;]
SQLContext.DslString ( String s ) : catalyst.dsl.package.ExpressionConversions.DslString
[mangled: org/apache/spark/sql/SQLContext.DslString:(Ljava/lang/String;)Lorg/apache/spark/sql/catalyst/dsl/package$ExpressionConversions$DslString;]
SQLContext.DslSymbol ( scala.Symbol sym ) : catalyst.dsl.package.ExpressionConversions.DslSymbol
[mangled: org/apache/spark/sql/SQLContext.DslSymbol:(Lscala/Symbol;)Lorg/apache/spark/sql/catalyst/dsl/package$ExpressionConversions$DslSymbol;]
SQLContext.floatToLiteral ( float f ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.floatToLiteral:(F)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.inferSchema ( org.apache.spark.rdd.RDD<scala.collection.immutable.Map<String,Object>> rdd ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.inferSchema:(Lorg/apache/spark/rdd/RDD;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.intToLiteral ( int i ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.intToLiteral:(I)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.logger ( ) : com.typesafe.scalalogging.slf4j.Logger
[mangled: org/apache/spark/sql/SQLContext.logger:()Lcom/typesafe/scalalogging/slf4j/Logger;]
SQLContext.logicalPlanToSparkQuery ( catalyst.plans.logical.LogicalPlan plan ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.logicalPlanToSparkQuery:(Lorg/apache/spark/sql/catalyst/plans/logical/LogicalPlan;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.longToLiteral ( long l ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.longToLiteral:(J)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.optimizer ( ) : catalyst.optimizer.Optimizer.
[mangled: org/apache/spark/sql/SQLContext.optimizer:()Lorg/apache/spark/sql/catalyst/optimizer/Optimizer$;]
SQLContext.parquetFile ( String path ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.parquetFile:(Ljava/lang/String;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.parser ( ) : catalyst.SqlParser
[mangled: org/apache/spark/sql/SQLContext.parser:()Lorg/apache/spark/sql/catalyst/SqlParser;]
SQLContext.registerRDDAsTable ( SchemaRDD rdd, String tableName ) : void
[mangled: org/apache/spark/sql/SQLContext.registerRDDAsTable:(Lorg/apache/spark/sql/SchemaRDD;Ljava/lang/String;)V]
SQLContext.shortToLiteral ( short s ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.shortToLiteral:(S)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.sql ( String sqlText ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.sql:(Ljava/lang/String;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.stringToLiteral ( String s ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.stringToLiteral:(Ljava/lang/String;)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
SQLContext.symbolToUnresolvedAttribute ( scala.Symbol s ) : catalyst.analysis.UnresolvedAttribute
[mangled: org/apache/spark/sql/SQLContext.symbolToUnresolvedAttribute:(Lscala/Symbol;)Lorg/apache/spark/sql/catalyst/analysis/UnresolvedAttribute;]
SQLContext.table ( String tableName ) : SchemaRDD
[mangled: org/apache/spark/sql/SQLContext.table:(Ljava/lang/String;)Lorg/apache/spark/sql/SchemaRDD;]
SQLContext.timestampToLiteral ( java.sql.Timestamp t ) : catalyst.expressions.Literal
[mangled: org/apache/spark/sql/SQLContext.timestampToLiteral:(Ljava/sql/Timestamp;)Lorg/apache/spark/sql/catalyst/expressions/Literal;]
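Many of these "added" methods are simply the 1.0.0 spellings of APIs that later changed shape: sql ( String ) appears here returning SchemaRDD, while its 1.4.0 counterpart returning DataFrame is listed under Removed Methods below. Because the return type is part of the JVM method descriptor (compare the two mangled forms), the two are distinct methods at link time even though the Scala source call is identical. A minimal sketch of how that surfaces, assuming a 1.4.0 compile classpath and a hypothetical driver object:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SqlDescriptorDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("demo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    // Compiled against Spark 1.4.0, this call site is bound to
    //   sql:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;
    // A 1.0.0 runtime only offers
    //   sql:(Ljava/lang/String;)Lorg/apache/spark/sql/SchemaRDD;
    // so execution stops here with NoSuchMethodError.
    val df = sqlContext.sql("SELECT 1")
    println(df.count())
    sc.stop()
  }
}
```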
Removed Methods (132)
spark-core_2.10-1.4.0.jar, Logging.class
package org.apache.spark
Logging.logName ( ) [abstract] : String
[mangled: org/apache/spark/Logging.logName:()Ljava/lang/String;]
spark-sql_2.10-1.4.0.jar, DataFrameReader.class
package org.apache.spark.sql
DataFrameReader.DataFrameReader ( SQLContext sqlContext )
[mangled: org/apache/spark/sql/DataFrameReader."<init>":(Lorg/apache/spark/sql/SQLContext;)V]
DataFrameReader.format ( String source ) : DataFrameReader
[mangled: org/apache/spark/sql/DataFrameReader.format:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameReader;]
DataFrameReader.jdbc ( String url, String table, java.util.Properties properties ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.jdbc:(Ljava/lang/String;Ljava/lang/String;Ljava/util/Properties;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.jdbc ( String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, java.util.Properties connectionProperties ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.jdbc:(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;JJILjava/util/Properties;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.jdbc ( String url, String table, String[ ] predicates, java.util.Properties connectionProperties ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.jdbc:(Ljava/lang/String;Ljava/lang/String;[Ljava/lang/String;Ljava/util/Properties;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.json ( org.apache.spark.api.java.JavaRDD<String> jsonRDD ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.json:(Lorg/apache/spark/api/java/JavaRDD;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.json ( org.apache.spark.rdd.RDD<String> jsonRDD ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.json:(Lorg/apache/spark/rdd/RDD;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.json ( String path ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.json:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.load ( ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.load:()Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.load ( String path ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.load:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.option ( String key, String value ) : DataFrameReader
[mangled: org/apache/spark/sql/DataFrameReader.option:(Ljava/lang/String;Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameReader;]
DataFrameReader.options ( java.util.Map<String,String> options ) : DataFrameReader
[mangled: org/apache/spark/sql/DataFrameReader.options:(Ljava/util/Map;)Lorg/apache/spark/sql/DataFrameReader;]
DataFrameReader.options ( scala.collection.Map<String,String> options ) : DataFrameReader
[mangled: org/apache/spark/sql/DataFrameReader.options:(Lscala/collection/Map;)Lorg/apache/spark/sql/DataFrameReader;]
DataFrameReader.parquet ( scala.collection.Seq<String> paths ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.parquet:(Lscala/collection/Seq;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.parquet ( String... paths ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.parquet:([Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
DataFrameReader.schema ( types.StructType schema ) : DataFrameReader
[mangled: org/apache/spark/sql/DataFrameReader.schema:(Lorg/apache/spark/sql/types/StructType;)Lorg/apache/spark/sql/DataFrameReader;]
DataFrameReader.table ( String tableName ) : DataFrame
[mangled: org/apache/spark/sql/DataFrameReader.table:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
spark-sql_2.10-1.4.0.jar, DataFrameWriter.class
package org.apache.spark.sql
DataFrameWriter.DataFrameWriter ( DataFrame df )
[mangled: org/apache/spark/sql/DataFrameWriter."<init>":(Lorg/apache/spark/sql/DataFrame;)V]
DataFrameWriter.format ( String source ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.format:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.insertInto ( String tableName ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.insertInto:(Ljava/lang/String;)V]
DataFrameWriter.jdbc ( String url, String table, java.util.Properties connectionProperties ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.jdbc:(Ljava/lang/String;Ljava/lang/String;Ljava/util/Properties;)V]
DataFrameWriter.json ( String path ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.json:(Ljava/lang/String;)V]
DataFrameWriter.mode ( SaveMode saveMode ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.mode:(Lorg/apache/spark/sql/SaveMode;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.mode ( String saveMode ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.mode:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.option ( String key, String value ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.option:(Ljava/lang/String;Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.options ( java.util.Map<String,String> options ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.options:(Ljava/util/Map;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.options ( scala.collection.Map<String,String> options ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.options:(Lscala/collection/Map;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.parquet ( String path ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.parquet:(Ljava/lang/String;)V]
DataFrameWriter.partitionBy ( scala.collection.Seq<String> colNames ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.partitionBy:(Lscala/collection/Seq;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.partitionBy ( String... colNames ) : DataFrameWriter
[mangled: org/apache/spark/sql/DataFrameWriter.partitionBy:([Ljava/lang/String;)Lorg/apache/spark/sql/DataFrameWriter;]
DataFrameWriter.save ( ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.save:()V]
DataFrameWriter.save ( String path ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.save:(Ljava/lang/String;)V]
DataFrameWriter.saveAsTable ( String tableName ) : void
[mangled: org/apache/spark/sql/DataFrameWriter.saveAsTable:(Ljava/lang/String;)V]
spark-sql_2.10-1.4.0.jar, HadoopFsRelation.class
package org.apache.spark.sql.sources
HadoopFsRelation.buildScan ( org.apache.hadoop.fs.FileStatus[ ] inputFiles ) : org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.buildScan:([Lorg/apache/hadoop/fs/FileStatus;)Lorg/apache/spark/rdd/RDD;]
HadoopFsRelation.buildScan ( String[ ] requiredColumns, org.apache.hadoop.fs.FileStatus[ ] inputFiles ) : org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.buildScan:([Ljava/lang/String;[Lorg/apache/hadoop/fs/FileStatus;)Lorg/apache/spark/rdd/RDD;]
HadoopFsRelation.buildScan ( String[ ] requiredColumns, Filter[ ] filters, org.apache.hadoop.fs.FileStatus[ ] inputFiles ) : org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.buildScan:([Ljava/lang/String;[Lorg/apache/spark/sql/sources/Filter;[Lorg/apache/hadoop/fs/FileStatus;)Lorg/apache/spark/rdd/RDD;]
HadoopFsRelation.buildScan ( String[ ] requiredColumns, Filter[ ] filters, org.apache.hadoop.fs.FileStatus[ ] inputFiles, org.apache.spark.broadcast.Broadcast<org.apache.spark.SerializableWritable<org.apache.hadoop.conf.Configuration>> broadcastedConf ) : org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.buildScan:([Ljava/lang/String;[Lorg/apache/spark/sql/sources/Filter;[Lorg/apache/hadoop/fs/FileStatus;Lorg/apache/spark/broadcast/Broadcast;)Lorg/apache/spark/rdd/RDD;]
HadoopFsRelation.buildScan ( String[ ] requiredColumns, Filter[ ] filters, String[ ] inputPaths, org.apache.spark.broadcast.Broadcast<org.apache.spark.SerializableWritable<org.apache.hadoop.conf.Configuration>> broadcastedConf ) : org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.buildScan:([Ljava/lang/String;[Lorg/apache/spark/sql/sources/Filter;[Ljava/lang/String;Lorg/apache/spark/broadcast/Broadcast;)Lorg/apache/spark/rdd/RDD;]
HadoopFsRelation.cachedLeafStatuses ( ) : scala.collection.immutable.Set<org.apache.hadoop.fs.FileStatus>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.cachedLeafStatuses:()Lscala/collection/immutable/Set;]
HadoopFsRelation.dataSchema ( ) [abstract] : org.apache.spark.sql.types.StructType
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.dataSchema:()Lorg/apache/spark/sql/types/StructType;]
HadoopFsRelation.HadoopFsRelation ( )
[mangled: org/apache/spark/sql/sources/HadoopFsRelation."<init>":()V]
HadoopFsRelation.HadoopFsRelation ( scala.Option<PartitionSpec> maybePartitionSpec )
[mangled: org/apache/spark/sql/sources/HadoopFsRelation."<init>":(Lscala/Option;)V]
HadoopFsRelation.HadoopFsRelation..discoverPartitions ( ) : PartitionSpec
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.org.apache.spark.sql.sources.HadoopFsRelation..discoverPartitions:()Lorg/apache/spark/sql/sources/PartitionSpec;]
HadoopFsRelation.HadoopFsRelation..fileStatusCache ( ) : HadoopFsRelation.FileStatusCache
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.org.apache.spark.sql.sources.HadoopFsRelation..fileStatusCache:()Lorg/apache/spark/sql/sources/HadoopFsRelation$FileStatusCache;]
HadoopFsRelation.HadoopFsRelation..hadoopConf ( ) : org.apache.hadoop.conf.Configuration
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.org.apache.spark.sql.sources.HadoopFsRelation..hadoopConf:()Lorg/apache/hadoop/conf/Configuration;]
HadoopFsRelation.partitionColumns ( ) : org.apache.spark.sql.types.StructType
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.partitionColumns:()Lorg/apache/spark/sql/types/StructType;]
HadoopFsRelation.partitionSpec ( ) : PartitionSpec
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.partitionSpec:()Lorg/apache/spark/sql/sources/PartitionSpec;]
HadoopFsRelation.paths ( ) [abstract] : String[ ]
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.paths:()[Ljava/lang/String;]
HadoopFsRelation.prepareJobForWrite ( org.apache.hadoop.mapreduce.Job p1 ) [abstract] : OutputWriterFactory
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.prepareJobForWrite:(Lorg/apache/hadoop/mapreduce/Job;)Lorg/apache/spark/sql/sources/OutputWriterFactory;]
HadoopFsRelation.refresh ( ) : void
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.refresh:()V]
HadoopFsRelation.schema ( ) : org.apache.spark.sql.types.StructType
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.schema:()Lorg/apache/spark/sql/types/StructType;]
HadoopFsRelation.userDefinedPartitionColumns ( ) : scala.Option<org.apache.spark.sql.types.StructType>
[mangled: org/apache/spark/sql/sources/HadoopFsRelation.userDefinedPartitionColumns:()Lscala/Option;]
spark-sql_2.10-1.4.0.jar, HadoopFsRelationProvider.class
package org.apache.spark.sql.sources
HadoopFsRelationProvider.createRelation ( org.apache.spark.sql.SQLContext p1, String[ ] p2, scala.Option<org.apache.spark.sql.types.StructType> p3, scala.Option<org.apache.spark.sql.types.StructType> p4, scala.collection.immutable.Map<String,String> p5 ) [abstract] : HadoopFsRelation
[mangled: org/apache/spark/sql/sources/HadoopFsRelationProvider.createRelation:(Lorg/apache/spark/sql/SQLContext;[Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/sources/HadoopFsRelation;]
spark-sql_2.10-1.4.0.jar, OutputWriter.class
package org.apache.spark.sql.sources
OutputWriter.close ( ) [abstract] : void
[mangled: org/apache/spark/sql/sources/OutputWriter.close:()V]
OutputWriter.OutputWriter ( )
[mangled: org/apache/spark/sql/sources/OutputWriter."<init>":()V]
OutputWriter.write ( org.apache.spark.sql.Row p1 ) [abstract] : void
[mangled: org/apache/spark/sql/sources/OutputWriter.write:(Lorg/apache/spark/sql/Row;)V]
spark-sql_2.10-1.4.0.jar, OutputWriterFactory.class
package org.apache.spark.sql.sources
OutputWriterFactory.newInstance ( String p1, org.apache.spark.sql.types.StructType p2, org.apache.hadoop.mapreduce.TaskAttemptContext p3 ) [abstract] : OutputWriter
[mangled: org/apache/spark/sql/sources/OutputWriterFactory.newInstance:(Ljava/lang/String;Lorg/apache/spark/sql/types/StructType;Lorg/apache/hadoop/mapreduce/TaskAttemptContext;)Lorg/apache/spark/sql/sources/OutputWriter;]
OutputWriterFactory.OutputWriterFactory ( )
[mangled: org/apache/spark/sql/sources/OutputWriterFactory."<init>":()V]
spark-sql_2.10-1.4.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.applySchemaToPythonRDD ( org.apache.spark.rdd.RDD<Object[ ]> rdd, types.StructType schema ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.applySchemaToPythonRDD:(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/sql/types/StructType;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.applySchemaToPythonRDD ( org.apache.spark.rdd.RDD<Object[ ]> rdd, String schemaString ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.applySchemaToPythonRDD:(Lorg/apache/spark/rdd/RDD;Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.baseRelationToDataFrame ( sources.BaseRelation baseRelation ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.baseRelationToDataFrame:(Lorg/apache/spark/sql/sources/BaseRelation;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.cacheManager ( ) : execution.CacheManager
[mangled: org/apache/spark/sql/SQLContext.cacheManager:()Lorg/apache/spark/sql/execution/CacheManager;]
SQLContext.clearCache ( ) : void
[mangled: org/apache/spark/sql/SQLContext.clearCache:()V]
SQLContext.conf ( ) : SQLConf
[mangled: org/apache/spark/sql/SQLContext.conf:()Lorg/apache/spark/sql/SQLConf;]
SQLContext.createDataFrame ( org.apache.spark.api.java.JavaRDD<?> rdd, Class<?> beanClass ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/api/java/JavaRDD;Ljava/lang/Class;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( org.apache.spark.api.java.JavaRDD<Row> rowRDD, types.StructType schema ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/api/java/JavaRDD;Lorg/apache/spark/sql/types/StructType;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<?> rdd, Class<?> beanClass ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/rdd/RDD;Ljava/lang/Class;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> p2 ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/rdd/RDD;Lscala/reflect/api/TypeTags$TypeTag;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<Row> rowRDD, types.StructType schema ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/sql/types/StructType;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<Row> rowRDD, types.StructType schema, boolean needsConversion ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/sql/types/StructType;Z)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createDataFrame ( scala.collection.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> p2 ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createDataFrame:(Lscala/collection/Seq;Lscala/reflect/api/TypeTags$TypeTag;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String path ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String path, String source ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String source, java.util.Map<String,String> options ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String source, types.StructType schema, java.util.Map<String,String> options ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;Lorg/apache/spark/sql/types/StructType;Ljava/util/Map;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String source, types.StructType schema, scala.collection.immutable.Map<String,String> options ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;Lorg/apache/spark/sql/types/StructType;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createExternalTable ( String tableName, String source, scala.collection.immutable.Map<String,String> options ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.createExternalTable:(Ljava/lang/String;Ljava/lang/String;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.createSession ( ) : SQLContext.SQLSession
[mangled: org/apache/spark/sql/SQLContext.createSession:()Lorg/apache/spark/sql/SQLContext$SQLSession;]
SQLContext.currentSession ( ) : SQLContext.SQLSession
[mangled: org/apache/spark/sql/SQLContext.currentSession:()Lorg/apache/spark/sql/SQLContext$SQLSession;]
SQLContext.ddlParser ( ) : sources.DDLParser
[mangled: org/apache/spark/sql/SQLContext.ddlParser:()Lorg/apache/spark/sql/sources/DDLParser;]
SQLContext.defaultSession ( ) : SQLContext.SQLSession
[mangled: org/apache/spark/sql/SQLContext.defaultSession:()Lorg/apache/spark/sql/SQLContext$SQLSession;]
SQLContext.detachSession ( ) : void
[mangled: org/apache/spark/sql/SQLContext.detachSession:()V]
SQLContext.dialectClassName ( ) : String
[mangled: org/apache/spark/sql/SQLContext.dialectClassName:()Ljava/lang/String;]
SQLContext.dropTempTable ( String tableName ) : void
[mangled: org/apache/spark/sql/SQLContext.dropTempTable:(Ljava/lang/String;)V]
SQLContext.emptyDataFrame ( ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.emptyDataFrame:()Lorg/apache/spark/sql/DataFrame;]
SQLContext.emptyResult ( ) : org.apache.spark.rdd.RDD<Row>
[mangled: org/apache/spark/sql/SQLContext.emptyResult:()Lorg/apache/spark/rdd/RDD;]
SQLContext.experimental ( ) : ExperimentalMethods
[mangled: org/apache/spark/sql/SQLContext.experimental:()Lorg/apache/spark/sql/ExperimentalMethods;]
SQLContext.functionRegistry ( ) : catalyst.analysis.FunctionRegistry
[mangled: org/apache/spark/sql/SQLContext.functionRegistry:()Lorg/apache/spark/sql/catalyst/analysis/FunctionRegistry;]
SQLContext.getAllConfs ( ) : scala.collection.immutable.Map<String,String>
[mangled: org/apache/spark/sql/SQLContext.getAllConfs:()Lscala/collection/immutable/Map;]
SQLContext.getConf ( String key ) : String
[mangled: org/apache/spark/sql/SQLContext.getConf:(Ljava/lang/String;)Ljava/lang/String;]
SQLContext.getConf ( String key, String defaultValue ) : String
[mangled: org/apache/spark/sql/SQLContext.getConf:(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;]
SQLContext.getOrCreate ( org.apache.spark.SparkContext p1 ) [static] : SQLContext
[mangled: org/apache/spark/sql/SQLContext.getOrCreate:(Lorg/apache/spark/SparkContext;)Lorg/apache/spark/sql/SQLContext;]
SQLContext.getSchema ( Class<?> beanClass ) : scala.collection.Seq<catalyst.expressions.AttributeReference>
[mangled: org/apache/spark/sql/SQLContext.getSchema:(Ljava/lang/Class;)Lscala/collection/Seq;]
SQLContext.getSQLDialect ( ) : catalyst.ParserDialect
[mangled: org/apache/spark/sql/SQLContext.getSQLDialect:()Lorg/apache/spark/sql/catalyst/ParserDialect;]
SQLContext.implicits ( ) : SQLContext.implicits.
[mangled: org/apache/spark/sql/SQLContext.implicits:()Lorg/apache/spark/sql/SQLContext$implicits$;]
SQLContext.isCached ( String tableName ) : boolean
[mangled: org/apache/spark/sql/SQLContext.isCached:(Ljava/lang/String;)Z]
SQLContext.isTraceEnabled ( ) : boolean
[mangled: org/apache/spark/sql/SQLContext.isTraceEnabled:()Z]
SQLContext.log ( ) : org.slf4j.Logger
[mangled: org/apache/spark/sql/SQLContext.log:()Lorg/slf4j/Logger;]
SQLContext.logDebug ( scala.Function0<String> msg ) : void
[mangled: org/apache/spark/sql/SQLContext.logDebug:(Lscala/Function0;)V]
SQLContext.logDebug ( scala.Function0<String> msg, Throwable throwable ) : void
[mangled: org/apache/spark/sql/SQLContext.logDebug:(Lscala/Function0;Ljava/lang/Throwable;)V]
SQLContext.logError ( scala.Function0<String> msg ) : void
[mangled: org/apache/spark/sql/SQLContext.logError:(Lscala/Function0;)V]
SQLContext.logError ( scala.Function0<String> msg, Throwable throwable ) : void
[mangled: org/apache/spark/sql/SQLContext.logError:(Lscala/Function0;Ljava/lang/Throwable;)V]
SQLContext.logInfo ( scala.Function0<String> msg ) : void
[mangled: org/apache/spark/sql/SQLContext.logInfo:(Lscala/Function0;)V]
SQLContext.logInfo ( scala.Function0<String> msg, Throwable throwable ) : void
[mangled: org/apache/spark/sql/SQLContext.logInfo:(Lscala/Function0;Ljava/lang/Throwable;)V]
SQLContext.logName ( ) : String
[mangled: org/apache/spark/sql/SQLContext.logName:()Ljava/lang/String;]
SQLContext.logTrace ( scala.Function0<String> msg ) : void
[mangled: org/apache/spark/sql/SQLContext.logTrace:(Lscala/Function0;)V]
SQLContext.logTrace ( scala.Function0<String> msg, Throwable throwable ) : void
[mangled: org/apache/spark/sql/SQLContext.logTrace:(Lscala/Function0;Ljava/lang/Throwable;)V]
SQLContext.logWarning ( scala.Function0<String> msg ) : void
[mangled: org/apache/spark/sql/SQLContext.logWarning:(Lscala/Function0;)V]
SQLContext.logWarning ( scala.Function0<String> msg, Throwable throwable ) : void
[mangled: org/apache/spark/sql/SQLContext.logWarning:(Lscala/Function0;Ljava/lang/Throwable;)V]
SQLContext.openSession ( ) : SQLContext.SQLSession
[mangled: org/apache/spark/sql/SQLContext.openSession:()Lorg/apache/spark/sql/SQLContext$SQLSession;]
SQLContext.optimizer ( ) : catalyst.optimizer.Optimizer
[mangled: org/apache/spark/sql/SQLContext.optimizer:()Lorg/apache/spark/sql/catalyst/optimizer/Optimizer;]
SQLContext.org.apache.spark.Logging..log_ ( ) : org.slf4j.Logger
[mangled: org/apache/spark/sql/SQLContext.org.apache.spark.Logging..log_:()Lorg/slf4j/Logger;]
SQLContext.org.apache.spark.Logging..log__.eq ( org.slf4j.Logger p1 ) : void
[mangled: org/apache/spark/sql/SQLContext.org.apache.spark.Logging..log__.eq:(Lorg/slf4j/Logger;)V]
SQLContext.parquetFile ( String... paths ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.parquetFile:([Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.parseDataType ( String dataTypeString ) : types.DataType
[mangled: org/apache/spark/sql/SQLContext.parseDataType:(Ljava/lang/String;)Lorg/apache/spark/sql/types/DataType;]
SQLContext.range ( long start, long end ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.range:(JJ)Lorg/apache/spark/sql/DataFrame;]
SQLContext.range ( long start, long end, long step, int numPartitions ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.range:(JJJI)Lorg/apache/spark/sql/DataFrame;]
SQLContext.read ( ) : DataFrameReader
[mangled: org/apache/spark/sql/SQLContext.read:()Lorg/apache/spark/sql/DataFrameReader;]
SQLContext.registerDataFrameAsTable ( DataFrame df, String tableName ) : void
[mangled: org/apache/spark/sql/SQLContext.registerDataFrameAsTable:(Lorg/apache/spark/sql/DataFrame;Ljava/lang/String;)V]
SQLContext.setConf ( java.util.Properties props ) : void
[mangled: org/apache/spark/sql/SQLContext.setConf:(Ljava/util/Properties;)V]
SQLContext.setConf ( String key, String value ) : void
[mangled: org/apache/spark/sql/SQLContext.setConf:(Ljava/lang/String;Ljava/lang/String;)V]
SQLContext.sql ( String sqlText ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.sql:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.SQLContext ( org.apache.spark.api.java.JavaSparkContext sparkContext )
[mangled: org/apache/spark/sql/SQLContext."<init>":(Lorg/apache/spark/api/java/JavaSparkContext;)V]
SQLContext.sqlParser ( ) : SparkSQLParser
[mangled: org/apache/spark/sql/SQLContext.sqlParser:()Lorg/apache/spark/sql/SparkSQLParser;]
SQLContext.table ( String tableName ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.table:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.tableNames ( ) : String[ ]
[mangled: org/apache/spark/sql/SQLContext.tableNames:()[Ljava/lang/String;]
SQLContext.tableNames ( String databaseName ) : String[ ]
[mangled: org/apache/spark/sql/SQLContext.tableNames:(Ljava/lang/String;)[Ljava/lang/String;]
SQLContext.tables ( ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.tables:()Lorg/apache/spark/sql/DataFrame;]
SQLContext.tables ( String databaseName ) : DataFrame
[mangled: org/apache/spark/sql/SQLContext.tables:(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;]
SQLContext.tlSession ( ) : ThreadLocal<SQLContext.SQLSession>
[mangled: org/apache/spark/sql/SQLContext.tlSession:()Ljava/lang/ThreadLocal;]
SQLContext.udf ( ) : UDFRegistration
[mangled: org/apache/spark/sql/SQLContext.udf:()Lorg/apache/spark/sql/UDFRegistration;]
Problems with Data Types, High Severity (10)
spark-core_2.10-1.4.0.jar
package org.apache.spark
[+] Logging (1)
| | Change | Effect |
| 1 | Abstract method logName ( ) has been removed from this interface. | A client program may be interrupted by a NoSuchMethodError exception. |
[+] affected methods (14)
isTraceEnabled ( ) - This abstract method is from 'Logging' interface.
log ( ) - This abstract method is from 'Logging' interface.
logDebug ( scala.Function0<java.lang.String> ) - This abstract method is from 'Logging' interface.
logDebug ( scala.Function0<java.lang.String>, java.lang.Throwable ) - This abstract method is from 'Logging' interface.
logError ( scala.Function0<java.lang.String> ) - This abstract method is from 'Logging' interface.
logError ( scala.Function0<java.lang.String>, java.lang.Throwable ) - This abstract method is from 'Logging' interface.
logInfo ( scala.Function0<java.lang.String> ) - This abstract method is from 'Logging' interface.
logInfo ( scala.Function0<java.lang.String>, java.lang.Throwable ) - This abstract method is from 'Logging' interface.
logTrace ( scala.Function0<java.lang.String> ) - This abstract method is from 'Logging' interface.
logTrace ( scala.Function0<java.lang.String>, java.lang.Throwable ) - This abstract method is from 'Logging' interface.
logWarning ( scala.Function0<java.lang.String> ) - This abstract method is from 'Logging' interface.
logWarning ( scala.Function0<java.lang.String>, java.lang.Throwable ) - This abstract method is from 'Logging' interface.
Logging..log_ ( ) - This abstract method is from 'Logging' interface.
Logging..log__.eq ( org.slf4j.Logger ) - This abstract method is from 'Logging' interface.
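In practice this bites any client class that mixes in the trait: a class compiled against 1.4.0 carries a generated logName() forwarder that delegates into the 1.4.0 trait implementation, which the 1.0.0 jar does not provide. A minimal sketch with a hypothetical client class:

```scala
import org.apache.spark.Logging

// Hypothetical client class compiled against spark-core 1.4.0.
// The 1.4.0 trait mixes a logName() forwarder into AvroIngest whose
// backing implementation does not exist in the 1.0.0 jar, so the
// first log call dies with NoSuchMethodError on the older runtime.
class AvroIngest extends Logging {
  def run(): Unit = logInfo("starting Avro ingest")
}
```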
[+] SparkContext (1)
| | Change | Effect |
| 1 | Removed super-interface ExecutorAllocationClient. | A client program may be interrupted by a NoSuchMethodError exception. |
[+] affected methods (4)
emptyRDD ( scala.reflect.ClassTag<T> ) - This method is from 'SparkContext' class.
hadoopConfiguration ( ) - This method is from 'SparkContext' class.
sparkContext ( ) - Return value of this method has type 'SparkContext'.
SQLContext ( SparkContext ) - 1st parameter 'sparkContext' of this method has type 'SparkContext'.
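The removed super-interface matters because 1.4.0's SparkContext picks up executor-elasticity methods through it; the sketch below assumes the @DeveloperApi requestExecutors(Int) method that 1.4.0 exposes this way, and fails on a 1.0.0 runtime where neither the interface nor the method exists:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ElasticityProbe {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("elasticity-probe"))
    // Inherited from ExecutorAllocationClient in 1.4.0; absent from
    // 1.0.0, so this line throws NoSuchMethodError there.
    sc.requestExecutors(2)
    sc.stop()
  }
}
```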
spark-sql_2.10-1.4.0.jar
package org.apache.spark.sql
[+] DataFrameReader (1)
| | Change | Effect |
| 1 | This class has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (17)
DataFrameReader ( SQLContext ) - This constructor is from 'DataFrameReader' class.
format ( java.lang.String ) - This method is from 'DataFrameReader' class.
jdbc ( java.lang.String, java.lang.String, java.lang.String, long, long, int, java.util.Properties ) - This method is from 'DataFrameReader' class.
jdbc ( java.lang.String, java.lang.String, java.lang.String[ ], java.util.Properties ) - This method is from 'DataFrameReader' class.
jdbc ( java.lang.String, java.lang.String, java.util.Properties ) - This method is from 'DataFrameReader' class.
json ( java.lang.String ) - This method is from 'DataFrameReader' class.
json ( org.apache.spark.api.java.JavaRDD<java.lang.String> ) - This method is from 'DataFrameReader' class.
json ( org.apache.spark.rdd.RDD<java.lang.String> ) - This method is from 'DataFrameReader' class.
load ( ) - This method is from 'DataFrameReader' class.
load ( java.lang.String ) - This method is from 'DataFrameReader' class.
option ( java.lang.String, java.lang.String ) - This method is from 'DataFrameReader' class.
options ( java.util.Map<java.lang.String,java.lang.String> ) - This method is from 'DataFrameReader' class.
options ( scala.collection.Map<java.lang.String,java.lang.String> ) - This method is from 'DataFrameReader' class.
parquet ( java.lang.String... ) - This method is from 'DataFrameReader' class.
parquet ( scala.collection.Seq<java.lang.String> ) - This method is from 'DataFrameReader' class.
schema ( types.StructType ) - This method is from 'DataFrameReader' class.
table ( java.lang.String ) - This method is from 'DataFrameReader' class.
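This is the removal that most directly affects spark-avro 2.0.1, whose documented entry point goes through SQLContext.read (itself listed under Removed Methods above). A minimal sketch, assuming the usual com.databricks.spark.avro format name:

```scala
import org.apache.spark.sql.{DataFrame, SQLContext}

object AvroRead {
  // Compiled against 1.4.0. On a 1.0.0 runtime neither SQLContext.read
  // nor the DataFrameReader class exists, so this fails with
  // NoClassDefFoundError (or NoSuchMethodError for read) before any
  // Avro file is touched.
  def loadAvro(sqlContext: SQLContext, path: String): DataFrame =
    sqlContext.read.format("com.databricks.spark.avro").load(path)
}
```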
[+] DataFrameWriter (1)
| | Change | Effect |
| 1 | This class has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (16)
DataFrameWriter ( DataFrame ) - This constructor is from 'DataFrameWriter' class.
format ( java.lang.String ) - This method is from 'DataFrameWriter' class.
insertInto ( java.lang.String ) - This method is from 'DataFrameWriter' class.
jdbc ( java.lang.String, java.lang.String, java.util.Properties ) - This method is from 'DataFrameWriter' class.
json ( java.lang.String ) - This method is from 'DataFrameWriter' class.
mode ( java.lang.String ) - This method is from 'DataFrameWriter' class.
mode ( SaveMode ) - This method is from 'DataFrameWriter' class.
option ( java.lang.String, java.lang.String ) - This method is from 'DataFrameWriter' class.
options ( java.util.Map<java.lang.String,java.lang.String> ) - This method is from 'DataFrameWriter' class.
options ( scala.collection.Map<java.lang.String,java.lang.String> ) - This method is from 'DataFrameWriter' class.
parquet ( java.lang.String ) - This method is from 'DataFrameWriter' class.
partitionBy ( java.lang.String... ) - This method is from 'DataFrameWriter' class.
partitionBy ( scala.collection.Seq<java.lang.String> ) - This method is from 'DataFrameWriter' class.
save ( ) - This method is from 'DataFrameWriter' class.
save ( java.lang.String ) - This method is from 'DataFrameWriter' class.
saveAsTable ( java.lang.String ) - This method is from 'DataFrameWriter' class.
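The write path mirrors the read path. A sketch of the 1.4.0-style save call that has no 1.0.0 equivalent:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

object AvroWrite {
  // DataFrame.write returns a DataFrameWriter, a class introduced in
  // 1.4.0 that a 1.0.0 runtime cannot load: NoClassDefFoundError.
  def saveAvro(df: DataFrame, path: String): Unit =
    df.write
      .format("com.databricks.spark.avro")
      .mode(SaveMode.Overwrite)
      .save(path)
}
```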
[+] SQLContext (1)
| | Change | Effect |
| 1 | Removed super-interface org.apache.spark.Logging. | A client program may be interrupted by a NoSuchMethodError exception. |
[+] affected methods (11)
analyzer ( ) - This method is from 'SQLContext' class.
cacheTable ( java.lang.String ) - This method is from 'SQLContext' class.
catalog ( ) - This method is from 'SQLContext' class.
executePlan ( catalyst.plans.logical.LogicalPlan ) - This method is from 'SQLContext' class.
executeSql ( java.lang.String ) - This method is from 'SQLContext' class.
parseSql ( java.lang.String ) - This method is from 'SQLContext' class.
planner ( ) - This method is from 'SQLContext' class.
prepareForExecution ( ) - This method is from 'SQLContext' class.
sparkContext ( ) - This method is from 'SQLContext' class.
SQLContext ( org.apache.spark.SparkContext ) - This constructor is from 'SQLContext' class.
uncacheTable ( java.lang.String ) - This method is from 'SQLContext' class.
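A subclass is the typical casualty here: the protected log forwarders that 1.4.0's SQLContext inherits from org.apache.spark.Logging are gone in 1.0.0, which logged through the scalalogging logger() listed under Added Methods instead. A hypothetical sketch:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Compiled against 1.4.0, where SQLContext mixes in
// org.apache.spark.Logging. On 1.0.0 the inherited logInfo forwarder
// is missing, so announce() fails with NoSuchMethodError.
class AvroSQLContext(sc: SparkContext) extends SQLContext(sc) {
  def announce(): Unit = logInfo("AvroSQLContext ready")
}
```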
[+] SQLContext.QueryExecution (1)
| | Change | Effect |
| 1 | This class became abstract. | A client program may be interrupted by an InstantiationError exception. |
[+] affected methods (2)
executePlan ( catalyst.plans.logical.LogicalPlan ) - Return value of this method has type 'SQLContext.QueryExecution'.
executeSql ( java.lang.String ) - Return value of this method has type 'SQLContext.QueryExecution'.
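InstantiationError is only reachable from bytecode that executes `new QueryExecution(...)` against the concrete 1.4.0 class; since the constructor is package-visible, the sketch below places a hypothetical helper inside org.apache.spark.sql purely for illustration:

```scala
package org.apache.spark.sql

import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

object QueryExecutionProbe {
  // QueryExecution is concrete in 1.4.0 but abstract in 1.0.0:
  // on the older runtime this `new` raises InstantiationError.
  def execute(ctx: SQLContext, plan: LogicalPlan): ctx.QueryExecution =
    new ctx.QueryExecution(plan)
}
```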
package org.apache.spark.sql.sources
[+] HadoopFsRelation (1)
| | Change | Effect |
| 1 | This class has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (19)
buildScan ( java.lang.String[ ], org.apache.hadoop.fs.FileStatus[ ] ) - This method is from 'HadoopFsRelation' abstract class.
buildScan ( java.lang.String[ ], Filter[ ], java.lang.String[ ], org.apache.spark.broadcast.Broadcast<org.apache.spark.SerializableWritable<org.apache.hadoop.conf.Configuration>> ) - This method is from 'HadoopFsRelation' abstract class.
buildScan ( java.lang.String[ ], Filter[ ], org.apache.hadoop.fs.FileStatus[ ] ) - This method is from 'HadoopFsRelation' abstract class.
buildScan ( java.lang.String[ ], Filter[ ], org.apache.hadoop.fs.FileStatus[ ], org.apache.spark.broadcast.Broadcast<org.apache.spark.SerializableWritable<org.apache.hadoop.conf.Configuration>> ) - This method is from 'HadoopFsRelation' abstract class.
buildScan ( org.apache.hadoop.fs.FileStatus[ ] ) - This method is from 'HadoopFsRelation' abstract class.
cachedLeafStatuses ( ) - This method is from 'HadoopFsRelation' abstract class.
dataSchema ( ) - This abstract method is from 'HadoopFsRelation' abstract class.
HadoopFsRelation ( ) - This constructor is from 'HadoopFsRelation' abstract class.
HadoopFsRelation ( scala.Option<PartitionSpec> ) - This constructor is from 'HadoopFsRelation' abstract class.
HadoopFsRelation..discoverPartitions ( ) - This method is from 'HadoopFsRelation' abstract class.
HadoopFsRelation..fileStatusCache ( ) - This method is from 'HadoopFsRelation' abstract class.
HadoopFsRelation..hadoopConf ( ) - This method is from 'HadoopFsRelation' abstract class.
partitionColumns ( ) - This method is from 'HadoopFsRelation' abstract class.
partitionSpec ( ) - This method is from 'HadoopFsRelation' abstract class.
paths ( ) - This abstract method is from 'HadoopFsRelation' abstract class.
prepareJobForWrite ( org.apache.hadoop.mapreduce.Job ) - This abstract method is from 'HadoopFsRelation' abstract class.
refresh ( ) - This method is from 'HadoopFsRelation' abstract class.
schema ( ) - This method is from 'HadoopFsRelation' abstract class.
userDefinedPartitionColumns ( ) - This method is from 'HadoopFsRelation' abstract class.
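spark-avro 2.0.1's own relation class extends this abstract class, which is the root of the Incompatible verdict: on 1.0.0 the superclass cannot be loaded, so the subclass fails with NoClassDefFoundError before any method runs. A reduced, hypothetical sketch of such a relation compiled against 1.4.0:

```scala
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.{HadoopFsRelation, OutputWriterFactory}
import org.apache.spark.sql.types.StructType

class AvroRelation(
    override val paths: Array[String],
    maybeSchema: Option[StructType])(
    @transient val sqlContext: SQLContext)
  extends HadoopFsRelation {

  // paths, dataSchema and prepareJobForWrite are the abstract
  // members flagged in the listing above.
  override def dataSchema: StructType = maybeSchema.getOrElse(StructType(Nil))

  override def prepareJobForWrite(job: Job): OutputWriterFactory =
    sys.error("write support omitted in this sketch")
}
```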
[+] HadoopFsRelationProvider (1)
| | Change | Effect |
| 1 | This interface has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (1)
createRelation ( org.apache.spark.sql.SQLContext, java.lang.String[ ], scala.Option<org.apache.spark.sql.types.StructType>, scala.Option<org.apache.spark.sql.types.StructType>, scala.collection.immutable.Map<java.lang.String,java.lang.String> ) - This abstract method is from 'HadoopFsRelationProvider' interface.
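The provider interface is how a 1.4.0-era format registers itself with the data-source API. A hedged sketch of the shape such an implementation takes, using a hypothetical DefaultSource:

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.{HadoopFsRelation, HadoopFsRelationProvider}
import org.apache.spark.sql.types.StructType

// Compiled against 1.4.0; on 1.0.0 the interface itself is missing,
// so loading this class throws NoClassDefFoundError.
class DefaultSource extends HadoopFsRelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      paths: Array[String],
      dataSchema: Option[StructType],
      partitionColumns: Option[StructType],
      parameters: Map[String, String]): HadoopFsRelation =
    sys.error("return an AvroRelation-style HadoopFsRelation here")
}
```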
[+] OutputWriter (1)
| | Change | Effect |
| 1 | This class has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (3)
close ( ) - This abstract method is from 'OutputWriter' abstract class.
OutputWriter ( ) - This constructor is from 'OutputWriter' abstract class.
write ( org.apache.spark.sql.Row ) - This abstract method is from 'OutputWriter' abstract class.
[+] OutputWriterFactory (1)
| | Change | Effect |
| 1 | This class has been removed. | A client program may be interrupted by a NoClassDefFoundError exception. |
[+] affected methods (2)
newInstance ( java.lang.String, org.apache.spark.sql.types.StructType, org.apache.hadoop.mapreduce.TaskAttemptContext ) - This abstract method is from 'OutputWriterFactory' abstract class.
OutputWriterFactory ( ) - This constructor is from 'OutputWriterFactory' abstract class.
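These last two removals travel together: a 1.4.0-style data source hands Spark an OutputWriterFactory whose newInstance produces a per-task OutputWriter. A combined, hypothetical sketch covering both classes:

```scala
import org.apache.hadoop.mapreduce.TaskAttemptContext
import org.apache.spark.sql.Row
import org.apache.spark.sql.sources.{OutputWriter, OutputWriterFactory}
import org.apache.spark.sql.types.StructType

// Both superclasses are missing from 1.0.0, so either class fails to
// load there with NoClassDefFoundError.
class AvroOutputWriter extends OutputWriter {
  override def write(row: Row): Unit = ()  // real code would append the row to an Avro file
  override def close(): Unit = ()
}

class AvroOutputWriterFactory extends OutputWriterFactory {
  override def newInstance(
      path: String,
      dataSchema: StructType,
      context: TaskAttemptContext): OutputWriter = new AvroOutputWriter
}
```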
Java ARchives (2)
spark-core_2.10-1.4.0.jar
spark-sql_2.10-1.4.0.jar
Generated on Tue Sep 8 22:38:32 2015 for spark-avro_2.10-2.0.1 by Java API Compliance Checker 1.4.1
A tool for checking backward compatibility of a Java library API