Binary compatibility report for the succinct-0.1.2 library between 1.3.0 and 1.5.0 versions (relating to the portability of client application succinct-0.1.2.jar)

Test Info


Library Name: succinct-0.1.2
Version #1: 1.3.0
Version #2: 1.5.0
Java Version: 1.7.0_75

Test Results


Total Java ARchives: 1
Total Methods / Classes: 2586 / 463
Verdict: Incompatible (67.3%)

Problem Summary


                              Severity   Count
Added Methods                 -          352
Removed Methods               High       902
Problems with Data Types      High       104
                              Medium     13
                              Low        36
Problems with Methods         High       0
                              Medium     1
                              Low        1
Other Changes in Data Types   -          14
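
An Incompatible verdict means that succinct-0.1.2 bytecode compiled against the 1.3.0 jars may fail to link when run on the 1.5.0 jars. The sketch below is illustrative only (it is not taken from the succinct client): it calls Column.in(Column...), which is listed under Removed Methods, so code compiled against 1.3.0 typically fails with java.lang.NoSuchMethodError on 1.5.0, where the replacement overload Column.isin(Object...) appears under Added Methods.

```scala
// Illustrative only: client code compiled against spark-sql 1.3.0.
// Column.in(Column...) exists in 1.3.0 but is removed in 1.5.0, so running this
// 1.3.0-compiled bytecode on the 1.5.0 jars fails with java.lang.NoSuchMethodError.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.lit

object InCompatDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("in-demo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Seq(("alice", 1), ("bob", 2))).toDF("name", "n")
    // Works on 1.3.0; on 1.5.0 the equivalent call is df("name").isin("alice", "bob").
    df.filter(df("name").in(lit("alice"), lit("bob"))).show()
    sc.stop()
  }
}
```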

Added Methods (352)


spark-sql_2.10-1.5.0.jar, Aggregate.class
package org.apache.spark.sql.execution
Aggregate.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Aggregate.metrics ( )  :  scala.collection.immutable.Map<String,metric.LongSQLMetric>
Aggregate.Aggregate..newAggregateBuffer ( )  :  org.apache.spark.sql.catalyst.expressions.AggregateFunction1[ ]
Aggregate.requiredChildDistribution ( )  :  scala.collection.immutable.List<org.apache.spark.sql.catalyst.plans.physical.Distribution>

spark-sql_2.10-1.5.0.jar, BaseRelation.class
package org.apache.spark.sql.sources
BaseRelation.needConversion ( )  :  boolean
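
Since succinct implements Spark SQL data sources, the new BaseRelation.needConversion hook is directly relevant. The following is a minimal sketch, not taken from the report; it assumes that BaseRelation still requires sqlContext and schema members and that the new method defaults to true, as in stock Spark 1.5.

```scala
// Minimal sketch under the assumptions stated above: where the new 1.5.0 hook sits
// in a custom relation such as the ones the succinct client provides.
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.BaseRelation
import org.apache.spark.sql.types.{StringType, StructField, StructType}

class ExampleRelation(override val sqlContext: SQLContext) extends BaseRelation {
  override def schema: StructType = StructType(Seq(StructField("value", StringType)))

  // Added in 1.5.0: return false only if the relation already emits Catalyst InternalRow objects.
  override def needConversion: Boolean = true
}
```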

spark-sql_2.10-1.5.0.jar, BatchPythonEvaluation.class
package org.apache.spark.sql.execution
BatchPythonEvaluation.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, BinaryNode.class
package org.apache.spark.sql.execution
BinaryNode.children ( ) [abstract]  :  scala.collection.Seq<SparkPlan>
BinaryNode.left ( ) [abstract]  :  SparkPlan
BinaryNode.right ( ) [abstract]  :  SparkPlan

spark-sql_2.10-1.5.0.jar, BroadcastHashJoin.class
package org.apache.spark.sql.execution.joins
BroadcastHashJoin.canProcessSafeRows ( )  :  boolean
BroadcastHashJoin.canProcessUnsafeRows ( )  :  boolean
BroadcastHashJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
BroadcastHashJoin.doPrepare ( )  :  void
BroadcastHashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, HashedRelation hashedRelation, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
BroadcastHashJoin.isUnsafeMode ( )  :  boolean
BroadcastHashJoin.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
BroadcastHashJoin.outputsUnsafeRows ( )  :  boolean
BroadcastHashJoin.streamSideKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection

spark-sql_2.10-1.5.0.jar, BroadcastLeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
BroadcastLeftSemiJoinHash.BroadcastLeftSemiJoinHash ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition )
BroadcastLeftSemiJoinHash.buildKeyHashSet ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> buildIter, org.apache.spark.sql.execution.metric.LongSQLMetric numBuildRows )  :  java.util.Set<org.apache.spark.sql.catalyst.InternalRow>
BroadcastLeftSemiJoinHash.canProcessSafeRows ( )  :  boolean
BroadcastLeftSemiJoinHash.canProcessUnsafeRows ( )  :  boolean
BroadcastLeftSemiJoinHash.condition ( )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>
BroadcastLeftSemiJoinHash.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition )  :  BroadcastLeftSemiJoinHash
BroadcastLeftSemiJoinHash.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
BroadcastLeftSemiJoinHash.hashSemiJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, java.util.Set<org.apache.spark.sql.catalyst.InternalRow> hashSet, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
BroadcastLeftSemiJoinHash.hashSemiJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, HashedRelation hashedRelation, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
BroadcastLeftSemiJoinHash.leftKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
BroadcastLeftSemiJoinHash.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
BroadcastLeftSemiJoinHash.HashSemiJoin..boundCondition ( )  :  scala.Function1<org.apache.spark.sql.catalyst.InternalRow,Object>
BroadcastLeftSemiJoinHash.outputsUnsafeRows ( )  :  boolean
BroadcastLeftSemiJoinHash.rightKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
BroadcastLeftSemiJoinHash.supportUnsafe ( )  :  boolean

spark-sql_2.10-1.5.0.jar, BroadcastNestedLoopJoin.class
package org.apache.spark.sql.execution.joins
BroadcastNestedLoopJoin.canProcessUnsafeRows ( )  :  boolean
BroadcastNestedLoopJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
BroadcastNestedLoopJoin.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
BroadcastNestedLoopJoin.BroadcastNestedLoopJoin..genResultProjection ( )  :  scala.Function1<org.apache.spark.sql.catalyst.InternalRow,org.apache.spark.sql.catalyst.InternalRow>
BroadcastNestedLoopJoin.outputsUnsafeRows ( )  :  boolean

spark-sql_2.10-1.5.0.jar, ByteArrayColumnType.class
package org.apache.spark.sql.columnar
ByteArrayColumnType.ByteArrayColumnType ( int typeId, int defaultSize )

spark-sql_2.10-1.5.0.jar, CachedBatch.class
package org.apache.spark.sql.columnar
CachedBatch.CachedBatch ( byte[ ][ ] buffers, org.apache.spark.sql.catalyst.InternalRow stats )
CachedBatch.copy ( byte[ ][ ] buffers, org.apache.spark.sql.catalyst.InternalRow stats )  :  CachedBatch
CachedBatch.stats ( )  :  org.apache.spark.sql.catalyst.InternalRow

spark-sql_2.10-1.5.0.jar, CacheTableCommand.class
package org.apache.spark.sql.execution
CacheTableCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, CartesianProduct.class
package org.apache.spark.sql.execution.joins
CartesianProduct.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
CartesianProduct.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>

spark-sql_2.10-1.5.0.jar, Column.class
package org.apache.spark.sql
Column.alias ( String alias )  :  Column
Column.apply ( Object extraction )  :  Column
Column.as ( scala.collection.Seq<String> aliases )  :  Column
Column.as ( String alias, types.Metadata metadata )  :  Column
Column.as ( String[ ] aliases )  :  Column
Column.between ( Object lowerBound, Object upperBound )  :  Column
Column.bitwiseAND ( Object other )  :  Column
Column.bitwiseOR ( Object other )  :  Column
Column.bitwiseXOR ( Object other )  :  Column
Column.equals ( Object that )  :  boolean
Column.getItem ( Object key )  :  Column
Column.hashCode ( )  :  int
Column.in ( Object... list )  :  Column
Column.isin ( Object... list )  :  Column
Column.isin ( scala.collection.Seq<Object> list )  :  Column
Column.isNaN ( )  :  Column
Column.isTraceEnabled ( )  :  boolean
Column.log ( )  :  org.slf4j.Logger
Column.logDebug ( scala.Function0<String> msg )  :  void
Column.logDebug ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logError ( scala.Function0<String> msg )  :  void
Column.logError ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logInfo ( scala.Function0<String> msg )  :  void
Column.logInfo ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logName ( )  :  String
Column.logTrace ( scala.Function0<String> msg )  :  void
Column.logTrace ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.logWarning ( scala.Function0<String> msg )  :  void
Column.logWarning ( scala.Function0<String> msg, Throwable throwable )  :  void
Column.org.apache.spark.Logging..log_ ( )  :  org.slf4j.Logger
Column.org.apache.spark.Logging..log__.eq ( org.slf4j.Logger p1 )  :  void
Column.otherwise ( Object value )  :  Column
Column.over ( expressions.WindowSpec window )  :  Column
Column.when ( Column condition, Object value )  :  Column
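
A brief, purely illustrative sketch of how client code would call a few of the Column additions listed above; it assumes an existing DataFrame df with columns "name", "age" and an integer column "flags".

```scala
// Illustrative usage of 1.5.0 Column additions (assumes an existing DataFrame `df`
// with columns "name", "age" and an integer column "flags").
val adults  = df.filter(df("age").between(18, 65))        // Column.between(Object, Object)
val friends = df.filter(df("name").isin("alice", "bob"))  // Column.isin(Object...)
val masked  = df.select(df("flags").bitwiseAND(0x4))      // Column.bitwiseAND(Object)
```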

spark-sql_2.10-1.5.0.jar, ColumnBuilder.class
package org.apache.spark.sql.columnar
ColumnBuilder.appendFrom ( org.apache.spark.sql.catalyst.InternalRow p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.5.0.jar, ColumnStats.class
package org.apache.spark.sql.columnar
ColumnStats.collectedStatistics ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.GenericInternalRow
ColumnStats.gatherStats ( org.apache.spark.sql.catalyst.InternalRow p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.5.0.jar, DataFrame.class
package org.apache.spark.sql
DataFrame.coalesce ( int numPartitions )  :  DataFrame
DataFrame.cube ( Column... cols )  :  GroupedData
DataFrame.cube ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.cube ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.cube ( String col1, String... cols )  :  GroupedData
DataFrame.describe ( scala.collection.Seq<String> cols )  :  DataFrame
DataFrame.describe ( String... cols )  :  DataFrame
DataFrame.drop ( Column col )  :  DataFrame
DataFrame.drop ( String colName )  :  DataFrame
DataFrame.dropDuplicates ( )  :  DataFrame
DataFrame.dropDuplicates ( scala.collection.Seq<String> colNames )  :  DataFrame
DataFrame.dropDuplicates ( String[ ] colNames )  :  DataFrame
DataFrame.inputFiles ( )  :  String[ ]
DataFrame.join ( DataFrame right, scala.collection.Seq<String> usingColumns )  :  DataFrame
DataFrame.join ( DataFrame right, String usingColumn )  :  DataFrame
DataFrame.na ( )  :  DataFrameNaFunctions
DataFrame.DataFrame..logicalPlanToDataFrame ( catalyst.plans.logical.LogicalPlan logicalPlan )  :  DataFrame
DataFrame.randomSplit ( double[ ] weights )  :  DataFrame[ ]
DataFrame.randomSplit ( double[ ] weights, long seed )  :  DataFrame[ ]
DataFrame.randomSplit ( scala.collection.immutable.List<Object> weights, long seed )  :  DataFrame[ ]
DataFrame.rollup ( Column... cols )  :  GroupedData
DataFrame.rollup ( scala.collection.Seq<Column> cols )  :  GroupedData
DataFrame.rollup ( String col1, scala.collection.Seq<String> cols )  :  GroupedData
DataFrame.rollup ( String col1, String... cols )  :  GroupedData
DataFrame.show ( boolean truncate )  :  void
DataFrame.show ( int numRows, boolean truncate )  :  void
DataFrame.showString ( int _numRows, boolean truncate )  :  String
DataFrame.stat ( )  :  DataFrameStatFunctions
DataFrame.where ( String conditionExpr )  :  DataFrame
DataFrame.withNewExecutionId ( scala.Function0<T> body )  :  T
DataFrame.write ( )  :  DataFrameWriter

spark-sql_2.10-1.5.0.jar, DataFrameNaFunctions.class
package org.apache.spark.sql
DataFrameNaFunctions.DataFrameNaFunctions ( DataFrame df )

spark-sql_2.10-1.5.0.jar, DataFrameWriter.class
package org.apache.spark.sql
DataFrameWriter.format ( String source )  :  DataFrameWriter
DataFrameWriter.save ( String path )  :  void
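
The write path added in 1.5.0 chains DataFrame.write() with DataFrameWriter.format and save, and composes with several of the DataFrame additions listed above. A hedged sketch, assuming an existing DataFrame df with a column named "n" and a writable placeholder output path:

```scala
// Sketch of the 1.5.0 additions listed above (assumes an existing DataFrame `df`
// with a column "n"; the output path is a placeholder).
val deduped            = df.dropDuplicates()                  // DataFrame.dropDuplicates()
val trimmed            = deduped.drop("n")                    // DataFrame.drop(String)
val Array(train, test) = trimmed.randomSplit(Array(0.8, 0.2)) // DataFrame.randomSplit(double[])

train.write            // DataFrame.write() : DataFrameWriter
  .format("parquet")   // DataFrameWriter.format(String)
  .save("/tmp/train")  // DataFrameWriter.save(String)
```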

spark-sql_2.10-1.5.0.jar, DescribeCommand.class
package org.apache.spark.sql.execution
DescribeCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, Encoder<T>.class
package org.apache.spark.sql.columnar.compression
Encoder<T>.gatherCompressibilityStats ( org.apache.spark.sql.catalyst.InternalRow p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.5.0.jar, EvaluatePython.class
package org.apache.spark.sql.execution
EvaluatePython.javaToPython ( org.apache.spark.rdd.RDD<Object> p1 ) [static]  :  org.apache.spark.rdd.RDD<byte[ ]>
EvaluatePython.registerPicklers ( ) [static]  :  void

spark-sql_2.10-1.5.0.jar, Except.class
package org.apache.spark.sql.execution
Except.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, Exchange.class
package org.apache.spark.sql.execution
Exchange.canProcessSafeRows ( )  :  boolean
Exchange.canProcessUnsafeRows ( )  :  boolean
Exchange.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Exchange.nodeName ( )  :  String
Exchange.Exchange..needToCopyObjectsBeforeShuffle ( org.apache.spark.Partitioner partitioner, org.apache.spark.serializer.Serializer serializer )  :  boolean
Exchange.Exchange..serializer ( )  :  org.apache.spark.serializer.Serializer
Exchange.outputsUnsafeRows ( )  :  boolean

spark-sql_2.10-1.5.0.jar, ExecutedCommand.class
package org.apache.spark.sql.execution
ExecutedCommand.argString ( )  :  String
ExecutedCommand.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, Expand.class
package org.apache.spark.sql.execution
Expand.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, ExplainCommand.class
package org.apache.spark.sql.execution
ExplainCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, ExternalSort.class
package org.apache.spark.sql.execution
ExternalSort.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
ExternalSort.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.5.0.jar, Filter.class
package org.apache.spark.sql.execution
Filter.canProcessSafeRows ( )  :  boolean
Filter.canProcessUnsafeRows ( )  :  boolean
Filter.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Filter.metrics ( )  :  scala.collection.immutable.Map<String,metric.LongSQLMetric>
Filter.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>
Filter.outputsUnsafeRows ( )  :  boolean

spark-sql_2.10-1.5.0.jar, Generate.class
package org.apache.spark.sql.execution
Generate.copy ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, SparkPlan child )  :  Generate
Generate.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Generate.Generate ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, SparkPlan child )

spark-sql_2.10-1.5.0.jar, HashedRelation.class
package org.apache.spark.sql.execution.joins
HashedRelation.get ( org.apache.spark.sql.catalyst.InternalRow p1 ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.InternalRow>
HashedRelation.readBytes ( java.io.ObjectInput p1 ) [abstract]  :  byte[ ]
HashedRelation.writeBytes ( java.io.ObjectOutput p1, byte[ ] p2 ) [abstract]  :  void

spark-sql_2.10-1.5.0.jar, HashJoin.class
package org.apache.spark.sql.execution.joins
HashJoin.canProcessSafeRows ( ) [abstract]  :  boolean
HashJoin.canProcessUnsafeRows ( ) [abstract]  :  boolean
HashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> p1, org.apache.spark.sql.execution.metric.LongSQLMetric p2, HashedRelation p3, org.apache.spark.sql.execution.metric.LongSQLMetric p4 ) [abstract]  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
HashJoin.isUnsafeMode ( ) [abstract]  :  boolean
HashJoin.outputsUnsafeRows ( ) [abstract]  :  boolean
HashJoin.streamSideKeyGenerator ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.package.Projection

spark-sql_2.10-1.5.0.jar, HashOuterJoin.class
package org.apache.spark.sql.execution.joins
HashOuterJoin.buildHashTable ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> p1, org.apache.spark.sql.execution.metric.LongSQLMetric p2, org.apache.spark.sql.catalyst.expressions.package.Projection p3 ) [abstract]  :  java.util.HashMap<org.apache.spark.sql.catalyst.InternalRow,org.apache.spark.util.collection.CompactBuffer<org.apache.spark.sql.catalyst.InternalRow>>
HashOuterJoin.buildKeyGenerator ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.package.Projection
HashOuterJoin.buildKeys ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
HashOuterJoin.buildPlan ( ) [abstract]  :  org.apache.spark.sql.execution.SparkPlan
HashOuterJoin.canProcessSafeRows ( ) [abstract]  :  boolean
HashOuterJoin.canProcessUnsafeRows ( ) [abstract]  :  boolean
HashOuterJoin.EMPTY_LIST ( ) [abstract]  :  org.apache.spark.util.collection.CompactBuffer<org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.fullOuterIterator ( org.apache.spark.sql.catalyst.InternalRow p1, scala.collection.Iterable<org.apache.spark.sql.catalyst.InternalRow> p2, scala.collection.Iterable<org.apache.spark.sql.catalyst.InternalRow> p3, org.apache.spark.sql.catalyst.expressions.JoinedRow p4, org.apache.spark.sql.execution.metric.LongSQLMetric p5 ) [abstract]  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.isUnsafeMode ( ) [abstract]  :  boolean
HashOuterJoin.leftOuterIterator ( org.apache.spark.sql.catalyst.InternalRow p1, org.apache.spark.sql.catalyst.expressions.JoinedRow p2, scala.collection.Iterable<org.apache.spark.sql.catalyst.InternalRow> p3, scala.Function1<org.apache.spark.sql.catalyst.InternalRow,org.apache.spark.sql.catalyst.InternalRow> p4, org.apache.spark.sql.execution.metric.LongSQLMetric p5 ) [abstract]  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.HashOuterJoin..DUMMY_LIST ( ) [abstract]  :  org.apache.spark.util.collection.CompactBuffer<org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.HashOuterJoin..leftNullRow ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.GenericInternalRow
HashOuterJoin.HashOuterJoin..rightNullRow ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.GenericInternalRow
HashOuterJoin.outputsUnsafeRows ( ) [abstract]  :  boolean
HashOuterJoin.resultProjection ( ) [abstract]  :  scala.Function1<org.apache.spark.sql.catalyst.InternalRow,org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.rightOuterIterator ( org.apache.spark.sql.catalyst.InternalRow p1, scala.collection.Iterable<org.apache.spark.sql.catalyst.InternalRow> p2, org.apache.spark.sql.catalyst.expressions.JoinedRow p3, scala.Function1<org.apache.spark.sql.catalyst.InternalRow,org.apache.spark.sql.catalyst.InternalRow> p4, org.apache.spark.sql.execution.metric.LongSQLMetric p5 ) [abstract]  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
HashOuterJoin.streamedKeyGenerator ( ) [abstract]  :  org.apache.spark.sql.catalyst.expressions.package.Projection
HashOuterJoin.streamedKeys ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
HashOuterJoin.streamedPlan ( ) [abstract]  :  org.apache.spark.sql.execution.SparkPlan

spark-sql_2.10-1.5.0.jar, InMemoryColumnarTableScan.class
package org.apache.spark.sql.columnar
InMemoryColumnarTableScan.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
InMemoryColumnarTableScan.enableAccumulators ( )  :  boolean

spark-sql_2.10-1.5.0.jar, InMemoryRelation.class
package org.apache.spark.sql.columnar
InMemoryRelation.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics, org.apache.spark.Accumulable<scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.catalyst.InternalRow>,org.apache.spark.sql.catalyst.InternalRow> _batchStats )  :  InMemoryRelation
InMemoryRelation.InMemoryRelation ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics, org.apache.spark.Accumulable<scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.catalyst.InternalRow>,org.apache.spark.sql.catalyst.InternalRow> _batchStats )
InMemoryRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
InMemoryRelation.uncache ( boolean blocking )  :  void

spark-sql_2.10-1.5.0.jar, IntColumnStats.class
package org.apache.spark.sql.columnar
IntColumnStats.collectedStatistics ( )  :  org.apache.spark.sql.catalyst.expressions.GenericInternalRow
IntColumnStats.gatherStats ( org.apache.spark.sql.catalyst.InternalRow row, int ordinal )  :  void

spark-sql_2.10-1.5.0.jar, Intersect.class
package org.apache.spark.sql.execution
Intersect.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, LeafNode.class
package org.apache.spark.sql.execution
LeafNode.children ( ) [abstract]  :  scala.collection.Seq<SparkPlan>

spark-sql_2.10-1.5.0.jar, LeftSemiJoinBNL.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinBNL.canProcessUnsafeRows ( )  :  boolean
LeftSemiJoinBNL.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
LeftSemiJoinBNL.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
LeftSemiJoinBNL.outputsUnsafeRows ( )  :  boolean

spark-sql_2.10-1.5.0.jar, LeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinHash.buildKeyHashSet ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> buildIter, org.apache.spark.sql.execution.metric.LongSQLMetric numBuildRows )  :  java.util.Set<org.apache.spark.sql.catalyst.InternalRow>
LeftSemiJoinHash.canProcessSafeRows ( )  :  boolean
LeftSemiJoinHash.canProcessUnsafeRows ( )  :  boolean
LeftSemiJoinHash.condition ( )  :  scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>
LeftSemiJoinHash.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition )  :  LeftSemiJoinHash
LeftSemiJoinHash.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
LeftSemiJoinHash.hashSemiJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, java.util.Set<org.apache.spark.sql.catalyst.InternalRow> hashSet, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
LeftSemiJoinHash.hashSemiJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, HashedRelation hashedRelation, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
LeftSemiJoinHash.leftKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
LeftSemiJoinHash.LeftSemiJoinHash ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition )
LeftSemiJoinHash.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
LeftSemiJoinHash.HashSemiJoin..boundCondition ( )  :  scala.Function1<org.apache.spark.sql.catalyst.InternalRow,Object>
LeftSemiJoinHash.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
LeftSemiJoinHash.outputsUnsafeRows ( )  :  boolean
LeftSemiJoinHash.rightKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
LeftSemiJoinHash.supportUnsafe ( )  :  boolean

spark-sql_2.10-1.5.0.jar, Limit.class
package org.apache.spark.sql.execution
Limit.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, LocalTableScan.class
package org.apache.spark.sql.execution
LocalTableScan.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, LogicalLocalTable.class
package org.apache.spark.sql.execution
LogicalLocalTable.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.5.0.jar, LogicalRDD.class
package org.apache.spark.sql.execution
LogicalRDD.newInstance ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.5.0.jar, NativeColumnType<T>.class
package org.apache.spark.sql.columnar
NativeColumnType<T>.dataType ( )  :  org.apache.spark.sql.types.DataType
NativeColumnType<T>.dataType ( )  :  T
NativeColumnType<T>.defaultSize ( )  :  int
NativeColumnType<T>.NativeColumnType ( T dataType, int typeId, int defaultSize )  :  public
NativeColumnType<T>.typeId ( )  :  int

spark-sql_2.10-1.5.0.jar, NullableColumnBuilder.class
package org.apache.spark.sql.columnar
NullableColumnBuilder.appendFrom ( org.apache.spark.sql.catalyst.InternalRow p1, int p2 ) [abstract]  :  void
NullableColumnBuilder.NullableColumnBuilder..super.appendFrom ( org.apache.spark.sql.catalyst.InternalRow p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.5.0.jar, OutputFaker.class
package org.apache.spark.sql.execution
OutputFaker.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>

spark-sql_2.10-1.5.0.jar, PhysicalRDD.class
package org.apache.spark.sql.execution
PhysicalRDD.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow> rdd, String extraInformation )  :  PhysicalRDD
PhysicalRDD.createFromDataSource ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> p1, org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow> p2, org.apache.spark.sql.sources.BaseRelation p3 ) [static]  :  PhysicalRDD
PhysicalRDD.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
PhysicalRDD.extraInformation ( )  :  String
PhysicalRDD.PhysicalRDD ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow> rdd, String extraInformation )
PhysicalRDD.simpleString ( )  :  String

spark-sql_2.10-1.5.0.jar, Project.class
package org.apache.spark.sql.execution
Project.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Project.metrics ( )  :  scala.collection.immutable.Map<String,metric.LongSQLMetric>
Project.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.5.0.jar, PythonUDF.class
package org.apache.spark.sql.execution
PythonUDF.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )  :  PythonUDF
PythonUDF.eval ( org.apache.spark.sql.catalyst.InternalRow input )  :  Object
PythonUDF.genCode ( org.apache.spark.sql.catalyst.expressions.codegen.CodeGenContext ctx, org.apache.spark.sql.catalyst.expressions.codegen.GeneratedExpressionCode ev )  :  String
PythonUDF.PythonUDF ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )
PythonUDF.pythonVer ( )  :  String

spark-sql_2.10-1.5.0.jar, RunnableCommand.class
package org.apache.spark.sql.execution
RunnableCommand.children ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
RunnableCommand.output ( ) [abstract]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.5.0.jar, Sample.class
package org.apache.spark.sql.execution
Sample.canProcessSafeRows ( )  :  boolean
Sample.canProcessUnsafeRows ( )  :  boolean
Sample.copy ( double lowerBound, double upperBound, boolean withReplacement, long seed, SparkPlan child )  :  Sample
Sample.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Sample.lowerBound ( )  :  double
Sample.outputsUnsafeRows ( )  :  boolean
Sample.Sample ( double lowerBound, double upperBound, boolean withReplacement, long seed, SparkPlan child )
Sample.upperBound ( )  :  double

spark-sql_2.10-1.5.0.jar, SetCommand.class
package org.apache.spark.sql.execution
SetCommand.andThen ( scala.Function1<SetCommand,A> p1 ) [static]  :  scala.Function1<scala.Option<scala.Tuple2<String,scala.Option<String>>>,A>
SetCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
SetCommand.compose ( scala.Function1<A,scala.Option<scala.Tuple2<String,scala.Option<String>>>> p1 ) [static]  :  scala.Function1<A,SetCommand>
SetCommand.copy ( scala.Option<scala.Tuple2<String,scala.Option<String>>> kv )  :  SetCommand
SetCommand.SetCommand ( scala.Option<scala.Tuple2<String,scala.Option<String>>> kv )

spark-sql_2.10-1.5.0.jar, ShowTablesCommand.class
package org.apache.spark.sql.execution
ShowTablesCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, ShuffledHashJoin.class
package org.apache.spark.sql.execution.joins
ShuffledHashJoin.canProcessSafeRows ( )  :  boolean
ShuffledHashJoin.canProcessUnsafeRows ( )  :  boolean
ShuffledHashJoin.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
ShuffledHashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow> streamIter, org.apache.spark.sql.execution.metric.LongSQLMetric numStreamRows, HashedRelation hashedRelation, org.apache.spark.sql.execution.metric.LongSQLMetric numOutputRows )  :  scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>
ShuffledHashJoin.isUnsafeMode ( )  :  boolean
ShuffledHashJoin.metrics ( )  :  scala.collection.immutable.Map<String,org.apache.spark.sql.execution.metric.LongSQLMetric>
ShuffledHashJoin.outputsUnsafeRows ( )  :  boolean
ShuffledHashJoin.streamSideKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection

spark-sql_2.10-1.5.0.jar, Sort.class
package org.apache.spark.sql.execution
Sort.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Sort.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>

spark-sql_2.10-1.5.0.jar, SparkPlan.class
package org.apache.spark.sql.execution
SparkPlan.canProcessSafeRows ( )  :  boolean
SparkPlan.canProcessUnsafeRows ( )  :  boolean
SparkPlan.doExecute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
SparkPlan.doPrepare ( )  :  void
SparkPlan.longMetric ( String name )  :  metric.LongSQLMetric
SparkPlan.metrics ( )  :  scala.collection.immutable.Map<String,metric.SQLMetric<?,?>>
SparkPlan.newNaturalAscendingOrdering ( scala.collection.Seq<org.apache.spark.sql.types.DataType> dataTypes )  :  scala.math.Ordering<org.apache.spark.sql.catalyst.InternalRow>
SparkPlan.outputOrdering ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>
SparkPlan.outputsUnsafeRows ( )  :  boolean
SparkPlan.prepare ( )  :  void
SparkPlan.requiredChildOrdering ( )  :  scala.collection.Seq<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>>
SparkPlan.unsafeEnabled ( )  :  boolean

spark-sql_2.10-1.5.0.jar, SparkSQLParser.class
package org.apache.spark.sql
SparkSQLParser.DESCRIBE ( )  :  catalyst.AbstractSparkSQLParser.Keyword
SparkSQLParser.EXTENDED ( )  :  catalyst.AbstractSparkSQLParser.Keyword
SparkSQLParser.FUNCTION ( )  :  catalyst.AbstractSparkSQLParser.Keyword
SparkSQLParser.FUNCTIONS ( )  :  catalyst.AbstractSparkSQLParser.Keyword
SparkSQLParser.SparkSQLParser..desc ( )  :  scala.util.parsing.combinator.Parsers.Parser<catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, SparkStrategies.class
package org.apache.spark.sql.execution
SparkStrategies.Aggregation ( )  :  SparkStrategies.Aggregation.
SparkStrategies.CanBroadcast ( )  :  SparkStrategies.CanBroadcast.
SparkStrategies.EquiJoinSelection ( )  :  SparkStrategies.EquiJoinSelection.
SparkStrategies.TakeOrderedAndProject ( )  :  SparkStrategies.TakeOrderedAndProject.

spark-sql_2.10-1.5.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  execution.CacheManager
SQLContext.createDataFrame ( org.apache.spark.rdd.RDD<Row> rowRDD, types.StructType schema, boolean needsConversion )  :  DataFrame
SQLContext.createSession ( )  :  SQLContext.SQLSession
SQLContext.currentSession ( )  :  SQLContext.SQLSession
SQLContext.ddlParser ( )  :  execution.datasources.DDLParser
SQLContext.defaultSession ( )  :  SQLContext.SQLSession
SQLContext.detachSession ( )  :  void
SQLContext.dialectClassName ( )  :  String
SQLContext.getConf ( SQLConf.SQLConfEntry<T> entry )  :  T
SQLContext.getConf ( SQLConf.SQLConfEntry<T> entry, T defaultValue )  :  T
SQLContext.getOrCreate ( org.apache.spark.SparkContext p1 ) [static]  :  SQLContext
SQLContext.getSQLDialect ( )  :  catalyst.ParserDialect
SQLContext.internalCreateDataFrame ( org.apache.spark.rdd.RDD<catalyst.InternalRow> catalystRows, types.StructType schema )  :  DataFrame
SQLContext.listener ( )  :  execution.ui.SQLListener
SQLContext.openSession ( )  :  SQLContext.SQLSession
SQLContext.range ( long end )  :  DataFrame
SQLContext.range ( long start, long end )  :  DataFrame
SQLContext.range ( long start, long end, long step, int numPartitions )  :  DataFrame
SQLContext.read ( )  :  DataFrameReader
SQLContext.setConf ( SQLConf.SQLConfEntry<T> entry, T value )  :  void
SQLContext.setSession ( SQLContext.SQLSession session )  :  void
SQLContext.tlSession ( )  :  ThreadLocal<SQLContext.SQLSession>
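
Several of the SQLContext additions above are client-facing entry points rather than internals. The sketch below is illustrative only; the JSON path is a placeholder, and the json method on the DataFrameReader returned by read() is assumed to behave as in stock Spark 1.5.

```scala
// Illustrative use of SQLContext methods listed above.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc         = new SparkContext(new SparkConf().setAppName("sqlcontext-demo").setMaster("local[*]"))
val sqlContext = SQLContext.getOrCreate(sc)            // SQLContext.getOrCreate(SparkContext) [static]

val ids    = sqlContext.range(0L, 1000L)               // SQLContext.range(long, long) : DataFrame
val people = sqlContext.read.json("/tmp/people.json")  // SQLContext.read() : DataFrameReader (path is a placeholder)
```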

spark-sql_2.10-1.5.0.jar, StringContains.class
package org.apache.spark.sql.sources
StringContains.attribute ( )  :  String
StringContains.canEqual ( Object p1 )  :  boolean
StringContains.copy ( String attribute, String value )  :  StringContains
StringContains.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringContains>>
StringContains.equals ( Object p1 )  :  boolean
StringContains.hashCode ( )  :  int
StringContains.productArity ( )  :  int
StringContains.productElement ( int p1 )  :  Object
StringContains.productIterator ( )  :  scala.collection.Iterator<Object>
StringContains.productPrefix ( )  :  String
StringContains.StringContains ( String attribute, String value )
StringContains.toString ( )  :  String
StringContains.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringContains>
StringContains.value ( )  :  String

spark-sql_2.10-1.5.0.jar, StringEndsWith.class
package org.apache.spark.sql.sources
StringEndsWith.attribute ( )  :  String
StringEndsWith.canEqual ( Object p1 )  :  boolean
StringEndsWith.copy ( String attribute, String value )  :  StringEndsWith
StringEndsWith.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringEndsWith>>
StringEndsWith.equals ( Object p1 )  :  boolean
StringEndsWith.hashCode ( )  :  int
StringEndsWith.productArity ( )  :  int
StringEndsWith.productElement ( int p1 )  :  Object
StringEndsWith.productIterator ( )  :  scala.collection.Iterator<Object>
StringEndsWith.productPrefix ( )  :  String
StringEndsWith.StringEndsWith ( String attribute, String value )
StringEndsWith.toString ( )  :  String
StringEndsWith.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringEndsWith>
StringEndsWith.value ( )  :  String

spark-sql_2.10-1.5.0.jar, StringStartsWith.class
package org.apache.spark.sql.sources
StringStartsWith.attribute ( )  :  String
StringStartsWith.canEqual ( Object p1 )  :  boolean
StringStartsWith.copy ( String attribute, String value )  :  StringStartsWith
StringStartsWith.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,StringStartsWith>>
StringStartsWith.equals ( Object p1 )  :  boolean
StringStartsWith.hashCode ( )  :  int
StringStartsWith.productArity ( )  :  int
StringStartsWith.productElement ( int p1 )  :  Object
StringStartsWith.productIterator ( )  :  scala.collection.Iterator<Object>
StringStartsWith.productPrefix ( )  :  String
StringStartsWith.StringStartsWith ( String attribute, String value )
StringStartsWith.toString ( )  :  String
StringStartsWith.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,StringStartsWith>
StringStartsWith.value ( )  :  String
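
StringContains, StringEndsWith and StringStartsWith are new filter predicates that Spark can push down to a data source scan (for example a PrunedFilteredScan implementation such as a succinct relation). A hedged sketch of recognising them, assuming the standard org.apache.spark.sql.sources.Filter hierarchy from stock Spark 1.5:

```scala
// Sketch of matching the new string filters inside a data source's scan logic
// (assumes the standard Filter hierarchy; not taken from the succinct client).
import org.apache.spark.sql.sources.{Filter, StringContains, StringEndsWith, StringStartsWith}

def toPredicate(f: Filter): Option[String => Boolean] = f match {
  case StringStartsWith(_, prefix) => Some((s: String) => s.startsWith(prefix)) // added in 1.5.0
  case StringEndsWith(_, suffix)   => Some((s: String) => s.endsWith(suffix))   // added in 1.5.0
  case StringContains(_, value)    => Some((s: String) => s.contains(value))    // added in 1.5.0
  case _                           => None
}
```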

spark-sql_2.10-1.5.0.jar, UnaryNode.class
package org.apache.spark.sql.execution
UnaryNode.child ( ) [abstract]  :  SparkPlan
UnaryNode.children ( ) [abstract]  :  scala.collection.Seq<SparkPlan>

spark-sql_2.10-1.5.0.jar, UncacheTableCommand.class
package org.apache.spark.sql.execution
UncacheTableCommand.children ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>

spark-sql_2.10-1.5.0.jar, Union.class
package org.apache.spark.sql.execution
Union.canProcessSafeRows ( )  :  boolean
Union.canProcessUnsafeRows ( )  :  boolean
Union.doExecute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.catalyst.InternalRow>
Union.outputsUnsafeRows ( )  :  boolean

spark-sql_2.10-1.5.0.jar, UserDefinedFunction.class
package org.apache.spark.sql
UserDefinedFunction.copy ( Object f, types.DataType dataType, scala.collection.Seq<types.DataType> inputTypes )  :  UserDefinedFunction
UserDefinedFunction.inputTypes ( )  :  scala.collection.Seq<types.DataType>
UserDefinedFunction.UserDefinedFunction ( Object f, types.DataType dataType, scala.collection.Seq<types.DataType> inputTypes )

spark-sql_2.10-1.5.0.jar, UserDefinedPythonFunction.class
package org.apache.spark.sql
UserDefinedPythonFunction.builder ( scala.collection.Seq<catalyst.expressions.Expression> e )  :  execution.PythonUDF
UserDefinedPythonFunction.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )  :  UserDefinedPythonFunction
UserDefinedPythonFunction.pythonVer ( )  :  String
UserDefinedPythonFunction.UserDefinedPythonFunction ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, String pythonVer, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )


Removed Methods (902)


spark-sql_2.10-1.3.0.jar, AddExchange.class
package org.apache.spark.sql.execution
AddExchange.AddExchange ( org.apache.spark.sql.SQLContext sqlContext )
AddExchange.andThen ( scala.Function1<AddExchange,A> p1 ) [static]  :  scala.Function1<org.apache.spark.sql.SQLContext,A>
AddExchange.apply ( org.apache.spark.sql.catalyst.trees.TreeNode plan )  :  org.apache.spark.sql.catalyst.trees.TreeNode
AddExchange.apply ( SparkPlan plan )  :  SparkPlan
AddExchange.canEqual ( Object p1 )  :  boolean
AddExchange.compose ( scala.Function1<A,org.apache.spark.sql.SQLContext> p1 ) [static]  :  scala.Function1<A,AddExchange>
AddExchange.copy ( org.apache.spark.sql.SQLContext sqlContext )  :  AddExchange
AddExchange.equals ( Object p1 )  :  boolean
AddExchange.hashCode ( )  :  int
AddExchange.numPartitions ( )  :  int
AddExchange.productArity ( )  :  int
AddExchange.productElement ( int p1 )  :  Object
AddExchange.productIterator ( )  :  scala.collection.Iterator<Object>
AddExchange.productPrefix ( )  :  String
AddExchange.sqlContext ( )  :  org.apache.spark.sql.SQLContext
AddExchange.toString ( )  :  String

spark-sql_2.10-1.3.0.jar, Aggregate.class
package org.apache.spark.sql.execution
Aggregate.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Aggregate.children ( )  :  scala.collection.immutable.List<SparkPlan>
Aggregate.Aggregate..newAggregateBuffer ( )  :  org.apache.spark.sql.catalyst.expressions.AggregateFunction[ ]

spark-sql_2.10-1.3.0.jar, AggregateEvaluation.class
package org.apache.spark.sql.execution
AggregateEvaluation.AggregateEvaluation ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> initialValues, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> update, org.apache.spark.sql.catalyst.expressions.Expression result )
AggregateEvaluation.canEqual ( Object p1 )  :  boolean
AggregateEvaluation.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> initialValues, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> update, org.apache.spark.sql.catalyst.expressions.Expression result )  :  AggregateEvaluation
AggregateEvaluation.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<org.apache.spark.sql.catalyst.expressions.Expression,AggregateEvaluation>>>>
AggregateEvaluation.equals ( Object p1 )  :  boolean
AggregateEvaluation.hashCode ( )  :  int
AggregateEvaluation.initialValues ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
AggregateEvaluation.productArity ( )  :  int
AggregateEvaluation.productElement ( int p1 )  :  Object
AggregateEvaluation.productIterator ( )  :  scala.collection.Iterator<Object>
AggregateEvaluation.productPrefix ( )  :  String
AggregateEvaluation.result ( )  :  org.apache.spark.sql.catalyst.expressions.Expression
AggregateEvaluation.schema ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
AggregateEvaluation.toString ( )  :  String
AggregateEvaluation.tupled ( ) [static]  :  scala.Function1<scala.Tuple4<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,org.apache.spark.sql.catalyst.expressions.Expression>,AggregateEvaluation>
AggregateEvaluation.update ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>

spark-sql_2.10-1.3.0.jar, AppendingParquetOutputFormat.class
package org.apache.spark.sql.parquet
AppendingParquetOutputFormat.AppendingParquetOutputFormat ( int offset )

spark-sql_2.10-1.3.0.jar, BatchPythonEvaluation.class
package org.apache.spark.sql.execution
BatchPythonEvaluation.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, BroadcastHashJoin.class
package org.apache.spark.sql.execution.joins
BroadcastHashJoin.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<package.BuildSide,scala.Function1<org.apache.spark.sql.execution.SparkPlan,scala.Function1<org.apache.spark.sql.execution.SparkPlan,BroadcastHashJoin>>>>>
BroadcastHashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.Row> streamIter, HashedRelation hashedRelation )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
BroadcastHashJoin.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
BroadcastHashJoin.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
BroadcastHashJoin.streamSideKeyGenerator ( )  :  scala.Function0<org.apache.spark.sql.catalyst.expressions.package.MutableProjection>
BroadcastHashJoin.tupled ( ) [static]  :  scala.Function1<scala.Tuple5<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,package.BuildSide,org.apache.spark.sql.execution.SparkPlan,org.apache.spark.sql.execution.SparkPlan>,BroadcastHashJoin>

spark-sql_2.10-1.3.0.jar, BroadcastLeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
BroadcastLeftSemiJoinHash.BroadcastLeftSemiJoinHash ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )
BroadcastLeftSemiJoinHash.buildKeys ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
BroadcastLeftSemiJoinHash.buildPlan ( )  :  org.apache.spark.sql.execution.SparkPlan
BroadcastLeftSemiJoinHash.buildSide ( )  :  package.BuildRight.
BroadcastLeftSemiJoinHash.buildSide ( )  :  package.BuildSide
BroadcastLeftSemiJoinHash.buildSideKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
BroadcastLeftSemiJoinHash.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )  :  BroadcastLeftSemiJoinHash
BroadcastLeftSemiJoinHash.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.Row> streamIter, HashedRelation hashedRelation )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
BroadcastLeftSemiJoinHash.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
BroadcastLeftSemiJoinHash.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
BroadcastLeftSemiJoinHash.streamedKeys ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
BroadcastLeftSemiJoinHash.streamedPlan ( )  :  org.apache.spark.sql.execution.SparkPlan
BroadcastLeftSemiJoinHash.streamSideKeyGenerator ( )  :  scala.Function0<org.apache.spark.sql.catalyst.expressions.package.MutableProjection>

spark-sql_2.10-1.3.0.jar, BroadcastNestedLoopJoin.class
package org.apache.spark.sql.execution.joins
BroadcastNestedLoopJoin.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
BroadcastNestedLoopJoin.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode

spark-sql_2.10-1.3.0.jar, CachedBatch.class
package org.apache.spark.sql.columnar
CachedBatch.CachedBatch ( byte[ ][ ] buffers, org.apache.spark.sql.Row stats )
CachedBatch.copy ( byte[ ][ ] buffers, org.apache.spark.sql.Row stats )  :  CachedBatch
CachedBatch.stats ( )  :  org.apache.spark.sql.Row

spark-sql_2.10-1.3.0.jar, CachedData.class
package org.apache.spark.sql
CachedData.CachedData ( catalyst.plans.logical.LogicalPlan plan, columnar.InMemoryRelation cachedRepresentation )
CachedData.cachedRepresentation ( )  :  columnar.InMemoryRelation
CachedData.canEqual ( Object p1 )  :  boolean
CachedData.copy ( catalyst.plans.logical.LogicalPlan plan, columnar.InMemoryRelation cachedRepresentation )  :  CachedData
CachedData.curried ( ) [static]  :  scala.Function1<catalyst.plans.logical.LogicalPlan,scala.Function1<columnar.InMemoryRelation,CachedData>>
CachedData.equals ( Object p1 )  :  boolean
CachedData.hashCode ( )  :  int
CachedData.plan ( )  :  catalyst.plans.logical.LogicalPlan
CachedData.productArity ( )  :  int
CachedData.productElement ( int p1 )  :  Object
CachedData.productIterator ( )  :  scala.collection.Iterator<Object>
CachedData.productPrefix ( )  :  String
CachedData.toString ( )  :  String
CachedData.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<catalyst.plans.logical.LogicalPlan,columnar.InMemoryRelation>,CachedData>

spark-sql_2.10-1.3.0.jar, CacheManager.class
package org.apache.spark.sql
CacheManager.CacheManager ( SQLContext sqlContext )
CacheManager.cacheQuery ( DataFrame query, scala.Option<String> tableName, org.apache.spark.storage.StorageLevel storageLevel )  :  void
CacheManager.cacheTable ( String tableName )  :  void
CacheManager.clearCache ( )  :  void
CacheManager.invalidateCache ( catalyst.plans.logical.LogicalPlan plan )  :  void
CacheManager.isCached ( String tableName )  :  boolean
CacheManager.tryUncacheQuery ( DataFrame query, boolean blocking )  :  boolean
CacheManager.uncacheTable ( String tableName )  :  void
CacheManager.useCachedData ( catalyst.plans.logical.LogicalPlan plan )  :  catalyst.plans.logical.LogicalPlan

spark-sql_2.10-1.3.0.jar, CartesianProduct.class
package org.apache.spark.sql.execution.joins
CartesianProduct.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
CartesianProduct.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode

spark-sql_2.10-1.3.0.jar, CaseInsensitiveMap.class
package org.apache.spark.sql.sources
CaseInsensitiveMap.CaseInsensitiveMap ( scala.collection.immutable.Map<String,String> map )

spark-sql_2.10-1.3.0.jar, CatalystArrayContainsNullConverter.class
package org.apache.spark.sql.parquet
CatalystArrayContainsNullConverter.CatalystArrayContainsNullConverter ( org.apache.spark.sql.types.DataType elementType, int index, CatalystConverter parent )

spark-sql_2.10-1.3.0.jar, CatalystArrayConverter.class
package org.apache.spark.sql.parquet
CatalystArrayConverter.CatalystArrayConverter ( org.apache.spark.sql.types.DataType elementType, int index, CatalystConverter parent )

spark-sql_2.10-1.3.0.jar, CatalystConverter.class
package org.apache.spark.sql.parquet
CatalystConverter.ARRAY_CONTAINS_NULL_BAG_SCHEMA_NAME ( ) [static]  :  String
CatalystConverter.ARRAY_ELEMENTS_SCHEMA_NAME ( ) [static]  :  String
CatalystConverter.CatalystConverter ( )
CatalystConverter.clearBuffer ( ) [abstract]  :  void
CatalystConverter.getCurrentRecord ( )  :  org.apache.spark.sql.Row
CatalystConverter.index ( ) [abstract]  :  int
CatalystConverter.isRootConverter ( )  :  boolean
CatalystConverter.MAP_KEY_SCHEMA_NAME ( ) [static]  :  String
CatalystConverter.MAP_SCHEMA_NAME ( ) [static]  :  String
CatalystConverter.MAP_VALUE_SCHEMA_NAME ( ) [static]  :  String
CatalystConverter.parent ( ) [abstract]  :  CatalystConverter
CatalystConverter.readDecimal ( org.apache.spark.sql.types.Decimal dest, parquet.io.api.Binary value, org.apache.spark.sql.types.DecimalType ctype )  :  void
CatalystConverter.readTimestamp ( parquet.io.api.Binary value )  :  java.sql.Timestamp
CatalystConverter.size ( ) [abstract]  :  int
CatalystConverter.THRIFT_ARRAY_ELEMENTS_SCHEMA_NAME_SUFFIX ( ) [static]  :  String
CatalystConverter.updateBinary ( int fieldIndex, parquet.io.api.Binary value )  :  void
CatalystConverter.updateBoolean ( int fieldIndex, boolean value )  :  void
CatalystConverter.updateByte ( int fieldIndex, byte value )  :  void
CatalystConverter.updateDecimal ( int fieldIndex, parquet.io.api.Binary value, org.apache.spark.sql.types.DecimalType ctype )  :  void
CatalystConverter.updateDouble ( int fieldIndex, double value )  :  void
CatalystConverter.updateField ( int p1, Object p2 ) [abstract]  :  void
CatalystConverter.updateFloat ( int fieldIndex, float value )  :  void
CatalystConverter.updateInt ( int fieldIndex, int value )  :  void
CatalystConverter.updateLong ( int fieldIndex, long value )  :  void
CatalystConverter.updateShort ( int fieldIndex, short value )  :  void
CatalystConverter.updateString ( int fieldIndex, String value )  :  void
CatalystConverter.updateTimestamp ( int fieldIndex, parquet.io.api.Binary value )  :  void

spark-sql_2.10-1.3.0.jar, CatalystGroupConverter.class
package org.apache.spark.sql.parquet
CatalystGroupConverter.buffer ( )  :  scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.Row>
CatalystGroupConverter.buffer_.eq ( scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.Row> p1 )  :  void
CatalystGroupConverter.CatalystGroupConverter ( org.apache.spark.sql.catalyst.expressions.Attribute[ ] attributes )
CatalystGroupConverter.CatalystGroupConverter ( org.apache.spark.sql.types.StructField[ ] schema, int index, CatalystConverter parent )
CatalystGroupConverter.CatalystGroupConverter ( org.apache.spark.sql.types.StructField[ ] schema, int index, CatalystConverter parent, scala.collection.mutable.ArrayBuffer<Object> current, scala.collection.mutable.ArrayBuffer<org.apache.spark.sql.Row> buffer )
CatalystGroupConverter.clearBuffer ( )  :  void
CatalystGroupConverter.converters ( )  :  parquet.io.api.Converter[ ]
CatalystGroupConverter.current ( )  :  scala.collection.mutable.ArrayBuffer<Object>
CatalystGroupConverter.current_.eq ( scala.collection.mutable.ArrayBuffer<Object> p1 )  :  void
CatalystGroupConverter.end ( )  :  void
CatalystGroupConverter.getConverter ( int fieldIndex )  :  parquet.io.api.Converter
CatalystGroupConverter.getCurrentRecord ( )  :  org.apache.spark.sql.Row
CatalystGroupConverter.index ( )  :  int
CatalystGroupConverter.parent ( )  :  CatalystConverter
CatalystGroupConverter.schema ( )  :  org.apache.spark.sql.types.StructField[ ]
CatalystGroupConverter.size ( )  :  int
CatalystGroupConverter.start ( )  :  void
CatalystGroupConverter.updateField ( int fieldIndex, Object value )  :  void

spark-sql_2.10-1.3.0.jar, CatalystMapConverter.class
package org.apache.spark.sql.parquet
CatalystMapConverter.CatalystMapConverter ( org.apache.spark.sql.types.StructField[ ] schema, int index, CatalystConverter parent )

spark-sql_2.10-1.3.0.jar, CatalystNativeArrayConverter.class
package org.apache.spark.sql.parquet
CatalystNativeArrayConverter.CatalystNativeArrayConverter ( org.apache.spark.sql.types.NativeType elementType, int index, CatalystConverter parent, int capacity )

spark-sql_2.10-1.3.0.jar, CatalystPrimitiveConverter.class
package org.apache.spark.sql.parquet
CatalystPrimitiveConverter.addBinary ( parquet.io.api.Binary value )  :  void
CatalystPrimitiveConverter.addBoolean ( boolean value )  :  void
CatalystPrimitiveConverter.addDouble ( double value )  :  void
CatalystPrimitiveConverter.addFloat ( float value )  :  void
CatalystPrimitiveConverter.addInt ( int value )  :  void
CatalystPrimitiveConverter.addLong ( long value )  :  void
CatalystPrimitiveConverter.CatalystPrimitiveConverter ( CatalystConverter parent, int fieldIndex )

spark-sql_2.10-1.3.0.jar, CatalystPrimitiveRowConverter.class
package org.apache.spark.sql.parquet
CatalystPrimitiveRowConverter.CatalystPrimitiveRowConverter ( org.apache.spark.sql.catalyst.expressions.Attribute[ ] attributes )

spark-sql_2.10-1.3.0.jar, CatalystPrimitiveStringConverter.class
package org.apache.spark.sql.parquet
CatalystPrimitiveStringConverter.CatalystPrimitiveStringConverter ( CatalystConverter parent, int fieldIndex )

spark-sql_2.10-1.3.0.jar, CatalystStructConverter.class
package org.apache.spark.sql.parquet
CatalystStructConverter.CatalystStructConverter ( org.apache.spark.sql.types.StructField[ ] schema, int index, CatalystConverter parent )

spark-sql_2.10-1.3.0.jar, Column.class
package org.apache.spark.sql
Column.apply ( catalyst.expressions.Expression p1 ) [static]  :  Column
Column.apply ( String p1 ) [static]  :  Column
Column.getItem ( int ordinal )  :  Column
Column.in ( Column... list )  :  Column
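
Illustrative note (not part of the listing): the Column entries above are 1.3.0 members a client may have compiled against. The following is a minimal Scala sketch of such call sites; the DataFrame df and its column names are hypothetical, and functions.lit is assumed available as in 1.3.0.

object ColumnUsageSketch {
  import org.apache.spark.sql.{Column, DataFrame}
  import org.apache.spark.sql.functions.lit

  // Hypothetical 1.3.0-era client code; df, "arr" and "flag" are illustrative.
  def useColumnApi(df: DataFrame): DataFrame = {
    val first   = Column("arr").getItem(0)           // Column.getItem(int)
    val allowed = Column("flag").in(lit(1), lit(2))  // Column.in(Column...)
    df.select(first, allowed)                        // columns built via Column.apply(String)
  }
}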

spark-sql_2.10-1.3.0.jar, ColumnBuilder.class
package org.apache.spark.sql.columnar
ColumnBuilder.appendFrom ( org.apache.spark.sql.Row p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, ColumnStats.class
package org.apache.spark.sql.columnar
ColumnStats.collectedStatistics ( ) [abstract]  :  org.apache.spark.sql.Row
ColumnStats.gatherStats ( org.apache.spark.sql.Row p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, CreateTableUsing.class
package org.apache.spark.sql.sources
CreateTableUsing.allowExisting ( )  :  boolean
CreateTableUsing.canEqual ( Object p1 )  :  boolean
CreateTableUsing.copy ( String tableName, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, String provider, boolean temporary, scala.collection.immutable.Map<String,String> options, boolean allowExisting, boolean managedIfNoPath )  :  CreateTableUsing
CreateTableUsing.CreateTableUsing ( String tableName, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, String provider, boolean temporary, scala.collection.immutable.Map<String,String> options, boolean allowExisting, boolean managedIfNoPath )
CreateTableUsing.curried ( ) [static]  :  scala.Function1<String,scala.Function1<scala.Option<org.apache.spark.sql.types.StructType>,scala.Function1<String,scala.Function1<Object,scala.Function1<scala.collection.immutable.Map<String,String>,scala.Function1<Object,scala.Function1<Object,CreateTableUsing>>>>>>>
CreateTableUsing.equals ( Object p1 )  :  boolean
CreateTableUsing.hashCode ( )  :  int
CreateTableUsing.managedIfNoPath ( )  :  boolean
CreateTableUsing.options ( )  :  scala.collection.immutable.Map<String,String>
CreateTableUsing.productArity ( )  :  int
CreateTableUsing.productElement ( int p1 )  :  Object
CreateTableUsing.productIterator ( )  :  scala.collection.Iterator<Object>
CreateTableUsing.productPrefix ( )  :  String
CreateTableUsing.provider ( )  :  String
CreateTableUsing.tableName ( )  :  String
CreateTableUsing.temporary ( )  :  boolean
CreateTableUsing.tupled ( ) [static]  :  scala.Function1<scala.Tuple7<String,scala.Option<org.apache.spark.sql.types.StructType>,String,Object,scala.collection.immutable.Map<String,String>,Object,Object>,CreateTableUsing>
CreateTableUsing.userSpecifiedSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>

spark-sql_2.10-1.3.0.jar, CreateTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTableUsingAsSelect.canEqual ( Object p1 )  :  boolean
CreateTableUsingAsSelect.child ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
CreateTableUsingAsSelect.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
CreateTableUsingAsSelect.copy ( String tableName, String provider, boolean temporary, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )  :  CreateTableUsingAsSelect
CreateTableUsingAsSelect.CreateTableUsingAsSelect ( String tableName, String provider, boolean temporary, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan child )
CreateTableUsingAsSelect.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,scala.Function1<Object,scala.Function1<org.apache.spark.sql.SaveMode,scala.Function1<scala.collection.immutable.Map<String,String>,scala.Function1<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,CreateTableUsingAsSelect>>>>>>
CreateTableUsingAsSelect.equals ( Object p1 )  :  boolean
CreateTableUsingAsSelect.hashCode ( )  :  int
CreateTableUsingAsSelect.mode ( )  :  org.apache.spark.sql.SaveMode
CreateTableUsingAsSelect.options ( )  :  scala.collection.immutable.Map<String,String>
CreateTableUsingAsSelect.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
CreateTableUsingAsSelect.productArity ( )  :  int
CreateTableUsingAsSelect.productElement ( int p1 )  :  Object
CreateTableUsingAsSelect.productIterator ( )  :  scala.collection.Iterator<Object>
CreateTableUsingAsSelect.productPrefix ( )  :  String
CreateTableUsingAsSelect.provider ( )  :  String
CreateTableUsingAsSelect.tableName ( )  :  String
CreateTableUsingAsSelect.temporary ( )  :  boolean
CreateTableUsingAsSelect.tupled ( ) [static]  :  scala.Function1<scala.Tuple6<String,String,Object,org.apache.spark.sql.SaveMode,scala.collection.immutable.Map<String,String>,org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>,CreateTableUsingAsSelect>

spark-sql_2.10-1.3.0.jar, CreateTempTableUsing.class
package org.apache.spark.sql.sources
CreateTempTableUsing.canEqual ( Object p1 )  :  boolean
CreateTempTableUsing.copy ( String tableName, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, String provider, scala.collection.immutable.Map<String,String> options )  :  CreateTempTableUsing
CreateTempTableUsing.CreateTempTableUsing ( String tableName, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, String provider, scala.collection.immutable.Map<String,String> options )
CreateTempTableUsing.curried ( ) [static]  :  scala.Function1<String,scala.Function1<scala.Option<org.apache.spark.sql.types.StructType>,scala.Function1<String,scala.Function1<scala.collection.immutable.Map<String,String>,CreateTempTableUsing>>>>
CreateTempTableUsing.equals ( Object p1 )  :  boolean
CreateTempTableUsing.hashCode ( )  :  int
CreateTempTableUsing.options ( )  :  scala.collection.immutable.Map<String,String>
CreateTempTableUsing.productArity ( )  :  int
CreateTempTableUsing.productElement ( int p1 )  :  Object
CreateTempTableUsing.productIterator ( )  :  scala.collection.Iterator<Object>
CreateTempTableUsing.productPrefix ( )  :  String
CreateTempTableUsing.provider ( )  :  String
CreateTempTableUsing.run ( org.apache.spark.sql.SQLContext sqlContext )  :  scala.collection.Seq<scala.runtime.Nothing.>
CreateTempTableUsing.tableName ( )  :  String
CreateTempTableUsing.tupled ( ) [static]  :  scala.Function1<scala.Tuple4<String,scala.Option<org.apache.spark.sql.types.StructType>,String,scala.collection.immutable.Map<String,String>>,CreateTempTableUsing>
CreateTempTableUsing.userSpecifiedSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>

spark-sql_2.10-1.3.0.jar, CreateTempTableUsingAsSelect.class
package org.apache.spark.sql.sources
CreateTempTableUsingAsSelect.canEqual ( Object p1 )  :  boolean
CreateTempTableUsingAsSelect.copy ( String tableName, String provider, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )  :  CreateTempTableUsingAsSelect
CreateTempTableUsingAsSelect.CreateTempTableUsingAsSelect ( String tableName, String provider, org.apache.spark.sql.SaveMode mode, scala.collection.immutable.Map<String,String> options, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query )
CreateTempTableUsingAsSelect.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,scala.Function1<org.apache.spark.sql.SaveMode,scala.Function1<scala.collection.immutable.Map<String,String>,scala.Function1<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,CreateTempTableUsingAsSelect>>>>>
CreateTempTableUsingAsSelect.equals ( Object p1 )  :  boolean
CreateTempTableUsingAsSelect.hashCode ( )  :  int
CreateTempTableUsingAsSelect.mode ( )  :  org.apache.spark.sql.SaveMode
CreateTempTableUsingAsSelect.options ( )  :  scala.collection.immutable.Map<String,String>
CreateTempTableUsingAsSelect.productArity ( )  :  int
CreateTempTableUsingAsSelect.productElement ( int p1 )  :  Object
CreateTempTableUsingAsSelect.productIterator ( )  :  scala.collection.Iterator<Object>
CreateTempTableUsingAsSelect.productPrefix ( )  :  String
CreateTempTableUsingAsSelect.provider ( )  :  String
CreateTempTableUsingAsSelect.query ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
CreateTempTableUsingAsSelect.run ( org.apache.spark.sql.SQLContext sqlContext )  :  scala.collection.Seq<scala.runtime.Nothing.>
CreateTempTableUsingAsSelect.tableName ( )  :  String
CreateTempTableUsingAsSelect.tupled ( ) [static]  :  scala.Function1<scala.Tuple5<String,String,org.apache.spark.sql.SaveMode,scala.collection.immutable.Map<String,String>,org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>,CreateTempTableUsingAsSelect>

spark-sql_2.10-1.3.0.jar, DataFrame.class
package org.apache.spark.sql
DataFrame.cache ( )  :  RDDApi
DataFrame.collect ( )  :  Object
DataFrame.first ( )  :  Object
DataFrame.persist ( )  :  RDDApi
DataFrame.persist ( org.apache.spark.storage.StorageLevel newLevel )  :  RDDApi
DataFrame.showString ( int numRows )  :  String
DataFrame.take ( int n )  :  Object
DataFrame.unpersist ( )  :  RDDApi
DataFrame.unpersist ( boolean blocking )  :  RDDApi
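
Illustrative note (not part of the listing): the DataFrame entries above are 1.3.0 signatures flagged by this report (for example showString(int) and the persist/unpersist variants returning RDDApi). A minimal, purely illustrative reflection sketch showing how the presence of the old showString(int) signature can be probed at the binary level, the kind of check this report automates:

object DataFrameSignatureProbe {
  // Illustrative only: probe for the 1.3.0-era DataFrame.showString(int)
  // signature listed above using plain Java reflection.
  def hasOldShowString: Boolean =
    try {
      Class.forName("org.apache.spark.sql.DataFrame")
        .getMethod("showString", classOf[Int])
      true
    } catch {
      case _: ClassNotFoundException | _: NoSuchMethodException => false
    }

  def main(args: Array[String]): Unit =
    println(s"DataFrame.showString(int) present: $hasOldShowString")
}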

spark-sql_2.10-1.3.0.jar, DDLParser.class
package org.apache.spark.sql.sources
DDLParser.apply ( String input, boolean exceptionOnError )  :  scala.Option<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan>
DDLParser.DDLParser ( scala.Function1<String,org.apache.spark.sql.catalyst.plans.logical.LogicalPlan> parseQuery )

spark-sql_2.10-1.3.0.jar, DescribeCommand.class
package org.apache.spark.sql.sources
DescribeCommand.canEqual ( Object p1 )  :  boolean
DescribeCommand.copy ( org.apache.spark.sql.catalyst.plans.logical.LogicalPlan table, boolean isExtended )  :  DescribeCommand
DescribeCommand.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,scala.Function1<Object,DescribeCommand>>
DescribeCommand.DescribeCommand ( org.apache.spark.sql.catalyst.plans.logical.LogicalPlan table, boolean isExtended )
DescribeCommand.equals ( Object p1 )  :  boolean
DescribeCommand.hashCode ( )  :  int
DescribeCommand.isExtended ( )  :  boolean
DescribeCommand.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference>
DescribeCommand.productArity ( )  :  int
DescribeCommand.productElement ( int p1 )  :  Object
DescribeCommand.productIterator ( )  :  scala.collection.Iterator<Object>
DescribeCommand.productPrefix ( )  :  String
DescribeCommand.table ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
DescribeCommand.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,Object>,DescribeCommand>

spark-sql_2.10-1.3.0.jar, Distinct.class
package org.apache.spark.sql.execution
Distinct.canEqual ( Object p1 )  :  boolean
Distinct.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Distinct.child ( )  :  SparkPlan
Distinct.children ( )  :  scala.collection.immutable.List<SparkPlan>
Distinct.children ( )  :  scala.collection.Seq
Distinct.copy ( boolean partial, SparkPlan child )  :  Distinct
Distinct.curried ( ) [static]  :  scala.Function1<Object,scala.Function1<SparkPlan,Distinct>>
Distinct.Distinct ( boolean partial, SparkPlan child )
Distinct.equals ( Object p1 )  :  boolean
Distinct.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
Distinct.hashCode ( )  :  int
Distinct.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
Distinct.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
Distinct.partial ( )  :  boolean
Distinct.productArity ( )  :  int
Distinct.productElement ( int p1 )  :  Object
Distinct.productIterator ( )  :  scala.collection.Iterator<Object>
Distinct.productPrefix ( )  :  String
Distinct.requiredChildDistribution ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.physical.Distribution>
Distinct.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<Object,SparkPlan>,Distinct>

spark-sql_2.10-1.3.0.jar, DriverQuirks.class
package org.apache.spark.sql.jdbc
DriverQuirks.DriverQuirks ( )
DriverQuirks.get ( String p1 ) [static]  :  DriverQuirks
DriverQuirks.getCatalystType ( int p1, String p2, int p3, org.apache.spark.sql.types.MetadataBuilder p4 ) [abstract]  :  org.apache.spark.sql.types.DataType
DriverQuirks.getJDBCType ( org.apache.spark.sql.types.DataType p1 ) [abstract]  :  scala.Tuple2<String,scala.Option<Object>>

spark-sql_2.10-1.3.0.jar, Encoder<T>.class
package org.apache.spark.sql.columnar.compression
Encoder<T>.gatherCompressibilityStats ( org.apache.spark.sql.Row p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, EvaluatePython.class
package org.apache.spark.sql.execution
EvaluatePython.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
EvaluatePython.rowToArray ( org.apache.spark.sql.Row p1, scala.collection.Seq<org.apache.spark.sql.types.DataType> p2 ) [static]  :  Object[ ]

spark-sql_2.10-1.3.0.jar, Except.class
package org.apache.spark.sql.execution
Except.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Except.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode

spark-sql_2.10-1.3.0.jar, Exchange.class
package org.apache.spark.sql.execution
Exchange.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Exchange.children ( )  :  scala.collection.immutable.List<SparkPlan>
Exchange.Exchange..bypassMergeThreshold ( )  :  int
Exchange.sortBasedShuffleOn ( )  :  boolean

spark-sql_2.10-1.3.0.jar, ExecutedCommand.class
package org.apache.spark.sql.execution
ExecutedCommand.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, Expand.class
package org.apache.spark.sql.execution
Expand.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Expand.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, ExternalSort.class
package org.apache.spark.sql.execution
ExternalSort.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
ExternalSort.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, Filter.class
package org.apache.spark.sql.execution
Filter.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Filter.children ( )  :  scala.collection.immutable.List<SparkPlan>
Filter.conditionEvaluator ( )  :  scala.Function1<org.apache.spark.sql.Row,Object>

spark-sql_2.10-1.3.0.jar, Generate.class
package org.apache.spark.sql.execution
Generate.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Generate.children ( )  :  scala.collection.immutable.List<SparkPlan>
Generate.copy ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, SparkPlan child )  :  Generate
Generate.Generate ( org.apache.spark.sql.catalyst.expressions.Generator generator, boolean join, boolean outer, SparkPlan child )
Generate.generatorOutput ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>

spark-sql_2.10-1.3.0.jar, GeneratedAggregate.class
package org.apache.spark.sql.execution
GeneratedAggregate.aggregateExpressions ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression>
GeneratedAggregate.canEqual ( Object p1 )  :  boolean
GeneratedAggregate.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
GeneratedAggregate.child ( )  :  SparkPlan
GeneratedAggregate.children ( )  :  scala.collection.immutable.List<SparkPlan>
GeneratedAggregate.children ( )  :  scala.collection.Seq
GeneratedAggregate.copy ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, SparkPlan child )  :  GeneratedAggregate
GeneratedAggregate.curried ( ) [static]  :  scala.Function1<Object,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression>,scala.Function1<SparkPlan,GeneratedAggregate>>>>
GeneratedAggregate.equals ( Object p1 )  :  boolean
GeneratedAggregate.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
GeneratedAggregate.GeneratedAggregate ( boolean partial, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExpressions, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> aggregateExpressions, SparkPlan child )
GeneratedAggregate.groupingExpressions ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
GeneratedAggregate.hashCode ( )  :  int
GeneratedAggregate.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
GeneratedAggregate.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
GeneratedAggregate.partial ( )  :  boolean
GeneratedAggregate.productArity ( )  :  int
GeneratedAggregate.productElement ( int p1 )  :  Object
GeneratedAggregate.productIterator ( )  :  scala.collection.Iterator<Object>
GeneratedAggregate.productPrefix ( )  :  String
GeneratedAggregate.requiredChildDistribution ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.plans.physical.Distribution>
GeneratedAggregate.tupled ( ) [static]  :  scala.Function1<scala.Tuple4<Object,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression>,SparkPlan>,GeneratedAggregate>

spark-sql_2.10-1.3.0.jar, GenericColumnAccessor.class
package org.apache.spark.sql.columnar
GenericColumnAccessor.GenericColumnAccessor ( java.nio.ByteBuffer buffer )

spark-sql_2.10-1.3.0.jar, GenericColumnBuilder.class
package org.apache.spark.sql.columnar
GenericColumnBuilder.GenericColumnBuilder ( )

spark-sql_2.10-1.3.0.jar, GenericColumnStats.class
package org.apache.spark.sql.columnar
GenericColumnStats.GenericColumnStats ( )

spark-sql_2.10-1.3.0.jar, GroupedData.class
package org.apache.spark.sql
GroupedData.GroupedData ( DataFrame df, scala.collection.Seq<catalyst.expressions.Expression> groupingExprs )

spark-sql_2.10-1.3.0.jar, HashedRelation.class
package org.apache.spark.sql.execution.joins
HashedRelation.get ( org.apache.spark.sql.Row p1 ) [abstract]  :  org.apache.spark.util.collection.CompactBuffer<org.apache.spark.sql.Row>

spark-sql_2.10-1.3.0.jar, HashJoin.class
package org.apache.spark.sql.execution.joins
HashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.Row> p1, HashedRelation p2 ) [abstract]  :  scala.collection.Iterator<org.apache.spark.sql.Row>
HashJoin.streamSideKeyGenerator ( ) [abstract]  :  scala.Function0<org.apache.spark.sql.catalyst.expressions.package.MutableProjection>

spark-sql_2.10-1.3.0.jar, HashOuterJoin.class
package org.apache.spark.sql.execution.joins
HashOuterJoin.canEqual ( Object p1 )  :  boolean
HashOuterJoin.children ( )  :  scala.collection.Seq<org.apache.spark.sql.execution.SparkPlan>
HashOuterJoin.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.catalyst.plans.JoinType joinType, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )  :  HashOuterJoin
HashOuterJoin.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<org.apache.spark.sql.catalyst.plans.JoinType,scala.Function1<scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>,scala.Function1<org.apache.spark.sql.execution.SparkPlan,scala.Function1<org.apache.spark.sql.execution.SparkPlan,HashOuterJoin>>>>>>
HashOuterJoin.equals ( Object p1 )  :  boolean
HashOuterJoin.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
HashOuterJoin.hashCode ( )  :  int
HashOuterJoin.HashOuterJoin ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.catalyst.plans.JoinType joinType, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> condition, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )
HashOuterJoin.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
HashOuterJoin.HashOuterJoin..buildHashTable ( scala.collection.Iterator<org.apache.spark.sql.Row> iter, org.apache.spark.sql.catalyst.expressions.package.Projection keyGenerator )  :  java.util.HashMap<org.apache.spark.sql.Row,org.apache.spark.util.collection.CompactBuffer<org.apache.spark.sql.Row>>
HashOuterJoin.HashOuterJoin..DUMMY_LIST ( )  :  scala.collection.Seq<org.apache.spark.sql.Row>
HashOuterJoin.HashOuterJoin..EMPTY_LIST ( )  :  scala.collection.Seq<org.apache.spark.sql.Row>
HashOuterJoin.HashOuterJoin..fullOuterIterator ( org.apache.spark.sql.Row key, scala.collection.Iterable<org.apache.spark.sql.Row> leftIter, scala.collection.Iterable<org.apache.spark.sql.Row> rightIter, org.apache.spark.sql.catalyst.expressions.JoinedRow joinedRow )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
HashOuterJoin.HashOuterJoin..leftNullRow ( )  :  org.apache.spark.sql.catalyst.expressions.GenericRow
HashOuterJoin.HashOuterJoin..leftOuterIterator ( org.apache.spark.sql.Row key, org.apache.spark.sql.catalyst.expressions.JoinedRow joinedRow, scala.collection.Iterable<org.apache.spark.sql.Row> rightIter )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
HashOuterJoin.HashOuterJoin..rightNullRow ( )  :  org.apache.spark.sql.catalyst.expressions.GenericRow
HashOuterJoin.HashOuterJoin..rightOuterIterator ( org.apache.spark.sql.Row key, scala.collection.Iterable<org.apache.spark.sql.Row> leftIter, org.apache.spark.sql.catalyst.expressions.JoinedRow joinedRow )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
HashOuterJoin.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
HashOuterJoin.productArity ( )  :  int
HashOuterJoin.productElement ( int p1 )  :  Object
HashOuterJoin.productIterator ( )  :  scala.collection.Iterator<Object>
HashOuterJoin.productPrefix ( )  :  String
HashOuterJoin.requiredChildDistribution ( )  :  scala.collection.immutable.List<org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution>
HashOuterJoin.requiredChildDistribution ( )  :  scala.collection.Seq
HashOuterJoin.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
HashOuterJoin.tupled ( ) [static]  :  scala.Function1<scala.Tuple6<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,org.apache.spark.sql.catalyst.plans.JoinType,scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>,org.apache.spark.sql.execution.SparkPlan,org.apache.spark.sql.execution.SparkPlan>,HashOuterJoin>

spark-sql_2.10-1.3.0.jar, InMemoryColumnarTableScan.class
package org.apache.spark.sql.columnar
InMemoryColumnarTableScan.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, InMemoryRelation.class
package org.apache.spark.sql.columnar
InMemoryRelation.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics )  :  InMemoryRelation
InMemoryRelation.InMemoryRelation ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, boolean useCompression, int batchSize, org.apache.spark.storage.StorageLevel storageLevel, org.apache.spark.sql.execution.SparkPlan child, scala.Option<String> tableName, org.apache.spark.rdd.RDD<CachedBatch> _cachedColumnBuffers, org.apache.spark.sql.catalyst.plans.logical.Statistics _statistics )
InMemoryRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, InsertIntoDataSource.class
package org.apache.spark.sql.sources
InsertIntoDataSource.canEqual ( Object p1 )  :  boolean
InsertIntoDataSource.copy ( LogicalRelation logicalRelation, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query, boolean overwrite )  :  InsertIntoDataSource
InsertIntoDataSource.curried ( ) [static]  :  scala.Function1<LogicalRelation,scala.Function1<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,scala.Function1<Object,InsertIntoDataSource>>>
InsertIntoDataSource.equals ( Object p1 )  :  boolean
InsertIntoDataSource.hashCode ( )  :  int
InsertIntoDataSource.InsertIntoDataSource ( LogicalRelation logicalRelation, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan query, boolean overwrite )
InsertIntoDataSource.logicalRelation ( )  :  LogicalRelation
InsertIntoDataSource.overwrite ( )  :  boolean
InsertIntoDataSource.productArity ( )  :  int
InsertIntoDataSource.productElement ( int p1 )  :  Object
InsertIntoDataSource.productIterator ( )  :  scala.collection.Iterator<Object>
InsertIntoDataSource.productPrefix ( )  :  String
InsertIntoDataSource.query ( )  :  org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
InsertIntoDataSource.run ( org.apache.spark.sql.SQLContext sqlContext )  :  scala.collection.Seq<org.apache.spark.sql.Row>
InsertIntoDataSource.tupled ( ) [static]  :  scala.Function1<scala.Tuple3<LogicalRelation,org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,Object>,InsertIntoDataSource>

spark-sql_2.10-1.3.0.jar, InsertIntoParquetTable.class
package org.apache.spark.sql.parquet
InsertIntoParquetTable.canEqual ( Object p1 )  :  boolean
InsertIntoParquetTable.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
InsertIntoParquetTable.child ( )  :  org.apache.spark.sql.execution.SparkPlan
InsertIntoParquetTable.children ( )  :  scala.collection.immutable.List<org.apache.spark.sql.execution.SparkPlan>
InsertIntoParquetTable.children ( )  :  scala.collection.Seq
InsertIntoParquetTable.copy ( ParquetRelation relation, org.apache.spark.sql.execution.SparkPlan child, boolean overwrite )  :  InsertIntoParquetTable
InsertIntoParquetTable.curried ( ) [static]  :  scala.Function1<ParquetRelation,scala.Function1<org.apache.spark.sql.execution.SparkPlan,scala.Function1<Object,InsertIntoParquetTable>>>
InsertIntoParquetTable.equals ( Object p1 )  :  boolean
InsertIntoParquetTable.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
InsertIntoParquetTable.hashCode ( )  :  int
InsertIntoParquetTable.InsertIntoParquetTable ( ParquetRelation relation, org.apache.spark.sql.execution.SparkPlan child, boolean overwrite )
InsertIntoParquetTable.newJobContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.JobID jobId )  :  org.apache.hadoop.mapreduce.JobContext
InsertIntoParquetTable.newTaskAttemptContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.TaskAttemptID attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptContext
InsertIntoParquetTable.newTaskAttemptID ( String jtIdentifier, int jobId, boolean isMap, int taskId, int attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptID
InsertIntoParquetTable.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
InsertIntoParquetTable.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
InsertIntoParquetTable.overwrite ( )  :  boolean
InsertIntoParquetTable.productArity ( )  :  int
InsertIntoParquetTable.productElement ( int p1 )  :  Object
InsertIntoParquetTable.productIterator ( )  :  scala.collection.Iterator<Object>
InsertIntoParquetTable.productPrefix ( )  :  String
InsertIntoParquetTable.relation ( )  :  ParquetRelation
InsertIntoParquetTable.tupled ( ) [static]  :  scala.Function1<scala.Tuple3<ParquetRelation,org.apache.spark.sql.execution.SparkPlan,Object>,InsertIntoParquetTable>

spark-sql_2.10-1.3.0.jar, IntColumnStats.class
package org.apache.spark.sql.columnar
IntColumnStats.collectedStatistics ( )  :  org.apache.spark.sql.Row
IntColumnStats.gatherStats ( org.apache.spark.sql.Row row, int ordinal )  :  void

spark-sql_2.10-1.3.0.jar, Intersect.class
package org.apache.spark.sql.execution
Intersect.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Intersect.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode

spark-sql_2.10-1.3.0.jar, JDBCPartition.class
package org.apache.spark.sql.jdbc
JDBCPartition.canEqual ( Object p1 )  :  boolean
JDBCPartition.copy ( String whereClause, int idx )  :  JDBCPartition
JDBCPartition.curried ( ) [static]  :  scala.Function1<String,scala.Function1<Object,JDBCPartition>>
JDBCPartition.equals ( Object p1 )  :  boolean
JDBCPartition.hashCode ( )  :  int
JDBCPartition.idx ( )  :  int
JDBCPartition.index ( )  :  int
JDBCPartition.JDBCPartition ( String whereClause, int idx )
JDBCPartition.productArity ( )  :  int
JDBCPartition.productElement ( int p1 )  :  Object
JDBCPartition.productIterator ( )  :  scala.collection.Iterator<Object>
JDBCPartition.productPrefix ( )  :  String
JDBCPartition.toString ( )  :  String
JDBCPartition.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,Object>,JDBCPartition>
JDBCPartition.whereClause ( )  :  String
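
Illustrative note (not part of the listing): most members in case-class entries such as JDBCPartition above (canEqual, copy, curried, tupled, equals, hashCode, and the product* methods) are generated by the Scala compiler, so each case class adds or removes a whole family of methods at once. A minimal sketch using a hypothetical JDBCPartitionLike that mirrors the listed JDBCPartition fields:

object CaseClassBoilerplateSketch {
  // Hypothetical stand-in mirroring JDBCPartition(whereClause: String, idx: Int).
  case class JDBCPartitionLike(whereClause: String, idx: Int)

  def main(args: Array[String]): Unit = {
    val p = JDBCPartitionLike("id < 100", 0)
    // copy and the product* members are synthesized by the compiler.
    println(p.copy(idx = 1))
    println(p.productPrefix + p.productIterator.mkString("(", ", ", ")"))
    // tupled and curried live on the synthesized companion object.
    println(JDBCPartitionLike.tupled(("id >= 100", 2)))
    println(JDBCPartitionLike.curried("id >= 200")(3))
  }
}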

spark-sql_2.10-1.3.0.jar, JDBCPartitioningInfo.class
package org.apache.spark.sql.jdbc
JDBCPartitioningInfo.canEqual ( Object p1 )  :  boolean
JDBCPartitioningInfo.column ( )  :  String
JDBCPartitioningInfo.copy ( String column, long lowerBound, long upperBound, int numPartitions )  :  JDBCPartitioningInfo
JDBCPartitioningInfo.curried ( ) [static]  :  scala.Function1<String,scala.Function1<Object,scala.Function1<Object,scala.Function1<Object,JDBCPartitioningInfo>>>>
JDBCPartitioningInfo.equals ( Object p1 )  :  boolean
JDBCPartitioningInfo.hashCode ( )  :  int
JDBCPartitioningInfo.JDBCPartitioningInfo ( String column, long lowerBound, long upperBound, int numPartitions )
JDBCPartitioningInfo.lowerBound ( )  :  long
JDBCPartitioningInfo.numPartitions ( )  :  int
JDBCPartitioningInfo.productArity ( )  :  int
JDBCPartitioningInfo.productElement ( int p1 )  :  Object
JDBCPartitioningInfo.productIterator ( )  :  scala.collection.Iterator<Object>
JDBCPartitioningInfo.productPrefix ( )  :  String
JDBCPartitioningInfo.toString ( )  :  String
JDBCPartitioningInfo.tupled ( ) [static]  :  scala.Function1<scala.Tuple4<String,Object,Object,Object>,JDBCPartitioningInfo>
JDBCPartitioningInfo.upperBound ( )  :  long

spark-sql_2.10-1.3.0.jar, JDBCRDD.class
package org.apache.spark.sql.jdbc
JDBCRDD.BinaryConversion ( )  :  JDBCRDD.BinaryConversion.
JDBCRDD.BinaryLongConversion ( )  :  JDBCRDD.BinaryLongConversion.
JDBCRDD.BooleanConversion ( )  :  JDBCRDD.BooleanConversion.
JDBCRDD.compute ( org.apache.spark.Partition thePart, org.apache.spark.TaskContext context )  :  Object
JDBCRDD.DateConversion ( )  :  JDBCRDD.DateConversion.
JDBCRDD.DecimalConversion ( )  :  JDBCRDD.DecimalConversion.
JDBCRDD.DoubleConversion ( )  :  JDBCRDD.DoubleConversion.
JDBCRDD.FloatConversion ( )  :  JDBCRDD.FloatConversion.
JDBCRDD.getConnector ( String p1, String p2 ) [static]  :  scala.Function0<java.sql.Connection>
JDBCRDD.getConversions ( org.apache.spark.sql.types.StructType schema )  :  JDBCRDD.JDBCConversion[ ]
JDBCRDD.getPartitions ( )  :  org.apache.spark.Partition[ ]
JDBCRDD.IntegerConversion ( )  :  JDBCRDD.IntegerConversion.
JDBCRDD.JDBCRDD ( org.apache.spark.SparkContext sc, scala.Function0<java.sql.Connection> getConnection, org.apache.spark.sql.types.StructType schema, String fqTable, String[ ] columns, org.apache.spark.sql.sources.Filter[ ] filters, org.apache.spark.Partition[ ] partitions )
JDBCRDD.LongConversion ( )  :  JDBCRDD.LongConversion.
JDBCRDD.JDBCRDD..columnList ( )  :  String
JDBCRDD.JDBCRDD..compileFilter ( org.apache.spark.sql.sources.Filter f )  :  String
JDBCRDD.JDBCRDD..getWhereClause ( JDBCPartition part )  :  String
JDBCRDD.resolveTable ( String p1, String p2 ) [static]  :  org.apache.spark.sql.types.StructType
JDBCRDD.scanTable ( org.apache.spark.SparkContext p1, org.apache.spark.sql.types.StructType p2, String p3, String p4, String p5, String[ ] p6, org.apache.spark.sql.sources.Filter[ ] p7, org.apache.spark.Partition[ ] p8 ) [static]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
JDBCRDD.StringConversion ( )  :  JDBCRDD.StringConversion.
JDBCRDD.TimestampConversion ( )  :  JDBCRDD.TimestampConversion.

spark-sql_2.10-1.3.0.jar, JDBCRelation.class
package org.apache.spark.sql.jdbc
JDBCRelation.buildScan ( String[ ] requiredColumns, org.apache.spark.sql.sources.Filter[ ] filters )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
JDBCRelation.canEqual ( Object p1 )  :  boolean
JDBCRelation.columnPartition ( JDBCPartitioningInfo p1 ) [static]  :  org.apache.spark.Partition[ ]
JDBCRelation.copy ( String url, String table, org.apache.spark.Partition[ ] parts, org.apache.spark.sql.SQLContext sqlContext )  :  JDBCRelation
JDBCRelation.equals ( Object p1 )  :  boolean
JDBCRelation.hashCode ( )  :  int
JDBCRelation.JDBCRelation ( String url, String table, org.apache.spark.Partition[ ] parts, org.apache.spark.sql.SQLContext sqlContext )
JDBCRelation.parts ( )  :  org.apache.spark.Partition[ ]
JDBCRelation.productArity ( )  :  int
JDBCRelation.productElement ( int p1 )  :  Object
JDBCRelation.productIterator ( )  :  scala.collection.Iterator<Object>
JDBCRelation.productPrefix ( )  :  String
JDBCRelation.schema ( )  :  org.apache.spark.sql.types.StructType
JDBCRelation.sqlContext ( )  :  org.apache.spark.sql.SQLContext
JDBCRelation.table ( )  :  String
JDBCRelation.toString ( )  :  String
JDBCRelation.url ( )  :  String

spark-sql_2.10-1.3.0.jar, JSONRelation.class
package org.apache.spark.sql.json
JSONRelation.buildScan ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
JSONRelation.canEqual ( Object p1 )  :  boolean
JSONRelation.copy ( String path, double samplingRatio, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, org.apache.spark.sql.SQLContext sqlContext )  :  JSONRelation
JSONRelation.equals ( Object other )  :  boolean
JSONRelation.hashCode ( )  :  int
JSONRelation.insert ( org.apache.spark.sql.DataFrame data, boolean overwrite )  :  void
JSONRelation.JSONRelation ( String path, double samplingRatio, scala.Option<org.apache.spark.sql.types.StructType> userSpecifiedSchema, org.apache.spark.sql.SQLContext sqlContext )
JSONRelation.JSONRelation..baseRDD ( )  :  org.apache.spark.rdd.RDD<String>
JSONRelation.path ( )  :  String
JSONRelation.productArity ( )  :  int
JSONRelation.productElement ( int p1 )  :  Object
JSONRelation.productIterator ( )  :  scala.collection.Iterator<Object>
JSONRelation.productPrefix ( )  :  String
JSONRelation.samplingRatio ( )  :  double
JSONRelation.schema ( )  :  org.apache.spark.sql.types.StructType
JSONRelation.sqlContext ( )  :  org.apache.spark.sql.SQLContext
JSONRelation.toString ( )  :  String
JSONRelation.userSpecifiedSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>

spark-sql_2.10-1.3.0.jar, LeftSemiJoinBNL.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinBNL.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
LeftSemiJoinBNL.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode

spark-sql_2.10-1.3.0.jar, LeftSemiJoinHash.class
package org.apache.spark.sql.execution.joins
LeftSemiJoinHash.buildKeys ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
LeftSemiJoinHash.buildPlan ( )  :  org.apache.spark.sql.execution.SparkPlan
LeftSemiJoinHash.buildSide ( )  :  package.BuildRight.
LeftSemiJoinHash.buildSide ( )  :  package.BuildSide
LeftSemiJoinHash.buildSideKeyGenerator ( )  :  org.apache.spark.sql.catalyst.expressions.package.Projection
LeftSemiJoinHash.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )  :  LeftSemiJoinHash
LeftSemiJoinHash.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.Row> streamIter, HashedRelation hashedRelation )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
LeftSemiJoinHash.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
LeftSemiJoinHash.LeftSemiJoinHash ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys, org.apache.spark.sql.execution.SparkPlan left, org.apache.spark.sql.execution.SparkPlan right )
LeftSemiJoinHash.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
LeftSemiJoinHash.streamedKeys ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
LeftSemiJoinHash.streamedPlan ( )  :  org.apache.spark.sql.execution.SparkPlan
LeftSemiJoinHash.streamSideKeyGenerator ( )  :  scala.Function0<org.apache.spark.sql.catalyst.expressions.package.MutableProjection>

spark-sql_2.10-1.3.0.jar, Limit.class
package org.apache.spark.sql.execution
Limit.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Limit.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, LocalTableScan.class
package org.apache.spark.sql.execution
LocalTableScan.children ( )  :  scala.collection.immutable.Nil.

spark-sql_2.10-1.3.0.jar, LogicalLocalTable.class
package org.apache.spark.sql.execution
LogicalLocalTable.children ( )  :  scala.collection.immutable.Nil.
LogicalLocalTable.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, LogicalRDD.class
package org.apache.spark.sql.execution
LogicalRDD.children ( )  :  scala.collection.immutable.Nil.
LogicalRDD.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation

spark-sql_2.10-1.3.0.jar, LogicalRelation.class
package org.apache.spark.sql.sources
LogicalRelation.andThen ( scala.Function1<LogicalRelation,A> p1 ) [static]  :  scala.Function1<BaseRelation,A>
LogicalRelation.attributeMap ( )  :  org.apache.spark.sql.catalyst.expressions.AttributeMap<org.apache.spark.sql.catalyst.expressions.AttributeReference>
LogicalRelation.canEqual ( Object p1 )  :  boolean
LogicalRelation.compose ( scala.Function1<A,BaseRelation> p1 ) [static]  :  scala.Function1<A,LogicalRelation>
LogicalRelation.copy ( BaseRelation relation )  :  LogicalRelation
LogicalRelation.equals ( Object other )  :  boolean
LogicalRelation.hashCode ( )  :  int
LogicalRelation.LogicalRelation ( BaseRelation relation )
LogicalRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation
LogicalRelation.newInstance ( )  :  LogicalRelation
LogicalRelation.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference>
LogicalRelation.productArity ( )  :  int
LogicalRelation.productElement ( int p1 )  :  Object
LogicalRelation.productIterator ( )  :  scala.collection.Iterator<Object>
LogicalRelation.productPrefix ( )  :  String
LogicalRelation.relation ( )  :  BaseRelation
LogicalRelation.sameResult ( org.apache.spark.sql.catalyst.plans.logical.LogicalPlan otherPlan )  :  boolean
LogicalRelation.simpleString ( )  :  String
LogicalRelation.statistics ( )  :  org.apache.spark.sql.catalyst.plans.logical.Statistics

spark-sql_2.10-1.3.0.jar, MySQLQuirks.class
package org.apache.spark.sql.jdbc
MySQLQuirks.MySQLQuirks ( )

spark-sql_2.10-1.3.0.jar, NanoTime.class
package org.apache.spark.sql.parquet.timestamp
NanoTime.getJulianDay ( )  :  int
NanoTime.getTimeOfDayNanos ( )  :  long
NanoTime.NanoTime ( )
NanoTime.set ( int julianDay, long timeOfDayNanos )  :  NanoTime
NanoTime.toBinary ( )  :  parquet.io.api.Binary

spark-sql_2.10-1.3.0.jar, NativeColumnType<T>.class
package org.apache.spark.sql.columnar
NativeColumnType<T>.dataType ( )  :  T
NativeColumnType<T>.NativeColumnType ( T dataType, int typeId, int defaultSize )  :  public

spark-sql_2.10-1.3.0.jar, NoQuirks.class
package org.apache.spark.sql.jdbc
NoQuirks.NoQuirks ( )

spark-sql_2.10-1.3.0.jar, NullableColumnBuilder.class
package org.apache.spark.sql.columnar
NullableColumnBuilder.appendFrom ( org.apache.spark.sql.Row p1, int p2 ) [abstract]  :  void
NullableColumnBuilder.NullableColumnBuilder..super.appendFrom ( org.apache.spark.sql.Row p1, int p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, OutputFaker.class
package org.apache.spark.sql.execution
OutputFaker.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, ParquetRelation.class
package org.apache.spark.sql.parquet
ParquetRelation.attributeMap ( )  :  org.apache.spark.sql.catalyst.expressions.AttributeMap<org.apache.spark.sql.catalyst.expressions.Attribute>
ParquetRelation.canEqual ( Object p1 )  :  boolean
ParquetRelation.conf ( )  :  scala.Option<org.apache.hadoop.conf.Configuration>
ParquetRelation.copy ( String path, scala.Option<org.apache.hadoop.conf.Configuration> conf, org.apache.spark.sql.SQLContext sqlContext, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> partitioningAttributes )  :  ParquetRelation
ParquetRelation.create ( String p1, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan p2, org.apache.hadoop.conf.Configuration p3, org.apache.spark.sql.SQLContext p4 ) [static]  :  ParquetRelation
ParquetRelation.createEmpty ( String p1, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> p2, boolean p3, org.apache.hadoop.conf.Configuration p4, org.apache.spark.sql.SQLContext p5 ) [static]  :  ParquetRelation
ParquetRelation.enableLogForwarding ( ) [static]  :  void
ParquetRelation.equals ( Object other )  :  boolean
ParquetRelation.hashCode ( )  :  int
ParquetRelation.newInstance ( )  :  org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation
ParquetRelation.newInstance ( )  :  ParquetRelation
ParquetRelation.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
ParquetRelation.ParquetRelation ( String path, scala.Option<org.apache.hadoop.conf.Configuration> conf, org.apache.spark.sql.SQLContext sqlContext, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> partitioningAttributes )
ParquetRelation.parquetSchema ( )  :  parquet.schema.MessageType
ParquetRelation.partitioningAttributes ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
ParquetRelation.path ( )  :  String
ParquetRelation.productArity ( )  :  int
ParquetRelation.productElement ( int p1 )  :  Object
ParquetRelation.productIterator ( )  :  scala.collection.Iterator<Object>
ParquetRelation.productPrefix ( )  :  String
ParquetRelation.shortParquetCompressionCodecNames ( ) [static]  :  scala.collection.immutable.Map<String,parquet.hadoop.metadata.CompressionCodecName>
ParquetRelation.sqlContext ( )  :  org.apache.spark.sql.SQLContext
ParquetRelation.statistics ( )  :  org.apache.spark.sql.catalyst.plans.logical.Statistics

spark-sql_2.10-1.3.0.jar, ParquetRelation2.class
package org.apache.spark.sql.parquet
ParquetRelation2.buildScan ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> predicates )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
ParquetRelation2.canEqual ( Object p1 )  :  boolean
ParquetRelation2.copy ( scala.collection.Seq<String> paths, scala.collection.immutable.Map<String,String> parameters, scala.Option<org.apache.spark.sql.types.StructType> maybeSchema, scala.Option<PartitionSpec> maybePartitionSpec, org.apache.spark.sql.SQLContext sqlContext )  :  ParquetRelation2
ParquetRelation2.DEFAULT_PARTITION_NAME ( ) [static]  :  String
ParquetRelation2.equals ( Object other )  :  boolean
ParquetRelation2.hashCode ( )  :  int
ParquetRelation2.insert ( org.apache.spark.sql.DataFrame data, boolean overwrite )  :  void
ParquetRelation2.isPartitioned ( )  :  boolean
ParquetRelation2.isTraceEnabled ( )  :  boolean
ParquetRelation2.log ( )  :  org.slf4j.Logger
ParquetRelation2.logDebug ( scala.Function0<String> msg )  :  void
ParquetRelation2.logDebug ( scala.Function0<String> msg, Throwable throwable )  :  void
ParquetRelation2.logError ( scala.Function0<String> msg )  :  void
ParquetRelation2.logError ( scala.Function0<String> msg, Throwable throwable )  :  void
ParquetRelation2.logInfo ( scala.Function0<String> msg )  :  void
ParquetRelation2.logInfo ( scala.Function0<String> msg, Throwable throwable )  :  void
ParquetRelation2.logName ( )  :  String
ParquetRelation2.logTrace ( scala.Function0<String> msg )  :  void
ParquetRelation2.logTrace ( scala.Function0<String> msg, Throwable throwable )  :  void
ParquetRelation2.logWarning ( scala.Function0<String> msg )  :  void
ParquetRelation2.logWarning ( scala.Function0<String> msg, Throwable throwable )  :  void
ParquetRelation2.maybePartitionSpec ( )  :  scala.Option<PartitionSpec>
ParquetRelation2.maybeSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>
ParquetRelation2.MERGE_SCHEMA ( ) [static]  :  String
ParquetRelation2.newJobContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.JobID jobId )  :  org.apache.hadoop.mapreduce.JobContext
ParquetRelation2.newTaskAttemptContext ( org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.mapreduce.TaskAttemptID attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptContext
ParquetRelation2.newTaskAttemptID ( String jtIdentifier, int jobId, boolean isMap, int taskId, int attemptId )  :  org.apache.hadoop.mapreduce.TaskAttemptID
ParquetRelation2.org.apache.spark.Logging..log_ ( )  :  org.slf4j.Logger
ParquetRelation2.org.apache.spark.Logging..log__.eq ( org.slf4j.Logger p1 )  :  void
ParquetRelation2.ParquetRelation2..defaultPartitionName ( )  :  String
ParquetRelation2.ParquetRelation2..isSummaryFile ( org.apache.hadoop.fs.Path file )  :  boolean
ParquetRelation2.ParquetRelation2..maybeMetastoreSchema ( )  :  scala.Option<org.apache.spark.sql.types.StructType>
ParquetRelation2.ParquetRelation2..metadataCache ( )  :  ParquetRelation2.MetadataCache
ParquetRelation2.ParquetRelation2..shouldMergeSchemas ( )  :  boolean
ParquetRelation2.parameters ( )  :  scala.collection.immutable.Map<String,String>
ParquetRelation2.ParquetRelation2 ( scala.collection.Seq<String> paths, scala.collection.immutable.Map<String,String> parameters, scala.Option<org.apache.spark.sql.types.StructType> maybeSchema, scala.Option<PartitionSpec> maybePartitionSpec, org.apache.spark.sql.SQLContext sqlContext )
ParquetRelation2.partitionColumns ( )  :  org.apache.spark.sql.types.StructType
ParquetRelation2.partitions ( )  :  scala.collection.Seq<Partition>
ParquetRelation2.partitionSpec ( )  :  PartitionSpec
ParquetRelation2.paths ( )  :  scala.collection.Seq<String>
ParquetRelation2.productArity ( )  :  int
ParquetRelation2.productElement ( int p1 )  :  Object
ParquetRelation2.productIterator ( )  :  scala.collection.Iterator<Object>
ParquetRelation2.productPrefix ( )  :  String
ParquetRelation2.schema ( )  :  org.apache.spark.sql.types.StructType
ParquetRelation2.sizeInBytes ( )  :  long
ParquetRelation2.sparkContext ( )  :  org.apache.spark.SparkContext
ParquetRelation2.sqlContext ( )  :  org.apache.spark.sql.SQLContext
ParquetRelation2.toString ( )  :  String

spark-sql_2.10-1.3.0.jar, ParquetTableScan.class
package org.apache.spark.sql.parquet
ParquetTableScan.attributes ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
ParquetTableScan.canEqual ( Object p1 )  :  boolean
ParquetTableScan.children ( )  :  scala.collection.immutable.Nil.
ParquetTableScan.children ( )  :  scala.collection.Seq
ParquetTableScan.columnPruningPred ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>
ParquetTableScan.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes, ParquetRelation relation, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> columnPruningPred )  :  ParquetTableScan
ParquetTableScan.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,scala.Function1<ParquetRelation,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>,ParquetTableScan>>>
ParquetTableScan.equals ( Object p1 )  :  boolean
ParquetTableScan.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
ParquetTableScan.hashCode ( )  :  int
ParquetTableScan.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
ParquetTableScan.ParquetTableScan ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes, ParquetRelation relation, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> columnPruningPred )
ParquetTableScan.productArity ( )  :  int
ParquetTableScan.productElement ( int p1 )  :  Object
ParquetTableScan.productIterator ( )  :  scala.collection.Iterator<Object>
ParquetTableScan.productPrefix ( )  :  String
ParquetTableScan.pruneColumns ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> prunedAttributes )  :  ParquetTableScan
ParquetTableScan.relation ( )  :  ParquetRelation
ParquetTableScan.requestedPartitionOrdinals ( )  :  scala.Tuple2<Object,Object>[ ]
ParquetTableScan.tupled ( ) [static]  :  scala.Function1<scala.Tuple3<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,ParquetRelation,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression>>,ParquetTableScan>

spark-sql_2.10-1.3.0.jar, ParquetTest.class
package org.apache.spark.sql.parquet
ParquetTest.configuration ( ) [abstract]  :  org.apache.hadoop.conf.Configuration
ParquetTest.makeParquetFile ( org.apache.spark.sql.DataFrame p1, java.io.File p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.makeParquetFile ( scala.collection.Seq<T> p1, java.io.File p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.makePartitionDir ( java.io.File p1, String p2, scala.collection.Seq<scala.Tuple2<String,Object>> p3 ) [abstract]  :  java.io.File
ParquetTest.sqlContext ( ) [abstract]  :  org.apache.spark.sql.SQLContext
ParquetTest.withParquetDataFrame ( scala.collection.Seq<T> p1, scala.Function1<org.apache.spark.sql.DataFrame,scala.runtime.BoxedUnit> p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.withParquetFile ( scala.collection.Seq<T> p1, scala.Function1<String,scala.runtime.BoxedUnit> p2, scala.reflect.ClassTag<T> p3, scala.reflect.api.TypeTags.TypeTag<T> p4 ) [abstract]  :  void
ParquetTest.withParquetTable ( scala.collection.Seq<T> p1, String p2, scala.Function0<scala.runtime.BoxedUnit> p3, scala.reflect.ClassTag<T> p4, scala.reflect.api.TypeTags.TypeTag<T> p5 ) [abstract]  :  void
ParquetTest.withSQLConf ( scala.collection.Seq<scala.Tuple2<String,String>> p1, scala.Function0<scala.runtime.BoxedUnit> p2 ) [abstract]  :  void
ParquetTest.withTempDir ( scala.Function1<java.io.File,scala.runtime.BoxedUnit> p1 ) [abstract]  :  void
ParquetTest.withTempPath ( scala.Function1<java.io.File,scala.runtime.BoxedUnit> p1 ) [abstract]  :  void
ParquetTest.withTempTable ( String p1, scala.Function0<scala.runtime.BoxedUnit> p2 ) [abstract]  :  void

spark-sql_2.10-1.3.0.jar, ParquetTypeInfo.class
package org.apache.spark.sql.parquet
ParquetTypeInfo.canEqual ( Object p1 )  :  boolean
ParquetTypeInfo.copy ( parquet.schema.PrimitiveType.PrimitiveTypeName primitiveType, scala.Option<parquet.schema.OriginalType> originalType, scala.Option<parquet.schema.DecimalMetadata> decimalMetadata, scala.Option<Object> length )  :  ParquetTypeInfo
ParquetTypeInfo.curried ( ) [static]  :  scala.Function1<parquet.schema.PrimitiveType.PrimitiveTypeName,scala.Function1<scala.Option<parquet.schema.OriginalType>,scala.Function1<scala.Option<parquet.schema.DecimalMetadata>,scala.Function1<scala.Option<Object>,ParquetTypeInfo>>>>
ParquetTypeInfo.decimalMetadata ( )  :  scala.Option<parquet.schema.DecimalMetadata>
ParquetTypeInfo.equals ( Object p1 )  :  boolean
ParquetTypeInfo.hashCode ( )  :  int
ParquetTypeInfo.length ( )  :  scala.Option<Object>
ParquetTypeInfo.originalType ( )  :  scala.Option<parquet.schema.OriginalType>
ParquetTypeInfo.ParquetTypeInfo ( parquet.schema.PrimitiveType.PrimitiveTypeName primitiveType, scala.Option<parquet.schema.OriginalType> originalType, scala.Option<parquet.schema.DecimalMetadata> decimalMetadata, scala.Option<Object> length )
ParquetTypeInfo.primitiveType ( )  :  parquet.schema.PrimitiveType.PrimitiveTypeName
ParquetTypeInfo.productArity ( )  :  int
ParquetTypeInfo.productElement ( int p1 )  :  Object
ParquetTypeInfo.productIterator ( )  :  scala.collection.Iterator<Object>
ParquetTypeInfo.productPrefix ( )  :  String
ParquetTypeInfo.toString ( )  :  String
ParquetTypeInfo.tupled ( ) [static]  :  scala.Function1<scala.Tuple4<parquet.schema.PrimitiveType.PrimitiveTypeName,scala.Option<parquet.schema.OriginalType>,scala.Option<parquet.schema.DecimalMetadata>,scala.Option<Object>>,ParquetTypeInfo>

spark-sql_2.10-1.3.0.jar, Partition.class
package org.apache.spark.sql.parquet
Partition.canEqual ( Object p1 )  :  boolean
Partition.copy ( org.apache.spark.sql.Row values, String path )  :  Partition
Partition.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.Row,scala.Function1<String,Partition>>
Partition.equals ( Object p1 )  :  boolean
Partition.hashCode ( )  :  int
Partition.Partition ( org.apache.spark.sql.Row values, String path )
Partition.path ( )  :  String
Partition.productArity ( )  :  int
Partition.productElement ( int p1 )  :  Object
Partition.productIterator ( )  :  scala.collection.Iterator<Object>
Partition.productPrefix ( )  :  String
Partition.toString ( )  :  String
Partition.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.Row,String>,Partition>
Partition.values ( )  :  org.apache.spark.sql.Row

spark-sql_2.10-1.3.0.jar, PartitionSpec.class
package org.apache.spark.sql.parquet
PartitionSpec.canEqual ( Object p1 )  :  boolean
PartitionSpec.copy ( org.apache.spark.sql.types.StructType partitionColumns, scala.collection.Seq<Partition> partitions )  :  PartitionSpec
PartitionSpec.curried ( ) [static]  :  scala.Function1<org.apache.spark.sql.types.StructType,scala.Function1<scala.collection.Seq<Partition>,PartitionSpec>>
PartitionSpec.equals ( Object p1 )  :  boolean
PartitionSpec.hashCode ( )  :  int
PartitionSpec.partitionColumns ( )  :  org.apache.spark.sql.types.StructType
PartitionSpec.partitions ( )  :  scala.collection.Seq<Partition>
PartitionSpec.PartitionSpec ( org.apache.spark.sql.types.StructType partitionColumns, scala.collection.Seq<Partition> partitions )
PartitionSpec.productArity ( )  :  int
PartitionSpec.productElement ( int p1 )  :  Object
PartitionSpec.productIterator ( )  :  scala.collection.Iterator<Object>
PartitionSpec.productPrefix ( )  :  String
PartitionSpec.toString ( )  :  String
PartitionSpec.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<org.apache.spark.sql.types.StructType,scala.collection.Seq<Partition>>,PartitionSpec>

spark-sql_2.10-1.3.0.jar, PhysicalRDD.class
package org.apache.spark.sql.execution
PhysicalRDD.children ( )  :  scala.collection.immutable.Nil.
PhysicalRDD.copy ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> rdd )  :  PhysicalRDD
PhysicalRDD.curried ( ) [static]  :  scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,scala.Function1<org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>,PhysicalRDD>>
PhysicalRDD.PhysicalRDD ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output, org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> rdd )
PhysicalRDD.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>>,PhysicalRDD>

spark-sql_2.10-1.3.0.jar, PostgresQuirks.class
package org.apache.spark.sql.jdbc
PostgresQuirks.PostgresQuirks ( )

spark-sql_2.10-1.3.0.jar, PreWriteCheck.class
package org.apache.spark.sql.sources
PreWriteCheck.andThen ( scala.Function1<scala.runtime.BoxedUnit,A> g )  :  scala.Function1<org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,A>
PreWriteCheck.andThen.mcDD.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcDF.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcDI.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcDJ.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcFD.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcFF.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcFI.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcFJ.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcID.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcIF.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcII.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcIJ.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcJD.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcJF.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcJI.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcJJ.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcVD.sp ( scala.Function1<scala.runtime.BoxedUnit,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcVF.sp ( scala.Function1<scala.runtime.BoxedUnit,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcVI.sp ( scala.Function1<scala.runtime.BoxedUnit,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcVJ.sp ( scala.Function1<scala.runtime.BoxedUnit,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcZD.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcZF.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcZI.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.andThen.mcZJ.sp ( scala.Function1<Object,A> g )  :  scala.Function1<Object,A>
PreWriteCheck.apply ( Object v1 )  :  Object
PreWriteCheck.apply ( org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan )  :  void
PreWriteCheck.apply.mcDD.sp ( double v1 )  :  double
PreWriteCheck.apply.mcDF.sp ( float v1 )  :  double
PreWriteCheck.apply.mcDI.sp ( int v1 )  :  double
PreWriteCheck.apply.mcDJ.sp ( long v1 )  :  double
PreWriteCheck.apply.mcFD.sp ( double v1 )  :  float
PreWriteCheck.apply.mcFF.sp ( float v1 )  :  float
PreWriteCheck.apply.mcFI.sp ( int v1 )  :  float
PreWriteCheck.apply.mcFJ.sp ( long v1 )  :  float
PreWriteCheck.apply.mcID.sp ( double v1 )  :  int
PreWriteCheck.apply.mcIF.sp ( float v1 )  :  int
PreWriteCheck.apply.mcII.sp ( int v1 )  :  int
PreWriteCheck.apply.mcIJ.sp ( long v1 )  :  int
PreWriteCheck.apply.mcJD.sp ( double v1 )  :  long
PreWriteCheck.apply.mcJF.sp ( float v1 )  :  long
PreWriteCheck.apply.mcJI.sp ( int v1 )  :  long
PreWriteCheck.apply.mcJJ.sp ( long v1 )  :  long
PreWriteCheck.apply.mcVD.sp ( double v1 )  :  void
PreWriteCheck.apply.mcVF.sp ( float v1 )  :  void
PreWriteCheck.apply.mcVI.sp ( int v1 )  :  void
PreWriteCheck.apply.mcVJ.sp ( long v1 )  :  void
PreWriteCheck.apply.mcZD.sp ( double v1 )  :  boolean
PreWriteCheck.apply.mcZF.sp ( float v1 )  :  boolean
PreWriteCheck.apply.mcZI.sp ( int v1 )  :  boolean
PreWriteCheck.apply.mcZJ.sp ( long v1 )  :  boolean
PreWriteCheck.canEqual ( Object p1 )  :  boolean
PreWriteCheck.catalog ( )  :  org.apache.spark.sql.catalyst.analysis.Catalog
PreWriteCheck.compose ( scala.Function1<A,org.apache.spark.sql.catalyst.plans.logical.LogicalPlan> g )  :  scala.Function1<A,scala.runtime.BoxedUnit>
PreWriteCheck.compose.mcDD.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcDF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcDI.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcDJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcFD.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcFF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcFI.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcFJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcID.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcIF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcII.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcIJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcJD.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcJF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcJI.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcJJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcVD.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,scala.runtime.BoxedUnit>
PreWriteCheck.compose.mcVF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,scala.runtime.BoxedUnit>
PreWriteCheck.compose.mcVI.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,scala.runtime.BoxedUnit>
PreWriteCheck.compose.mcVJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,scala.runtime.BoxedUnit>
PreWriteCheck.compose.mcZD.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcZF.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcZI.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.compose.mcZJ.sp ( scala.Function1<A,Object> g )  :  scala.Function1<A,Object>
PreWriteCheck.copy ( org.apache.spark.sql.catalyst.analysis.Catalog catalog )  :  PreWriteCheck
PreWriteCheck.equals ( Object p1 )  :  boolean
PreWriteCheck.failAnalysis ( String msg )  :  scala.runtime.Nothing.
PreWriteCheck.hashCode ( )  :  int
PreWriteCheck.PreWriteCheck ( org.apache.spark.sql.catalyst.analysis.Catalog catalog )
PreWriteCheck.productArity ( )  :  int
PreWriteCheck.productElement ( int p1 )  :  Object
PreWriteCheck.productIterator ( )  :  scala.collection.Iterator<Object>
PreWriteCheck.productPrefix ( )  :  String
PreWriteCheck.toString ( )  :  String
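
Nearly all of the entries above are compiler-generated Function1 specialization bridges (the .mcXX.sp variants); the usable surface is just the constructor, apply(plan), and the catalog accessor. A hedged sketch of how 1.3.0 client code could have invoked this check directly, assuming a Catalog and a LogicalPlan are already available from the surrounding code:

    import org.apache.spark.sql.catalyst.analysis.Catalog
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.sources.PreWriteCheck

    def checkBeforeWrite(catalog: Catalog, plan: LogicalPlan): Unit = {
      val check = PreWriteCheck(catalog)   // constructor from the listing
      check(plan)                          // apply(plan): void; fails analysis on an invalid write
    }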

spark-sql_2.10-1.3.0.jar, Project.class
package org.apache.spark.sql.execution
Project.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Project.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, PythonUDF.class
package org.apache.spark.sql.execution
PythonUDF.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )  :  PythonUDF
PythonUDF.eval ( org.apache.spark.sql.Row input )  :  Object
PythonUDF.eval ( org.apache.spark.sql.Row input )  :  scala.runtime.Nothing.
PythonUDF.PythonUDF ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, org.apache.spark.sql.types.DataType dataType, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> children )

spark-sql_2.10-1.3.0.jar, RefreshTable.class
package org.apache.spark.sql.sources
RefreshTable.canEqual ( Object p1 )  :  boolean
RefreshTable.copy ( String databaseName, String tableName )  :  RefreshTable
RefreshTable.curried ( ) [static]  :  scala.Function1<String,scala.Function1<String,RefreshTable>>
RefreshTable.databaseName ( )  :  String
RefreshTable.equals ( Object p1 )  :  boolean
RefreshTable.hashCode ( )  :  int
RefreshTable.productArity ( )  :  int
RefreshTable.productElement ( int p1 )  :  Object
RefreshTable.productIterator ( )  :  scala.collection.Iterator<Object>
RefreshTable.productPrefix ( )  :  String
RefreshTable.RefreshTable ( String databaseName, String tableName )
RefreshTable.run ( org.apache.spark.sql.SQLContext sqlContext )  :  scala.collection.Seq<org.apache.spark.sql.Row>
RefreshTable.tableName ( )  :  String
RefreshTable.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<String,String>,RefreshTable>
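
RefreshTable is a small runnable command, so its 1.3.0 usage is a one-liner. The database and table names below are examples; only the constructor and run(sqlContext) from the listing are used.

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.sources.RefreshTable

    def refreshEvents(sqlContext: SQLContext): Unit = {
      RefreshTable("default", "events").run(sqlContext)   // returns Seq[Row], ignored here
    }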

spark-sql_2.10-1.3.0.jar, ResolvedDataSource.class
package org.apache.spark.sql.sources
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, scala.Option<org.apache.spark.sql.types.StructType> p2, String p3, scala.collection.immutable.Map<String,String> p4 ) [static]  :  ResolvedDataSource
ResolvedDataSource.apply ( org.apache.spark.sql.SQLContext p1, String p2, org.apache.spark.sql.SaveMode p3, scala.collection.immutable.Map<String,String> p4, org.apache.spark.sql.DataFrame p5 ) [static]  :  ResolvedDataSource
ResolvedDataSource.canEqual ( Object p1 )  :  boolean
ResolvedDataSource.copy ( Class<?> provider, BaseRelation relation )  :  ResolvedDataSource
ResolvedDataSource.equals ( Object p1 )  :  boolean
ResolvedDataSource.hashCode ( )  :  int
ResolvedDataSource.lookupDataSource ( String p1 ) [static]  :  Class<?>
ResolvedDataSource.productArity ( )  :  int
ResolvedDataSource.productElement ( int p1 )  :  Object
ResolvedDataSource.productIterator ( )  :  scala.collection.Iterator<Object>
ResolvedDataSource.productPrefix ( )  :  String
ResolvedDataSource.provider ( )  :  Class<?>
ResolvedDataSource.relation ( )  :  BaseRelation
ResolvedDataSource.ResolvedDataSource ( Class<?> provider, BaseRelation relation )
ResolvedDataSource.toString ( )  :  String
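
The two static apply overloads and lookupDataSource were the 1.3.0 entry points for resolving a data source by name. A hedged sketch (the provider name and option key are illustrative) that returns the underlying BaseRelation:

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.sources.{BaseRelation, ResolvedDataSource}

    def resolveJson(sqlContext: SQLContext, path: String): BaseRelation = {
      val resolved = ResolvedDataSource(
        sqlContext,
        None,                          // no user-specified schema
        "org.apache.spark.sql.json",   // provider, resolvable via lookupDataSource
        Map("path" -> path))
      resolved.relation                // accessor from the listing
    }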

spark-sql_2.10-1.3.0.jar, RowReadSupport.class
package org.apache.spark.sql.parquet
RowReadSupport.RowReadSupport ( )

spark-sql_2.10-1.3.0.jar, RowRecordMaterializer.class
package org.apache.spark.sql.parquet
RowRecordMaterializer.RowRecordMaterializer ( parquet.schema.MessageType parquetSchema, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes )

spark-sql_2.10-1.3.0.jar, RowWriteSupport.class
package org.apache.spark.sql.parquet
RowWriteSupport.attributes ( )  :  org.apache.spark.sql.catalyst.expressions.Attribute[ ]
RowWriteSupport.attributes_.eq ( org.apache.spark.sql.catalyst.expressions.Attribute[ ] p1 )  :  void
RowWriteSupport.getSchema ( org.apache.hadoop.conf.Configuration p1 ) [static]  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
RowWriteSupport.init ( org.apache.hadoop.conf.Configuration configuration )  :  parquet.hadoop.api.WriteSupport.WriteContext
RowWriteSupport.isTraceEnabled ( )  :  boolean
RowWriteSupport.log ( )  :  org.slf4j.Logger
RowWriteSupport.logDebug ( scala.Function0<String> msg )  :  void
RowWriteSupport.logDebug ( scala.Function0<String> msg, Throwable throwable )  :  void
RowWriteSupport.logError ( scala.Function0<String> msg )  :  void
RowWriteSupport.logError ( scala.Function0<String> msg, Throwable throwable )  :  void
RowWriteSupport.logInfo ( scala.Function0<String> msg )  :  void
RowWriteSupport.logInfo ( scala.Function0<String> msg, Throwable throwable )  :  void
RowWriteSupport.logName ( )  :  String
RowWriteSupport.logTrace ( scala.Function0<String> msg )  :  void
RowWriteSupport.logTrace ( scala.Function0<String> msg, Throwable throwable )  :  void
RowWriteSupport.logWarning ( scala.Function0<String> msg )  :  void
RowWriteSupport.logWarning ( scala.Function0<String> msg, Throwable throwable )  :  void
RowWriteSupport.org.apache.spark.Logging..log_ ( )  :  org.slf4j.Logger
RowWriteSupport.org.apache.spark.Logging..log__.eq ( org.slf4j.Logger p1 )  :  void
RowWriteSupport.prepareForWrite ( parquet.io.api.RecordConsumer recordConsumer )  :  void
RowWriteSupport.RowWriteSupport ( )
RowWriteSupport.setSchema ( scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> p1, org.apache.hadoop.conf.Configuration p2 ) [static]  :  void
RowWriteSupport.SPARK_ROW_SCHEMA ( ) [static]  :  String
RowWriteSupport.write ( Object p1 )  :  void
RowWriteSupport.write ( org.apache.spark.sql.Row record )  :  void
RowWriteSupport.writeArray ( org.apache.spark.sql.types.ArrayType schema, scala.collection.Seq<Object> array )  :  void
RowWriteSupport.writeDecimal ( org.apache.spark.sql.types.Decimal decimal, int precision )  :  void
RowWriteSupport.writeMap ( org.apache.spark.sql.types.MapType schema, scala.collection.immutable.Map<?,Object> map )  :  void
RowWriteSupport.writePrimitive ( org.apache.spark.sql.types.DataType schema, Object value )  :  void
RowWriteSupport.writer ( )  :  parquet.io.api.RecordConsumer
RowWriteSupport.writer_.eq ( parquet.io.api.RecordConsumer p1 )  :  void
RowWriteSupport.writeStruct ( org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.Row struct )  :  void
RowWriteSupport.writeTimestamp ( java.sql.Timestamp ts )  :  void
RowWriteSupport.writeValue ( org.apache.spark.sql.types.DataType schema, Object value )  :  void
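
Most of the entries above are Logging mix-in methods; the schema helpers are the part client code is likely to have touched. A hedged sketch of configuring the 1.3.0 Parquet write path, assuming the attribute list comes from the plan being written:

    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.sql.catalyst.expressions.Attribute
    import org.apache.spark.sql.parquet.RowWriteSupport

    def configureWrite(attributes: Seq[Attribute], conf: Configuration): Unit = {
      RowWriteSupport.setSchema(attributes, conf)       // static setter from the listing
      val recovered = RowWriteSupport.getSchema(conf)   // reads back via SPARK_ROW_SCHEMA
      require(recovered.length == attributes.length)
    }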

spark-sql_2.10-1.3.0.jar, Sample.class
package org.apache.spark.sql.execution
Sample.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Sample.children ( )  :  scala.collection.immutable.List<SparkPlan>
Sample.copy ( double fraction, boolean withReplacement, long seed, SparkPlan child )  :  Sample
Sample.fraction ( )  :  double
Sample.Sample ( double fraction, boolean withReplacement, long seed, SparkPlan child )

spark-sql_2.10-1.3.0.jar, SetCommand.class
package org.apache.spark.sql.execution
SetCommand.copy ( scala.Option<scala.Tuple2<String,scala.Option<String>>> kv, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output )  :  SetCommand
SetCommand.curried ( ) [static]  :  scala.Function1<scala.Option<scala.Tuple2<String,scala.Option<String>>>,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>,SetCommand>>
SetCommand.SetCommand ( scala.Option<scala.Tuple2<String,scala.Option<String>>> kv, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output )
SetCommand.tupled ( ) [static]  :  scala.Function1<scala.Tuple2<scala.Option<scala.Tuple2<String,scala.Option<String>>>,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>>,SetCommand>

spark-sql_2.10-1.3.0.jar, ShuffledHashJoin.class
package org.apache.spark.sql.execution.joins
ShuffledHashJoin.hashJoin ( scala.collection.Iterator<org.apache.spark.sql.Row> streamIter, HashedRelation hashedRelation )  :  scala.collection.Iterator<org.apache.spark.sql.Row>
ShuffledHashJoin.left ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
ShuffledHashJoin.right ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
ShuffledHashJoin.streamSideKeyGenerator ( )  :  scala.Function0<org.apache.spark.sql.catalyst.expressions.package.MutableProjection>

spark-sql_2.10-1.3.0.jar, Sort.class
package org.apache.spark.sql.execution
Sort.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
Sort.children ( )  :  scala.collection.immutable.List<SparkPlan>

spark-sql_2.10-1.3.0.jar, SparkStrategies.class
package org.apache.spark.sql.execution
SparkStrategies.HashJoin ( )  :  SparkStrategies.HashJoin.
SparkStrategies.ParquetOperations ( )  :  SparkStrategies.ParquetOperations.
SparkStrategies.TakeOrdered ( )  :  SparkStrategies.TakeOrdered.

spark-sql_2.10-1.3.0.jar, SQLConf.class
package org.apache.spark.sql
SQLConf.getConf ( String key )  :  String
SQLConf.getConf ( String key, String defaultValue )  :  String
SQLConf.parquetUseDataSourceApi ( )  :  boolean
SQLConf.setConf ( String key, String value )  :  void
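
Client code that tweaked SQL options through a SQLConf instance relied on these string-keyed accessors. The sketch below is illustrative (the keys are examples, and obtaining a SQLConf instance is out of scope here, since the class is internal at the source level); only the methods listed above are called.

    import org.apache.spark.sql.SQLConf

    def tune(conf: SQLConf): Unit = {
      conf.setConf("spark.sql.shuffle.partitions", "64")                     // setConf(key, value)
      val partitions = conf.getConf("spark.sql.shuffle.partitions", "200")   // getConf with a default
      val newParquetPath = conf.parquetUseDataSourceApi                      // flag consulted by the Parquet planner
      println(s"shuffle partitions = $partitions, parquet data source API = $newParquetPath")
    }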

spark-sql_2.10-1.3.0.jar, SQLContext.class
package org.apache.spark.sql
SQLContext.cacheManager ( )  :  CacheManager
SQLContext.checkAnalysis ( )  :  catalyst.analysis.CheckAnalysis
SQLContext.createDataFrame ( org.apache.spark.api.java.JavaRDD<Row> rowRDD, java.util.List<String> columns )  :  DataFrame
SQLContext.ddlParser ( )  :  sources.DDLParser
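
The removed createDataFrame overload accepted a Java-API RDD of Rows together with plain column names. A hedged sketch of a caller (the column names are invented; the JavaRDD is assumed to be built elsewhere):

    import java.util.Arrays
    import org.apache.spark.api.java.JavaRDD
    import org.apache.spark.sql.{DataFrame, Row, SQLContext}

    def toDataFrame(sqlContext: SQLContext, javaRows: JavaRDD[Row]): DataFrame =
      sqlContext.createDataFrame(javaRows, Arrays.asList("name", "age"))   // overload from the listing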

spark-sql_2.10-1.3.0.jar, TakeOrdered.class
package org.apache.spark.sql.execution
TakeOrdered.canEqual ( Object p1 )  :  boolean
TakeOrdered.child ( )  :  org.apache.spark.sql.catalyst.trees.TreeNode
TakeOrdered.child ( )  :  SparkPlan
TakeOrdered.children ( )  :  scala.collection.immutable.List<SparkPlan>
TakeOrdered.children ( )  :  scala.collection.Seq
TakeOrdered.copy ( int limit, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> sortOrder, SparkPlan child )  :  TakeOrdered
TakeOrdered.curried ( ) [static]  :  scala.Function1<Object,scala.Function1<scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>,scala.Function1<SparkPlan,TakeOrdered>>>
TakeOrdered.equals ( Object p1 )  :  boolean
TakeOrdered.execute ( )  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row>
TakeOrdered.executeCollect ( )  :  org.apache.spark.sql.Row[ ]
TakeOrdered.hashCode ( )  :  int
TakeOrdered.limit ( )  :  int
TakeOrdered.ord ( )  :  org.apache.spark.sql.catalyst.expressions.RowOrdering
TakeOrdered.output ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>
TakeOrdered.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.Partitioning
TakeOrdered.outputPartitioning ( )  :  org.apache.spark.sql.catalyst.plans.physical.SinglePartition.
TakeOrdered.productArity ( )  :  int
TakeOrdered.productElement ( int p1 )  :  Object
TakeOrdered.productIterator ( )  :  scala.collection.Iterator<Object>
TakeOrdered.productPrefix ( )  :  String
TakeOrdered.sortOrder ( )  :  scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>
TakeOrdered.TakeOrdered ( int limit, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> sortOrder, SparkPlan child )
TakeOrdered.tupled ( ) [static]  :  scala.Function1<scala.Tuple3<Object,scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder>,SparkPlan>,TakeOrdered>
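
TakeOrdered was the 1.3.0 physical node for a global "top k by sort order". The sketch below is hypothetical: SortOrder and Ascending are standard catalyst helpers that are not part of this listing, and sorting on the child's first output column is just an example.

    import org.apache.spark.sql.catalyst.expressions.{Ascending, SortOrder}
    import org.apache.spark.sql.execution.{SparkPlan, TakeOrdered}

    def topK(k: Int, child: SparkPlan): TakeOrdered = {
      val order = Seq(SortOrder(child.output.head, Ascending))
      TakeOrdered(k, order, child)   // constructor from the listing; executeCollect() returns Row[]
    }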

spark-sql_2.10-1.3.0.jar, TestGroupWriteSupport.class
package org.apache.spark.sql.parquet
TestGroupWriteSupport.TestGroupWriteSupport ( parquet.schema.MessageType schema )

spark-sql_2.10-1.3.0.jar, UserDefinedFunction.class
package org.apache.spark.sql
UserDefinedFunction.copy ( Object f, types.DataType dataType )  :  UserDefinedFunction
UserDefinedFunction.UserDefinedFunction ( Object f, types.DataType dataType )

spark-sql_2.10-1.3.0.jar, UserDefinedPythonFunction.class
package org.apache.spark.sql
UserDefinedPythonFunction.copy ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )  :  UserDefinedPythonFunction
UserDefinedPythonFunction.UserDefinedPythonFunction ( String name, byte[ ] command, java.util.Map<String,String> envVars, java.util.List<String> pythonIncludes, String pythonExec, java.util.List<org.apache.spark.broadcast.Broadcast<org.apache.spark.api.python.PythonBroadcast>> broadcastVars, org.apache.spark.Accumulator<java.util.List<byte[ ]>> accumulator, types.DataType dataType )


Problems with Data Types, High Severity (104)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql
[+] CachedData (1)
[+] CacheManager (1)
[+] DataFrame (1)

package org.apache.spark.sql.columnar
[+] ColumnBuilder (1)
[+] ColumnStats (2)
[+] InMemoryColumnarTableScan (1)
[+] InMemoryRelation (1)
[+] NullableColumnBuilder (2)
[+] TimestampColumnStats (1)

package org.apache.spark.sql.columnar.compression
[+] Encoder<T> (1)

package org.apache.spark.sql.execution
[+] AddExchange (1)
[+] Aggregate (1)
[+] AggregateEvaluation (1)
[+] BatchPythonEvaluation (1)
[+] CacheTableCommand (1)
[+] DescribeCommand (1)
[+] Distinct (1)
[+] EvaluatePython (1)
[+] Except (1)
[+] Exchange (1)
[+] ExecutedCommand (1)
[+] Expand (1)
[+] ExplainCommand (1)
[+] ExternalSort (1)
[+] Filter (1)
[+] Generate (1)
[+] GeneratedAggregate (1)
[+] Intersect (1)
[+] Limit (1)
[+] LocalTableScan (1)
[+] LogicalLocalTable (1)
[+] LogicalRDD (1)
[+] OutputFaker (1)
[+] PhysicalRDD (1)
[+] Project (1)
[+] PythonUDF (1)
[+] Sample (1)
[+] SetCommand (1)
[+] ShowTablesCommand (1)
[+] Sort (1)
[+] TakeOrdered (1)
[+] UncacheTableCommand (1)
[+] Union (1)

package org.apache.spark.sql.execution.joins
[+] BroadcastHashJoin (1)
[+] BroadcastLeftSemiJoinHash (2)
[+] BroadcastNestedLoopJoin (1)
[+] CartesianProduct (1)
[+] GeneralHashedRelation (1)
[+] HashJoin (2)
[+] HashOuterJoin (1)
[+] LeftSemiJoinBNL (1)
[+] LeftSemiJoinHash (2)
[+] ShuffledHashJoin (1)
[+] UniqueKeyHashedRelation (1)

package org.apache.spark.sql.jdbc
[+] DriverQuirks (1)
[+] JDBCPartition (1)
[+] JDBCPartitioningInfo (1)
[+] JDBCRDD (1)
[+] JDBCRelation (1)
[+] MySQLQuirks (1)
[+] NoQuirks (1)
[+] PostgresQuirks (1)

package org.apache.spark.sql.json
[+] JSONRelation (1)

package org.apache.spark.sql.parquet
[+] AppendingParquetOutputFormat (1)
[+] CatalystArrayContainsNullConverter (1)
[+] CatalystArrayConverter (1)
[+] CatalystConverter (1)
[+] CatalystGroupConverter (1)
[+] CatalystMapConverter (1)
[+] CatalystNativeArrayConverter (1)
[+] CatalystPrimitiveConverter (1)
[+] CatalystPrimitiveRowConverter (1)
[+] CatalystPrimitiveStringConverter (1)
[+] CatalystStructConverter (1)
[+] InsertIntoParquetTable (1)
[+] ParquetRelation (1)
[+] ParquetRelation2 (1)
[+] ParquetTableScan (1)
[+] ParquetTest (1)
[+] ParquetTypeInfo (1)
[+] Partition (1)
[+] PartitionSpec (1)
[+] RowReadSupport (1)
[+] RowRecordMaterializer (1)
[+] RowWriteSupport (1)
[+] TestGroupWriteSupport (1)

package org.apache.spark.sql.parquet.timestamp
[+] NanoTime (1)

package org.apache.spark.sql.sources
[+] CaseInsensitiveMap (1)
[+] CreateTableUsing (1)
[+] CreateTableUsingAsSelect (1)
[+] CreateTempTableUsing (1)
[+] CreateTempTableUsingAsSelect (1)
[+] DDLParser (1)
[+] DescribeCommand (1)
[+] InsertIntoDataSource (1)
[+] LogicalRelation (1)
[+] PreWriteCheck (1)
[+] RefreshTable (1)
[+] ResolvedDataSource (1)


Problems with Data Types, Medium Severity (13)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql.columnar
[+] BinaryColumnAccessor (1)
[+] BinaryColumnBuilder (1)
[+] NativeColumnType<T> (1)

package org.apache.spark.sql.execution
[+] Aggregate.ComputedAggregate. (1)
[+] CacheTableCommand (1)
[+] DescribeCommand (1)
[+] ExplainCommand (1)
[+] RunnableCommand (1)
[+] SetCommand (1)
[+] ShowTablesCommand (1)
[+] UnaryNode (1)
[+] UncacheTableCommand (1)

package org.apache.spark.sql.execution.joins
[+] HashJoin (1)


Problems with Methods, Medium Severity (1)


spark-sql_2.10-1.3.0.jar, SparkPlan
package org.apache.spark.sql.execution
[+] SparkPlan.execute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> (1)
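
The expanded detail behind this entry is not reproduced here, but the affected pattern is easy to picture: any client-side physical operator compiled against 1.3.0 overrides this abstract method. A hypothetical minimal example (a pass-through node) of the kind of subclass this problem concerns:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.expressions.Attribute
    import org.apache.spark.sql.execution.SparkPlan

    // Trivial 1.3.0-style operator that forwards its child's rows unchanged.
    case class PassThrough(child: SparkPlan) extends SparkPlan {
      override def output: Seq[Attribute] = child.output
      override def children: Seq[SparkPlan] = child :: Nil
      override def execute(): RDD[Row] = child.execute()
    }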


Problems with Data Types, Low Severity (36)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql
[+] DataFrame (1)
[+] SQLContext.implicits. (1)

package org.apache.spark.sql.columnar
[+] InMemoryColumnarTableScan (1)
[+] TimestampColumnStats (1)

package org.apache.spark.sql.execution
[+] Aggregate (2)
[+] BatchPythonEvaluation (1)
[+] Except (1)
[+] Exchange (1)
[+] ExecutedCommand (1)
[+] Expand (1)
[+] ExternalSort (1)
[+] Filter (1)
[+] Generate (1)
[+] Intersect (1)
[+] Limit (2)
[+] LocalTableScan (1)
[+] OutputFaker (1)
[+] PhysicalRDD (1)
[+] Project (1)
[+] Sample (1)
[+] Sort (1)
[+] SparkPlan (1)
[+] Union (1)

package org.apache.spark.sql.execution.joins
[+] BroadcastHashJoin (2)
[+] BroadcastLeftSemiJoinHash (1)
[+] BroadcastNestedLoopJoin (1)
[+] CartesianProduct (1)
[+] LeftSemiJoinBNL (1)
[+] LeftSemiJoinHash (3)
[+] ShuffledHashJoin (2)


Problems with Methods, Low Severity (1)


spark-sql_2.10-1.3.0.jar, SparkPlan
package org.apache.spark.sql.execution
[+] SparkPlan.execute ( ) [abstract]  :  org.apache.spark.rdd.RDD<org.apache.spark.sql.Row> (1)


Other Changes in Data Types (14)


spark-sql_2.10-1.3.0.jar
package org.apache.spark.sql.columnar
[+] ColumnBuilder (1)
[+] ColumnStats (2)
[+] NullableColumnBuilder (2)

package org.apache.spark.sql.columnar.compression
[+] Encoder<T> (1)

package org.apache.spark.sql.execution
[+] RunnableCommand (1)
[+] SparkPlan (1)
[+] UnaryNode (1)

package org.apache.spark.sql.execution.joins
[+] HashJoin (5)


Java ARchives (1)


spark-sql_2.10-1.3.0.jar





Generated on Wed Oct 28 11:09:44 2015 for succinct-0.1.2 by Java API Compliance Checker 1.4.1  
A tool for checking backward compatibility of a Java library API