Index
A
- abort() - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Aborts this writer if it has failed.
- abort(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
-
Aborts this writing job because some data writers failed and kept failing when retried, or the Spark job failed for some unknown reason, or StreamingWrite.commit(long, WriterCommitMessage[]) fails.
- abort(Throwable) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
-
Abort all of the writes done by any writers returned by ShuffleMapOutputWriter.getPartitionWriter(int).
- abort(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
Aborts this writing job because some data writers failed and kept failing when retried, or the Spark job failed for some unknown reason, or BatchWrite.onDataWriterCommit(WriterCommitMessage) fails, or BatchWrite.commit(WriterCommitMessage[]) fails.
- abortStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
-
Abort the changes that were staged, both in metadata and from temporary outputs of this table's writers.
- abs() - Method in class org.apache.spark.sql.types.Decimal
- abs(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- abs(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- abs(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- abs(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- abs(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the absolute value of a numeric value.
- abs(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- abs(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- abs(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- abs(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- abs(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
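Of the abs entries above, functions.abs(Column) is the user-facing one. A minimal Java sketch; the session setup and the column name delta are illustrative assumptions, not part of the entries above:

    import static org.apache.spark.sql.functions.abs;
    import static org.apache.spark.sql.functions.col;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder().master("local[*]").getOrCreate();
    Dataset<Row> df = spark.sql("SELECT -3.5 AS delta");  // hypothetical column
    df.select(abs(col("delta"))).show();                  // prints 3.5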
- absent() - Static method in class org.apache.spark.api.java.Optional
- AbsoluteError - Class in org.apache.spark.mllib.tree.loss
-
Class for absolute error loss calculation (for regression).
- AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
- AbstractLauncher<T extends AbstractLauncher<T>> - Class in org.apache.spark.launcher
-
Base class for launcher implementations.
- accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- accept(Path) - Method in class org.apache.spark.ml.image.SamplePathFilter
- accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- ACCEPT_ANY_SCHEMA - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table accepts input of any schema in a write operation.
- acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- AcceptsLatestSeenOffset - Interface in org.apache.spark.sql.connector.read.streaming
-
Indicates that the source accepts the latest seen offset, which requires streaming execution to provide the latest seen offset when restarting the streaming query from checkpoint.
- acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
- accessNonExistentAccumulatorError(long) - Static method in class org.apache.spark.errors.SparkCoreErrors
- accId() - Method in class org.apache.spark.CleanAccum
- accumCleaned(long) - Method in interface org.apache.spark.CleanerListener
- AccumulableInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Information about an AccumulatorV2 modified during a task or stage.
- AccumulableInfo - Class in org.apache.spark.status.api.v1
- accumulableInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- AccumulableInfoSerializer - Class in org.apache.spark.status.protobuf
- AccumulableInfoSerializer() - Constructor for class org.apache.spark.status.protobuf.AccumulableInfoSerializer
- accumulableInfoToJson(AccumulableInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- accumulables() - Method in class org.apache.spark.scheduler.StageInfo
-
Terminal values of accumulables updated during this stage, including all the user-defined accumulators.
- accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
-
Intermediate updates to accumulables during this task.
- accumulablesToJson(Iterable<AccumulableInfo>, JsonGenerator, boolean) - Static method in class org.apache.spark.util.JsonProtocol
- ACCUMULATOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- AccumulatorContext - Class in org.apache.spark.util
-
An internal class used to track accumulators by Spark itself.
- AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
- ACCUMULATORS() - Static method in class org.apache.spark.status.TaskIndexNames
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
- AccumulatorV2<IN, OUT> - Class in org.apache.spark.util
-
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
- AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
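As a hedged illustration of the AccumulatorV2 contract above, the built-in LongAccumulator subclass shows the register/add/value cycle; the application name and values are made up:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.util.LongAccumulator;

    JavaSparkContext jsc = new JavaSparkContext("local[*]", "acc-demo");
    // longAccumulator registers a built-in AccumulatorV2 with the context.
    LongAccumulator counter = jsc.sc().longAccumulator("counter");
    jsc.parallelize(Arrays.asList(1, 2, 3, 4))
       .foreach(x -> counter.add(x));    // add(IN) runs on executors
    System.out.println(counter.value()); // 10, merged back on the driver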
- accumUpdates() - Method in class org.apache.spark.ExceptionFailure
- accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- accumUpdates() - Method in class org.apache.spark.TaskKilled
- accuracy() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns accuracy.
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns accuracy.
- acos(String) - Static method in class org.apache.spark.sql.functions
- acos(Column) - Static method in class org.apache.spark.sql.functions
- acosh(String) - Static method in class org.apache.spark.sql.functions
- acosh(Column) - Static method in class org.apache.spark.sql.functions
- acquire(Map<String, Object>) - Method in interface org.apache.spark.resource.ResourceAllocator
-
Acquire a sequence of resource addresses (for a launched task); these addresses must be available.
- actionNotAllowedOnTableSincePartitionMetadataNotStoredError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- actionNotAllowedOnTableWithFilesourcePartitionManagementDisabledError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ActivationFunction - Interface in org.apache.spark.ml.ann
-
Trait for functions and their derivatives for functional layers.
- active() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the currently active SparkSession, otherwise the default one.
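A short sketch of active() as described above, assuming a session has already been created in this JVM:

    import org.apache.spark.sql.SparkSession;

    SparkSession created = SparkSession.builder().master("local[*]").getOrCreate();
    // active() returns the thread-local session if set, otherwise the default one.
    SparkSession current = SparkSession.active();
    assert created == current;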
- active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns a list of active queries associated with this SQLContext.
- active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- ACTIVE - Enum constant in enum class org.apache.spark.status.api.v1.StageStatus
- ACTIVE - Enum constant in enum class org.apache.spark.streaming.StreamingContextState
-
The context has been started, and has not been stopped.
- ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- activeIterator() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeIterator() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeStages() - Method in class org.apache.spark.status.LiveJob
- activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- activeTasks() - Method in class org.apache.spark.status.LiveJob
- activeTasks() - Method in class org.apache.spark.status.LiveStage
- activeTasksPerExecutor() - Method in class org.apache.spark.status.LiveStage
- add(double) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Adds a new data point to the histogram approximation.
- add(double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
- add(long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
- add(Datum) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Add a single data point to this aggregator.
- add(IN) - Method in class org.apache.spark.util.AccumulatorV2
-
Takes the inputs and accumulates.
- add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(Long) - Method in class org.apache.spark.sql.util.MapperRowCounter
- add(Long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- add(String, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata where the dataType is specified as a String.
- add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata where the dataType is specified as a String.
- add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
- add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
- add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata.
- add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata.
- add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(org.apache.spark.ml.feature.InstanceBlock) - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- add(Term) - Static method in class org.apache.spark.ml.feature.Dot
- add(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- add(Term) - Method in interface org.apache.spark.ml.feature.Term
-
Creates a summation term by concatenation of terms.
- add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Adds the given block matrix other to this block matrix: this + other.
- add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Adds a new document.
- add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Add a new sample to this summarizer, and update the statistical summary.
- add(StructField) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field.
- add(Tuple2<Vector, Object>) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
Add a new training instance to this ExpectationAggregator, update the weights, means and covariances for each distribution, and update the log likelihood.
- add(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
- add(T) - Method in class org.apache.spark.util.CollectionAccumulator
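The StructType.add overloads listed above are designed to chain; a minimal sketch (the field names are made up):

    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.StructType;

    StructType schema = new StructType()
        .add("id", DataTypes.LongType)                    // nullable, no metadata
        .add("name", "string", false)                     // dataType given as a String
        .add("score", DataTypes.DoubleType, true, Metadata.empty());
    System.out.println(schema.treeString());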
- add_months(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
- add_months(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
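A small sketch of the two add_months overloads above; the session setup and date literal are illustrative:

    import static org.apache.spark.sql.functions.add_months;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.lit;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder().master("local[*]").getOrCreate();
    Dataset<Row> df = spark.sql("SELECT DATE'2024-01-31' AS d");
    df.select(add_months(col("d"), 1),        // int numMonths
              add_months(col("d"), lit(1)))   // Column numMonths
      .show();                                // both yield 2024-02-29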
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAddresses(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAddressesBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAddresses(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAttempts(Iterable<? extends StoreTypes.ApplicationAttemptInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAllBlacklistedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addAllBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addAllBytesWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addAllChildClusters(Iterable<? extends StoreTypes.RDDOperationClusterWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addAllChildNodes(Iterable<? extends StoreTypes.RDDOperationNode>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addAllClasspathEntries(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addAllCorruptMergedBlockChunks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addAllDataDistribution(Iterable<? extends StoreTypes.RDDDataDistribution>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addAllDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addAllEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addAllEdges(Iterable<? extends StoreTypes.SparkPlanGraphEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addAllExcludedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addAllExecutorCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addAllExecutorDeserializeCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addAllExecutorDeserializeTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addAllExecutorMetrics(Iterable<? extends StoreTypes.ExecutorMetrics>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addAllExecutorRunTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addAllExecutors(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addAllFailedTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addAllFetchWaitTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addAllGettingResultTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addAllHadoopProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addAllIncomingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addAllInputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addAllInputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addAllJobIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addAllJobTags(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- addAllJvmGcTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addAllKilledTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addAllLocalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- addAllLocalMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addAllLocalMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addAllLocalMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addAllMergedFetchFallbackCount(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addAllMetricsProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addAllOutgoingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addAllOutputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addAllOutputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addAllPartitions(Iterable<? extends StoreTypes.RDDPartitionInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addAllPeakExecutionMemory(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addAllRddIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addAllReadBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addAllReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addAllRecordsRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addAllRecordsWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addAllRemoteBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addAllRemoteBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addAllRemoteBytesReadToDisk(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addAllRemoteMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addAllRemoteMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addAllRemoteMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addAllRemoteMergedReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addAllRemoteReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addAllResourceProfiles(Iterable<? extends StoreTypes.ResourceProfileInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addAllResultSerializationTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addAllResultSize(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addAllSchedulerDelay(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addAllShuffleRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addAllShuffleReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addAllShuffleWrite(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addAllShuffleWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addAllSkippedStages(Iterable<? extends Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addAllSources(Iterable<? extends StoreTypes.SourceProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addAllSparkProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addAllStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addAllStateOperators(Iterable<? extends StoreTypes.StateOperatorProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addAllSucceededTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addAllSystemProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addAllTaskTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addAllTotalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addAllWriteBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addAllWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addAllWriteTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- addAppArgs(String...) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds command line arguments for the application.
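A sketch of the launcher builder flow that addAppArgs participates in; the resource path and class name are placeholders:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    SparkAppHandle handle = new SparkLauncher()
        .setAppResource("/path/to/app.jar")    // placeholder path
        .setMainClass("com.example.MyApp")     // placeholder class
        .setMaster("local[*]")
        .addAppArgs("--input", "/data/in")     // forwarded to the app's main()
        .startApplication();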
- addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
- addArchive(String) - Method in class org.apache.spark.SparkContext
-
:: Experimental :: Add an archive to be downloaded and unpacked with this Spark job on every node.
- addArtifact(byte[], String) - Method in class org.apache.spark.sql.api.SparkSession
-
Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
- addArtifact(byte[], String) - Method in class org.apache.spark.sql.SparkSession
- addArtifact(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Add a single artifact to the current session.
- addArtifact(String) - Method in class org.apache.spark.sql.SparkSession
- addArtifact(String, String) - Method in class org.apache.spark.sql.api.SparkSession
-
Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
- addArtifact(String, String) - Method in class org.apache.spark.sql.SparkSession
- addArtifact(URI) - Method in class org.apache.spark.sql.api.SparkSession
-
Add a single artifact to the current session.
- addArtifact(URI) - Method in class org.apache.spark.sql.SparkSession
- addArtifact(Path, Path, Option<String>, boolean) - Method in class org.apache.spark.sql.artifact.ArtifactManager
-
Add and prepare a staged artifact (i.e. an artifact that has been rebuilt locally from bytes over the wire) for use.
- addArtifacts(URI...) - Method in class org.apache.spark.sql.api.SparkSession
-
Add one or more artifacts to the session.
- addArtifacts(URI...) - Method in class org.apache.spark.sql.SparkSession
- addArtifacts(Seq<URI>) - Method in class org.apache.spark.sql.api.SparkSession
-
Add one or more artifacts to the session.
- addArtifacts(Seq<URI>) - Method in class org.apache.spark.sql.SparkSession
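A hedged sketch of addArtifact for a Spark Connect session; the connect URL and jar path are assumptions, not values from the entries above:

    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .remote("sc://localhost:15002")        // assumed Spark Connect endpoint
        .getOrCreate();
    spark.addArtifact("/path/to/udf-lib.jar"); // placeholder path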
- addAttempts(int, StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(int, StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addBin(double, double, int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Set a particular histogram bin with index.
- addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addBlacklistedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addBytesWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addCatalogInCacheTableAsSelectNotAllowedError(String, SqlBaseParser.CacheTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildNodes(int, StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(int, StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChunk(ShuffleBlockChunkId, RoaringBitmap) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the iterator.next() is invoked and the iterator processes a response of type ShuffleBlockFetcherIterator.PushMergedLocalMetaFetchResult.
- addClasspathEntries(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addColumn(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding an optional column.
- addColumn(String[], DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
- addColumn(String[], DataType, boolean, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
- addColumn(String[], DataType, boolean, String, TableChange.ColumnPosition, ColumnDefaultValue) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
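A minimal sketch constructing one of the addColumn changes above; the field name and comment are illustrative:

    import org.apache.spark.sql.connector.catalog.TableChange;
    import org.apache.spark.sql.types.DataTypes;

    // Describes adding a nullable int column named "points" with a comment.
    TableChange change = TableChange.addColumn(
        new String[] {"points"}, DataTypes.IntegerType, true, "player points");
    // A TableCatalog implementation applies it via alterTable(identifier, change).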
- addColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- addCorruptMergedBlockChunks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addDataDistribution(int, StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(int, StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDirectoryError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(int, StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addExcludedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addExecutorCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addExecutorDeserializeCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addExecutorDeserializeTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorRunTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addExecutors(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addExecutorsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addFailedTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addFetchWaitTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a file to be submitted with the application.
- addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
- addFile(String) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
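A sketch pairing addFile with the SparkFiles accessor that tasks use to resolve the downloaded copy; the path is a placeholder:

    import org.apache.spark.SparkFiles;
    import org.apache.spark.api.java.JavaSparkContext;

    JavaSparkContext jsc = new JavaSparkContext("local[*]", "files-demo");
    jsc.addFile("hdfs://namenode/config/lookup.txt"); // placeholder URI
    // On any node, look the downloaded file up by name:
    String localPath = SparkFiles.get("lookup.txt");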
- addFilesWithAbsolutePathUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- addFilter(ServletContextHandler, String, Map<String, String>) - Static method in class org.apache.spark.ui.JettyUtils
- addGettingResultTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a boolean param with true and false.
- addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a double param with multiple values.
- addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a float param with multiple values.
- addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds an int param with multiple values.
- addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a long param with multiple values.
- addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a param with multiple values (overwrites if the input param exists).
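The addGrid overloads above compose into a cross-product of parameter settings; a minimal sketch around LogisticRegression:

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    LogisticRegression lr = new LogisticRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[] {0.01, 0.1}) // double param, two values
        .addGrid(lr.fitIntercept())                       // boolean param: true and false
        .build();                                         // 2 x 2 = 4 candidate maps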
- addHadoopProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addInputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addInputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJar(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a jar file to be submitted with the application.
- addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
- addJar(String) - Method in class org.apache.spark.SparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJarsToClassPath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
- addJarToClasspath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
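A companion one-liner for addJar (the path is hypothetical; sc is the context from the addFile sketch above); the jar is shipped to executors and appended to their task classpath:

    sc.addJar("/opt/jobs/udfs.jar");   // hdfs://, http(s)://, and local: URIs are also accepted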
- addJobIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addJobTag(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a tag to be assigned to all the jobs started by this thread.
- addJobTag(String) - Method in class org.apache.spark.SparkContext
-
Add a tag to be assigned to all the jobs started by this thread.
- addJobTags(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- addJobTags(Set<String>) - Method in class org.apache.spark.SparkContext
-
Add multiple tags to be assigned to all the jobs started by this thread.
- addJobTagsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
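The job-tag entries pair with tag-based cancellation; a sketch, assuming a Spark 3.5+ JavaSparkContext named sc and a hypothetical tag:

    sc.addJobTag("nightly-etl");            // jobs started by this thread now carry the tag
    // ... run actions ...
    sc.cancelJobsWithTag("nightly-etl");    // cancel all running jobs carrying the tag
    sc.clearJobTags();                      // stop tagging subsequent jobs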
- addJvmGcTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addKey(String) - Method in class org.apache.spark.types.variant.VariantBuilder
- addKilledTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addListener(L) - Method in interface org.apache.spark.util.ListenerBus
-
Add a listener to listen for events.
- addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Adds a listener to be notified of changes to the handle's information.
- addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Register a StreamingQueryListener to receive up-calls for life cycle events of StreamingQuery.
- addLocalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
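For the StreamingQueryManager.addListener entry above, a minimal Java sketch (assumes an active SparkSession named spark):

    import org.apache.spark.sql.streaming.StreamingQueryListener;

    spark.streams().addListener(new StreamingQueryListener() {
      @Override public void onQueryStarted(QueryStartedEvent event) {
        System.out.println("query started: " + event.id());
      }
      @Override public void onQueryProgress(QueryProgressEvent event) {
        System.out.println(event.progress().prettyJson());
      }
      @Override public void onQueryTerminated(QueryTerminatedEvent event) {
        System.out.println("query terminated: " + event.id());
      }
    });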
- addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
-
Add Hadoop configuration specific to a single partition and attempt.
- addLocalDirectoryError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
- addLocalMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addLocalMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addLocalMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addMapOutput(int, MapStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a map output.
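Illustrating the CountMinSketch.addLong entries just above (addString, further down, behaves the same way); the eps/confidence/seed arguments are arbitrary:

    import org.apache.spark.util.sketch.CountMinSketch;

    CountMinSketch cms = CountMinSketch.create(0.001, 0.99, 42);  // eps, confidence, seed
    cms.addLong(7L);                        // increment item 7's count by one
    cms.addLong(7L, 5L);                    // increment item 7's count by 5
    long estimate = cms.estimateCount(7L);  // approximate; may over- but never under-count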
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addMergedFetchFallbackCount(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addMergeResult(int, org.apache.spark.scheduler.MergeStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a merge result.
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
-
Add m2 values to m1.
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
- addNewDefaultColumnToExistingTableNotAllowed(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- addNewFunctionMismatchedWithFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addOutputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addPartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq
- addPartitions(int, StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(int, StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- addPeakExecutionMemory(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addPyFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a Python file / zip / egg to be submitted with the application.
- addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addRddIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addReadBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addRecordsRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addRecordsWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addRemoteBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addRemoteBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addRemoteBytesReadToDisk(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addRemoteMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addRemoteMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addRemoteMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addRemoteMergedReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addRemoteReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- addRequest(TaskResourceRequest) - Method in class org.apache.spark.resource.TaskResourceRequests
-
Add a certain TaskResourceRequest to the request set.
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- address() - Method in class org.apache.spark.BarrierTaskInfo
- address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- ADDRESS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- addresses() - Method in class org.apache.spark.resource.ResourceInformation
- addresses() - Method in class org.apache.spark.resource.ResourceInformationJson
- ADDRESSES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- addResultSerializationTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addResultSize(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable
- addSchedulerDelay(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addShuffleRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addShuffleReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addShuffleWrite(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addShuffleWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with the given priority.
- addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with default priority.
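A sketch of registering a hook from Java; with Scala 2.12+ on the classpath a Java lambda can stand in for the Function0, and the cleanup body here is hypothetical:

    import org.apache.spark.util.ShutdownHookManager;
    import scala.runtime.BoxedUnit;

    Object hookRef = ShutdownHookManager.addShutdownHook(() -> {
      System.out.println("flushing temporary state");  // hypothetical cleanup
      return BoxedUnit.UNIT;
    });
    // The returned reference can later be passed to removeShutdownHook(hookRef).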
- addSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addSources(int, StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(int, StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSparkArg(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a no-value argument to the Spark invocation.
- addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds an argument with a value to the Spark invocation.
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
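Putting the launcher entries together (addFile, addSparkArg, and the SparkAppHandle.Listener from addListener above); all paths and class names are hypothetical, and the IOException from startApplication() is elided:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    SparkAppHandle handle = new SparkLauncher()
        .setAppResource("/opt/jobs/app.jar")            // hypothetical
        .setMainClass("com.example.Main")               // hypothetical
        .addFile("/opt/jobs/lookup.txt")                // submitted with the application
        .addSparkArg("--verbose")                       // no-value argument
        .addSparkArg("--conf", "spark.ui.port=4050")    // argument with a value
        .startApplication();
    handle.addListener(new SparkAppHandle.Listener() {
      @Override public void stateChanged(SparkAppHandle h) {
        System.out.println("state: " + h.getState());
      }
      @Override public void infoChanged(SparkAppHandle h) { /* e.g. app id became available */ }
    });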
- addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Register a listener to receive up-calls from events that happen during execution.
- addSparkProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addStateOperators(int, StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(int, StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Add a StreamingListener object for receiving system events related to streaming.
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Add a StreamingListener object for receiving system events related to streaming.
- addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addSucceededTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addSystemProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addTag(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Add a tag to be assigned to all the operations started by this thread in this session.
- addTag(String) - Method in class org.apache.spark.sql.SparkSession
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.BarrierTaskContext
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
-
Adds a (Java friendly) listener to be executed on task completion.
- addTaskCompletionListener(Function1<TaskContext, U>) - Method in class org.apache.spark.TaskContext
-
Adds a listener in the form of a Scala closure to be executed on task completion.
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.BarrierTaskContext
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail).
- addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail).
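A sketch of both task-listener styles from inside a task (rdd and the per-record work are hypothetical; the casts select the listener-interface overloads over the Scala-closure ones):

    import org.apache.spark.TaskContext;
    import org.apache.spark.util.TaskCompletionListener;
    import org.apache.spark.util.TaskFailureListener;

    rdd.foreachPartition(it -> {
      TaskContext tc = TaskContext.get();
      tc.addTaskCompletionListener((TaskCompletionListener) ctx ->
          System.out.println("task " + ctx.taskAttemptId() + " finished"));
      tc.addTaskFailureListener((TaskFailureListener) (ctx, error) ->
          System.err.println("task failed: " + error));
      while (it.hasNext()) {
        it.next();  // hypothetical per-record work
      }
    });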
- addTaskResourceRequests(SparkConf, TaskResourceRequests) - Static method in class org.apache.spark.resource.ResourceUtils
- addTaskSetManager(Schedulable, Properties) - Method in interface org.apache.spark.scheduler.SchedulableBuilder
- addTaskTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- addTime() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- addTotalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addURL(URL) - Method in class org.apache.spark.util.MutableURLClassLoader
- AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
- AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
- addWriteBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addWriteTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- advisoryPartitionSizeInBytes() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.
- aes_decrypt(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_decrypt(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_decrypt(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_decrypt(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input using AES in mode with padding.
- aes_encrypt(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input using AES in given mode with the specified padding.
- aesCryptoError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- aesModeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- aesUnsupportedAad(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- aesUnsupportedIv(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
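A round-trip sketch of the two-argument forms, which default to GCM mode; the 16-byte key literal and the payload column are illustrative only (assumes a SparkSession named spark):

    import static org.apache.spark.sql.functions.*;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    Dataset<Row> df = spark.sql("SELECT 'secret' AS payload");
    Dataset<Row> roundTrip = df.select(
        aes_decrypt(
            aes_encrypt(col("payload").cast("binary"), lit("0000111122223333")),
            lit("0000111122223333")
        ).cast("string").alias("decrypted"));   // yields "secret" again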
- after(String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnPosition
- AFTSurvivalRegression - Class in org.apache.spark.ml.regression
-
Fit a parametric survival regression model, the accelerated failure time (AFT) model (see "Accelerated failure time model" on Wikipedia), based on the Weibull distribution of the survival time.
- AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
- AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
- AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
-
Model produced by AFTSurvivalRegression.
- AFTSurvivalRegressionParams - Interface in org.apache.spark.ml.regression
-
Params for accelerated failure time (AFT) regression.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
(Java-specific) Compute aggregates by specifying a map from column name to aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- agg(Column, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(Column, Column...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
- agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
- agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- agg(Map<String, String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying the column names and aggregate methods.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
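A sketch of the untyped agg forms on a grouped Dataset (df and its dept/salary/age columns are hypothetical):

    import static org.apache.spark.sql.functions.*;

    Dataset<Row> stats = df.groupBy(col("dept"))
        .agg(avg(col("salary")).alias("avg_salary"),     // Column-based form
             max(col("age")));
    // Java-specific map form: column name -> aggregate method
    Dataset<Row> stats2 = df.groupBy(col("dept"))
        .agg(java.util.Map.of("salary", "avg", "age", "max"));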
- aggregate(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
- aggregate(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
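These two entries are the higher-order array functions, distinct from the RDD aggregate entries that follow; a sketch folding a hypothetical array<int> column xs, where the merge lambda relies on Scala 2.12+ SAM conversion:

    import static org.apache.spark.sql.functions.*;

    Dataset<Row> sums = df.select(
        aggregate(col("xs"), lit(0), (acc, x) -> acc.plus(x)).alias("sum_xs"));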
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
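A per-key maximum via aggregateByKey, as a sketch (pairs is a hypothetical JavaPairRDD<String, Integer>):

    import org.apache.spark.api.java.JavaPairRDD;

    JavaPairRDD<String, Integer> maxPerKey = pairs.aggregateByKey(
        Integer.MIN_VALUE,                 // neutral "zero value"
        (acc, v) -> Math.max(acc, v),      // fold one value into the partition-local accumulator
        (a, b) -> Math.max(a, b));         // merge accumulators across partitions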
- AggregatedDialect - Class in org.apache.spark.sql.jdbc
-
AggregatedDialect can unify multiple dialects into one virtual Dialect.
- AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
- aggregateExpressionRequiredForPivotError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aggregateExpressions() - Method in record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Returns the value of the aggregateExpressions record component.
- AggregateFunc - Interface in org.apache.spark.sql.connector.expressions.aggregate
-
Base class of the Aggregate Functions.
- AggregateFunction<S extends Serializable, R> - Interface in org.apache.spark.sql.connector.catalog.functions
-
Interface for a function that produces a result value by aggregating over multiple input rows.
- aggregateInAggregateFilterError(Expression, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
-
Aggregates values from the neighboring edges and vertices of each vertex.
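A sketch of aggregateMessages adapted from the GraphX pattern of counting and summing the ages of older followers; it assumes graph: Graph[Double, Int] whose vertex attribute is an age:
  import org.apache.spark.graphx.VertexRDD
  val olderFollowers: VertexRDD[(Int, Double)] =
    graph.aggregateMessages[(Int, Double)](
      triplet => {
        // Send a (count, age) message to the destination when the source is older.
        if (triplet.srcAttr > triplet.dstAttr) {
          triplet.sendToDst((1, triplet.srcAttr))
        }
      },
      // Merge messages: add counts and ages.
      (a, b) => (a._1 + b._1, a._2 + b._2))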
- aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomAvgMetric
- aggregateTaskMetrics(long[]) - Method in interface org.apache.spark.sql.connector.metric.CustomMetric
-
Given an array of task metric values, returns the aggregated final metric value.
- aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomSumMetric
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
- AggregatingEdgeContext<VD, ED, A> - Class in org.apache.spark.graphx.impl
- AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
- Aggregation - Record Class in org.apache.spark.sql.connector.expressions.aggregate
-
Aggregation in SQL statement.
- Aggregation(AggregateFunc[], Expression[]) - Constructor for record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Creates an instance of an Aggregation record class.
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVC
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegression
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- aggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
-
Param for suggested depth for treeAggregate (>= 2).
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegression
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- aggregationFunctionAppliedOnNonNumericColumnError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aggregationFunctionAppliedOnNonNumericColumnError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aggregationNotAllowedInMergeCondition(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aggregator() - Method in class org.apache.spark.ShuffleDependency
- Aggregator<K, V, C> - Class in org.apache.spark
-
:: DeveloperApi :: A set of functions used to aggregate data.
- Aggregator<IN, BUF, OUT> - Class in org.apache.spark.sql.expressions
-
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
- Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
- Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
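For orientation, a minimal typed aggregation sketch using org.apache.spark.sql.expressions.Aggregator; the object name and buffer layout are illustrative:
  import org.apache.spark.sql.{Encoder, Encoders}
  import org.apache.spark.sql.expressions.Aggregator
  object Avg extends Aggregator[Double, (Double, Long), Double] {
    def zero: (Double, Long) = (0.0, 0L)
    def reduce(b: (Double, Long), a: Double): (Double, Long) = (b._1 + a, b._2 + 1)
    def merge(b1: (Double, Long), b2: (Double, Long)): (Double, Long) =
      (b1._1 + b2._1, b1._2 + b2._2)
    def finish(r: (Double, Long)): Double = r._1 / r._2
    def bufferEncoder: Encoder[(Double, Long)] =
      Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
    def outputEncoder: Encoder[Double] = Encoders.scalaDouble
  }
  // Applied to a Dataset[Double] as a typed column: ds.select(Avg.toColumn)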
- aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
- Algo - Class in org.apache.spark.mllib.tree.configuration
-
Enum to select the algorithm for the decision tree
- Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
- algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
- alias(String) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with an alias set.
- alias(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- alias(String) - Method in class org.apache.spark.sql.Dataset
- alias(Symbol) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- alias(Symbol) - Method in class org.apache.spark.sql.Dataset
- aliasesNumberNotMatchUDTFOutputError(int, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- aliasNumberNotMatchColumnNumberError(int, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- All - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose all the fields (source, edge, and destination).
- ALL_GATHER() - Static method in class org.apache.spark.RequestMethod
- ALL_REMOVALS_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- ALL_UPDATES_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- allAvailable() - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- allGather(String) - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental :: Blocks until all tasks in the same stage have reached this routine.
- AllJobsCancelled - Class in org.apache.spark.scheduler
- AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled
- allocate(int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Sets the number of histogram bins to use for approximating data.
- allocator() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- AllReceiverIds - Class in org.apache.spark.streaming.scheduler
-
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
- AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds
- allRemovalsTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
-
The set of all static sources.
- allSupportedExecutorResources() - Static method in class org.apache.spark.resource.ResourceProfile
-
Return all supported Spark built-in executor resources; custom resources like GPUs/FPGAs are excluded.
- allUpdatesTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- alpha() - Method in class org.apache.spark.ml.recommendation.ALS
- alpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for the alpha parameter in the implicit preference formulation (nonnegative).
- alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator
- ALS - Class in org.apache.spark.ml.recommendation
-
Alternating Least Squares (ALS) matrix factorization.
- ALS - Class in org.apache.spark.mllib.recommendation
-
Alternating Least Squares matrix factorization.
- ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
- ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
-
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10, lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
- ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
- ALS.InBlock$ - Class in org.apache.spark.ml.recommendation
- ALS.LeastSquaresNESolver - Interface in org.apache.spark.ml.recommendation
-
Trait for least squares solvers applied to the normal equation.
- ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
-
Rating class for better code readability.
- ALS.Rating$ - Class in org.apache.spark.ml.recommendation
- ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation
- ALSModel - Class in org.apache.spark.ml.recommendation
-
Model fitted by ALS.
- ALSModelParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS and ALSModel.
- ALSParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS.
- alterAddColNotSupportDatasourceTableError(Object, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterAddColNotSupportViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterDatabaseLocationUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterNamespace(String[], NamespaceChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- alterNamespace(String[], NamespaceChange...) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Apply a set of metadata changes to a namespace in the catalog.
- alterTable(String, Seq<TableChange>, int) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Alter an existing table.
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- alterTable(Identifier, TableChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- alterTable(Identifier, TableChange...) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Apply a set of changes to a table in the catalog.
- alterTableChangeColumnNotSupportedForColumnTypeError(String, StructField, StructField, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterTableRecoverPartitionsNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterTableSerDePropertiesNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterTableSetSerdeForSpecificPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterTableSetSerdeNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterV2TableSetLocationWithPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- alterView(Identifier, ViewChange...) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Apply changes to a view in the catalog.
- AlwaysFalse - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that always evaluates to false.
- AlwaysFalse - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to false.
- AlwaysFalse() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
- AlwaysFalse() - Constructor for class org.apache.spark.sql.sources.AlwaysFalse
- AlwaysTrue - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that always evaluates to true.
- AlwaysTrue - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to true.
- AlwaysTrue() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
- AlwaysTrue() - Constructor for class org.apache.spark.sql.sources.AlwaysTrue
- am() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
- ambiguousAttributesInSelfJoinError(Seq<AttributeReference>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousColumnOrFieldError(Seq<String>, int) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- ambiguousColumnOrFieldError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- ambiguousColumnOrFieldError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousColumnReferences(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousLateralColumnAliasError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousLateralColumnAliasError(Seq<String>, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousReferenceError(String, Seq<Attribute>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousReferenceToFieldsError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ambiguousRelationAliasNameInNestedCTEError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- amount() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- amount() - Method in class org.apache.spark.resource.ResourceRequest
- amount() - Method in class org.apache.spark.resource.TaskResourceRequest
- AMOUNT() - Static method in class org.apache.spark.resource.ResourceUtils
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- AnalysisException - Exception in org.apache.spark.sql
-
Thrown when a query fails to analyze, usually because the query itself is invalid.
- AnalysisException(String, Map<String, String>) - Constructor for exception org.apache.spark.sql.AnalysisException
- AnalysisException(String, Map<String, String>, QueryContext[], String) - Constructor for exception org.apache.spark.sql.AnalysisException
- AnalysisException(String, Map<String, String>, QueryContext[], Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
- AnalysisException(String, Map<String, String>, Origin) - Constructor for exception org.apache.spark.sql.AnalysisException
- AnalysisException(String, Map<String, String>, Origin, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
- AnalysisException(String, Map<String, String>, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
- analyzeTableNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- analyzeTableNotSupportedOnViewsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- analyzingColumnStatisticsNotSupportedForColumnTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- and(Column) - Method in class org.apache.spark.sql.Column
-
Boolean AND.
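Usage sketch for Column.and; the Dataset and column names are illustrative:
  // Select people who are in school and employed.
  people.filter(people("inSchool").and(people("isEmployed")))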
- And - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that evaluates to true iff both left and right evaluate to true.
- And - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff both left and right evaluate to true.
- And(Predicate, Predicate) - Constructor for class org.apache.spark.sql.connector.expressions.filter.And
- And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
- ANOVATest - Class in org.apache.spark.ml.stat
-
ANOVA Test for continuous data.
- ANOVATest() - Constructor for class org.apache.spark.ml.stat.ANOVATest
- ansiDateTimeError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ansiDateTimeParseError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ansiIllegalArgumentError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
- any(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if at least one value of e is true.
- ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
- any_value(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns some value of e for a group of rows.
- any_value(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns some value of e for a group of rows.
- AnyDataType - Class in org.apache.spark.sql.types
-
An AbstractDataType that matches any concrete data types.
- AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType
- anyNull() - Method in interface org.apache.spark.sql.Row
-
Returns true if there are any NULL values in this row.
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- AnyTimestampType - Class in org.apache.spark.sql.types
- AnyTimestampType() - Constructor for class org.apache.spark.sql.types.AnyTimestampType
- AnyTimestampTypeExpression - Class in org.apache.spark.sql.types
- AnyTimestampTypeExpression() - Constructor for class org.apache.spark.sql.types.AnyTimestampTypeExpression
- ApiHelper - Class in org.apache.spark.ui.jobs
- ApiHelper() - Constructor for class org.apache.spark.ui.jobs.ApiHelper
- ApiRequestContext - Interface in org.apache.spark.status.api.v1
- APP_SPARK_VERSION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- append() - Method in class org.apache.spark.sql.DataFrameWriterV2
-
Append the contents of the data frame to the output table.
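A sketch of DataFrameWriterV2.append, assuming df is a DataFrame and catalog.db.events is a hypothetical existing v2 table:
  // Append df's rows to the existing output table.
  df.writeTo("catalog.db.events").append()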
- Append - Enum constant in enum class org.apache.spark.sql.SaveMode
-
Append mode means that when saving a DataFrame to a data source, if data/table already exists, contents of the DataFrame are expected to be appended to existing data.
- Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be written to the sink.
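A sketch of using OutputMode.Append with a streaming query; streamingDF and the console sink are illustrative:
  import org.apache.spark.sql.streaming.OutputMode
  val query = streamingDF.writeStream
    .outputMode(OutputMode.Append())
    .format("console")
    .start()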
- appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Returns a new vector with 1.0 (bias) appended to the input vector.
- appendBinary(byte[]) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendBoolean(boolean) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendDate(int) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendDayTimeInterval(long, byte, byte) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendDecimal(BigDecimal) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendDouble(double) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendFloat(float) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendList(Object) - Method in interface org.apache.spark.sql.streaming.ListState
-
Append an entire list to the existing value
- appendLong(long) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendNull() - Method in class org.apache.spark.types.variant.VariantBuilder
- appendString(String) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendTimestamp(long) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendTimestampNtz(long) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendValue(S) - Method in interface org.apache.spark.sql.streaming.ListState
-
Append an entry to the list
- appendVariant(Variant) - Method in class org.apache.spark.types.variant.VariantBuilder
- appendYearMonthInterval(long, byte, byte) - Method in class org.apache.spark.types.variant.VariantBuilder
- AppHistoryServerPlugin - Interface in org.apache.spark.status
-
An interface for creating history listeners (to replay event logs) defined in other modules like SQL, and for setting up the plugin's UI to rebuild the history UI.
- appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- appId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
- appId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- appId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- appId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips
- APPLICATION_MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the attempt ID for this run, if the cluster manager supports multiple attempts.
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application's attempt ID associated with the job.
- applicationAttemptId() - Method in class org.apache.spark.SparkContext
- ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
- applicationEndFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- applicationEndToJson(SparkListenerApplicationEnd, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1
- applicationId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get an application ID associated with the job.
- applicationId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application ID associated with the job.
- applicationId() - Method in class org.apache.spark.SparkContext
-
A unique identifier for the Spark application.
- ApplicationInfo - Class in org.apache.spark.status.api.v1
- APPLICATIONS() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- applicationStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- applicationStartToJson(SparkListenerApplicationStart, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ApplicationStatus - Enum Class in org.apache.spark.status.api.v1
- apply() - Static method in class org.apache.spark.ml.TransformEnd
- apply() - Static method in class org.apache.spark.ml.TransformStart
- apply() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
- apply() - Static method in class org.apache.spark.scheduler.local.StopExecutor
- apply() - Static method in class org.apache.spark.sql.jdbc.DatabricksDialect
- apply() - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
- apply() - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
- apply() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
- apply() - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
- apply() - Static method in class org.apache.spark.sql.jdbc.SnowflakeDialect
- apply() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
- apply() - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating an anonymous observation.
- apply() - Static method in class org.apache.spark.sql.scripting.SqlScriptingInterpreter
- apply() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- apply() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi :: Create a new StorageLevel object.
- apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi :: Create a new StorageLevel object without setting useOffHeap.
- apply(byte) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- apply(byte) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- apply(double) - Static method in class org.apache.spark.sql.types.Decimal
- apply(int) - Static method in class org.apache.spark.ErrorMessageFormat
- apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector
- apply(int) - Method in class org.apache.spark.ml.linalg.SparseVector
- apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
-
Gets the value of the ith element.
- apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
- apply(int) - Method in class org.apache.spark.mllib.linalg.SparseVector
- apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Gets the value of the ith element.
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- apply(int) - Static method in class org.apache.spark.rdd.CheckpointState
- apply(int) - Static method in class org.apache.spark.rdd.DeterministicLevel
- apply(int) - Static method in class org.apache.spark.RequestMethod
- apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode
- apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality
- apply(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- apply(int) - Static method in class org.apache.spark.sql.types.Decimal
- apply(int) - Method in class org.apache.spark.sql.types.StructType
- apply(int) - Method in class org.apache.spark.status.RDDPartitionSeq
- apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- apply(int) - Static method in class org.apache.spark.TaskState
- apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix
- apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi :: Create a new StorageLevel object from its integer representation.
- apply(int, int, int, int) - Method in interface org.apache.spark.types.variant.VariantUtil.ArrayHandler
- apply(int, int, int, int, int, int) - Method in interface org.apache.spark.types.variant.VariantUtil.ObjectHandler
- apply(int, Node) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- apply(int, Node) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
- apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
- apply(long) - Static method in class org.apache.spark.sql.types.Decimal
- apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
- apply(long) - Static method in class org.apache.spark.streaming.Minutes
- apply(long) - Static method in class org.apache.spark.streaming.Seconds
- apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
- apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
- apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
- apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
- apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
- apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi :: Read StorageLevel object from ObjectInput stream.
- apply(Object) - Method in class org.apache.spark.sql.Column
-
Extracts a value or values from a complex type.
- apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- apply(String) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a column based on the column name and returns it as a Column.
- apply(String) - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating a named observation.
- apply(String) - Static method in class org.apache.spark.sql.types.Decimal
- apply(String) - Static method in class org.apache.spark.sql.types.StringType
- apply(String) - Method in class org.apache.spark.sql.types.StructType
-
Extracts the StructField with the given name.
- apply(String) - Static method in class org.apache.spark.storage.BlockId
- apply(String, long, Enumeration.Value, ByteBuffer, int, Map<String, Map<String, Object>>) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Alternate factory method that takes a ByteBuffer directly for the data field
- apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
-
Returns a BlockManagerId for the given configuration.
- apply(String, Expression...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a logical transform for applying a named transform.
- apply(String, Seq<Expression>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- apply(String, Option<Object>, Map<String, String>) - Static method in class org.apache.spark.sql.streaming.SinkProgress
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
- apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal
- apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
-
Execute a Pregel-like iterative vertex-parallel abstraction.
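A classic single-source shortest-paths sketch with the Pregel abstraction, mirroring the GraphX programming guide; it assumes a graph with Double edge weights and a chosen sourceId:
  // Initialize: distance 0 at the source, infinity everywhere else.
  val initialGraph = graph.mapVertices((id, _) =>
    if (id == sourceId) 0.0 else Double.PositiveInfinity)
  val sssp = initialGraph.pregel(Double.PositiveInfinity)(
    // Vertex program: keep the smaller distance.
    (id, dist, newDist) => math.min(dist, newDist),
    // Send messages along edges that improve the destination's distance.
    triplet =>
      if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
        Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
      else Iterator.empty,
    // Merge messages: take the minimum.
    (a, b) => math.min(a, b))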
- apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
- apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Gets the value of the input param or its default value if it does not exist.
- apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Constructs the FamilyAndLink object from a parameter map
- apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
- apply(BinaryConfusionMatrix) - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryClassificationMetricComputer
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall
- apply(Predict) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- apply(Predict) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
- apply(Split) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- apply(Split) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
- apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of vertices and edges with attributes.
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
- apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Creates a Column for this UDAF using given Columns as input arguments.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
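A sketch of defining a UDF and invoking it via apply; the function body and column name are illustrative:
  import org.apache.spark.sql.functions.{udf, col}
  val plusOne = udf((x: Int) => x + 1)
  // UserDefinedFunction.apply(Column*) builds the invocation expression.
  df.select(plusOne(col("value")))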
- apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset
- apply(Row) - Method in class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
- apply(SparkSessionExtensions) - Method in class org.apache.spark.sql.ml.InternalFunctionRegistration
- apply(SparkSession, LogicalPlan, Encoder<T>) - Static method in class org.apache.spark.sql.Dataset
- apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
-
Construct an ArrayType object with the given element type.
- apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
-
Construct a MapType object with the given key type and value type.
- apply(ThreadStackTrace[]) - Static method in class org.apache.spark.ui.flamegraph.FlamegraphNode
- apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values passed as variable-length arguments.
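Usage sketch of StatCounter.apply with variable-length arguments (the values are illustrative):
  import org.apache.spark.util.StatCounter
  val stats = StatCounter(1.0, 2.0, 3.0, 4.0)
  println(s"mean=${stats.mean}, stdev=${stats.stdev}, max=${stats.max}")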
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Creates a Column for this UDAF using given Columns as input arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
- apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
-
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
- apply(IterableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values.
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
- apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal
- apply(T1) - Static method in class org.apache.spark.CleanAccum
- apply(T1) - Static method in class org.apache.spark.CleanBroadcast
- apply(T1) - Static method in class org.apache.spark.CleanCheckpoint
- apply(T1) - Static method in class org.apache.spark.CleanRDD
- apply(T1) - Static method in class org.apache.spark.CleanShuffle
- apply(T1) - Static method in class org.apache.spark.CleanSparkListener
- apply(T1) - Static method in class org.apache.spark.ErrorSubInfo
- apply(T1) - Static method in class org.apache.spark.ExecutorRegistered
- apply(T1) - Static method in class org.apache.spark.ExecutorRemoved
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceEnd
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceStart
- apply(T1) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNotNull
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNull
- apply(T1) - Static method in class org.apache.spark.sql.sources.Not
- apply(T1) - Static method in class org.apache.spark.sql.streaming.TTLConfig
- apply(T1) - Static method in class org.apache.spark.sql.types.CharType
- apply(T1) - Static method in class org.apache.spark.sql.types.VarcharType
- apply(T1) - Static method in class org.apache.spark.status.api.v1.StackTrace
- apply(T1) - Static method in class org.apache.spark.storage.TaskResultBlockId
- apply(T1) - Static method in class org.apache.spark.streaming.Duration
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
- apply(T1, T2) - Static method in class org.apache.spark.ContextBarrierId
- apply(T1, T2) - Static method in class org.apache.spark.ml.clustering.ClusterData
- apply(T1, T2) - Static method in class org.apache.spark.ml.feature.LabeledPoint
- apply(T1, T2) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
- apply(T1, T2) - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
- apply(T1, T2) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
- apply(T1, T2) - Static method in class org.apache.spark.resource.ResourceInformationJson
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.ExcludedExecutor
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
- apply(T1, T2) - Static method in class org.apache.spark.sql.jdbc.JdbcType
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.And
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualTo
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThan
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.In
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThan
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.Or
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringContains
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringEndsWith
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringStartsWith
- apply(T1, T2) - Static method in class org.apache.spark.status.api.v1.sql.Metric
- apply(T1, T2) - Static method in class org.apache.spark.storage.BroadcastBlockId
- apply(T1, T2) - Static method in class org.apache.spark.storage.CacheId
- apply(T1, T2) - Static method in class org.apache.spark.storage.PythonStreamBlockId
- apply(T1, T2) - Static method in class org.apache.spark.storage.RDDBlockId
- apply(T1, T2) - Static method in class org.apache.spark.storage.StreamBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.ErrorInfo
- apply(T1, T2, T3) - Static method in class org.apache.spark.ExecutorLostFailure
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.recommendation.Rating
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.KillTask
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedEqualTo
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedGreaterThan
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedIn
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedLessThan
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedStringContains
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- apply(T1, T2, T3) - Static method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleChecksumBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleMergedBlockId
- apply(T1, T2, T3) - Static method in class org.apache.spark.TaskCommitDenied
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.ErrorStateInfo
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.tree.model.Split
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.types.StructField
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.status.api.v1.sql.Node
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockChunkId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShufflePushBlockId
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.TaskKilled
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.mllib.feature.VocabWord
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.FetchFailed
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.AccumulableInfo
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.ExceptionFailure
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- apply(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace
- applyClusterByChanges(Map<String, String>, StructType, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply ClusterBy changes to a Java map and return the result.
- applyClusterByChanges(Transform[], StructType, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply ClusterBy changes to the partitioning transforms and return the result.
- applyClusterByChanges(Map<String, String>, StructType, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply ClusterBy changes to a map and return the result.
- ApplyInPlace - Class in org.apache.spark.ml.ann
-
Implements in-place application of functions in the arrays
- ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace
- applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a Java map and return the result.
- applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a map and return the result.
- applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a Java map and return the result.
- applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a map and return the result.
- applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use createDataFrame instead. Since 1.3.0.
- applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use createDataFrame instead. Since 1.3.0.
- applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use createDataFrame instead. Since 1.3.0.
- applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use createDataFrame instead. Since 1.3.0.
- applySchemaChanges(StructType, Seq<TableChange>, Option<String>, String) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply schema changes to a schema and return the result.
- appName() - Method in class org.apache.spark.api.java.JavaSparkContext
- appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- appName() - Method in class org.apache.spark.SparkContext
- appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a name for the application, which will be shown in the Spark web UI.
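Typical builder usage; the app name and master URL are illustrative:
  import org.apache.spark.sql.SparkSession
  val spark = SparkSession.builder()
    .appName("MyApp")
    .master("local[*]")
    .getOrCreate()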
- approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
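Usage sketch for approx_count_distinct; the column name is illustrative and 0.05 is the maximum allowed relative standard deviation:
  import org.apache.spark.sql.functions.{approx_count_distinct, col}
  df.agg(approx_count_distinct(col("user_id"), 0.05))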
- approx_percentile(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate percentile of the numeric column col, which is the smallest value in the ordered col values (sorted from least to greatest) such that no more than percentage of col values is less than or equal to that value (a usage sketch follows the deprecated approxCountDistinct entries below).
- approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use approx_count_distinct. Since 2.1.0.
- approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use approx_count_distinct. Since 2.1.0.
- approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use approx_count_distinct. Since 2.1.0.
- approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use approx_count_distinct. Since 2.1.0.
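A usage sketch for approx_percentile, listed above; the column name and constants are illustrative, and the third argument is the accuracy:
  import org.apache.spark.sql.functions.{approx_percentile, col, lit}
  // 95th percentile of latency with accuracy 10000.
  df.agg(approx_percentile(col("latency"), lit(0.95), lit(10000)))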
- ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- ApproximateEvaluator<U, R> - Interface in org.apache.spark.partial
-
An object that computes a function incrementally by merging in results of type U from multiple tasks.
- approxNearestNeighbors(Dataset<?>, Vector, int) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
Overloaded method for approxNearestNeighbors.
- approxNearestNeighbors(Dataset<?>, Vector, int, String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
Given a large dataset and an item, approximately find at most k items which have the closest distance to the item.
- approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Calculates the approximate quantiles of numerical columns of a DataFrame.
- approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Calculates the approximate quantiles of a numerical column of a DataFrame.
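Usage sketch for the single-column overload; the column name is illustrative and the last argument is the relative error:
  // 25th, 50th and 75th percentiles with 1% relative error.
  val Array(q1, median, q3) =
    df.stat.approxQuantile("age", Array(0.25, 0.5, 0.75), 0.01)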
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
Overloaded method for approxSimilarityJoin.
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double, String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
Join two datasets to approximately find all pairs of rows whose distance is smaller than the threshold.
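A minimal sketch of the BucketedRandomProjectionLSHModel methods above (assuming an active SparkSession named spark; the dataset, bucket length, and threshold are illustrative):

```scala
import org.apache.spark.ml.feature.BucketedRandomProjectionLSH
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._

val points = Seq(
  (0, Vectors.dense(1.0, 1.0)),
  (1, Vectors.dense(1.0, -1.0)),
  (2, Vectors.dense(-1.0, -1.0))
).toDF("id", "features")

val model = new BucketedRandomProjectionLSH()
  .setBucketLength(2.0)
  .setNumHashTables(3)
  .setInputCol("features")
  .setOutputCol("hashes")
  .fit(points)

// All pairs of rows whose Euclidean distance is below 1.5.
model.approxSimilarityJoin(points, points, 1.5, "distCol").show()
// The two rows nearest to a query vector.
model.approxNearestNeighbors(points, Vectors.dense(1.0, 0.0), 2).show()
```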
- appSparkVersion() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- AppStatusUtils - Class in org.apache.spark.status
- AppStatusUtils() - Constructor for class org.apache.spark.status.AppStatusUtils
- archives() - Method in class org.apache.spark.SparkContext
- AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
-
Computes the area under the curve (AUC) using the trapezoidal rule.
- AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve
- areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the precision-recall curve.
- areaUnderROC() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Computes the area under the receiver operating characteristic (ROC) curve.
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- areaUnderROC() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- areaUnderROC() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the receiver operating characteristic (ROC) curve.
- argmax() - Method in class org.apache.spark.ml.linalg.DenseVector
- argmax() - Method in class org.apache.spark.ml.linalg.SparseVector
- argmax() - Method in interface org.apache.spark.ml.linalg.Vector
-
Find the index of a maximal element.
- argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector
- argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector
- argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Find the index of a maximal element.
- arguments() - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- arguments() - Method in interface org.apache.spark.sql.connector.expressions.Transform
-
Returns the arguments passed to the transform function.
- arithmeticOverflowError(ArithmeticException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- arithmeticOverflowError(String, String, QueryContext) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- arithmeticOverflowError$default$2() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- arithmeticOverflowError$default$3() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ARPACK - Class in org.apache.spark.mllib.linalg
-
ARPACK routines for MLlib's vectors and matrices.
- ARPACK() - Constructor for class org.apache.spark.mllib.linalg.ARPACK
- array() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- array(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type array.
- array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- ARRAY - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- ARRAY - Static variable in class org.apache.spark.types.variant.VariantUtil
- array_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- array_append(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns an ARRAY containing all elements from the source ARRAY as well as the new element.
- array_compact(Column) - Static method in class org.apache.spark.sql.functions
-
Remove all null elements from the given array.
- array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns null if the array is null, true if the array contains value, and false otherwise.
- array_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Removes duplicate values from the array.
- array_except(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the first array but not in the second array, without duplicates.
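A small sketch combining several of the array functions above (assuming an active SparkSession named spark; the data is illustrative):

```scala
import org.apache.spark.sql.functions.{array, array_contains, array_distinct, array_except, lit}
import spark.implicits._

val df = Seq((Seq(1, 2, 2, 3), Seq(2, 4))).toDF("a", "b")
df.select(
  array(lit(1), lit(2)),       // a new array column built from literals
  array_contains($"a", 2),     // true
  array_distinct($"a"),        // [1, 2, 3]
  array_except($"a", $"b")     // elements of a not in b: [1, 3]
).show()
```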
- array_insert(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Adds an item into a given array at a specified position.
- array_intersect(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the intersection of the given two arrays, without duplicates.
- array_join(Column, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_join(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_max(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the maximum value in the array.
- array_min(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the minimum value in the array.
- array_position(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Locates the position of the first occurrence of the value in the given array, returned as a long.
- array_prepend(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns an array containing value as well as all elements from array; the new element is placed at the beginning of the array.
- array_remove(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Removes all elements equal to the given element from the array.
- array_repeat(Column, int) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the right argument.
- array_repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the right argument.
- array_size(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the total number of elements in the array.
- array_sort(Column) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array in ascending order.
- array_sort(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array based on the given comparator function.
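A sketch of the two-argument array_sort above; the comparator must return a negative, zero, or positive integer column. Here it sorts strings by length (assuming an active SparkSession named spark; the data is illustrative):

```scala
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{array_sort, length, when}
import spark.implicits._

val df = Seq(Seq("bb", "a", "ccc")).toDF("xs")
// Comparator: negative when l sorts before r, positive when after, zero when equal.
val byLength = (l: Column, r: Column) =>
  when(length(l) < length(r), -1)
    .when(length(l) > length(r), 1)
    .otherwise(0)
df.select(array_sort($"xs", byLength)).show(false)  // [a, bb, ccc]
```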
- array_to_vector(Column) - Static method in class org.apache.spark.ml.functions
-
Converts a column of array of numeric type into a column of dense vectors in MLlib.
- array_union(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the union of the given two arrays, without duplicates.
- arrayComponentTypeUnsupportedError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- arrayFunctionWithElementsExceedLimitError(String, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- arrayHeader(boolean, int) - Static method in class org.apache.spark.types.variant.VariantUtil
- ArrayImplicits - Class in org.apache.spark.util
-
Implicit methods related to Scala Array.
- ArrayImplicits() - Constructor for class org.apache.spark.util.ArrayImplicits
- ArrayImplicits.SparkArrayOps<T> - Class in org.apache.spark.util
- arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check that the array length is greater than lowerBound.
- arrays_overlap(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if a1 and a2 have at least one non-null element in common.
- arrays_zip(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input arrays.
- arrays_zip(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input arrays.
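A sketch of arrays_overlap and arrays_zip together (assuming an active SparkSession named spark; the data is illustrative):

```scala
import org.apache.spark.sql.functions.{arrays_overlap, arrays_zip}
import spark.implicits._

val df = Seq((Seq(1, 2), Seq(2, 3))).toDF("a1", "a2")
df.select(
  arrays_overlap($"a1", $"a2"),  // true: both arrays contain 2
  arrays_zip($"a1", $"a2")       // [{1, 2}, {2, 3}], an array of structs
).show(false)
```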
- arraySize() - Method in class org.apache.spark.types.variant.Variant
- ArrayType - Class in org.apache.spark.sql.types
- ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
- arrayValues() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
- ArrowColumnVector - Class in org.apache.spark.sql.vectorized
-
A column vector backed by Apache Arrow.
- ArrowColumnVector(ValueVector) - Constructor for class org.apache.spark.sql.vectorized.ArrowColumnVector
- ArrowUtils - Class in org.apache.spark.sql.util
- ArrowUtils() - Constructor for class org.apache.spark.sql.util.ArrowUtils
- ARTIFACT_DIRECTORY_PREFIX() - Static method in class org.apache.spark.sql.artifact.ArtifactManager
- ArtifactManager - Class in org.apache.spark.sql.artifact
-
This class handles the storage of artifacts as well as preparing the artifacts for use.
- ArtifactManager(SparkSession) - Constructor for class org.apache.spark.sql.artifact.ArtifactManager
- ArtifactUtils - Class in org.apache.spark.sql.util
- ArtifactUtils() - Constructor for class org.apache.spark.sql.util.ArtifactUtils
- as(String) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with an alias set.
- as(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(String) - Method in class org.apache.spark.sql.Dataset
- as(String[]) - Method in class org.apache.spark.sql.Column
-
Assigns the given aliases to the results of a table generating function.
- as(String, Metadata) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias with metadata.
- as(Encoder<K>, Encoder<T>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Returns a KeyValueGroupedDataset where the data is grouped by the grouping expressions of the current RelationalGroupedDataset.
- as(Encoder<K>, Encoder<T>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- as(Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset where each record has been mapped on to the specified type.
- as(Encoder<U>) - Method in class org.apache.spark.sql.Column
-
Provides a type hint about the expected return value of this column.
- as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
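A sketch of Dataset.as(Encoder), mapping untyped rows onto a case class (assuming an active SparkSession named spark; the class and data are illustrative):

```scala
import spark.implicits._

case class Person(name: String, age: Long)

// Column names must line up with the case class fields.
val ds = Seq(("Alice", 30L), ("Bob", 25L)).toDF("name", "age").as[Person]
ds.map(p => p.age + 1).show()
```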
- as(Seq<String>) - Method in class org.apache.spark.sql.Column
-
(Scala-specific) Assigns the given aliases to the results of a table generating function.
- as(Symbol) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- as(Symbol) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(Symbol) - Method in class org.apache.spark.sql.Dataset
- asBinary() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Convenient method for casting to binary logistic regression summary.
- asBinary() - Method in interface org.apache.spark.ml.classification.RandomForestClassificationSummary
-
Convenient method for casting to BinaryRandomForestClassificationSummary.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a breeze vector.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a breeze vector.
- asc() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column.
- asc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column.
- asc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column, and null values return before non-null values.
- asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column, and null values return before non-null values.
- asc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column, and null values appear after non-null values.
- asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column, and null values appear after non-null values.
- asCaseSensitiveMap() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the original case-sensitive map.
- ASCENDING - Enum constant in enum class org.apache.spark.sql.connector.expressions.SortDirection
- ascii(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the numeric value of the first character of the string column, and returns the result as an int column.
- asFunctionCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- asIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- asin(String) - Static method in class org.apache.spark.sql.functions
- asin(Column) - Static method in class org.apache.spark.sql.functions
- asinh(String) - Static method in class org.apache.spark.sql.functions
- asinh(Column) - Static method in class org.apache.spark.sql.functions
- asInteraction() - Static method in class org.apache.spark.ml.feature.Dot
- asInteraction() - Method in interface org.apache.spark.ml.feature.InteractableTerm
-
Convert to ColumnInteraction to wrap all interactions.
- asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator.
- asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
- asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
- asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
- ask(Object) - Method in interface org.apache.spark.api.plugin.PluginContext
-
Send an RPC to the plugin's driver-side component.
- asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator over key-value pairs.
- AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
- AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
- askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC ask operations.
- askStandaloneSchedulerToShutDownExecutorsError(Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
- asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- asML() - Method in class org.apache.spark.mllib.linalg.DenseVector
- asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convert this matrix to the new mllib-local representation.
- asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- asML() - Method in class org.apache.spark.mllib.linalg.SparseVector
- asML() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Convert this vector to the new mllib-local representation.
- asMultipart() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.FunctionIdentifierHelper
- asMultipartIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- asNamespaceCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- asNondeterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to nondeterministic.
- asNonNullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to non-nullable.
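A sketch of asNondeterministic and asNonNullable on a UserDefinedFunction (assuming an active SparkSession named spark; the UDF itself is illustrative):

```scala
import org.apache.spark.sql.functions.udf

// Marking the UDF nondeterministic stops the optimizer from caching or
// re-ordering it; asNonNullable records that it never returns null.
val randomId = udf(() => scala.util.Random.nextInt(100))
  .asNondeterministic()
  .asNonNullable()

spark.range(3).select(randomId().as("id")).show()
```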
- asNullable() - Method in class org.apache.spark.sql.types.ObjectType
- asProcedureCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- asRDDId() - Method in class org.apache.spark.storage.BlockId
- asSchema() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ColumnsHelper
- assert_true(Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true, and throws an exception otherwise.
- assert_true(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true; throws an exception with the error message otherwise.
- assertExceptionMsg(Throwable, String, boolean, ClassTag<E>) - Static method in class org.apache.spark.TestUtils
-
Asserts that the exception message contains the given message.
- assertNotSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs did not spill.
- assertSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
- assignClusters(Dataset<?>) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
-
Runs the PIC algorithm and returns a cluster assignment for each input vertex.
- assignedAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently assigned resource addresses.
- Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
- Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
- assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
- associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
Get association rules fitted using the minConfidence.
- AssociationRules - Class in org.apache.spark.ml.fpm
- AssociationRules - Class in org.apache.spark.mllib.fpm
-
Generates association rules from an RDD[FreqItemset[Item]].
- AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules
- AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
-
Constructs a default instance with default parameters {minConfidence = 0.8}.
- AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
-
An association rule between sets of items.
- asTableCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- AsTableIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
- AsTableIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- AsTableIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
- asTableIdentifierOpt(Option<String>) - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
-
Tries to convert the catalog identifier to a table identifier.
- asTerms() - Static method in class org.apache.spark.ml.feature.Dot
- asTerms() - Static method in class org.apache.spark.ml.feature.EmptyTerm
- asTerms() - Method in interface org.apache.spark.ml.feature.Term
-
Default representation of a single Term as a part of summed terms.
- asTransform() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
- asTransform() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ClusterByHelper
- asTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper
- AsyncEventQueue - Class in org.apache.spark.scheduler
-
An asynchronous queue for events.
- AsyncEventQueue(String, SparkConf, LiveListenerBusMetrics, LiveListenerBus) - Constructor for class org.apache.spark.scheduler.AsyncEventQueue
- AsyncRDDActions<T> - Class in org.apache.spark.rdd
-
A set of asynchronous RDD actions available through an implicit conversion.
- AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
- atan(String) - Static method in class org.apache.spark.sql.functions
- atan(Column) - Static method in class org.apache.spark.sql.functions
- atan2(double, String) - Static method in class org.apache.spark.sql.functions
- atan2(double, Column) - Static method in class org.apache.spark.sql.functions
- atan2(String, double) - Static method in class org.apache.spark.sql.functions
- atan2(String, String) - Static method in class org.apache.spark.sql.functions
- atan2(String, Column) - Static method in class org.apache.spark.sql.functions
- atan2(Column, double) - Static method in class org.apache.spark.sql.functions
- atan2(Column, String) - Static method in class org.apache.spark.sql.functions
- atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
- atanh(String) - Static method in class org.apache.spark.sql.functions
- atanh(Column) - Static method in class org.apache.spark.sql.functions
- attempt() - Method in class org.apache.spark.status.api.v1.TaskData
- ATTEMPT() - Static method in class org.apache.spark.status.TaskIndexNames
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- attemptId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
- attemptId() - Method in class org.apache.spark.status.api.v1.StageData
- attemptNumber() - Method in class org.apache.spark.BarrierTaskContext
- attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- attemptNumber() - Method in class org.apache.spark.scheduler.StageInfo
- attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo
- attemptNumber() - Method in class org.apache.spark.TaskCommitDenied
- attemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times this task has been attempted.
- attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- ATTEMPTS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- AtTimestamp(Date) - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
- attr() - Method in class org.apache.spark.graphx.Edge
- attr() - Method in class org.apache.spark.graphx.EdgeContext
-
The attribute associated with the edge.
- attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- attribute() - Method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- attribute() - Method in class org.apache.spark.sql.sources.CollatedEqualTo
- attribute() - Method in class org.apache.spark.sql.sources.CollatedGreaterThan
- attribute() - Method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- attribute() - Method in class org.apache.spark.sql.sources.CollatedIn
- attribute() - Method in class org.apache.spark.sql.sources.CollatedLessThan
- attribute() - Method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- attribute() - Method in class org.apache.spark.sql.sources.CollatedStringContains
- attribute() - Method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- attribute() - Method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe
- attribute() - Method in class org.apache.spark.sql.sources.EqualTo
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- attribute() - Method in class org.apache.spark.sql.sources.In
- attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
- attribute() - Method in class org.apache.spark.sql.sources.IsNull
- attribute() - Method in class org.apache.spark.sql.sources.LessThan
- attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
- attribute() - Method in class org.apache.spark.sql.sources.StringContains
- attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
- attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
- Attribute - Class in org.apache.spark.ml.attribute
-
Abstract class for ML attributes.
- Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
- AttributeFactory - Interface in org.apache.spark.ml.attribute
-
Trait for ML attribute factories.
- AttributeGroup - Class in org.apache.spark.ml.attribute
-
Attributes that describe a vector ML column.
- AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group without attribute info.
- AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group knowing only the number of attributes.
- AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group with attributes.
- AttributeKeys - Class in org.apache.spark.ml.attribute
-
Keys used to store attributes.
- AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys
- attributeNameSyntaxError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Optional array of attributes.
- attributes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- attributes() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- attributes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- ATTRIBUTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- AttributeType - Class in org.apache.spark.ml.attribute
-
An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
- AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
- attrType() - Method in class org.apache.spark.ml.attribute.Attribute
-
Attribute type.
- attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- available() - Method in class org.apache.spark.io.NioBufferedFileInputStream
- available() - Method in class org.apache.spark.io.ReadAheadInputStream
- available() - Method in class org.apache.spark.storage.BufferReleasingInputStream
- availableAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently available resource addresses which are not fully assigned.
- AvailableNow() - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that processes all available data at the start of the query in one or multiple batches, then terminates the query.
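A sketch of Trigger.AvailableNow() using the built-in rate source (assuming an active SparkSession named spark and a Spark version where the rate source supports this trigger; the source and sink choices are illustrative):

```scala
import org.apache.spark.sql.streaming.Trigger

// Processes everything available when the query starts, possibly as
// several micro-batches, then terminates on its own.
val query = spark.readStream
  .format("rate")
  .load()
  .writeStream
  .format("console")
  .trigger(Trigger.AvailableNow())
  .start()
query.awaitTermination()
```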
- Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- avg() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the average of elements added to the accumulator.
- avg() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the average of elements added to the accumulator.
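A sketch of LongAccumulator.avg (assuming an active SparkSession named spark; the accumulator name is illustrative):

```scala
val acc = spark.sparkContext.longAccumulator("records")
spark.sparkContext.parallelize(1 to 100).foreach(n => acc.add(n))
// avg = sum / count, read on the driver after the action completes.
println(s"sum=${acc.sum} count=${acc.count} avg=${acc.avg}")
```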
- avg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
- avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated. Average aggregate function.
- avg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(Seq<String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
- avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated. Average aggregate function.
- Avg - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the mean of all the values in a group.
- Avg(Expression, boolean) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Avg
- avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- avgLen() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- AvroCompressionCodec - Enum Class in org.apache.spark.sql.avro
-
A mapper class from Spark-supported Avro compression codecs to Avro compression codecs.
- avroIncompatibleReadError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- AvroMatchedField$() - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroMatchedField$
- avroNotLoadedSqlFunctionsUnusable(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- avroOptionsException(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- AvroSchemaHelper(Schema, StructType, Seq<String>, Seq<String>, boolean) - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
- AvroUtils - Class in org.apache.spark.sql.avro
- AvroUtils() - Constructor for class org.apache.spark.sql.avro.AvroUtils
- AvroUtils.AvroMatchedField$ - Class in org.apache.spark.sql.avro
- AvroUtils.AvroSchemaHelper - Class in org.apache.spark.sql.avro
-
Helper class to perform field lookup/matching on Avro schemas.
- AvroUtils.RowReader - Interface in org.apache.spark.sql.avro
- awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
- awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
- awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.ready().
- awaitResult(Future<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.SparkThreadUtils
-
Preferred alternative to Await.result().
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.result().
- awaitResultNoSparkExceptionConversion(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.SparkThreadUtils
- awaitTermination() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Wait for the execution to stop.
- awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Wait for the execution to stop.
- awaitTermination(long) - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Wait for the execution to stop.
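A sketch of the StreamingQuery await methods above (assuming an active SparkSession named spark; the query setup is illustrative):

```scala
import org.apache.spark.sql.streaming.Trigger

val query = spark.readStream.format("rate").load()
  .writeStream.format("console")
  .trigger(Trigger.AvailableNow())
  .start()

// Block until this query stops, or give up after 30 seconds; returns
// true if the query terminated within the timeout.
val finished = query.awaitTermination(30000L)

// Or block until any query registered on this session terminates.
spark.streams.awaitAnyTermination()
```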
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y += a * x
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y += a * x
B
- BACKUP_STANDALONE_MASTER_PREFIX() - Static method in class org.apache.spark.util.Utils
-
An identifier that backup masters use in their responses.
- balanceSlack() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- barrier() - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental :: Sets a global barrier and waits until all tasks in this stage hit this barrier.
- barrier() - Method in class org.apache.spark.rdd.RDD
-
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together.
- BARRIER() - Static method in class org.apache.spark.RequestMethod
- BARRIER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- BarrierCoordinatorMessage - Interface in org.apache.spark
- barrierStageWithDynamicAllocationError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- barrierStageWithRDDChainPatternError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- BarrierTaskContext - Class in org.apache.spark
-
:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
- BarrierTaskInfo - Class in org.apache.spark
-
:: Experimental :: Carries all task infos of a barrier task.
- base64(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the BASE64 encoding of a binary column and returns it as a string column.
- BaseAppResource - Interface in org.apache.spark.status.api.v1
-
Base class for resource handlers that use app-specific data.
- baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
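A sketch of ParamGridBuilder.baseOn, fixing one parameter across every grid point while varying another (the estimator choice is illustrative):

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.tuning.ParamGridBuilder

val lr = new LogisticRegression()
// Every grid point shares maxIter = 10; only regParam varies.
val grid = new ParamGridBuilder()
  .baseOn(lr.maxIter -> 10)
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .build()
grid.foreach(println)
```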
- BaseReadWrite - Interface in org.apache.spark.ml.util
-
Trait for MLWriter and MLReader.
- BaseRelation - Class in org.apache.spark.sql.sources
-
Represents a collection of tuples with a known schema.
- BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SparkSession
-
Convert a BaseRelation created for external data sources into a DataFrame.
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
- BaseRRDD<T, U> - Class in org.apache.spark.api.r
- BaseRRDD(RDD<T>, int, byte[], String, String, byte[], Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD
- BaseStreamingAppResource - Interface in org.apache.spark.status.api.v1.streaming
-
Base class for streaming API handlers, provides easy access to the streaming listener that holds the app's information.
- BASIC_TYPE_BITS - Static variable in class org.apache.spark.types.variant.VariantUtil
- BASIC_TYPE_MASK - Static variable in class org.apache.spark.types.variant.VariantUtil
- BasicBlockReplicationPolicy - Class in org.apache.spark.storage
- BasicBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.BasicBlockReplicationPolicy
- basicCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Use a basic AWS keypair for long-lived authorization.
- basicSparkPage(HttpServletRequest, Function0<Seq<Node>>, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns a page with the Spark CSS/JS and a simple format.
- Batch - Interface in org.apache.spark.sql.connector.read
-
A physical representation of a data source scan for batch queries.
- BATCH_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- BATCH_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- BATCH_READ - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports reads in batch execution mode.
- BATCH_WRITE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports append writes in batch execution mode.
- batchDuration() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
- batchId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- batchId() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
- BatchInfo - Class in org.apache.spark.status.api.v1.streaming
- BatchInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: Class having information on completed batches.
- BatchInfo(Time, Map<Object, StreamInputInfo>, long, Option<Object>, Option<Object>, Map<Object, OutputOperationInfo>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo
- batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
- batchMetadataFileNotFoundError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- BatchStatus - Enum Class in org.apache.spark.status.api.v1.streaming
- batchTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- batchTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- BatchWrite - Interface in org.apache.spark.sql.connector.write
-
An interface that defines how to write data to a data source for batch processing.
- batchWriteCapabilityError(Table, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- bbos() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- bean(Class<T>) - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder for Java Bean of type T.
- beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Override connection specific properties to run before a select is made.
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- BernoulliCellSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi :: A sampler based on Bernoulli trials for partitioning a data sequence.
- BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler
- BernoulliSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi :: A sampler based on Bernoulli trials.
- BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler
- bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- bestModel() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- beta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
The beta value, which controls precision vs recall weighting, used in "weightedFMeasure" and "fMeasureByLabel".
- beta() - Method in class org.apache.spark.mllib.random.WeibullGenerator
- between(Object, Object) - Method in class org.apache.spark.sql.Column
-
True if the current column is between the lower bound and upper bound, inclusive.
- bin(String) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long column.
- bin(Column) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long column.
- Binarizer - Class in org.apache.spark.ml.feature
-
Binarize a column of continuous features given a threshold.
- Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer
- Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer
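A sketch of Binarizer (assuming an active SparkSession named spark; the threshold and data are illustrative):

```scala
import org.apache.spark.ml.feature.Binarizer
import spark.implicits._

val df = Seq((0, 0.1), (1, 0.8), (2, 0.5)).toDF("id", "feature")
val binarizer = new Binarizer()
  .setInputCol("feature")
  .setOutputCol("binarized")
  .setThreshold(0.5)            // values > 0.5 map to 1.0, others to 0.0
binarizer.transform(df).show()
```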
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizer
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- binary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Binary toggle to control the output vector values.
- binary() - Method in class org.apache.spark.ml.feature.HashingTF
-
Binary toggle to control term frequency counts.
- binary() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type binary.
- Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Binary type.
- BINARY - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- BINARY - Static variable in class org.apache.spark.types.variant.VariantUtil
- BINARY() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for arrays of bytes.
- BINARY_DOUBLE() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- BINARY_FLOAT() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- binaryArithmeticCauseOverflowError(short, String, short) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- BinaryAttribute - Class in org.apache.spark.ml.attribute
-
A binary attribute.
- BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for binary classification, which expects input columns rawPrediction, label and an optional weight column.
- BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- BinaryClassificationMetricComputer - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary classification evaluation metric computer.
- BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for binary classification.
- BinaryClassificationMetrics(RDD<? extends Product>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
- BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Defaults numBins to 0.
- BinaryClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary classification results for a given model.
- binaryColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- BinaryConfusionMatrix - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary confusion matrix.
- binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data).
- binaryFormatError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
-
Function to check if labels used for classification are either zero or one.
- BinaryLogisticRegressionSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- BinaryLogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
- BinaryRandomForestClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification results for a given model.
- BinaryRandomForestClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification for a given model.
- BinaryRandomForestClassificationSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- BinaryRandomForestClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationTrainingSummaryImpl
- binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files with fixed record lengths, yielding byte arrays.
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files, assuming a fixed length per record, generating one byte array per record.
- BinarySample - Class in org.apache.spark.mllib.stat.test
-
Class that represents the group and value of a sample.
- BinarySample(boolean, double) - Constructor for class org.apache.spark.mllib.stat.test.BinarySample
- binarySummary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Gets summary of model on training set.
- binarySummary() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Gets summary of model on training set.
- BinaryType - Class in org.apache.spark.sql.types
-
The data type representing Array[Byte] values.
- BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BinaryType object.
- BinaryType() - Constructor for class org.apache.spark.sql.types.BinaryType
- bind(StructType) - Method in interface org.apache.spark.sql.connector.catalog.functions.UnboundFunction
-
Bind this function to an input type.
- bind(StructType) - Method in interface org.apache.spark.sql.connector.catalog.procedures.UnboundProcedure
-
Binds this procedure to input types.
- Binomial$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- BinomialBounds - Class in org.apache.spark.util.random
-
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact sample size with high confidence when sampling without replacement.
- BinomialBounds() - Constructor for class org.apache.spark.util.random.BinomialBounds
- bins() - Method in interface org.apache.spark.sql.connector.read.colstats.Histogram
- BisectingKMeans - Class in org.apache.spark.ml.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans - Class in org.apache.spark.mllib.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans() - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
- BisectingKMeans() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeans
-
Constructs with the default configuration.
- BisectingKMeans(String) - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
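A sketch of fitting BisectingKMeans from spark.ml (assuming an active SparkSession named spark; the data is illustrative):

```scala
import org.apache.spark.ml.clustering.BisectingKMeans
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._

val data = Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(1.0, 1.0),
  Vectors.dense(9.0, 8.0), Vectors.dense(8.0, 9.0)
).map(Tuple1.apply).toDF("features")

val model = new BisectingKMeans().setK(2).setSeed(1L).fit(data)
model.clusterCenters.foreach(println)
```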
- BisectingKMeansModel - Class in org.apache.spark.ml.clustering
-
Model fitted by BisectingKMeans.
- BisectingKMeansModel - Class in org.apache.spark.mllib.clustering
-
Clustering model produced by BisectingKMeans.
- BisectingKMeansModel(ClusteringTreeNode) - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel
- BisectingKMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
- BisectingKMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering
- BisectingKMeansModel.SaveLoadV3_0$ - Class in org.apache.spark.mllib.clustering
- BisectingKMeansParams - Interface in org.apache.spark.ml.clustering
-
Common params for BisectingKMeans and BisectingKMeansModel
- BisectingKMeansSummary - Class in org.apache.spark.ml.clustering
-
Summary of BisectingKMeans.
- bit_and(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise AND of all non-null input values, or null if none.
- bit_count(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of bits that are set in the argument expr as an unsigned 64-bit integer, or NULL if the argument is NULL.
- bit_get(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the bit (0 or 1) at the specified position.
- bit_length(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the bit length for the specified string column.
- bit_or(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise OR of all non-null input values, or null if none.
- bit_xor(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise XOR of all non-null input values, or null if none.
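A worked sketch of the bitwise aggregate functions above (assuming an active SparkSession named spark; the data is illustrative):

```scala
import org.apache.spark.sql.functions.{bit_and, bit_or, bit_xor}
import spark.implicits._

val df = Seq(1, 3, 5).toDF("v")   // binary: 001, 011, 101
// bit_and: 001 & 011 & 101 = 001 = 1
// bit_or:  001 | 011 | 101 = 111 = 7
// bit_xor: 001 ^ 011 ^ 101 = 111 = 7
df.select(bit_and($"v"), bit_or($"v"), bit_xor($"v")).show()
```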
- bitmap_bit_position(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the bit position for the given input column.
- bitmap_bucket_number(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the bucket number for the given input column.
- bitmap_construct_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a bitmap with the positions of the bits set from all the values from the input column.
- bitmap_count(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of set bits in the input bitmap.
- bitmap_or_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a bitmap that is the bitwise OR of all of the bitmaps from the input column.
- bitPositionRangeError(String, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- bitSize() - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns the number of bits in the underlying bit array.
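A sketch of org.apache.spark.util.sketch.BloomFilter, including bitSize (the sizing parameters are illustrative):

```scala
import org.apache.spark.util.sketch.BloomFilter

// Sized for ~1000 items at a 3% false-positive probability.
val bf = BloomFilter.create(1000L, 0.03)
bf.putString("spark")
println(bf.mightContainString("spark"))   // true
println(bf.mightContainString("flink"))   // false, with high probability
println(s"bit array size: ${bf.bitSize()} bits")
```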
- bitwise_not(Column) - Static method in class org.apache.spark.sql.functions
-
Computes bitwise NOT (~) of a number.
- bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise AND of this expression with another expression.
- bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use bitwise_not. Since 3.2.0.
- bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise OR of this expression with another expression.
- bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise XOR of this expression with another expression.
- BLACKLISTED_IN_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- blacklistedInStages() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
Deprecated. Use excludedInStages instead. Since 3.1.0.
- BLAS - Class in org.apache.spark.ml.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS - Class in org.apache.spark.mllib.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS() - Constructor for class org.apache.spark.ml.linalg.BLAS
- BLAS() - Constructor for class org.apache.spark.mllib.linalg.BLAS
- BLOCK_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- BlockData - Interface in org.apache.spark.storage
-
Abstracts away how blocks are stored and provides different ways to read the underlying block data.
- blockDoesNotExistError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- blockedByLock() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- blockedByThreadId() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- BlockEvictionHandler - Interface in org.apache.spark.storage.memory
- BlockGeneratorListener - Interface in org.apache.spark.streaming.receiver
-
Listener object for BlockGenerator events
- blockHaveBeenRemovedError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocations
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetRDDBlockVisibility
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.MarkRDDBlockAsVisible
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo
- blockId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
- blockId() - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
- BlockId - Class in org.apache.spark.storage
-
:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file.
- BlockId() - Constructor for class org.apache.spark.storage.BlockId
- blockIds() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
- BlockInfoWrapper - Class in org.apache.spark.storage
- BlockInfoWrapper(BlockInfo, Lock) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
- BlockInfoWrapper(BlockInfo, Lock, Condition) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
- BlockLocationsAndStatus(Seq<BlockManagerId>, BlockStatus, Option<String[]>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
- BlockLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
- blockManager() - Method in class org.apache.spark.SparkEnv
- blockManagerAddedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockManagerAddedToJson(SparkListenerBlockManagerAdded, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- BlockManagerHeartbeat(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
- BlockManagerHeartbeat$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetPeers
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- blockManagerId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
- BlockManagerId - Class in org.apache.spark.storage
-
:: DeveloperApi :: This class represents a unique identifier for a BlockManager.
- BlockManagerId() - Constructor for class org.apache.spark.storage.BlockManagerId
- blockManagerIdCache() - Static method in class org.apache.spark.storage.BlockManagerId
-
The max cache size is hardcoded to 10000; since the size of a BlockManagerId object is about 48B, the total memory cost should be below 1MB, which is feasible.
- blockManagerIdFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockManagerIdToJson(BlockManagerId, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- BlockManagerMessages - Class in org.apache.spark.storage
- BlockManagerMessages() - Constructor for class org.apache.spark.storage.BlockManagerMessages
- BlockManagerMessages.BlockLocationsAndStatus - Class in org.apache.spark.storage
-
The response message of GetLocationsAndStatus request.
- BlockManagerMessages.BlockLocationsAndStatus$ - Class in org.apache.spark.storage
- BlockManagerMessages.BlockManagerHeartbeat - Class in org.apache.spark.storage
- BlockManagerMessages.BlockManagerHeartbeat$ - Class in org.apache.spark.storage
- BlockManagerMessages.DecommissionBlockManager$ - Class in org.apache.spark.storage
- BlockManagerMessages.DecommissionBlockManagers - Class in org.apache.spark.storage
- BlockManagerMessages.DecommissionBlockManagers$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetBlockStatus - Class in org.apache.spark.storage
- BlockManagerMessages.GetBlockStatus$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetExecutorEndpointRef - Class in org.apache.spark.storage
- BlockManagerMessages.GetExecutorEndpointRef$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocations - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocations$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocationsAndStatus - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocationsAndStatus$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocationsMultipleBlockIds - Class in org.apache.spark.storage
- BlockManagerMessages.GetLocationsMultipleBlockIds$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetMatchingBlockIds - Class in org.apache.spark.storage
- BlockManagerMessages.GetMatchingBlockIds$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetMemoryStatus$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetPeers - Class in org.apache.spark.storage
- BlockManagerMessages.GetPeers$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetRDDBlockVisibility - Class in org.apache.spark.storage
- BlockManagerMessages.GetRDDBlockVisibility$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetReplicateInfoForRDDBlocks - Class in org.apache.spark.storage
- BlockManagerMessages.GetReplicateInfoForRDDBlocks$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetShufflePushMergerLocations - Class in org.apache.spark.storage
- BlockManagerMessages.GetShufflePushMergerLocations$ - Class in org.apache.spark.storage
- BlockManagerMessages.GetStorageStatus$ - Class in org.apache.spark.storage
- BlockManagerMessages.IsExecutorAlive - Class in org.apache.spark.storage
- BlockManagerMessages.IsExecutorAlive$ - Class in org.apache.spark.storage
- BlockManagerMessages.MarkRDDBlockAsVisible - Class in org.apache.spark.storage
- BlockManagerMessages.MarkRDDBlockAsVisible$ - Class in org.apache.spark.storage
- BlockManagerMessages.RegisterBlockManager - Class in org.apache.spark.storage
- BlockManagerMessages.RegisterBlockManager$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveBlock - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveBlock$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveBroadcast - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveBroadcast$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveExecutor - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveExecutor$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveRdd - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveRdd$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveShuffle - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveShuffle$ - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveShufflePushMergerLocation - Class in org.apache.spark.storage
- BlockManagerMessages.RemoveShufflePushMergerLocation$ - Class in org.apache.spark.storage
- BlockManagerMessages.ReplicateBlock - Class in org.apache.spark.storage
- BlockManagerMessages.ReplicateBlock$ - Class in org.apache.spark.storage
- BlockManagerMessages.StopBlockManagerMaster$ - Class in org.apache.spark.storage
- BlockManagerMessages.ToBlockManagerMaster - Interface in org.apache.spark.storage
- BlockManagerMessages.ToBlockManagerMasterStorageEndpoint - Interface in org.apache.spark.storage
- BlockManagerMessages.TriggerHeapHistogram$ - Class in org.apache.spark.storage
-
Driver to Executor message to get a heap histogram.
- BlockManagerMessages.TriggerThreadDump$ - Class in org.apache.spark.storage
-
Driver to Executor message to trigger a thread dump.
- BlockManagerMessages.UpdateBlockInfo - Class in org.apache.spark.storage
- BlockManagerMessages.UpdateBlockInfo$ - Class in org.apache.spark.storage
- BlockManagerMessages.UpdateRDDBlockTaskInfo - Class in org.apache.spark.storage
- BlockManagerMessages.UpdateRDDBlockTaskInfo$ - Class in org.apache.spark.storage
- BlockManagerMessages.UpdateRDDBlockVisibility - Class in org.apache.spark.storage
- BlockManagerMessages.UpdateRDDBlockVisibility$ - Class in org.apache.spark.storage
- blockManagerRemovedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockManagerRemovedToJson(SparkListenerBlockManagerRemoved, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a distributed matrix in blocks of local matrices.
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Alternate constructor for BlockMatrix without the input of the number of rows and columns.
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
- blockName() - Method in class org.apache.spark.status.LiveRDDPartition
- blockNotFoundError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- BlockNotFoundException - Exception in org.apache.spark.storage
- BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException
- BlockReplicationPolicy - Interface in org.apache.spark.storage
-
::DeveloperApi:: BlockReplicationPolicy provides logic for prioritizing a sequence of peers for replicating blocks.
- BlockReplicationUtils - Class in org.apache.spark.storage
- BlockReplicationUtils() - Constructor for class org.apache.spark.storage.BlockReplicationUtils
- blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- blockSize() - Method in interface org.apache.spark.ml.param.shared.HasBlockSize
-
Param for block size for stacking input data in matrices.
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALS
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALSModel
- BlockStatus - Class in org.apache.spark.storage
- BlockStatus(StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockStatus
- blockStatusFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockStatusQueryReturnedNullError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- blockStatusToJson(BlockStatus, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- blockUpdatedInfo() - Method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
- BlockUpdatedInfo - Class in org.apache.spark.storage
-
:: DeveloperApi :: Stores information about a block status in a block manager.
- BlockUpdatedInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockUpdatedInfo
- blockUpdatedInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockUpdatedInfoToJson(BlockUpdatedInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- blockUpdateFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- blockUpdateToJson(SparkListenerBlockUpdated, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- bloomFilter(String, long, double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(String, long, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
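A rough sketch of the bloomFilter variants above (assumes a SparkSession spark; the sizes are illustrative):

    val df = spark.range(0, 1000).toDF("id")
    // expectedNumItems = 1000, target false-positive probability = 3%
    val bf = df.stat.bloomFilter("id", 1000L, 0.03)
    bf.mightContain(42L)     // true: inserted items are always reported as present
    bf.mightContain(5000L)   // usually false, but can be a false positive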
- BloomFilter - Class in org.apache.spark.util.sketch
-
A Bloom filter is a space-efficient probabilistic data structure that offers an approximate containment test with one-sided error: if it claims that an item is contained in it, this might be in error, but if it claims that an item is not contained in it, then this is definitely true.
- BloomFilter() - Constructor for class org.apache.spark.util.sketch.BloomFilter
- BloomFilter.Version - Enum Class in org.apache.spark.util.sketch
- bmAddress() - Method in class org.apache.spark.FetchFailed
- bool_and(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if all values of e are true.
- bool_or(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if at least one value of e is true.
- BOOLEAN - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
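The bool_and / bool_or aggregates just above in one small sketch (assumes spark.implicits._ is in scope):

    import org.apache.spark.sql.functions.{bool_and, bool_or}

    val df = Seq(true, true, false).toDF("flag")
    df.select(bool_and($"flag"), bool_or($"flag")).show()
    // bool_and = false (not every value is true), bool_or = true (at least one is)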
- BOOLEAN() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable boolean type.
- booleanColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- BooleanParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Boolean] for Java.
- BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
- BooleanParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
- booleanStatementWithEmptyRow(Origin, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- BooleanType - Class in org.apache.spark.sql.types
-
The data type representing Boolean values.
- BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BooleanType object.
- BooleanType() - Constructor for class org.apache.spark.sql.types.BooleanType
- BooleanTypeExpression - Class in org.apache.spark.sql.types
- BooleanTypeExpression() - Constructor for class org.apache.spark.sql.types.BooleanTypeExpression
- boost(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, boolean, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Internal method for performing regression using trees as base learners.
- BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
-
Configuration options for GradientBoostedTrees.
- BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- bootstrap() - Method in interface org.apache.spark.ml.tree.RandomForestParams
-
Whether bootstrap samples are used when building trees.
- Both - Enum constant in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
Both vertices must be active.
- Both() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges originating from *and* arriving at a vertex of interest.
- boundaries() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
Boundaries in increasing order for which predictions are known.
- boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
- BoundedDouble - Class in org.apache.spark.partial
-
A Double value with error bars and associated confidence.
- BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble
- BoundFunction - Interface in org.apache.spark.sql.connector.catalog.functions
-
Represents a function that is bound to an input type.
- BoundProcedure - Interface in org.apache.spark.sql.connector.catalog.procedures
-
A procedure that is bound to input types.
- BreezeUtil - Class in org.apache.spark.ml.ann
-
In-place DGEMM and DGEMV for Breeze
- BreezeUtil() - Constructor for class org.apache.spark.ml.ann.BreezeUtil
- broadcast(DS) - Static method in class org.apache.spark.sql.functions
-
Marks a DataFrame as small enough for use in broadcast joins.
- broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- Broadcast<T> - Class in org.apache.spark.broadcast
-
A broadcast variable.
- Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast
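A minimal broadcast-variable sketch for the entries above (assumes an active SparkContext sc; the lookup map is made up):

    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    val total = sc.parallelize(Seq("a", "b", "a"))
      .map(k => lookup.value.getOrElse(k, 0))
      .sum()                                   // 4; the map is shipped to executors once
    lookup.destroy()                           // release it when no longer needed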
- BROADCAST() - Static method in class org.apache.spark.storage.BlockId
- BroadcastBlockId - Class in org.apache.spark.storage
- BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId
- broadcastCleaned(long) - Method in interface org.apache.spark.CleanerListener
- BroadcastFactory - Interface in org.apache.spark.broadcast
-
An interface for all the broadcast implementations in Spark (to allow multiple broadcast implementations).
- broadcastId() - Method in class org.apache.spark.CleanBroadcast
- broadcastId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
- broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId
- broadcastManager() - Method in class org.apache.spark.SparkEnv
- bround(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the column e rounded to 0 decimal places with HALF_EVEN round mode.
- bround(Column, int) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0 or at integral part when scale is less than 0.
- bround(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0 or at integral part when scale is less than 0.
- btrim(Column) - Static method in class org.apache.spark.sql.functions
-
Removes the leading and trailing space characters from str.
- btrim(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Remove the leading and trailing trim characters from str.
- bucket(int, String...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a bucket transform for one or more columns.
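To make the bround HALF_EVEN semantics described a few entries above concrete, a minimal sketch (assumes spark.implicits._ is in scope):

    import org.apache.spark.sql.functions.bround

    val df = Seq(2.5, 3.5, 1234.0).toDF("v")
    df.select(bround($"v"), bround($"v", -2)).show()
    // HALF_EVEN rounds ties to the nearest even digit: 2.5 -> 2.0, 3.5 -> 4.0;
    // a negative scale rounds at the integral part: 1234.0 -> 1200.0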
- bucket(int, Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for any type that partitions by a hash of the input column.
- bucket(int, Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for any type that partitions by a hash of the input column.
- bucket(int, NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- bucket(int, NamedReference[], NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- bucket(Column, Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for any type that partitions by a hash of the input column.
- bucket(Column, Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for any type that partitions by a hash of the input column.
- bucketBy(int, String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
- bucketBy(int, String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
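Note that bucketBy only takes effect together with saveAsTable; a hedged sketch (df, the column, and the table name are illustrative):

    // Bucketed output must go through the table catalog, not save(path).
    df.write
      .bucketBy(8, "user_id")
      .sortBy("user_id")
      .saveAsTable("users_bucketed")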
- bucketByAndSortByUnsupportedByOperationError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- bucketByAndSortByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- bucketByUnsupportedByOperationError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- bucketByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- BucketedRandomProjectionLSH - Class in org.apache.spark.ml.feature
-
This BucketedRandomProjectionLSH implements Locality Sensitive Hashing functions for Euclidean distance metrics.
- BucketedRandomProjectionLSH() - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- BucketedRandomProjectionLSH(String) - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- BucketedRandomProjectionLSHModel - Class in org.apache.spark.ml.feature
-
Model produced by BucketedRandomProjectionLSH, where multiple random vectors are stored.
- BucketedRandomProjectionLSHParams - Interface in org.apache.spark.ml.feature
-
Params for BucketedRandomProjectionLSH.
- bucketingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- Bucketizer - Class in org.apache.spark.ml.feature
-
Bucketizer maps a column of continuous features to a column of feature buckets.
- Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer
- Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- bucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
-
The length of each hash bucket, a larger bucket lowers the false negative rate.
- bucketSortingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- BucketSpecHelper(BucketSpec) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
- buffer() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
- bufferEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Specifies the Encoder for the intermediate value type.
- BufferReleasingInputStream - Class in org.apache.spark.storage
-
Helper class that ensures a ManagedBuffer is released upon InputStream.close() and also detects stream corruption if streamCompressedOrEncrypted is true
- BufferReleasingInputStream(InputStream, ShuffleBlockFetcherIterator, BlockId, int, BlockManagerId, boolean, boolean, Option<CheckedInputStream>) - Constructor for class org.apache.spark.storage.BufferReleasingInputStream
- bufferSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.A StructType represents data types of values in the aggregation buffer.
- build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Builds and returns all combinations of parameters specified by the param grid.
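A short sketch of ParamGridBuilder.build() (the estimator choice is illustrative):

    import org.apache.spark.ml.regression.LinearRegression
    import org.apache.spark.ml.tuning.ParamGridBuilder

    val lr = new LinearRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .addGrid(lr.fitIntercept)            // boolean params expand to true/false
      .build()                             // 2 x 2 = 4 ParamMaps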
- build() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- build() - Method in class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Builder
-
Builds the stored procedure parameter.
- build() - Method in interface org.apache.spark.sql.connector.read.ScanBuilder
- build() - Method in interface org.apache.spark.sql.connector.write.DeltaWriteBuilder
- build() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationBuilder
-
Returns a RowLevelOperation that controls how Spark rewrites data for DELETE, UPDATE, MERGE commands.
- build() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
Returns a logical Write shared between batch and streaming.
- build() - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLQueryBuilder
- build() - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Build the final SQL query following the dialect's SQL syntax.
- build() - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLQueryBuilder
- build() - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLQueryBuilder
- build() - Method in class org.apache.spark.sql.jdbc.OracleDialect.OracleSQLQueryBuilder
- build() - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Builds the Metadata instance.
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- build() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
- build() - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Returns the appropriate instance of SparkAWSCredentials given the configured parameters.
- build(DecisionTreeModel, int) - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
-
Create EnsembleModelReadWrite.EnsembleNodeData instances for the given tree.
- build(Node, int) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
Create DecisionTreeModelReadWrite.NodeData instances for this node and all children.
- build(Expression) - Method in class org.apache.spark.sql.connector.util.V2ExpressionSQLBuilder
- build(Expression) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
- builder() - Static method in class org.apache.spark.sql.SparkSession
-
Creates a SparkSession.Builder for constructing a SparkSession.
- Builder() - Constructor for class org.apache.spark.sql.SparkSession.Builder
- Builder() - Constructor for class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
- buildErrorResponse(Response.Status, String) - Static method in class org.apache.spark.ui.UIUtils
- buildExecutionPlan(CompoundBody, SparkSession) - Method in class org.apache.spark.sql.scripting.SqlScriptingInterpreter
-
Build execution plan and return statements that need to be executed, wrapped in the execution node.
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Builds a function that can be used to filter batches prior to being decompressed.
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in class org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer
- buildForBatch() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
Deprecated.use WriteBuilder.build() instead.
- buildForStreaming() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
Deprecated.use WriteBuilder.build() instead.
- buildIvySettings(Option<String>, Option<String>, boolean, PrintStream) - Static method in class org.apache.spark.util.MavenUtils
-
Build Ivy Settings using options with default resolvers
- buildLocationMetadata(Seq<Path>, int) - Static method in class org.apache.spark.util.Utils
-
Convert a sequence of Paths to a metadata string.
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- buildPlanNormalizationRules(SparkSession) - Method in class org.apache.spark.sql.SparkSessionExtensions
- buildPools() - Method in interface org.apache.spark.scheduler.SchedulableBuilder
- buildReaderUnsupportedForFileFormatError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- buildScan() - Method in interface org.apache.spark.sql.sources.TableScan
- buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan
- buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan
- buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan
- buildTreeFromNodes(DecisionTreeModelReadWrite.NodeData[], String) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
Given all data for all nodes in a tree, rebuild the tree.
- BY_NAME_METADATA_KEY - Static variable in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
A field metadata key that indicates whether an argument is passed by name.
- BYTE() - Static method in class org.apache.spark.api.r.SerializationFormats
- BYTE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable byte type.
- ByteExactNumeric - Class in org.apache.spark.sql.types
- ByteExactNumeric() - Constructor for class org.apache.spark.sql.types.ByteExactNumeric
- BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.input$
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
- bytesToString(long) - Static method in class org.apache.spark.util.Utils
-
Convert a quantity in bytes to a human-readable string such as "4.0 MiB".
- bytesToString(BigInt) - Static method in class org.apache.spark.util.Utils
- byteStringAsBytes(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to bytes for internal use.
- byteStringAsGb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m, 500g) to gibibytes for internal use.
- byteStringAsKb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to kibibytes for internal use.
- byteStringAsMb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to mebibytes for internal use.
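Worked values for the byte-string helpers above (note that Utils is an internal utility class, so these calls are a sketch rather than a supported API):

    import org.apache.spark.util.Utils

    Utils.byteStringAsBytes("250m")          // 262144000
    Utils.byteStringAsMb("1g")               // 1024
    Utils.bytesToString(4L * 1024 * 1024)    // "4.0 MiB"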
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
- bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
- bytesWritten(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Notify that bytes have been written
- ByteType - Class in org.apache.spark.sql.types
-
The data type representing Byte values.
- ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the ByteType object.
- ByteType() - Constructor for class org.apache.spark.sql.types.ByteType
- ByteTypeExpression - Class in org.apache.spark.sql.types
- ByteTypeExpression() - Constructor for class org.apache.spark.sql.types.ByteTypeExpression
- BZIP2 - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
C
- cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.graphx.Graph
-
Caches the vertices and edges associated with this graph at the previously-specified target storage levels, which default to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.GraphImpl
- cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Caches the underlying RDD.
- cache() - Method in class org.apache.spark.rdd.RDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.sql.api.Dataset
-
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
- cache() - Method in class org.apache.spark.sql.Dataset
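The differing cache() defaults called out above, side by side (assumes a SparkSession spark):

    val ds = spark.range(1000000)
    ds.cache()        // Dataset default: MEMORY_AND_DISK
    ds.count()        // an action materializes the cache
    ds.rdd.cache()    // RDD default: MEMORY_ONLY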
- cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- cache() - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- CACHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- CACHED_PARTITIONS() - Static method in class org.apache.spark.ui.storage.ToolTips
- CachedBatch - Interface in org.apache.spark.sql.columnar
-
Basic interface that all cached batches of data must support.
- CachedBatchSerializer - Interface in org.apache.spark.sql.columnar
-
Provides APIs that handle transformations of SQL data associated with the cache/persist APIs.
- CacheId - Class in org.apache.spark.storage
- CacheId(String, String) - Constructor for class org.apache.spark.storage.CacheId
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassifier
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressor
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- cacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
If false, the algorithm will pass trees to executors to match instances with nodes.
- cacheSize() - Method in interface org.apache.spark.SparkExecutorInfo
- cacheSize() - Method in class org.apache.spark.SparkExecutorInfoImpl
- cacheTable(String) - Method in class org.apache.spark.sql.api.Catalog
-
Caches the specified table in-memory.
- cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
-
Caches the specified table in-memory.
- cacheTable(String, StorageLevel) - Method in class org.apache.spark.sql.api.Catalog
-
Caches the specified table with the given storage level.
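A sketch of the two cacheTable overloads (the table name is made up; assumes a SparkSession spark):

    import org.apache.spark.storage.StorageLevel

    spark.catalog.cacheTable("events")                           // default storage level
    spark.catalog.cacheTable("events", StorageLevel.DISK_ONLY)   // explicit level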
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
information calculation for multiclass classification
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
information calculation for multiclass classification
- calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
information calculation for multiclass classification
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
variance calculation
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
variance calculation
- calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
information calculation for regression
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
variance calculation
- calculateAmountAndPartsForFraction(double) - Static method in class org.apache.spark.resource.ResourceUtils
- calculateNumberOfPartitions(long, int, int) - Method in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
-
Calculate the number of partitions to use in saving the model.
- CalendarInterval - Class in org.apache.spark.unsafe.types
-
The class representing calendar intervals.
- CalendarInterval(int, int, long) - Constructor for class org.apache.spark.unsafe.types.CalendarInterval
- CalendarIntervalType - Class in org.apache.spark.sql.types
-
The data type representing calendar intervals.
- CalendarIntervalType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the CalendarIntervalType object.
- CalendarIntervalType() - Constructor for class org.apache.spark.sql.types.CalendarIntervalType
- call() - Method in interface org.apache.spark.api.java.function.Function0
- call() - Method in interface org.apache.spark.sql.api.java.UDF0
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.ForeachPartitionFunction
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.MapPartitionsFunction
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsFunction
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.MapGroupsFunction
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsWithStateFunction
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.MapGroupsWithStateFunction
- call(K, Iterator<V1>, Iterator<V2>) - Method in interface org.apache.spark.api.java.function.CoGroupFunction
- call(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.procedures.BoundProcedure
-
Executes this procedure with the given input.
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction
- call(T) - Method in interface org.apache.spark.api.java.function.FilterFunction
- call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction
- call(T) - Method in interface org.apache.spark.api.java.function.ForeachFunction
- call(T) - Method in interface org.apache.spark.api.java.function.MapFunction
- call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction
- call(T) - Method in interface org.apache.spark.api.java.function.PairFunction
- call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction
- call(T1) - Method in interface org.apache.spark.api.java.function.Function
- call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.VoidFunction2
- call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2
- call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3
- call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.api.java.function.Function4
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4
- call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5
- call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6
- call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7
- call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22
- call(T, T) - Method in interface org.apache.spark.api.java.function.ReduceFunction
- call_function(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a SQL function.
- call_function(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Call a SQL function.
- call_udf(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
- call_udf(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
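As an illustration, a minimal sketch (the app name and UDF are made up) that registers a UDF and then invokes it by name via call_udf:

    import static org.apache.spark.sql.functions.call_udf;
    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.api.java.UDF1;
    import org.apache.spark.sql.types.DataTypes;

    public class CallUdfExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .master("local[*]").appName("call_udf example").getOrCreate();
            // Register a UDF under a name, then invoke it by that name.
            spark.udf().register("plusOne", (UDF1<Long, Long>) x -> x + 1, DataTypes.LongType);
            spark.range(5).toDF("id")
                .select(call_udf("plusOne", col("id")))
                .show();
            spark.stop();
        }
    }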
- callSite() - Method in interface org.apache.spark.QueryContext
- callSite() - Method in class org.apache.spark.storage.RDDInfo
- CALLSITE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- callUDF(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
- callUDF(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use call_udf.
- cancel() - Method in interface org.apache.spark.FutureAction
-
Cancels the execution of this action.
- cancel(Option<String>) - Method in class org.apache.spark.ComplexFutureAction
- cancel(Option<String>) - Method in interface org.apache.spark.FutureAction
-
Cancels the execution of this action with an optional reason.
- cancel(Option<String>) - Method in class org.apache.spark.SimpleFutureAction
- cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelAllJobs() - Method in class org.apache.spark.SparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelJob(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJob(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs for the specified group.
- cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group.
- cancelJobGroup(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs for the specified group.
- cancelJobGroup(String, String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group.
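A runnable sketch of group-based cancellation (group id and timings are illustrative): one thread tags its jobs with a group id, and another thread cancels the whole group.

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class CancelJobGroupExample {
        public static void main(String[] args) throws InterruptedException {
            JavaSparkContext jsc = new JavaSparkContext(
                new SparkConf().setMaster("local[2]").setAppName("cancel-group"));
            Thread worker = new Thread(() -> {
                // Jobs submitted from this thread belong to the group below.
                jsc.setJobGroup("slow-group", "deliberately slow job", true);
                try {
                    jsc.parallelize(Arrays.asList(1, 2, 3, 4), 4)
                       .map(x -> { Thread.sleep(60_000); return x; })
                       .collect();
                } catch (Exception e) {
                    // Expected: the job is cancelled below.
                }
            });
            worker.start();
            Thread.sleep(2_000);               // give the job time to start
            jsc.cancelJobGroup("slow-group");  // cancels all jobs in the group
            worker.join();
            jsc.stop();
        }
    }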
- cancelJobGroupAndFutureJobs(String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group, as well as the future jobs in this job group.
- cancelJobGroupAndFutureJobs(String, String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group, as well as the future jobs in this job group.
- cancelJobsWithTag(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs that have the specified tag.
- cancelJobsWithTag(String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs that have the specified tag.
- cancelJobsWithTag(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs that have the specified tag.
- cancelJobsWithTag(String, String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs that have the specified tag.
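Tag-based cancellation works the same way; a brief sketch, assuming an existing JavaSparkContext jsc and Spark 3.5+ (where job tags were introduced; the tag name is illustrative):

    // Tag jobs submitted from the current thread.
    jsc.sc().addJobTag("nightly-report");
    // ... run actions asynchronously ...
    // Later, from any thread, cancel every active job carrying the tag:
    jsc.cancelJobsWithTag("nightly-report");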
- cancelStage(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- cancelStage(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- canCreate(String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
-
Check if this cluster manager instance can create scheduler components for a certain master URL.
- canDeleteWhere(Predicate[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
- canDeleteWhere(Predicate[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDeleteV2
-
Checks whether it is possible to delete data from a data source table that matches filter expressions.
- canDeleteWhere(Filter[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
-
Checks whether it is possible to delete data from a data source table that matches filter expressions.
- canEqual(Object) - Static method in class org.apache.spark.ExpireDeadHosts
- canEqual(Object) - Static method in class org.apache.spark.metrics.DirectPoolMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- canEqual(Object) - Static method in class org.apache.spark.metrics.JVMHeapMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.MappedPoolMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- canEqual(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.Dot
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- canEqual(Object) - Static method in class org.apache.spark.Resubmitted
- canEqual(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- canEqual(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
- canEqual(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- canEqual(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- canEqual(Object) - Static method in class org.apache.spark.sql.types.BinaryType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.BooleanType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ByteType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DateType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DateTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DoubleType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.FloatType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.IntegerType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.LongType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.LongTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.NullType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ShortType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.StringType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.StringTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampNTZType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- canEqual(Object) - Static method in class org.apache.spark.sql.types.VariantType
- canEqual(Object) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- canEqual(Object) - Static method in class org.apache.spark.StopMapOutputTracker
- canEqual(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- canEqual(Object) - Static method in class org.apache.spark.Success
- canEqual(Object) - Static method in class org.apache.spark.TaskResultLost
- canEqual(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
- canEqual(Object) - Static method in class org.apache.spark.UnknownReason
- canEqual(Object) - Method in class org.apache.spark.util.MutablePair
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Check if this dialect instance can handle a certain jdbc url.
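For example, a hedged sketch of a custom dialect for a fictional acmedb driver (the URL scheme and class name are invented):

    import org.apache.spark.sql.jdbc.JdbcDialect;
    import org.apache.spark.sql.jdbc.JdbcDialects;

    // Hypothetical dialect for a fictional "acmedb" JDBC driver.
    public class AcmeDialect extends JdbcDialect {
        @Override
        public boolean canHandle(String url) {
            // Claim URLs of the form jdbc:acmedb://host:port/db.
            return url.toLowerCase(java.util.Locale.ROOT).startsWith("jdbc:acmedb");
        }
    }

    // Registered once at startup, e.g.:
    // JdbcDialects.registerDialect(new AcmeDialect());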
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.SnowflakeDialect
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- canHandle(Driver, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
-
Checks if this connection provider instance can handle the connection initiated by the driver.
- cannotAcquireMemoryToBuildLongHashedRelationError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotAcquireMemoryToBuildUnsafeHashedRelationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotAddMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotAllocateMemoryToGrowBytesToBytesMapError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotAlterCollationBucketColumn(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotAlterPartitionColumn(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotAlterTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotAlterTempViewWithSchemaBindingError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotAlterViewWithAlterTableError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotAssignEventTimeColumn() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotBroadcastTableOverMaxTableBytesError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotBroadcastTableOverMaxTableRowsError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotBuildHashedRelationLargerThan8GError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotBuildHashedRelationWithUniqueKeysExceededError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCastError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCastFromNullTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotChangeDecimalPrecisionError(Decimal, int, int, QueryContext) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- cannotChangeDecimalPrecisionError(Decimal, int, int, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotChangeStorageLevelError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- cannotCleanReservedNamespacePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- cannotCleanReservedTablePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- cannotClearOutputDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotClearPartitionDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCloneOrCopyReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCompareCostWithTargetCostError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotConvertCatalystTypeToProtobufTypeError(Seq<String>, String, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotConvertCatalystValueToProtobufEnumTypeError(Seq<String>, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotConvertDataTypeToParquetTypeError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotConvertOrcTimestampNTZToTimestampLTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotConvertOrcTimestampToTimestampNTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotConvertProtobufTypeToCatalystTypeError(String, DataType, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotConvertProtobufTypeToSqlTypeError(String, Seq<String>, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotConvertSqlTypeToProtobufError(String, DataType, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateArrayWithElementsExceedLimitError(long, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateColumnarReaderError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateDatabaseWithSameNameAsPreservedDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateJDBCNamespaceUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateJDBCTableUsingLocationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateJDBCTableUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateJDBCTableWithPartitionsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateParquetConverterForDataTypeError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateParquetConverterForDecimalTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateParquetConverterForTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateStagingDirError(String, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotCreateTableWithBothProviderAndSerdeError(Option<String>, Option<SerdeInfo>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateTempViewUsingHiveDataSourceError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateViewNotEnoughColumnsError(TableIdentifier, Seq<String>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotCreateViewTooManyColumnsError(TableIdentifier, Seq<String>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotDeleteTableWhereFiltersError(Table, Predicate[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotDropBuiltinFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotDropDefaultDatabaseError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotDropMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotDropNonemptyDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotDropNonemptyNamespaceError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotEvaluateExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotExecuteStreamingRelationExecError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotFetchTablesOfDatabaseError(String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotFindCatalogToHandleIdentifierError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindCatalystTypeInProtobufSchemaError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindColumnError(String, String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindColumnInRelationOutputError(String, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindConstructorForTypeError(String) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotFindConstructorForTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotFindDescriptorFileError(String, Throwable) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- cannotFindDescriptorFileError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindEncoderForTypeError(String) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotFindEncoderForTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotFindPartitionColumnInPartitionSchemaError(StructField, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotFindProtobufFieldInCatalystError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotGenerateCodeForExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotGenerateCodeForIncomparableTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotGetEventTimeWatermarkError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotGetJdbcTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotGetOuterPointerForInnerClassError(Class<?>) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotGetOuterPointerForInnerClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotGetSQLConfInSchedulerEventLoopThreadError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotHaveCircularReferencesInBeanClassError(Class<?>) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotHaveCircularReferencesInBeanClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotHaveCircularReferencesInClassError(String) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotHaveCircularReferencesInClassError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotInstantiateAbstractCatalogPluginClassError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotInterpolateClassIntoCodeBlockError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotLoadClassNotOnClassPathError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotLoadClassWhenRegisteringFunctionError(String, FunctionIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotLoadStore(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotLoadUserDefinedTypeError(String, String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- cannotMergeClassWithOtherClassError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotMergeDecimalTypesWithIncompatibleScaleError(int, int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- cannotMergeIncompatibleDataTypesError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- cannotModifyValueOfSparkConfigError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotModifyValueOfStaticConfigError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotMutateReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotOperateOnHiveDataSourceFilesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotOverwritePathBeingReadFromError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotOverwriteTableThatIsBeingReadFromError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotParseDecimalError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseIntervalError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotParseJsonArraysAsStructsError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseJSONFieldError(JsonParser, JsonToken, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseJSONFieldError(String, String, JsonToken, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseStatisticAsPercentileError(String, NumberFormatException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseStringAsDataTypeError(String, String, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotParseStringAsDataTypeError(String, String, DataType) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotParseValueTypeError(String, String, SqlBaseParser.TypeConstructorContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- cannotPassTypedColumnInUntypedSelectError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotPurgeAsBreakInternalStateError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotReadCheckpoint(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotReadFilesError(Throwable, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotReadFooterForFileError(Path, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotRecognizeHiveTypeError(ParseException, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotRefreshBuiltInFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRefreshTempFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRemovePartitionDirError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotRemoveReservedPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotRenameTableAcrossSchemaError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRenameTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRenameTempViewToExistingTableError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRenameTempViewWithDatabaseSpecifiedError(TableIdentifier, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotReplaceMissingTableError(Identifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotReplaceMissingTableError(Identifier, Option<Throwable>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveAttributeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveColumnGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveColumnNameAmongAttributesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveDataFrameColumn(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveStarExpandGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotResolveWindowReferenceError(String, SqlBaseParser.WindowClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- cannotRestorePermissionsForPathError(FsPermission, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotRetrieveTableOrViewNotInSameDatabaseError(Seq<QualifiedTableName>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotRunSubmitMapStageOnZeroPartitionRDDError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- cannotSafelyMergeSerdePropertiesError(Map<String, String>, Map<String, String>, Set<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotSaveBlockOnDecommissionedExecutorError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- cannotSaveIntervalIntoExternalStorageError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotSetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotSetTimeoutDurationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotSetTimeoutTimestampError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotSpecifyBothJdbcTableNameAndQueryError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotSpecifyDatabaseForTempViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotSpecifyWindowFrameError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotTerminateGeneratorError(UnresolvedGenerator) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotTranslateExpressionToSourceFilterError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotTranslateNonNullValueForFieldError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotUnsetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotUseAllColumnsForPartitionColumnsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotUseIntervalTypeInTableSchemaError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotUseInvalidJavaIdentifierAsFieldNameError(String, WalkedTypePath) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotUseInvalidJavaIdentifierAsFieldNameError(String, WalkedTypePath) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotUseKryoSerialization() - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- cannotUseKryoSerialization() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- cannotUseMapSideCombiningWithArrayKeyError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- cannotUsePreservedDatabaseAsCurrentDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotWriteDataToRelationsWithMultiplePathsError(Seq<Path>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotWriteNotEnoughColumnsToTableError(String, Seq<String>, Seq<Attribute>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cannotWriteTooManyColumnsToTableError(String, Seq<String>, Seq<Attribute>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- canonicalName() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
-
Returns the canonical name of this function, used to determine if functions are equivalent.
- canonicalName() - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- canonicalName() - Method in class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- CanonicalRandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
- canOnlyZipRDDsWithSamePartitionSizeError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- canOverwrite(Predicate[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
- canOverwrite(Predicate[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwriteV2
-
Checks whether it is possible to overwrite data from a data source table that matches filter expressions.
- canOverwrite(Filter[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
-
Checks whether it is possible to overwrite data from a data source table that matches filter expressions.
- canRenameConflictingMetadataColumns() - Method in interface org.apache.spark.sql.connector.catalog.SupportsMetadataColumns
-
Determines how this data source handles name conflicts between metadata and data columns.
- canUpCast(DataType, DataType) - Static method in class org.apache.spark.sql.types.UpCastRule
-
Returns true iff we can safely up-cast the from type to the to type without any truncation, precision loss, or possible runtime failures.
- capabilities() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- capabilities() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
Returns the set of capabilities for this table.
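A minimal sketch of a custom Table declaring its capabilities (names are illustrative; a real read path would also implement SupportsRead):

    import java.util.Collections;
    import java.util.Set;
    import org.apache.spark.sql.connector.catalog.Table;
    import org.apache.spark.sql.connector.catalog.TableCapability;
    import org.apache.spark.sql.types.StructType;

    // Hypothetical read-only table.
    public class MyTable implements Table {
        @Override public String name() { return "my_table"; }
        @Override public StructType schema() { return new StructType().add("id", "long"); }
        @Override public Set<TableCapability> capabilities() {
            // Spark consults this set to decide which operations the table supports.
            return Collections.singleton(TableCapability.BATCH_READ);
        }
    }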
- capabilities() - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
- cardinality() - Method in class org.apache.spark.util.sketch.BloomFilter
- cardinality(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the length of an array or map.
- cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
- cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
- CaseInsensitiveStringMap - Class in org.apache.spark.sql.util
-
Case-insensitive map of string keys to string values.
- CaseInsensitiveStringMap(Map<String, String>) - Constructor for class org.apache.spark.sql.util.CaseInsensitiveStringMap
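For example (the option key and path are made up):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.spark.sql.util.CaseInsensitiveStringMap;

    public class CimExample {
        public static void main(String[] args) {
            Map<String, String> raw = new HashMap<>();
            raw.put("Path", "/tmp/data");
            CaseInsensitiveStringMap options = new CaseInsensitiveStringMap(raw);
            // Key lookups ignore case:
            System.out.println(options.get("path"));          // /tmp/data
            System.out.println(options.containsKey("PATH"));  // true
        }
    }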
- caseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
Whether to do a case sensitive comparison over the stop words.
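A brief sketch (column names are illustrative):

    import org.apache.spark.ml.feature.StopWordsRemover;

    StopWordsRemover remover = new StopWordsRemover()
        .setInputCol("raw")
        .setOutputCol("filtered")
        .setCaseSensitive(false);  // the default; "The" and "the" are both removed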
- CaseStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for CaseStatement.
- CaseStatementExec(Seq<SingleStatementExec>, Seq<CompoundBodyExec>, Option<CompoundBodyExec>, SparkSession) - Constructor for class org.apache.spark.sql.scripting.CaseStatementExec
- cast(String) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type, using the canonical string representation of the type.
- cast(DataType) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type.
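Both overloads behave the same; a fragment assuming an existing Dataset<Row> df with an id column:

    import static org.apache.spark.sql.functions.col;
    import org.apache.spark.sql.types.DataTypes;

    // The two casts below are equivalent; one names the type, one passes it.
    df.select(col("id").cast("string"), col("id").cast(DataTypes.StringType)).show();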
- Cast - Class in org.apache.spark.sql.connector.expressions
-
Represents a cast expression in the public logical expression API.
- Cast(Expression, DataType) - Constructor for class org.apache.spark.sql.connector.expressions.Cast
-
Deprecated.
- Cast(Expression, DataType, DataType) - Constructor for class org.apache.spark.sql.connector.expressions.Cast
- castingCauseOverflowError(Object, DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- castingCauseOverflowError(String, DataType, DataType) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- castingCauseOverflowErrorInTableInsert(DataType, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- castPartitionSpec(String, DataType, SQLConf) - Static method in class org.apache.spark.sql.util.PartitioningUtils
- catalog() - Method in class org.apache.spark.sql.api.SparkSession
-
Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc.
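A few typical calls, assuming an existing SparkSession spark (the table name is hypothetical):

    // Browse and query catalog metadata.
    spark.catalog().listDatabases().show();
    spark.catalog().listTables().show();
    boolean exists = spark.catalog().tableExists("my_table");  // hypothetical name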
- catalog() - Method in class org.apache.spark.sql.catalog.Database
- catalog() - Method in class org.apache.spark.sql.catalog.Function
- catalog() - Method in class org.apache.spark.sql.catalog.Table
- catalog() - Method in class org.apache.spark.sql.SparkSession
- Catalog - Class in org.apache.spark.sql.api
-
Catalog interface for Spark.
- Catalog - Class in org.apache.spark.sql.catalog
- Catalog() - Constructor for class org.apache.spark.sql.api.Catalog
- Catalog() - Constructor for class org.apache.spark.sql.catalog.Catalog
- CatalogAndIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier
- CatalogAndIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- CatalogAndIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier$
- CatalogAndMultipartIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- CatalogAndNamespace() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace
- CatalogAndNamespace() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- CatalogAndNamespace$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
- CatalogExtension - Interface in org.apache.spark.sql.connector.catalog
-
An API to extend the Spark built-in session catalog.
- catalogFailToCallPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- catalogFailToFindPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- CatalogHelper(CatalogPlugin) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- catalogManager() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- CatalogMetadata - Class in org.apache.spark.sql.catalog
-
A catalog in Spark, as returned by the listCatalogs method defined in Catalog.
- CatalogMetadata(String, String) - Constructor for class org.apache.spark.sql.catalog.CatalogMetadata
- catalogNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- CatalogNotFoundException - Exception in org.apache.spark.sql.connector.catalog
- CatalogNotFoundException(String, Map<String, String>) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
- CatalogNotFoundException(String, Map<String, String>, Throwable) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
- catalogOperationNotSupported(CatalogPlugin, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- CatalogPlugin - Interface in org.apache.spark.sql.connector.catalog
-
A marker interface to provide a catalog implementation for Spark.
- catalogPluginClassNotFoundForCatalogError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- catalogPluginClassNotImplementedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Catalogs - Class in org.apache.spark.sql.connector.catalog
- Catalogs() - Constructor for class org.apache.spark.sql.connector.catalog.Catalogs
- catalogString() - Method in class org.apache.spark.sql.types.ArrayType
- catalogString() - Static method in class org.apache.spark.sql.types.BinaryType
- catalogString() - Static method in class org.apache.spark.sql.types.BooleanType
- catalogString() - Static method in class org.apache.spark.sql.types.ByteType
- catalogString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- catalogString() - Method in class org.apache.spark.sql.types.DataType
-
String representation for the type saved in external catalogs.
- catalogString() - Static method in class org.apache.spark.sql.types.DateType
- catalogString() - Static method in class org.apache.spark.sql.types.DoubleType
- catalogString() - Static method in class org.apache.spark.sql.types.FloatType
- catalogString() - Static method in class org.apache.spark.sql.types.IntegerType
- catalogString() - Static method in class org.apache.spark.sql.types.LongType
- catalogString() - Method in class org.apache.spark.sql.types.MapType
- catalogString() - Static method in class org.apache.spark.sql.types.NullType
- catalogString() - Static method in class org.apache.spark.sql.types.ShortType
- catalogString() - Static method in class org.apache.spark.sql.types.StringType
- catalogString() - Method in class org.apache.spark.sql.types.StructType
- catalogString() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- catalogString() - Static method in class org.apache.spark.sql.types.TimestampType
- catalogString() - Method in class org.apache.spark.sql.types.UserDefinedType
- catalogString() - Static method in class org.apache.spark.sql.types.VariantType
- CatalogV2Implicits - Class in org.apache.spark.sql.connector.catalog
-
Conversion helpers for working with v2 CatalogPlugin.
- CatalogV2Implicits() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits
- CatalogV2Implicits.BucketSpecHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.CatalogHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.ClusterByHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.ColumnsHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.FunctionIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.IdentifierHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.MultipartIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.NamespaceHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.PartitionTypeHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.TableIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Implicits.TransformHelper - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Util - Class in org.apache.spark.sql.connector.catalog
- CatalogV2Util() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Util
- CatalystScan - Interface in org.apache.spark.sql.sources
-
::Experimental:: An interface for experimenting with a more direct connection to the query planner.
- Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- categoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
-
Numeric columns to treat as categorical features.
- categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- CategoricalSplit - Class in org.apache.spark.ml.tree
-
Split which tests a categorical feature.
- categories() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- categories() - Method in class org.apache.spark.mllib.tree.model.Split
- categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- categorySizes() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- cause() - Method in exception org.apache.spark.sql.AnalysisException
- cause() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- CausedBy - Class in org.apache.spark.util
-
Extractor Object for pulling out the root cause of an error.
- CausedBy() - Constructor for class org.apache.spark.util.CausedBy
- cbrt(String) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given column.
- cbrt(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given value.
- ceil() - Method in class org.apache.spark.sql.types.Decimal
- ceil(String) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to 0 decimal places.
- ceil(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to 0 decimal places.
- ceil(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to scale decimal places.
- ceiling(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to 0 decimal places.
- ceiling(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to scale decimal places.
- censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- censorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Param for censor column name.
- centerMatrix() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, T, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<U>>, Function0<Parsers.Parser<Function2<T, U, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- chainr1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, U, U>>>, Function2<T, U, U>, U) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Update precision and scale while keeping our value the same, and return true if successful.
- channel() - Method in interface org.apache.spark.shuffle.api.WritableByteChannelWrapper
-
The underlying channel to write bytes into.
- channelRead0(ChannelHandlerContext, byte[]) - Method in class org.apache.spark.api.r.RBackendAuthHandler
- char_length(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the character length of string data or number of bytes of binary data.
- character_length(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the character length of string data or number of bytes of binary data.
- charOrVarcharTypeAsStringUnsupportedError() - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- CharType - Class in org.apache.spark.sql.types
- CharType(int) - Constructor for class org.apache.spark.sql.types.CharType
- charTypeMissingLengthError(String, SqlBaseParser.PrimitiveDataTypeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- checkAndGetK8sMasterUrl(String) - Static method in class org.apache.spark.util.Utils
-
Check the validity of the given Kubernetes master URL and return the resolved URL.
- checkColumnNameDuplication(Seq<String>, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if input column names have duplicate identifiers.
- checkColumnNameDuplication(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if input column names have duplicate identifiers.
- checkColumnType(StructType, String, DataType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the required data type.
- checkColumnTypes(StructType, String, Seq<DataType>, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of one of the required data types.
- checkCommandAvailable(String) - Static method in class org.apache.spark.util.Utils
-
Check if a command is available.
- checkDataColumns(RFormula, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
-
DataFrame column check.
- checkFileExists(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
Check if the file exists at the given path.
- checkHost(String) - Static method in class org.apache.spark.util.Utils
-
Checks if the host contains only a valid hostname/IP without a port. NOTE: In case of an IPv6 IP, it should be enclosed inside [].
- checkHostPort(String) - Static method in class org.apache.spark.util.Utils
- checkIntegers(Dataset<?>, String) - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
-
Attempts to safely cast a user/item id to an Int.
- checkNumericType(StructType, String, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the numeric data type.
- checkOffHeapEnabled(SparkConf, long) - Static method in class org.apache.spark.util.Utils
-
Return 0 if MEMORY_OFFHEAP_ENABLED is false.
- checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Mark this RDD for checkpointing.
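A runnable sketch (the checkpoint directory is illustrative); note that checkpoint() only marks the RDD, and the data is written on the next action:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class CheckpointExample {
        public static void main(String[] args) {
            JavaSparkContext jsc = new JavaSparkContext(
                new SparkConf().setMaster("local[*]").setAppName("checkpoint"));
            jsc.setCheckpointDir("/tmp/spark-checkpoints");  // must be set beforehand
            JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
            rdd.checkpoint();  // marks the RDD; files are written on the next action
            rdd.count();
            jsc.stop();
        }
    }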
- checkpoint() - Method in class org.apache.spark.graphx.Graph
-
Mark this Graph for checkpointing.
- checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
- checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
- checkpoint() - Method in class org.apache.spark.rdd.RDD
-
Mark this RDD for checkpointing.
- checkpoint() - Method in class org.apache.spark.sql.api.Dataset
-
Eagerly checkpoint a Dataset and return the new Dataset.
- checkpoint() - Method in class org.apache.spark.sql.Dataset
- checkpoint(boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a checkpointed version of this Dataset.
- checkpoint(boolean) - Method in class org.apache.spark.sql.Dataset
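A fragment contrasting the eager and lazy forms, assuming a SparkSession with a checkpoint directory set and an existing Dataset<Row> df:

    // Assumes spark.sparkContext().setCheckpointDir(...) was called earlier.
    Dataset<Row> eager = df.checkpoint();         // runs a job now and materializes df
    Dataset<Row> deferred = df.checkpoint(false); // materialized on first use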
- checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Sets the context to periodically checkpoint the DStream operations for master fault-tolerance.
- checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Set the context to periodically checkpoint the DStream operations for driver fault-tolerance.
- checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Enable periodic checkpointing of RDDs of this DStream.
- checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Enable periodic checkpointing of RDDs of this DStream.
- checkpointCleaned(long) - Method in interface org.apache.spark.CleanerListener
- checkpointDirectoryHasNotBeenSetInSparkContextError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- Checkpointed() - Static method in class org.apache.spark.rdd.CheckpointState
- checkpointFailedToSaveError(int, Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
- CheckpointingInProgress() - Static method in class org.apache.spark.rdd.CheckpointState
- checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassifier
- checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDA
- checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDAModel
- checkpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval
-
Param to set the checkpoint interval (>= 1) or disable checkpointing (-1).
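For example, on ALS (the values are illustrative):

    import org.apache.spark.ml.recommendation.ALS;

    ALS als = new ALS()
        .setMaxIter(20)
        .setCheckpointInterval(5);  // checkpoint every 5 iterations; -1 disables
    // Takes effect only if SparkContext.setCheckpointDir(...) has been set.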
- checkpointInterval() - Method in class org.apache.spark.ml.recommendation.ALS
- checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressor
- checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- checkpointLocationNotSpecifiedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- checkpointRDDBlockIdNotFoundError(RDDBlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- checkpointRDDHasDifferentNumberOfPartitionsFromOriginalRDDError(int, int, int, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
- CheckpointReader - Class in org.apache.spark.streaming
- CheckpointReader() - Constructor for class org.apache.spark.streaming.CheckpointReader
- CheckpointState - Class in org.apache.spark.rdd
-
Enumeration to manage state transitions of an RDD through checkpointing.
- CheckpointState() - Constructor for class org.apache.spark.rdd.CheckpointState
- checkSchemaColumnNameDuplication(DataType, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if an input schema has duplicate column names.
- checkSchemaColumnNameDuplication(StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if an input schema has duplicate column names.
- checkSingleVsMultiColumnParams(Params, Seq<Param<?>>, Seq<Param<?>>) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Utility for Param validity checks for Transformers which have both single- and multi-column support.
- checkSpeculatableTasks(long) - Method in interface org.apache.spark.scheduler.Schedulable
- checkState(boolean, Function0<String>) - Static method in class org.apache.spark.streaming.util.HdfsUtils
- checkThresholdConsistency() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
If threshold and thresholds are both set, ensures they are consistent.
-
Checks if the partitioning transforms are being duplicated or not.
- checkUIViewPermissions() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
- checkUIViewPermissions(String, Option<String>, String) - Method in interface org.apache.spark.status.api.v1.UIRoot
- child() - Method in class org.apache.spark.sql.connector.expressions.filter.Not
- child() - Method in class org.apache.spark.sql.sources.Not
- CHILD_CLUSTERS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Maximum time (in ms) to wait for a child process to connect back to the launcher server when using start().
- CHILD_NODES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- CHILD_PROCESS_LOGGER_NAME - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Logger name to use when launching a child process.
- ChildFirstURLClassLoader - Class in org.apache.spark.util
-
A mutable class loader that gives preference to its own URLs over the parent class loader when loading classes and resources.
- ChildFirstURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.ChildFirstURLClassLoader
- ChildFirstURLClassLoader(URL[], ClassLoader, ClassLoader) - Constructor for class org.apache.spark.util.ChildFirstURLClassLoader
-
Specify the grandparent if there is a need to load in the order of `grandparent -> urls (child) -> parent`.
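A sketch (the jar path and class name are hypothetical):

    import java.net.URL;
    import org.apache.spark.util.ChildFirstURLClassLoader;

    public class LoaderExample {
        public static void main(String[] args) throws Exception {
            URL[] jars = { new URL("file:/tmp/my-lib.jar") };  // hypothetical jar
            ClassLoader parent = Thread.currentThread().getContextClassLoader();
            ChildFirstURLClassLoader loader = new ChildFirstURLClassLoader(jars, parent);
            // Classes found in my-lib.jar win over the parent's copies.
            Class<?> cls = loader.loadClass("com.example.MyClass");  // hypothetical class
            System.out.println(cls.getName());
        }
    }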
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Avg
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Count
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.CountStar
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Max
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Min
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Sum
- children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- children() - Method in class org.apache.spark.sql.connector.expressions.Cast
- children() - Method in interface org.apache.spark.sql.connector.expressions.Expression
-
Returns an array of the children of this node.
- children() - Method in class org.apache.spark.sql.connector.expressions.Extract
- children() - Method in class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
- children() - Method in interface org.apache.spark.sql.connector.expressions.Literal
- children() - Method in interface org.apache.spark.sql.connector.expressions.NamedReference
- children() - Method in interface org.apache.spark.sql.connector.expressions.SortOrder
- children() - Method in interface org.apache.spark.sql.connector.expressions.Transform
- children() - Method in class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- chiSqFunc() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
- ChiSqSelector - Class in org.apache.spark.ml.feature
-
Deprecated.use UnivariateFeatureSelector instead. Since 3.1.1.
- ChiSqSelector - Class in org.apache.spark.mllib.feature
-
Creates a ChiSquared feature selector.
- ChiSqSelector() - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- ChiSqSelector() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
- ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
-
This is equivalent to calling this() followed by setNumTopFeatures(numTopFeatures).
- ChiSqSelector(String) - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
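A minimal Scala sketch of the spark.mllib selector above; labeledData is an assumed RDD[LabeledPoint] with categorical features:
    import org.apache.spark.mllib.feature.ChiSqSelector
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.rdd.RDD

    // Keep the 50 features most predictive of the label, then project
    // each point onto the selected features.
    def selectTop50(labeledData: RDD[LabeledPoint]): RDD[LabeledPoint] = {
      val model = new ChiSqSelector().setNumTopFeatures(50).fit(labeledData)
      labeledData.map(p => LabeledPoint(p.label, model.transform(p.features)))
    }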
- ChiSqSelectorModel - Class in org.apache.spark.ml.feature
-
Model fitted by ChiSqSelector.
- ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
-
Chi Squared selector model.
- ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
- ChiSqSelectorModel.ChiSqSelectorModelWriter - Class in org.apache.spark.ml.feature
- ChiSqSelectorModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.feature
- ChiSqSelectorModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.feature
-
Model data for import/export
- ChiSqSelectorModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.feature
- ChiSqSelectorModelWriter(ChiSqSelectorModel) - Constructor for class org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
- chiSqTest(JavaRDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of chiSqTest().
- chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test on the input contingency matrix, which must not contain negative entries, or columns or rows that sum to 0.
- chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform distribution, with each category having an expected frequency of 1 / observed.size.
- chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the expected distribution.
- chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test for every feature against the label across the input RDD.
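A short Scala sketch of the two spark.mllib variants above, on toy counts:
    import org.apache.spark.mllib.linalg.{Matrices, Vectors}
    import org.apache.spark.mllib.stat.Statistics

    // Goodness-of-fit: observed counts tested against the uniform distribution.
    val observed = Vectors.dense(10.0, 20.0, 30.0, 40.0)
    val gof = Statistics.chiSqTest(observed)
    println(s"p-value: ${gof.pValue}")

    // Independence: a 2x2 contingency matrix (column-major entries; no
    // negative values, no all-zero rows or columns).
    val contingency = Matrices.dense(2, 2, Array(10.0, 20.0, 30.0, 40.0))
    val independence = Statistics.chiSqTest(contingency)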
- ChiSqTest - Class in org.apache.spark.mllib.stat.test
-
Conduct the chi-squared test for the input RDDs using the specified method.
- ChiSqTest() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest
- ChiSqTest.Method - Class in org.apache.spark.mllib.stat.test
-
param: name String name for the method.
- ChiSqTest.Method$ - Class in org.apache.spark.mllib.stat.test
- ChiSqTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
- ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
-
Object containing the test results for the chi-squared hypothesis test.
- chiSquared(Vector, Vector, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- chiSquaredFeatures(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
Conduct Pearson's independence test for each feature against the label across the input RDD.
- chiSquaredMatrix(Matrix, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- ChiSquareTest - Class in org.apache.spark.ml.stat
-
Chi-square hypothesis testing for categorical data.
- ChiSquareTest() - Constructor for class org.apache.spark.ml.stat.ChiSquareTest
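A minimal Scala sketch of the DataFrame-based test above; spark is an assumed existing SparkSession and the rows are toy data:
    import org.apache.spark.ml.linalg.{Vector, Vectors}
    import org.apache.spark.ml.stat.ChiSquareTest

    val df = spark.createDataFrame(Seq(
      (0.0, Vectors.dense(0.5, 10.0)),
      (0.0, Vectors.dense(1.5, 20.0)),
      (1.0, Vectors.dense(1.5, 30.0))
    )).toDF("label", "features")

    // One p-value, degrees-of-freedom and statistic per feature column.
    val row = ChiSquareTest.test(df, "features", "label").head()
    println(row.getAs[Vector]("pValues"))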
- chmod700(File) - Static method in class org.apache.spark.util.Utils
-
JDK equivalent of chmod 700 file.
- CholeskyDecomposition - Class in org.apache.spark.mllib.linalg
-
Compute Cholesky decomposition.
- CholeskyDecomposition() - Constructor for class org.apache.spark.mllib.linalg.CholeskyDecomposition
- chr(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the ASCII character having the binary equivalent to n.
- chunkId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
- classDoesNotImplementUserDefinedAggregateFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- classForName(String, boolean, boolean) - Method in interface org.apache.spark.util.SparkClassUtils
-
Preferred alternative to Class.forName(className), as well as Class.forName(className, initialize, loader), using the current thread's ContextClassLoader.
- classForName(String, boolean, boolean) - Static method in class org.apache.spark.util.Utils
- classForName$default$2() - Static method in class org.apache.spark.util.Utils
- classForName$default$3() - Static method in class org.apache.spark.util.Utils
- classHasUnexpectedSerializerError(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- ClassificationLoss - Interface in org.apache.spark.mllib.tree.loss
- ClassificationModel<FeaturesType, M extends ClassificationModel<FeaturesType, M>> - Class in org.apache.spark.ml.classification
-
Model produced by a Classifier.
- ClassificationModel - Interface in org.apache.spark.mllib.classification
-
Represents a classification model that predicts to which of a set of categories an example belongs.
- ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
- ClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multiclass classification results for a given model.
- classifier() - Method in class org.apache.spark.ml.classification.OneVsRest
- classifier() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- classifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams
-
Param for the base binary classifier to which multiclass classification is reduced.
- Classifier<FeaturesType, E extends Classifier<FeaturesType, E, M>, M extends ClassificationModel<FeaturesType, M>> - Class in org.apache.spark.ml.classification
-
Single-label binary or multiclass classification.
- Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
- ClassifierParams - Interface in org.apache.spark.ml.classification
-
(private[spark]) Params for classification.
- ClassifierTypeTrait - Interface in org.apache.spark.ml.classification
- classifyException(String, Throwable) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Deprecated.Please override the classifyException method with an error class. Since 4.0.0.
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Gets a dialect exception, classifies it and wraps it in an AnalysisException.
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in interface org.apache.spark.sql.jdbc.NoLegacyJDBCError
- classifyException(Throwable, String, Map<String, String>, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- classifyException(Throwable, String, Map<String, String>, String) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
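A hedged Scala sketch of overriding the four-argument variant in a custom dialect; MyDialect and its URL prefix are hypothetical, and the override simply defers to the default classification:
    import org.apache.spark.sql.AnalysisException
    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

    object MyDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

      // Inspect the driver's exception here and map it to a more precise
      // error class; this sketch just falls back to the default behavior.
      override def classifyException(
          e: Throwable,
          errorClass: String,
          messageParameters: Map[String, String],
          description: String): AnalysisException =
        super.classifyException(e, errorClass, messageParameters, description)
    }

    JdbcDialects.registerDialect(MyDialect)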
- classIsLoadable(String) - Method in interface org.apache.spark.util.SparkClassUtils
-
Determines whether the provided class is loadable in the current thread.
- classIsLoadable(String) - Static method in class org.apache.spark.util.Utils
- classIsLoadableAndAssignableFrom(String, Class<?>) - Method in interface org.apache.spark.util.SparkClassUtils
-
Determines whether the provided class is loadable in the current thread and assignable from the target class.
- classIsLoadableAndAssignableFrom(String, Class<?>) - Static method in class org.apache.spark.util.Utils
- classloader() - Method in class org.apache.spark.sql.artifact.ArtifactManager
-
Returns a ClassLoader for session-specific jar/class file resources.
- className() - Method in class org.apache.spark.ExceptionFailure
- className() - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Unique class name for identifying the JSON object encoded by this class.
- className() - Method in class org.apache.spark.sql.catalog.Function
- CLASSPATH_ENTRIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- classpathEntries() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
- classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
- classTag() - Method in class org.apache.spark.api.java.JavaRDD
- classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
- classTag() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
- classTag() - Method in interface org.apache.spark.storage.memory.MemoryEntry
- classTag() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
- classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
- classUnsupportedByMapObjectsError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- classWithoutPublicNonArgumentConstructorError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Clean all the records that are older than the threshold time.
- clean(Object, boolean, boolean) - Static method in class org.apache.spark.util.SparkClosureCleaner
-
Clean the given closure in place.
- CleanAccum - Class in org.apache.spark
- CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
- CleanBroadcast - Class in org.apache.spark
- CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
- CleanCheckpoint - Class in org.apache.spark
- CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
- CleanerListener - Interface in org.apache.spark
-
Listener class used when any item has been cleaned by the Cleaner class.
- cleaning() - Method in class org.apache.spark.status.LiveStage
- CleanRDD - Class in org.apache.spark
- CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
- CleanShuffle - Class in org.apache.spark
- CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
- cleanShuffleDependencies(boolean) - Method in class org.apache.spark.rdd.RDD
-
Removes an RDD's shuffles and its non-persisted ancestors.
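A small Scala sketch; sc is an assumed existing SparkContext:
    // Shuffle once, cache the result, and materialize it.
    val counts = sc.parallelize(Seq("a", "b", "a")).map((_, 1)).reduceByKey(_ + _)
    counts.cache()
    counts.count()

    // The cached result no longer needs its shuffle inputs; release them.
    // blocking = true waits for the cleanup to complete.
    counts.cleanShuffleDependencies(blocking = true)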
- CleanSparkListener - Class in org.apache.spark
- CleanSparkListener(SparkListener) - Constructor for class org.apache.spark.CleanSparkListener
- cleanupApplication() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Called once at the end of the Spark application to clean up any existing shuffle state.
- cleanupOldBlocks(long) - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockHandler
-
Clean up old blocks older than the given threshold time.
- cleanUpSourceFilesUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- CleanupTask - Interface in org.apache.spark
-
Classes that represent cleaning tasks.
- CleanupTaskWeakReference - Class in org.apache.spark
-
A WeakReference associated with a CleanupTask.
- CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
- clear() - Method in interface org.apache.spark.sql.streaming.ListState
-
Removes this state for the given grouping key.
- clear() - Method in interface org.apache.spark.sql.streaming.MapState
-
Remove this state.
- clear() - Method in interface org.apache.spark.sql.streaming.ValueState
-
Remove this state.
- clear() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- clear() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
Removes all the registered QueryExecutionListeners.
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- clear() - Static method in class org.apache.spark.util.AccumulatorContext
-
Clears all registered AccumulatorV2s.
- clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Clears the user-supplied value for the input param.
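For example, in Scala:
    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression().setMaxIter(50)
    lr.clear(lr.maxIter)          // drop the user-supplied value
    assert(lr.getMaxIter == 100)  // falls back to the default of 100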
- clearAccumulatorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
int64 accumulator_id = 2;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- clearActive() - Static method in class org.apache.spark.sql.SQLContext
-
Deprecated.Use SparkSession.clearActiveSession instead. Since 2.0.0.
- clearActiveSession() - Static method in class org.apache.spark.sql.SparkSession
-
Clears the active SparkSession for the current thread.
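A minimal Scala sketch; the local-mode session below is only for illustration:
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[1]").getOrCreate()
    SparkSession.setActiveSession(spark)

    // After clearing, getActiveSession no longer returns this session.
    SparkSession.clearActiveSession()
    assert(SparkSession.getActiveSession.isEmpty)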
- clearActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 active_tasks = 9;
- clearAddress() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- clearAddresses() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- clearAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 add_time = 20;
- clearAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int64 add_time = 5;
- clearAllRemovalsTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_removals_time_ms = 6;
- clearAllUpdatesTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_updates_time_ms = 4;
- clearAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
int64 amount = 2;
- clearAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
double amount = 2;
- clearAppSparkVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- clearAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 attempt = 3;
- clearAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 attempt = 3;
- clearAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- clearAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 attempt_id = 3;
- clearAttempts() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- clearAttributes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clearBarrier() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool barrier = 4;
- clearBatchDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_duration = 6;
- clearBatchId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_id = 5;
- clearBlacklistedInStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- clearBlockName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- clearBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_read = 18;
- clearBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- clearBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 bytes_read = 1;
- clearBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_written = 20;
- clearBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- clearBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 bytes_written = 1;
- clearBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 bytes_written = 1;
- clearCache() - Method in class org.apache.spark.sql.api.Catalog
-
Removes all cached tables from the in-memory cache.
- clearCache() - Method in class org.apache.spark.sql.SQLContext
-
Removes all cached tables from the in-memory cache.
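For example, in Scala (spark is an assumed existing SparkSession):
    spark.range(100).createOrReplaceTempView("nums")
    spark.catalog.cacheTable("nums")

    // Drops every cached table from the in-memory cache, not just "nums".
    spark.catalog.clearCache()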
- clearCached() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool cached = 3;
- clearCallsite() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- clearCallSite() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Pass-through to SparkContext.clearCallSite.
- clearCallSite() - Method in class org.apache.spark.SparkContext
-
Clear the thread-local property for overriding the call sites of actions and RDDs.
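A short Scala sketch; sc is an assumed existing SparkContext:
    // Jobs launched after setCallSite show this label in the UI and logs.
    sc.setCallSite("nightly-aggregation")
    sc.parallelize(1 to 100).sum()

    // Restore the default behavior of inferring call sites.
    sc.clearCallSite()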
- clearChildClusters() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- clearChildNodes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- clearClasspathEntries() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- clearCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- clearCommitTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 commit_time_ms = 7;
- clearCompleted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
bool completed = 7;
- clearCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 completed_tasks = 11;
- clearCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 completion_time = 5;
- clearCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional int64 completion_time = 9;
- clearCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 completion_time = 12;
- clearCoresGranted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_granted = 3;
- clearCoresPerExecutor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_per_executor = 5;
- clearCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- clearCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 corrupt_merged_block_chunks = 1;
- clearCurrentDefaultValue() - Method in class org.apache.spark.sql.types.StructField
-
Clears the StructField of its current default value, if any.
- clearCustomMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- clearDataDistribution() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- clearDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
-
Clears the default SparkSession that is returned by the builder.
- clearDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
- clearDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
- clearDependencies() - Method in class org.apache.spark.rdd.UnionRDD
- clearDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- clearDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- clearDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- clearDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- clearDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- clearDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- clearDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- clearDeserialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool deserialized = 7;
- clearDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- clearDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- clearDiscoveryScript() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double disk_bytes_spilled = 17;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 disk_bytes_spilled = 14;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 disk_bytes_spilled = 22;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 disk_bytes_spilled = 24;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- clearDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 disk_bytes_spilled = 9;
- clearDiskSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 disk_size = 9;
- clearDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 disk_used = 6;
- clearDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 disk_used = 4;
- clearDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 disk_used = 4;
- clearDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 disk_used = 7;
- clearDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 duration = 5;
- clearDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double duration = 5;
- clearDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 duration = 7;
- clearDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 duration = 7;
- clearDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- clearDurationMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clearEdges() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- clearEdges() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- clearEndOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- clearEndTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 end_time = 3;
- clearEndTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional int64 end_timestamp = 7;
- clearErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- clearErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- clearErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- clearEventTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clearException() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- clearExcludedInStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- clearExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
int64 execution_id = 1;
- clearExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 execution_id = 1;
- clearExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_cpu_time = 9;
- clearExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_cpu_time = 17;
- clearExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_cpu_time = 19;
- clearExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- clearExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_cpu_time = 4;
- clearExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_cpu_time = 7;
- clearExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_cpu_time = 15;
- clearExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_cpu_time = 17;
- clearExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- clearExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_cpu_time = 2;
- clearExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_time = 6;
- clearExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_time = 14;
- clearExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_time = 16;
- clearExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- clearExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_time = 1;
- clearExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- clearExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- clearExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- clearExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- clearExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clearExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- clearExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- clearExecutorMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- clearExecutorResourceRequests() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- clearExecutorResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clearExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_run_time = 8;
- clearExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_run_time = 16;
- clearExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_run_time = 18;
- clearExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- clearExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_run_time = 3;
- clearExecutors() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- clearExecutorSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clearFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- clearFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 failed_tasks = 2;
- clearFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 failed_tasks = 10;
- clearFailureReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- clearFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- clearFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 fetch_wait_time = 3;
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- clearField(Descriptors.FieldDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- clearFirstTaskLaunchedTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 first_task_launched_time = 11;
- clearFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 from_id = 1;
- clearFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 from_id = 1;
- clearGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double getting_result_time = 13;
- clearGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 getting_result_time = 18;
- clearGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- clearHadoopProperties() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- clearHasMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool has_metrics = 15;
- clearHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- clearHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- clearHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- clearHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- clearHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
int64 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
int32 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
int32 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
int64 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
int64 id = 1;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- clearId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- clearIncomingEdges() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- clearIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 index = 2;
- clearIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 index = 2;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- clearInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- clearInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- clearInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_bytes = 5;
- clearInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_bytes = 24;
- clearInputBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_bytes_read = 26;
- clearInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- clearInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- clearInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- clearInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_records = 6;
- clearInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_records = 25;
- clearInputRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_records_read = 27;
- clearInputRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double input_rows_per_second = 6;
- clearIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_active = 3;
- clearIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
bool is_active = 3;
- clearIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
bool is_active = 4;
- clearIsBlacklisted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_blacklisted = 18;
- clearIsBlacklistedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_blacklisted_for_stage = 15;
- clearIsExcluded() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_excluded = 30;
- clearIsExcludedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_excluded_for_stage = 17;
- clearIsShufflePushEnabled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
bool is_shuffle_push_enabled = 63;
- clearJavaHome() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- clearJavaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- clearJobGroup() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Clear the current thread's job group ID and its description.
- clearJobGroup() - Method in class org.apache.spark.SparkContext
-
Clear the current thread's job group ID and its description.
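A minimal sketch of the setJobGroup/clearJobGroup lifecycle these two entries describe, using JavaSparkContext; the master URL, group id, and description below are illustrative, not taken from this index:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class JobGroupSketch {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setMaster("local[*]").setAppName("job-group-sketch"));

        // Jobs submitted from this thread now belong to the group and can be
        // cancelled together with jsc.cancelJobGroup("nightly-report").
        jsc.setJobGroup("nightly-report", "jobs for the nightly report");
        jsc.parallelize(Arrays.asList(1, 2, 3)).count();

        // Detach this thread from the group; subsequent jobs are ungrouped.
        jsc.clearJobGroup();
        jsc.stop();
      }
    }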
- clearJobGroup() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- clearJobId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
All IDs are int64 for extensibility, even when they are currently int32 in Spark.
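The note above records a design choice in the protobuf schema (int64 ids for extensibility). A minimal sketch of how the generated set/clear pair behaves on such a field; the values are illustrative:

    import org.apache.spark.status.protobuf.StoreTypes;

    public class ClearJobIdSketch {
      public static void main(String[] args) {
        // In proto3, clear* resets a scalar field to its default value.
        StoreTypes.JobData.Builder b = StoreTypes.JobData.newBuilder()
            .setJobId(42L)         // int64 in the schema, even though Spark's
                                   // job ids are currently int32
            .setName("demo-job");  // optional string name = 2

        b.clearJobId();
        System.out.println(b.getJobId());  // 0, the proto3 default for int64

        b.clearName();
        System.out.println(b.hasName());   // false: the presence bit is cleared
      }
    }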
- clearJobIds() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- clearJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clearJobTags() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Clear the current thread's job tags.
- clearJobTags() - Method in class org.apache.spark.SparkContext
-
Clear the current thread's job tags.
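The tag-based counterpart to the job-group sketch above. addJobTag and cancelJobsWithTag are companion methods in the same API family (Spark 3.5+); they are assumed here rather than taken from this index:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class JobTagsSketch {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setMaster("local[*]").setAppName("job-tags-sketch"));

        jsc.addJobTag("etl");  // jobs from this thread now carry the "etl" tag;
                               // jsc.cancelJobsWithTag("etl") would cancel them
        jsc.parallelize(Arrays.asList(1, 2, 3)).count();

        jsc.clearJobTags();    // drop every tag set on this thread
        jsc.stop();
      }
    }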
- clearJobTags() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- clearJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double jvm_gc_time = 11;
- clearJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 jvm_gc_time = 19;
- clearJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 jvm_gc_time = 21;
- clearJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- clearJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 jvm_gc_time = 6;
- clearKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- clearKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 killed_tasks = 4;
- clearKilledTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clearKillTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- clearLastUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 last_updated = 4;
- clearLatestOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- clearLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 launch_time = 5;
- clearLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 launch_time = 5;
- clearLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- clearLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_blocks_fetched = 2;
- clearLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_bytes_read = 6;
- clearLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- clearLocalMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- clearLocalMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_blocks_fetched = 4;
- clearLocalMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- clearLocalMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_bytes_read = 8;
- clearLocalMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- clearLocalMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_chunks_fetched = 6;
- clearMaxCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 max_cores = 4;
- clearMaxMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 max_memory = 19;
- clearMaxTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 max_tasks = 8;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double memory_bytes_spilled = 16;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 memory_bytes_spilled = 13;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 memory_bytes_spilled = 21;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 memory_bytes_spilled = 23;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- clearMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 memory_bytes_spilled = 8;
- clearMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- clearMemoryPerExecutorMb() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 memory_per_executor_mb = 6;
- clearMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_remaining = 3;
- clearMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 memory_used = 5;
- clearMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_used = 2;
- clearMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 memory_used = 3;
- clearMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 memory_used = 6;
- clearMemoryUsedBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 memory_used_bytes = 8;
- clearMemSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 mem_size = 8;
- clearMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- clearMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 merged_fetch_fallback_count = 2;
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- clearMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- clearMetricsProperties() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- clearMetricType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- clearMetricValues() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clearMetricValuesIsNull() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
bool metric_values_is_null = 13;
- clearModifiedConfigs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- clearName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- clearNode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- clearNodes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- clearNodes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- clearNumActiveStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_stages = 16;
- clearNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_tasks = 10;
- clearNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_active_tasks = 2;
- clearNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_active_tasks = 5;
- clearNumCachedPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_cached_partitions = 4;
- clearNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_indices = 15;
- clearNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_completed_indices = 9;
- clearNumCompletedJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_jobs = 1;
- clearNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_stages = 2;
- clearNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_stages = 17;
- clearNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_tasks = 11;
- clearNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_completed_tasks = 3;
- clearNumCompleteTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_complete_tasks = 6;
- clearNumFailedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_stages = 19;
- clearNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_tasks = 13;
- clearNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_failed_tasks = 4;
- clearNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_failed_tasks = 7;
- clearNumInputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
int64 num_input_rows = 5;
- clearNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_killed_tasks = 14;
- clearNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_killed_tasks = 5;
- clearNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_killed_tasks = 8;
- clearNumOutputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
int64 num_output_rows = 2;
- clearNumPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_partitions = 3;
- clearNumRowsDroppedByWatermark() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_dropped_by_watermark = 9;
- clearNumRowsRemoved() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_removed = 5;
- clearNumRowsTotal() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_total = 2;
- clearNumRowsUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_updated = 3;
- clearNumShufflePartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_shuffle_partitions = 10;
- clearNumSkippedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_stages = 18;
- clearNumSkippedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_tasks = 12;
- clearNumStateStoreInstances() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_state_store_instances = 11;
- clearNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_tasks = 9;
- clearNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_tasks = 1;
- clearNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_tasks = 4;
- clearObservedMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clearOffHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_remaining = 8;
- clearOffHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_used = 6;
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- clearOneof(Descriptors.OneofDescriptor) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
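clearOneof(Descriptors.OneofDescriptor) is the generic protobuf Builder override that each of the builders above inherits: it unsets whichever member of the given oneof is currently populated. A sketch against one of these builders; whether a given StoreTypes message actually declares a oneof is not visible in this index, so the descriptor loop below is written defensively:

    import com.google.protobuf.Descriptors;
    import org.apache.spark.status.protobuf.StoreTypes;

    public class ClearOneofSketch {
      public static void main(String[] args) {
        StoreTypes.SparkPlanGraphNodeWrapper.Builder b =
            StoreTypes.SparkPlanGraphNodeWrapper.newBuilder();

        // Iterate the message's declared oneofs (if any) and clear each one;
        // clearOneof resets the currently-set member back to "not set".
        for (Descriptors.OneofDescriptor oneof :
            b.getDescriptorForType().getOneofs()) {
          b.clearOneof(oneof);
        }
      }
    }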
- clearOnHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_remaining = 7;
- clearOnHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_used = 5;
- clearOperatorName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- clearOutgoingEdges() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- clearOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- clearOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 output_bytes = 7;
- clearOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_bytes = 26;
- clearOutputBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_bytes_written = 28;
- clearOutputDeterministicLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- clearOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- clearOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- clearOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- clearOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 output_records = 8;
- clearOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_records = 27;
- clearOutputRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_records_written = 29;
- clearPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 partition_id = 4;
- clearPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 partition_id = 4;
- clearPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- clearPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double peak_execution_memory = 15;
- clearPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 peak_execution_memory = 23;
- clearPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 peak_execution_memory = 25;
- clearPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- clearPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 peak_execution_memory = 10;
- clearPeakExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- clearPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- clearPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- clearPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- clearPhysicalPlanDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- clearProcessedRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double processed_rows_per_second = 7;
- clearProcessLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- clearProgress() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- clearQuantile() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- clearQuantiles() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- clearQuantiles() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- clearQuantiles() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- clearRddBlocks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 rdd_blocks = 4;
- clearRddIds() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- clearReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- clearReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- clearRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_read = 19;
- clearRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- clearRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 records_read = 2;
- clearRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 records_read = 7;
- clearRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_written = 21;
- clearRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- clearRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 records_written = 2;
- clearRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 records_written = 3;
- clearRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- clearRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_blocks_fetched = 1;
- clearRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- clearRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read = 4;
- clearRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- clearRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read_to_disk = 5;
- clearRemoteMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- clearRemoteMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_blocks_fetched = 3;
- clearRemoteMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- clearRemoteMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_bytes_read = 7;
- clearRemoteMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- clearRemoteMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_chunks_fetched = 5;
- clearRemoteMergedReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- clearRemoteMergedReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_reqs_duration = 9;
- clearRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- clearRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_reqs_duration = 8;
- clearRemoveReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- clearRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional int64 remove_time = 21;
- clearRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional int64 remove_time = 6;
- clearResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- clearResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- clearResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 resource_profile_id = 29;
- clearResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 resource_profile_id = 49;
- clearResourceProfiles() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- clearResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clearResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 result_fetch_start = 6;
- clearResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_fetch_start = 6;
- clearResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_serialization_time = 12;
- clearResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_serialization_time = 20;
- clearResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_serialization_time = 22;
- clearResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- clearResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_serialization_time = 7;
- clearResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_size = 10;
- clearResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_size = 18;
- clearResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_size = 20;
- clearResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- clearResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_size = 5;
- clearRootCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- clearRootExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 root_execution_id = 2;
- clearRpInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- clearRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- clearRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- clearRuntime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- clearScalaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- clearSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double scheduler_delay = 14;
- clearSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 scheduler_delay = 17;
- clearSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- clearSchedulingPool() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- clearShuffleBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_bytes_written = 37;
- clearShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_corrupt_merged_block_chunks = 33;
- clearShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 53;
- clearShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 42;
- clearShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_fetch_wait_time = 26;
- clearShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_fetch_wait_time = 30;
- clearShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_fetch_wait_time = 32;
- clearShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_local_blocks_fetched = 25;
- clearShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_blocks_fetched = 29;
- clearShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_blocks_fetched = 31;
- clearShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_bytes_read = 33;
- clearShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_bytes_read = 35;
- clearShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_fetch_fallback_count = 34;
- clearShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_fetch_fallback_count = 54;
- clearShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_fetch_fallback_count = 43;
- clearShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_blocks_fetched = 36;
- clearShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_blocks_fetched = 56;
- clearShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_blocks_fetched = 45;
- clearShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_bytes_read = 40;
- clearShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_bytes_read = 60;
- clearShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_bytes_read = 49;
- clearShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_chunks_fetched = 38;
- clearShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_chunks_fetched = 58;
- clearShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_chunks_fetched = 47;
- clearShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_blocks_fetched = 35;
- clearShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 55;
- clearShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 44;
- clearShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_bytes_read = 39;
- clearShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_bytes_read = 59;
- clearShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_bytes_read = 48;
- clearShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_chunks_fetched = 37;
- clearShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 57;
- clearShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 46;
- clearShuffleMergedRemoteReqDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_req_duration = 51;
- clearShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_reqs_duration = 42;
- clearShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_reqs_duration = 62;
- clearShuffleMergersCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 shuffle_mergers_count = 64;
- clearShufflePushReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- clearShufflePushReadMetricsDist() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- clearShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- clearShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read = 9;
- clearShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_read_bytes = 22;
- clearShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_bytes = 34;
- clearShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- clearShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- clearShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- clearShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read_records = 10;
- clearShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_records = 35;
- clearShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_records_read = 23;
- clearShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_read = 36;
- clearShuffleRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_written = 39;
- clearShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_blocks_fetched = 24;
- clearShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_blocks_fetched = 28;
- clearShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_blocks_fetched = 30;
- clearShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read = 27;
- clearShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read = 31;
- clearShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read = 33;
- clearShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read_to_disk = 28;
- clearShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 32;
- clearShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 34;
- clearShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_reqs_duration = 41;
- clearShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_reqs_duration = 61;
- clearShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_reqs_duration = 50;
- clearShuffleTotalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_total_blocks_fetched = 29;
- clearShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- clearShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write = 11;
- clearShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_bytes = 30;
- clearShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_bytes = 36;
- clearShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- clearShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- clearShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_records = 31;
- clearShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- clearShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write_records = 12;
- clearShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_records = 38;
- clearShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_time = 32;
- clearShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_time = 37;
- clearShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_write_time = 38;
- clearSink() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- clearSkippedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- clearSources() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- clearSparkProperties() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- clearSparkUser() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- clearSpeculationSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- clearSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
bool speculative = 12;
- clearSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool speculative = 12;
- clearSqlExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
optional int64 sql_execution_id = 3;
- clearStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int32 stage_attempt_id = 2;
- clearStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- clearStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- clearStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 stage_attempt_id = 41;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 stage_id = 1;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
int64 stage_id = 1;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 stage_id = 2;
- clearStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 stage_id = 40;
- clearStageIds() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- clearStageIds() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- clearStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- clearStartOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- clearStartTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 start_time = 2;
- clearStartTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
int64 start_timestamp = 6;
- clearStateOperators() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- clearStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- clearStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- clearStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- clearStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- clearStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- clearStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- clearStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- clearSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 submission_time = 4;
- clearSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 submission_time = 8;
- clearSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 submission_time = 10;
- clearSucceededTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- clearSucceededTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 succeeded_tasks = 3;
- clearSystemProperties() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- clearTags() - Method in class org.apache.spark.sql.api.SparkSession
-
Clear the current thread's operation tags.
- clearTags() - Method in class org.apache.spark.sql.SparkSession
- clearTaskCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 task_count = 4;
- clearTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 task_id = 1;
- clearTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 task_id = 1;
- clearTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- clearTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- clearTaskMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- clearTaskMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- clearTaskResourceRequests() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- clearTaskResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clearTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clearTaskTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- clearTaskTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 task_time = 1;
- clearThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Clears the threshold so that `predict` will output raw prediction scores.
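This behavior is shared by the LogisticRegressionModel and SVMModel entries here. A minimal sketch, assuming `training` is an `RDD[LabeledPoint]`:

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS

val model = new LogisticRegressionWithLBFGS().run(training)
model.clearThreshold() // predict now returns raw scores instead of 0/1 labels
```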
- clearThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
-
Clears the threshold so that `predict` will output raw prediction scores.
- clearTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- clearToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 to_id = 2;
- clearToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 to_id = 2;
- clearTotalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- clearTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_cores = 7;
- clearTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int32 total_cores = 4;
- clearTotalDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_duration = 13;
- clearTotalGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_gc_time = 14;
- clearTotalInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_input_bytes = 15;
- clearTotalOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_off_heap_storage_memory = 4;
- clearTotalOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_on_heap_storage_memory = 3;
- clearTotalShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_read = 16;
- clearTotalShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_write = 17;
- clearTotalTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_tasks = 12;
- clearUpdate() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- clearUseDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_disk = 6;
- clearUsedOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_off_heap_storage_memory = 2;
- clearUsedOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_on_heap_storage_memory = 1;
- clearUseMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_memory = 5;
- clearValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- clearValue1() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- clearValue2() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- clearVendor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- clearWrapper() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- clearWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- clearWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- clearWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- clearWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 write_time = 2;
- Clock - Interface in org.apache.spark.util
-
An interface to represent clocks, so that they can be mocked out in unit tests.
- CLogLog$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
- clone() - Method in class org.apache.spark.scheduler.TaskInfo
- clone() - Method in class org.apache.spark.SparkConf
-
Copy this object.
- clone() - Method in class org.apache.spark.sql.ExperimentalMethods
- clone() - Method in class org.apache.spark.sql.types.Decimal
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- clone() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- clone() - Method in class org.apache.spark.storage.StorageLevel
- clone() - Method in class org.apache.spark.util.random.BernoulliCellSampler
- clone() - Method in class org.apache.spark.util.random.BernoulliSampler
- clone() - Method in class org.apache.spark.util.random.PoissonSampler
- clone() - Method in interface org.apache.spark.util.random.RandomSampler
-
Return a copy of the RandomSampler object.
- clone(T, SerializerInstance, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
-
Clone an object using a Spark serializer.
- cloneComplement() - Method in class org.apache.spark.util.random.BernoulliCellSampler
-
Return a sampler that is the complement of the range specified for the current sampler.
- cloneProperties(Properties) - Static method in class org.apache.spark.util.Utils
-
Create a new properties object with the same values as `props`.
- close() - Method in class org.apache.spark.api.java.JavaSparkContext
- close() - Method in class org.apache.spark.io.NioBufferedFileInputStream
- close() - Method in class org.apache.spark.io.ReadAheadInputStream
- close() - Method in class org.apache.spark.serializer.DeserializationStream
- close() - Method in class org.apache.spark.serializer.SerializationStream
- close() - Method in class org.apache.spark.sql.SparkSession
-
Stop the underlying `SparkContext`.
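A minimal sketch of the lifecycle (the app name and master are arbitrary):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("demo").master("local[*]").getOrCreate()
try {
  spark.range(10).count()
} finally {
  spark.close() // stops the underlying SparkContext
}
```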
- close() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- close() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Called to close all the columns in this batch.
- close() - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Cleans up memory for this column vector.
- close() - Method in class org.apache.spark.storage.BufferReleasingInputStream
- close() - Method in class org.apache.spark.storage.CountingWritableChannel
- close() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
- close() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
- close() - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Close this log and release any resources.
- close(Throwable) - Method in class org.apache.spark.sql.ForeachWriter
-
Called when stopping to process one partition of new data on the executor side.
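A minimal sketch of the ForeachWriter lifecycle hooks (this sink just prints; a real writer would open and release a connection):

```scala
import org.apache.spark.sql.ForeachWriter

class ConsoleWriter extends ForeachWriter[String] {
  def open(partitionId: Long, epochId: Long): Boolean = true // accept this partition
  def process(value: String): Unit = println(value)          // handle one record
  def close(errorOrNull: Throwable): Unit = ()               // release resources; errorOrNull is null on success
}
```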
- ClosureCleaner - Class in org.apache.spark.util
-
A cleaner that renders closures serializable when it can be done safely.
- ClosureCleaner() - Constructor for class org.apache.spark.util.ClosureCleaner
- closureSerializer() - Method in class org.apache.spark.SparkEnv
- cls() - Method in class org.apache.spark.sql.types.ObjectType
- cls() - Method in class org.apache.spark.util.MethodIdentifier
- clsTag() - Method in interface org.apache.spark.sql.Encoder
-
A ClassTag that can be used to construct an Array to contain a collection of `T`.
- cluster() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- cluster() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
- CLUSTER - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
- CLUSTER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- Cluster$() - Constructor for class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
- clusterBy(String...) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Clusters the output by the given columns.
- clusterBy(String, String...) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Clusters the output by the given columns on the storage.
- clusterBy(String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Clusters the output by the given columns on the storage.
- clusterBy(String, String...) - Method in class org.apache.spark.sql.DataFrameWriterV2
- clusterBy(String, Seq<String>) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Clusters the output by the given columns on the storage.
- clusterBy(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Clusters the output by the given columns on the storage.
- clusterBy(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriterV2
- clusterBy(NamedReference[]) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for changing clustering columns for a table.
- clusterBy(NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- clusterBy(Seq<String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Clusters the output by the given columns.
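A hedged sketch of the CreateTableWriter/DataFrameWriterV2 variant above; `catalog.db.events` is a placeholder and the target catalog must support clustering columns:

```scala
// Create a table whose storage is clustered by the "id" column.
df.writeTo("catalog.db.events")
  .clusterBy("id")
  .create()
```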
- ClusterByHelper(ClusterBySpec) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ClusterByHelper
- ClusterByTransform - Class in org.apache.spark.sql.connector.expressions
-
This class represents a transform for `ClusterBySpec`.
- ClusterByTransform(Seq<NamedReference>) - Constructor for class org.apache.spark.sql.connector.expressions.ClusterByTransform
- clusterByWithBucketing() - Method in interface org.apache.spark.sql.errors.CompilationErrors
- clusterByWithBucketing() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- clusterByWithBucketing(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- clusterByWithPartitionedBy() - Method in interface org.apache.spark.sql.errors.CompilationErrors
- clusterByWithPartitionedBy() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- clusterByWithPartitionedBy(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- clusterCenter() - Method in class org.apache.spark.ml.clustering.ClusterData
- clusterCenters() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- clusterCenters() - Method in class org.apache.spark.ml.clustering.KMeansModel
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Leaf cluster centers.
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.KMeansModel
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
- ClusterData - Class in org.apache.spark.ml.clustering
-
Helper class for storing model data.
- ClusterData(int, Vector) - Constructor for class org.apache.spark.ml.clustering.ClusterData
- clustered(Expression[]) - Static method in class org.apache.spark.sql.connector.distributions.Distributions
-
Creates a distribution where tuples that share the same values for clustering expressions are co-located in the same partition.
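A sketch of how a connector might request this distribution (the column name is illustrative):

```scala
import org.apache.spark.sql.connector.distributions.Distributions
import org.apache.spark.sql.connector.expressions.Expressions

// Require that all rows sharing the same "region" value land in the same partition.
val distribution = Distributions.clustered(Array(Expressions.column("region")))
```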
- clustered(Expression[]) - Static method in class org.apache.spark.sql.connector.distributions.LogicalDistributions
- ClusteredDistribution - Interface in org.apache.spark.sql.connector.distributions
-
A distribution where tuples that share the same values for clustering expressions are co-located in the same partition.
- clusterIdx() - Method in class org.apache.spark.ml.clustering.ClusterData
- clustering() - Method in interface org.apache.spark.sql.connector.distributions.ClusteredDistribution
-
Returns clustering expressions.
- clusteringColumns() - Method in class org.apache.spark.sql.connector.catalog.TableChange.ClusterBy
- ClusteringEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for clustering results.
- ClusteringEvaluator() - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator
- ClusteringEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator
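For example (a minimal sketch, assuming `predictions` is the output of a fitted clustering model with `features` and `prediction` columns):

```scala
import org.apache.spark.ml.evaluation.ClusteringEvaluator

// Computes the silhouette score (higher is better, in [-1, 1]).
val silhouette = new ClusteringEvaluator().evaluate(predictions)
```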
- ClusteringMetrics - Class in org.apache.spark.ml.evaluation
-
Metrics for clustering, which expects two input columns: prediction and label.
- ClusteringSummary - Class in org.apache.spark.ml.clustering
-
Summary of clustering algorithms.
- CLUSTERS_CONFIG_PREFIX() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- clusterSchedulerError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- clusterSizes() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- ClusterStats(Vector, double, double) - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
- ClusterStats$() - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats$
- clusterWeights() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
- cmdOnlyWorksOnPartitionedTablesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cmdOnlyWorksOnTableWithLocationError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- cn() - Method in class org.apache.spark.mllib.feature.VocabWord
- coalesce(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset that has exactly `numPartitions` partitions, when fewer partitions are requested.
- coalesce(int) - Method in class org.apache.spark.sql.Dataset
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD that is reduced into `numPartitions` partitions.
- coalesce(int, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load-balanced and grouped by locality.
- coalesce(int, RDD<?>) - Method in interface org.apache.spark.rdd.PartitionCoalescer
-
Coalesce the partitions of the given RDD.
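For the RDD-level coalesce entries above, a minimal sketch (assuming `rdd` is an existing RDD):

```scala
val fewer = rdd.coalesce(4)                      // narrow dependency, no shuffle
val rebalanced = rdd.coalesce(4, shuffle = true) // shuffles to even out partition sizes
```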
- coalesce(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the first column that is not null, or null if all inputs are null.
- coalesce(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the first column that is not null, or null if all inputs are null.
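For example (a hedged sketch; the column names are illustrative):

```scala
import org.apache.spark.sql.functions.{coalesce, col, lit}

// Fall back to "name", then to a literal, when "nickname" is null.
df.select(coalesce(col("nickname"), col("name"), lit("unknown")))
```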
- CoarseGrainedClusterMessage - Interface in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages
- CoarseGrainedClusterMessages.AddWebUIFilter - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.AddWebUIFilter$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.DecommissionExecutor$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.DecommissionExecutorsOnHost - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.DecommissionExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ExecutorDecommissioning - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ExecutorDecommissioning$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ExecutorDecommissionSigReceived$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.GetExecutorLossReason - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.GetExecutorLossReason$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.IsExecutorAlive - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.IsExecutorAlive$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillExecutors - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillExecutors$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillExecutorsOnHost - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillTask - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.KillTask$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.LaunchedExecutor - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.LaunchedExecutor$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.LaunchTask - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.LaunchTask$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.MiscellaneousProcessAdded - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.MiscellaneousProcessAdded$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RegisterClusterManager - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RegisterClusterManager$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RegisterExecutor - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RegisterExecutor$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RemoveExecutor - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RemoveExecutor$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RemoveWorker - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RemoveWorker$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RequestExecutors - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RequestExecutors$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RetrieveDelegationTokens$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RetrieveSparkAppConfig - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.RetrieveSparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ReviveOffers$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.SetupDriver - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.SetupDriver$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ShufflePushCompletion - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.ShufflePushCompletion$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.Shutdown - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.Shutdown$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.SparkAppConfig - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.SparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.StatusUpdate - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.StatusUpdate$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.StopDriver$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.StopExecutor$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.StopExecutors$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.TaskThreadDump - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.TaskThreadDump$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateDelegationTokens - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateDelegationTokens$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateExecutorLogLevel - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateExecutorLogLevel$ - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateExecutorsLogLevel - Class in org.apache.spark.scheduler.cluster
- CoarseGrainedClusterMessages.UpdateExecutorsLogLevel$ - Class in org.apache.spark.scheduler.cluster
- code() - Method in class org.apache.spark.mllib.feature.VocabWord
- code() - Method in class org.apache.spark.util.SparkTestUtils.JavaSourceFromString
- codecNotAvailableError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- codecNotAvailableError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- CodegenMetrics - Class in org.apache.spark.metrics.source
-
Metrics for code generation.
- CodegenMetrics() - Constructor for class org.apache.spark.metrics.source.CodegenMetrics
- codeLen() - Method in class org.apache.spark.mllib.feature.VocabWord
- coefficientMatrix() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- coefficients() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- coefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
A vector of model coefficients for "binomial" logistic regression.
- coefficients() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- coefficients() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- coefficients() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- cogroup(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1` and `other2`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in `this` or `other1` or `other2` or `other3`, return a resulting RDD that contains a tuple with the list of values for that key in `this`, `other1`, `other2` and `other3`.
- cogroup(KeyValueGroupedDataset, CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each cogrouped data.
- cogroup(KeyValueGroupedDataset, Function3<K, Iterator<V>, Iterator<U>, IterableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each cogrouped data.
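For example (a hedged Scala sketch; `userId` is a hypothetical key field, and `spark.implicits._` is assumed in scope for the result encoder):

```scala
val left  = ds1.groupByKey(_.userId)
val right = ds2.groupByKey(_.userId)
// Emit one row per key with the group sizes on each side.
val sizes = left.cogroup(right) { (key, xs, ys) =>
  Iterator((key, xs.size, ys.size))
}
```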
- cogroup(KeyValueGroupedDataset<K, U>, CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- cogroup(KeyValueGroupedDataset<K, U>, Function3<K, Iterator<V>, Iterator<U>, IterableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- cogroup(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- cogroup(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- cogroup(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- cogroup(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- cogroup(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- cogroup(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of `this` DStream and `other` DStream.
- CoGroupedRDD<K> - Class in org.apache.spark.rdd
-
:: DeveloperApi :: An RDD that cogroups its parents.
- CoGroupedRDD(Seq<RDD<? extends Product2<K, ?>>>, Partitioner, ClassTag<K>) - Constructor for class org.apache.spark.rdd.CoGroupedRDD
- CoGroupFunction<K, V1, V2, R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each grouping key and its values from 2 Datasets.
- cogroupSorted(KeyValueGroupedDataset, Column[], Column[], CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each sorted cogrouped data.
- cogroupSorted(KeyValueGroupedDataset, Seq<Column>, Seq<Column>, Function3<K, Iterator<V>, Iterator<U>, IterableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each sorted cogrouped data.
- cogroupSorted(KeyValueGroupedDataset<K, U>, Column[], Column[], CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- cogroupSorted(KeyValueGroupedDataset<K, U>, Seq<Column>, Seq<Column>, Function3<K, Iterator<V>, Iterator<U>, IterableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- col(String) - Method in class org.apache.spark.sql.api.Dataset
-
Selects column based on the column name and returns it as a `Column`.
- col(String) - Method in class org.apache.spark.sql.Dataset
- col(String) - Static method in class org.apache.spark.sql.functions
-
Returns a `Column` based on the given column name.
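For example (the free function and the Dataset method are interchangeable for unambiguous names; `df.col` disambiguates after self-joins):

```scala
import org.apache.spark.sql.functions.col

df.select(col("age"), df.col("age"))
```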
- COL_POS_KEY() - Static method in class org.apache.spark.sql.Dataset
- coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALS
- coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALSModel
- coldStartStrategy() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
-
Param for strategy for dealing with unknown or new users/items at prediction time.
- colIter() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- colIter() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns an iterator of column vectors.
- colIter() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- colIter() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- colIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Returns an iterator of column vectors.
- colIter() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- collate(Column, String) - Static method in class org.apache.spark.sql.functions
-
Marks a given column with specified collation.
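For example (a hedged sketch, assuming a Spark build with collation support; `UNICODE_CI` is one of the built-in case-insensitive collation names):

```scala
import org.apache.spark.sql.functions.{col, collate}

// Compare names case-insensitively via a case-insensitive collation.
df.filter(collate(col("name"), "UNICODE_CI") === "alice")
```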
- CollatedEqualNullSafe - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `EqualNullSafe`.
- CollatedEqualNullSafe(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedEqualNullSafe
- CollatedEqualTo - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `EqualTo`.
- CollatedEqualTo(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedEqualTo
- CollatedFilter - Class in org.apache.spark.sql.sources
-
Base class for collation aware string filters.
- CollatedFilter() - Constructor for class org.apache.spark.sql.sources.CollatedFilter
- CollatedGreaterThan - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `GreaterThan`.
- CollatedGreaterThan(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedGreaterThan
- CollatedGreaterThanOrEqual - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `GreaterThanOrEqual`.
- CollatedGreaterThanOrEqual(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- CollatedIn - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `In`.
- CollatedIn(String, Object[], DataType) - Constructor for class org.apache.spark.sql.sources.CollatedIn
- CollatedLessThan - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `LessThan`.
- CollatedLessThan(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedLessThan
- CollatedLessThanOrEqual - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `LessThanOrEqual`.
- CollatedLessThanOrEqual(String, Object, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- CollatedStringContains - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `StringContains`.
- CollatedStringContains(String, String, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedStringContains
- CollatedStringEndsWith - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `StringEndsWith`.
- CollatedStringEndsWith(String, String, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedStringEndsWith
- CollatedStringStartsWith - Class in org.apache.spark.sql.sources
-
Collation aware equivalent of `StringStartsWith`.
- CollatedStringStartsWith(String, String, DataType) - Constructor for class org.apache.spark.sql.sources.CollatedStringStartsWith
- collation(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the collation name of a given column.
- collationId() - Method in class org.apache.spark.sql.types.StringType
- COLLATIONS_METADATA_KEY() - Static method in class org.apache.spark.sql.types.DataType
- collect() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an array that contains all of the elements in this RDD.
- collect() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- collect() - Method in class org.apache.spark.rdd.RDD
-
Return an array that contains all of the elements in this RDD.
- collect() - Method in class org.apache.spark.sql.api.Dataset
-
Returns an array that contains all rows in this Dataset.
- collect() - Method in class org.apache.spark.sql.Dataset
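For example (collect materializes the whole Dataset on the driver, so prefer `take(n)` or `show()` for inspection):

```scala
import org.apache.spark.sql.Row

val rows: Array[Row] = df.collect() // may exhaust driver memory for large data
```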
- collect(PartialFunction<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD that contains all matching values by applying `f`.
- collect_list(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- collect_list(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- collect_set(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a set of objects with duplicate elements eliminated.
- collect_set(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a set of objects with duplicate elements eliminated.
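For example (a hedged sketch; the column names are illustrative):

```scala
import org.apache.spark.sql.functions.{collect_list, collect_set}

df.groupBy("user").agg(
  collect_list("page").as("visits"),        // keeps duplicates
  collect_set("page").as("distinct_visits") // removes duplicates
)
```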
- collectAsList() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a Java list that contains all rows in this Dataset.
- collectAsList() - Method in class org.apache.spark.sql.Dataset
- collectAsMap() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the key-value pairs in this RDD to the master as a Map.
- collectAsMap() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return the key-value pairs in this RDD to the master as a Map.
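For example (a minimal sketch, assuming `pairs` is an RDD of key-value tuples small enough for the driver):

```scala
val lookup: scala.collection.Map[String, Int] = pairs.collectAsMap()
// If a key appears more than once, only one of its values is kept.
```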
- collectAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of `collect`, which returns a future for retrieving an array containing all of the elements in this RDD.
- collectAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for retrieving all elements of this RDD.
- collectEdges(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Returns an RDD that contains for each vertex v its local edges, i.e., the edges that are incident on v, in the user-specified direction.
- collectionAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a `CollectionAccumulator`, which starts with an empty list and accumulates inputs by adding them into the list.
- collectionAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a `CollectionAccumulator`, which starts with an empty list and accumulates inputs by adding them into the list.
- CollectionAccumulator<T> - Class in org.apache.spark.util
-
An `accumulator` for collecting a list of elements.
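For example (a minimal sketch; `isInvalid` and `id` are hypothetical record fields):

```scala
val badIds = sc.collectionAccumulator[Long]("badIds")
records.foreach { rec => if (rec.isInvalid) badIds.add(rec.id) }
println(badIds.value) // a java.util.List[Long], readable on the driver
```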
- CollectionAccumulator() - Constructor for class org.apache.spark.util.CollectionAccumulator
- CollectionsUtils - Class in org.apache.spark.util
- CollectionsUtils() - Constructor for class org.apache.spark.util.CollectionsUtils
- collectNeighborIds(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Collect the neighbor vertex ids for each vertex.
- collectNeighbors(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Collect the neighbor vertex attributes for each vertex.
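For example (a minimal GraphX sketch, assuming `graph` is an existing `Graph`):

```scala
import org.apache.spark.graphx.EdgeDirection

// Neighbor vertex ids, following edges in either direction.
val neighborIds = graph.collectNeighborIds(EdgeDirection.Either)
// Neighbor vertex attributes instead of ids.
val neighbors = graph.collectNeighbors(EdgeDirection.Out)
```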
- collectPartitions(int[]) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an array that contains all of the elements in a specific partition of this RDD.
- collectSubModels() - Method in interface org.apache.spark.ml.param.shared.HasCollectSubModels
-
Param for whether to collect a list of sub-models trained during tuning.
- collectSubModels() - Method in class org.apache.spark.ml.tuning.CrossValidator
- collectSubModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- colPtrs() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- colPtrs() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- colRegex(String) - Method in class org.apache.spark.sql.api.Dataset
-
Selects column based on the column name specified as a regex and returns it as `Column`.
- colRegex(String) - Method in class org.apache.spark.sql.Dataset
- colsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- colStats(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Computes column-wise summary statistics for the input RDD[Vector].
- column() - Method in class org.apache.spark.sql.connector.catalog.TableChange.After
- column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Avg
- column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Count
- column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Max
- column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Min
- column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Sum
- column(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Returns the column at `ordinal`.
- column(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a named reference expression for a (nested) column.
- column(String) - Static method in class org.apache.spark.sql.functions
-
Returns a `Column` based on the given column name.
- Column - Class in org.apache.spark.sql.catalog
-
A column in Spark, as returned by `listColumns` method in `Catalog`.
- Column - Class in org.apache.spark.sql
-
A column that will be computed based on the data in a `DataFrame`.
- Column - Interface in org.apache.spark.sql.connector.catalog
-
An interface representing a column of a `Table`.
- Column(String) - Constructor for class org.apache.spark.sql.Column
- Column(String, String, String, boolean, boolean, boolean) - Constructor for class org.apache.spark.sql.catalog.Column
- Column(String, String, String, boolean, boolean, boolean, boolean) - Constructor for class org.apache.spark.sql.catalog.Column
- Column(ColumnNode) - Constructor for class org.apache.spark.sql.Column
- columnAliases() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The view column aliases.
- columnAliases() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- columnAliasInOperationNotAllowedError(String, SqlBaseParser.TableAliasContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- columnAlreadyExistsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ColumnarArray - Class in org.apache.spark.sql.vectorized
-
Array abstraction in `ColumnVector`.
- ColumnarArray(ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarArray
- ColumnarBatch - Class in org.apache.spark.sql.vectorized
-
This class wraps multiple ColumnVectors as a row-wise table.
- ColumnarBatch(ColumnVector[]) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch
- ColumnarBatch(ColumnVector[], int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch
-
Create a new batch from existing column vectors.
- ColumnarBatchRow - Class in org.apache.spark.sql.vectorized
-
This class wraps an array of `ColumnVector` and provides a row view.
- ColumnarBatchRow(ColumnVector[]) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatchRow
- ColumnarMap - Class in org.apache.spark.sql.vectorized
-
Map abstraction in `ColumnVector`.
- ColumnarMap(ColumnVector, ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarMap
- ColumnarRow - Class in org.apache.spark.sql.vectorized
-
Row abstraction in `ColumnVector`.
- ColumnarRow(ColumnVector, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarRow
- columnarSupportMode() - Method in interface org.apache.spark.sql.connector.read.Scan
-
Subclasses can implement this method to indicate if the support for columnar data should be determined by each partition or is set as a default for the whole scan.
- columnComments() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The view column comments.
- columnComments() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- ColumnDefaultValue - Class in org.apache.spark.sql.connector.catalog
-
A class representing the default value of a column.
- ColumnDefaultValue(String, Literal<?>) - Constructor for class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
- ColumnName - Class in org.apache.spark.sql
-
A convenient class used for constructing schema.
- ColumnName(String) - Constructor for class org.apache.spark.sql.ColumnName
- columnNames() - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- columnNotDefinedInTableError(String, String, String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnNotFoundError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- columnNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnNotFoundInExistingColumnsError(String, String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnNotFoundInSchemaError(StructField, Option<StructType>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnNotInGroupByClauseError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnProperties() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
- ColumnPruner - Class in org.apache.spark.ml.feature
-
Utility transformer for removing temporary columns from a DataFrame.
- ColumnPruner(String, Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
- ColumnPruner(Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
- columns() - Method in class org.apache.spark.sql.api.Dataset
-
Returns all column names as an array.
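For example, a sketch assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.col

    val df = spark.range(3).withColumn("label", col("id") * 2)
    val names: Array[String] = df.columns          // Array("id", "label")
    df.select(names.map(col): _*).show()           // re-select every column by name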
- columns() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
- columns() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
Returns the columns of this table.
- columnSchema() - Static method in class org.apache.spark.ml.image.ImageSchema
-
Schema for the image column: Row(String, Int, Int, Int, Int, Array[Byte])
- ColumnsHelper(Column[]) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ColumnsHelper
- columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
- columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
- columnSimilarities(double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Compute similarities between columns of this matrix using a sampling approach.
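A minimal sketch of both variants, assuming an active SparkContext named sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(sc.parallelize(Seq(
      Vectors.dense(1.0, 0.0, 2.0),
      Vectors.dense(0.0, 3.0, 4.0))))
    val exact  = mat.columnSimilarities()      // brute-force cosine similarities
    val approx = mat.columnSimilarities(0.1)   // sampling (DIMSUM) with threshold 0.1
    exact.entries.collect().foreach(println)   // upper-triangular MatrixEntry values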
- ColumnStatistics - Interface in org.apache.spark.sql.connector.read.colstats
-
An interface to represent column statistics, which is part of
Statistics
. - columnStatisticsDeserializationNotSupportedError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnStatisticsSerializationNotSupportedError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- columnStats() - Method in interface org.apache.spark.sql.connector.read.Statistics
- columnsToPrune() - Method in class org.apache.spark.ml.feature.ColumnPruner
- columnToOldVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils
- columnToVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils
-
Cast a column in a Dataset to Vector type.
- columnTypeNotSupportStatisticsCollectionError(String, TableIdentifier, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ColumnVector - Class in org.apache.spark.sql.vectorized
-
An interface representing in-memory columnar data in Spark.
- combinationQueryResultClausesUnsupportedError(SqlBaseParser.QueryOrganizationContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level, with map-side aggregation.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Generic function to combine the elements for each key using a custom set of aggregation functions.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Combine elements of each key in DStream's RDDs using custom function.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Combine elements of each key in DStream's RDDs using custom function.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Generic function to combine the elements for each key using a custom set of aggregation functions.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Generic function to combine the elements for each key using a custom set of aggregation functions.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, ClassTag<C>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Combine elements of each key in DStream's RDDs using custom functions.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Generic function to combine the elements for each key using a custom set of aggregation functions.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
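As an illustration of the combineByKey contract (createCombiner, mergeValue, mergeCombiners), a sketch computing a per-key mean, assuming an active SparkContext named sc:

    val pairs = sc.parallelize(Seq(("a", 1.0), ("a", 3.0), ("b", 4.0)))
    val sumCount = pairs.combineByKey(
      (v: Double) => (v, 1L),                                          // createCombiner
      (acc: (Double, Long), v: Double) => (acc._1 + v, acc._2 + 1L),   // mergeValue
      (a: (Double, Long), b: (Double, Long)) => (a._1 + b._1, a._2 + b._2)) // mergeCombiners
    sumCount.mapValues { case (sum, n) => sum / n }
      .collect().foreach(println)   // (a,2.0), (b,4.0)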
- combineCombinersByKey(Iterator<Product2<K, C>>, TaskContext) - Method in class org.apache.spark.Aggregator
- combineValuesByKey(Iterator<Product2<K, V>>, TaskContext) - Method in class org.apache.spark.Aggregator
- command() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
-
Returns the SQL command that is being performed.
- command() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationInfo
-
Returns the row-level SQL command (e.g.
- commandExecutionInRunnerUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- CommandLineLoggingUtils - Interface in org.apache.spark.util
- CommandLineUtils - Interface in org.apache.spark.util
-
Contains basic command line parsing functionality and methods to parse some common Spark CLI options.
- commandNotSupportNestedColumnError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- comment() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the comment of this table column.
- comment() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
-
Documentation for this metadata column, or null.
- comment() - Method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Returns the comment of this parameter or null if not provided.
- comment() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- comment(String) - Method in class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Builder
-
Sets the comment of the parameter.
- commentOnTableUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- commit() - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Commits this writer after all records are written successfully, returns a commit message which will be sent back to the driver side and passed to
BatchWrite.commit(WriterCommitMessage[])
. - commit(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
-
Commits this writing job for the specified epoch with a list of commit messages.
- commit(Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
-
Informs the source that Spark has completed processing all data for offsets less than or equal to `end` and will only request offsets greater than `end` in the future.
- commit(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
Commits this writing job with a list of commit messages.
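A hedged skeleton of the DataWriter commit/abort contract that feeds BatchWrite.commit; the buffering logic and the RowsWritten message type are illustrative, not part of any real connector:

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.connector.write.{DataWriter, WriterCommitMessage}

    case class RowsWritten(count: Long) extends WriterCommitMessage   // hypothetical message

    class BufferingWriter extends DataWriter[InternalRow] {
      private var count = 0L
      override def write(record: InternalRow): Unit = { count += 1 }    // stage the record
      override def commit(): WriterCommitMessage = RowsWritten(count)   // sent to the driver
      override def abort(): Unit = { count = 0L }                       // discard staged output
      override def close(): Unit = ()
    }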
- commit(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- COMMIT_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- commitAllPartitions(long[]) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
-
Commits the writes done by all partition writers returned by all calls to this object's
ShuffleMapOutputWriter.getPartitionWriter(int)
, and returns the number of bytes written for each partition. - commitDeniedError(int, long, int, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- commitStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
-
Finalize the creation or replacement of this table.
- commitTask(OutputCommitter, TaskAttemptContext, int, int) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
-
Commits a task output.
- commitTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- commonHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
- comparator(Schedulable, Schedulable) - Method in interface org.apache.spark.scheduler.SchedulingAlgorithm
- comparatorReturnsNull(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- compare(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- compare(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- compare(double, double) - Method in class org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral$
- compare(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- compare(float, float) - Method in class org.apache.spark.sql.types.FloatType.FloatAsIfIntegral$
- compare(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- compare(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- compare(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- compare(PartitionGroup, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering$
- compare(Decimal) - Method in class org.apache.spark.sql.types.Decimal
- compare(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- compare(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- compare(RDDInfo) - Method in class org.apache.spark.storage.RDDInfo
- compareTo(NumericHistogram.Coord) - Method in class org.apache.spark.sql.util.NumericHistogram.Coord
- compareTo(VariantBuilder.FieldEntry) - Method in class org.apache.spark.types.variant.VariantBuilder.FieldEntry
- compareTo(CalendarInterval) - Method in class org.apache.spark.unsafe.types.CalendarInterval
-
This method is not used to order CalendarInterval instances, as they are not orderable and cannot be used in an ORDER BY statement.
- compareTo(SparkShutdownHook) - Method in class org.apache.spark.util.SparkShutdownHook
- CompilationErrors - Interface in org.apache.spark.sql.errors
- compileAggregate(AggregateFunc) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Deprecated. Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead. Since 3.4.0.
- compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts a V2 expression to a String representing a SQL expression.
- compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- compilerError(CompileException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- compileValue(Object) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts a value to a SQL expression.
- compileValue(Object) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- compileValue(Object) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- Complete() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
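A sketch of selecting Complete mode on a streaming query, assuming counts is a streaming DataFrame produced by an aggregation (Complete mode requires one):

    import org.apache.spark.sql.streaming.OutputMode

    val query = counts.writeStream
      .outputMode(OutputMode.Complete())   // rewrite the full result table on every trigger
      .format("console")
      .start()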
- COMPLETE - Enum constant in enum class org.apache.spark.status.api.v1.StageStatus
- completed() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- COMPLETED - Enum constant in enum class org.apache.spark.status.api.v1.ApplicationStatus
- COMPLETED - Enum constant in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
- COMPLETED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- COMPLETED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- completedIndices() - Method in class org.apache.spark.status.LiveJob
- completedIndices() - Method in class org.apache.spark.status.LiveStage
- completedStages() - Method in class org.apache.spark.status.LiveJob
- completedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- completedTasks() - Method in class org.apache.spark.status.LiveJob
- completedTasks() - Method in class org.apache.spark.status.LiveStage
- COMPLETION_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- COMPLETION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- COMPLETION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- COMPLETION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- completionTime() - Method in class org.apache.spark.scheduler.StageInfo
-
Time when the stage completed or when the stage was cancelled.
- completionTime() - Method in class org.apache.spark.status.api.v1.JobData
- completionTime() - Method in class org.apache.spark.status.api.v1.StageData
- completionTime() - Method in class org.apache.spark.status.LiveJob
- ComplexFutureAction<T> - Class in org.apache.spark
-
A
FutureAction
for actions that could trigger multiple Spark jobs. - ComplexFutureAction(Function1<JobSubmitter, Future<T>>) - Constructor for class org.apache.spark.ComplexFutureAction
- componentName() - Method in class org.apache.spark.resource.ResourceID
- compositeLimit(ReadLimit[]) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- CompositeReadLimit - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the
MicroBatchStream
should scan approximately the given maximum number of rows with at least the given minimum number of rows. - CompoundBodyExec - Class in org.apache.spark.sql.scripting
-
Executable node for CompoundBody.
- CompoundBodyExec(Seq<CompoundStatementExec>, Option<String>) - Constructor for class org.apache.spark.sql.scripting.CompoundBodyExec
- CompoundNestedStatementIteratorExec - Class in org.apache.spark.sql.scripting
-
Abstract class for all statements that contain nested statements.
- CompoundNestedStatementIteratorExec(Seq<CompoundStatementExec>, Option<String>) - Constructor for class org.apache.spark.sql.scripting.CompoundNestedStatementIteratorExec
- CompoundStatementExec - Interface in org.apache.spark.sql.scripting
-
Trait for all SQL scripting execution nodes used during interpretation phase.
- compressed() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense column major, dense row major, sparse row major, or sparse column major format, whichever uses less storage.
- compressed() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns a vector in either dense or sparse format, whichever uses less storage.
- compressed() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns a vector in either dense or sparse format, whichever uses less storage.
- compressedColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense or sparse column major format, whichever uses less storage.
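For instance, a sketch of the Vector variant:

    import org.apache.spark.ml.linalg.Vectors

    val mostlyZero = Vectors.dense(0.0, 0.0, 0.0, 5.0)
    val packed = mostlyZero.compressed       // picks the smaller encoding
    println(packed.getClass.getSimpleName)   // SparseVector here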
- compressedContinuousInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
- compressedContinuousInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
- compressedContinuousOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
- compressedInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
- compressedOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
- compressedRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense or sparse row major format, whichever uses less storage.
- compressedWithNNZ(int) - Method in interface org.apache.spark.ml.linalg.Vector
- CompressionCodec - Interface in org.apache.spark.io
-
:: DeveloperApi :: CompressionCodec allows the customization of choosing different compression implementations to be used in block storage.
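Codec selection is normally driven by configuration rather than by instantiating a codec directly; a minimal sketch:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("codec-demo")
      .set("spark.io.compression.codec", "zstd")   // lz4 (default), lzf, snappy, zstd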
- compressSparse(int[], double[], int[]) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- compute(long, long, long, long) - Method in interface org.apache.spark.streaming.scheduler.rate.RateEstimator
-
Computes the number of records the stream attached to this
RateEstimator
should ingest per second, given an update on the size and completion times of the latest batch. - compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
-
Compute the gradient and loss given the features of a single data point.
- compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
- compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
-
Compute the gradient and loss given the features of a single data point, add the gradient to a provided vector to avoid creating new objects, and return loss.
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.L1Updater
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SimpleUpdater
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SquaredL2Updater
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.Updater
-
Compute an updated value for weights given the gradient, stepSize, iteration number and regularization parameter.
- compute(Partition, TaskContext) - Method in class org.apache.spark.api.r.BaseRRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.EdgeRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.VertexRDD
-
Provides the
RDD[(VertexId, VD)]
equivalent output. - compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.CoGroupedRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.HadoopRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.JdbcRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.NewHadoopRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.PartitionPruningRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
-
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.ShuffledRDD
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.UnionRDD
- compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Generate an RDD for the given duration
- compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Method that generates an RDD for the given Duration
- compute(Time) - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
- compute(Time) - Method in class org.apache.spark.streaming.dstream.DStream
-
Method that generates an RDD for the given time
- compute(Time) - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
- computeClusterStats(Dataset<Row>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
-
The method takes the input dataset and computes the aggregated values about a cluster which are needed by the algorithm.
- computeClusterStats(Dataset<Row>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
-
The method takes the input dataset and computes the aggregated values about a cluster which are needed by the algorithm.
- computeColumnSummaryStatistics() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes column-wise summary statistics.
- computeCorrelation(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
-
Compute correlation for two datasets.
- computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation for two datasets.
- computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
Compute Spearman's correlation for two datasets.
- computeCorrelationMatrix(RDD<Vector>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
-
Compute the correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
- computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
- computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
Compute Spearman's correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
- computeCorrelationMatrixFromCovariance(Matrix) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation matrix from the covariance matrix.
- computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
-
Combine the two input RDD[Double]s into an RDD[Vector] and compute the correlation using the correlation implementation for RDD[Vector].
- computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
- computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
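These implementations back the user-facing Statistics.corr entry points; a sketch, assuming an active SparkContext named sc:

    import org.apache.spark.mllib.stat.Statistics

    val x = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))
    val y = sc.parallelize(Seq(2.0, 4.0, 6.0, 8.0))
    val pearson  = Statistics.corr(x, y, "pearson")    // 1.0 for perfectly linear data
    val spearman = Statistics.corr(x, y, "spearman")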
- computeCost(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Java-friendly version of
computeCost()
. - computeCost(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Computes the squared distance between the input point and the cluster center it belongs to.
- computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Computes the sum of squared distances between the input points and their corresponding cluster centers.
- computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Return the K-means cost (sum of squared distances of points to their nearest center) for this model on the given data.
- computeCost(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
Deprecated. This method is deprecated and will be removed in future versions. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
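Following that deprecation note, a sketch of the recommended replacement, assuming predictions is a clustered Dataset with "features" and "prediction" columns:

    import org.apache.spark.ml.evaluation.ClusteringEvaluator

    val evaluator = new ClusteringEvaluator()
      .setFeaturesCol("features")
      .setPredictionCol("prediction")
    val silhouette = evaluator.evaluate(predictions)   // in [-1, 1]; higher is better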
- computeCovariance() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the covariance matrix, treating each row as an observation.
- computeError(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate loss when the predictions are already known.
- computeError(org.apache.spark.mllib.tree.model.TreeEnsembleModel, RDD<LabeledPoint>) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate error of the base learner for the gradient boosting calculation.
- computeFractionForSampleSize(int, long, boolean) - Static method in class org.apache.spark.util.random.SamplingUtils
-
Returns a sampling rate that guarantees a sample of size greater than or equal to sampleSizeLowerBound 99.99% of the time.
- computeGradient(DenseMatrix<Object>, DenseMatrix<Object>, Vector, int) - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Computes gradient for the network
- computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Computes the Gramian matrix
A^T A
. - computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the Gramian matrix
A^T A
. - computeInitialPredictionAndError(RDD<TreePoint>, double, DecisionTreeRegressionModel, Loss, Broadcast<Split[][]>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
- computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
- computePreferredLocations(Seq<InputFormatInfo>) - Static method in class org.apache.spark.scheduler.InputFormatInfo
-
Computes the preferred locations based on input(s) and returns a location-to-block map.
- computePrevDelta(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
-
Computes the delta for back propagation.
- computePrincipalComponents(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the top k principal components only.
- computePrincipalComponentsAndExplainedVariance(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the top k principal components and a vector of proportions of variance explained by each principal component.
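A sketch, assuming an active SparkContext named sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0, 3.0),
      Vectors.dense(4.0, 5.0, 6.0),
      Vectors.dense(7.0, 8.0, 9.0))))
    val (pc, explained) = mat.computePrincipalComponentsAndExplainedVariance(2)
    val projected = mat.multiply(pc)   // rows projected onto the top 2 components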
- computeProbability(double) - Method in interface org.apache.spark.mllib.tree.loss.ClassificationLoss
-
Computes the class probability given the margin.
- computeSilhouetteCoefficient(Broadcast<Map<Object, SquaredEuclideanSilhouette.ClusterStats>>, Vector, double, double, double) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
-
It computes the Silhouette coefficient for a point.
- computeSilhouetteCoefficient(Broadcast<Map<Object, Tuple2<Vector, Object>>>, Vector, double, double) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
-
It computes the Silhouette coefficient for a point.
- computeSilhouetteScore(Dataset<?>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
-
Compute the Silhouette score of the dataset using the cosine distance measure.
- computeSilhouetteScore(Dataset<?>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
-
Compute the Silhouette score of the dataset using squared Euclidean distance measure.
- computeStatisticsNotExpectedError(SqlBaseParser.IdentifierContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Computes the singular value decomposition of this IndexedRowMatrix.
- computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes singular value decomposition of this matrix.
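For example, again assuming an active SparkContext named sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(sc.parallelize(Seq(
      Vectors.dense(2.0, 0.0), Vectors.dense(0.0, 1.0), Vectors.dense(1.0, 1.0))))
    val svd = mat.computeSVD(2, computeU = true)
    println(svd.s)   // singular values, descending; svd.U and svd.V hold the factors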
- computeThresholdByKey(Map<K, AcceptanceResult>, Map<K, Object>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Given the result returned by getCounts, determine the threshold for accepting items to generate the exact sample size.
- computeWeightedError(RDD<org.apache.spark.ml.feature.Instance>, DecisionTreeRegressionModel[], double[], Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to calculate error of the base learner for the gradient boosting calculation.
- computeWeightedError(RDD<TreePoint>, RDD<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to calculate error of the base learner for the gradient boosting calculation.
- concat(Column...) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input columns together into a single column.
- concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input columns together into a single column.
- concat_ws(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column, using the given separator.
- concat_ws(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column, using the given separator.
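For example, assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.{col, concat, concat_ws, lit}
    import spark.implicits._

    val df = Seq(("John", "Doe")).toDF("first", "last")
    df.select(
      concat(col("first"), lit(" "), col("last")).as("full"),    // "John Doe"
      concat_ws("-", col("first"), col("last")).as("dashed")     // "John-Doe"
    ).show()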
- concurrentModificationOnExternalAppendOnlyUnsafeRowArrayError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- concurrentQueryInstanceError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- concurrentStreamLogUpdate(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- condition() - Method in class org.apache.spark.sql.WhenMatched
- condition() - Method in class org.apache.spark.sql.WhenNotMatched
- condition() - Method in class org.apache.spark.sql.WhenNotMatchedBySource
- conf() - Method in interface org.apache.spark.api.plugin.PluginContext
-
Configuration of the Spark application.
- conf() - Method in class org.apache.spark.SparkEnv
- conf() - Method in class org.apache.spark.sql.api.SparkSession
-
Runtime configuration interface for Spark.
- conf() - Method in class org.apache.spark.sql.SparkSession
- Conf(int, int, double, double, double, double, double, double) - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- confidence() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
Returns the confidence of the rule.
- confidence() - Method in class org.apache.spark.partial.BoundedDouble
- confidence() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Returns the confidence (or
delta
) of this CountMinSketch
. - config(String, boolean) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, double) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, long) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(Map<String, Object>) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(SparkConf) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a list of config options based on the given
SparkConf
. - config(Map<String, Object>) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
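The overloads compose on the builder; a minimal sketch:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("config-demo")
      .master("local[*]")
      .config("spark.sql.shuffle.partitions", 4L)    // long overload
      .config("spark.sql.session.timeZone", "UTC")   // string overload
      .getOrCreate()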
- configRemovedInVersionError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- configTestLog4j2(String) - Static method in class org.apache.spark.TestUtils
-
Configures a log4j2 properties file used for the test suite.
- Configurable - Interface in org.apache.spark.input
-
A trait to implement
Configurable
interface. - configuration() - Method in class org.apache.spark.scheduler.InputFormatInfo
- CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.HadoopRDD
-
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
- CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
- conflictingAttributesInJoinConditionError(AttributeSet, LogicalPlan, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- conflictingPartitionColumnNamesError(Seq<String>, Seq<Path>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- confusionMatrix() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the confusion matrix: predicted classes are in columns, ordered by ascending class label, as in "labels".
- CONNECTED - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application has connected to the handle.
- connectedComponents() - Method in class org.apache.spark.graphx.GraphOps
-
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
- connectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
-
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
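A sketch on a small graph, assuming an active SparkContext named sc:

    import org.apache.spark.graphx.{Edge, Graph}

    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(3L, 4L, 1)))
    val graph = Graph.fromEdges(edges, defaultValue = 0)
    val cc = graph.connectedComponents()
    cc.vertices.collect().foreach(println)   // vertex -> lowest id in its component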
- ConnectedComponents - Class in org.apache.spark.graphx.lib
-
Connected components algorithm.
- ConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.ConnectedComponents
- consequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
- ConstantInputDStream<T> - Class in org.apache.spark.streaming.dstream
-
An input stream that always returns the same RDD on each time step.
- ConstantInputDStream(StreamingContext, RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ConstantInputDStream
- constructMessageParams(Map<String, String>) - Static method in exception org.apache.spark.SparkException
-
Utility method to construct message params from a Java Map.
- constructorNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- constructTree(org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData[]) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
Given a list of nodes from a tree, construct the tree.
- constructTrees(RDD<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- contains(Object) - Method in class org.apache.spark.sql.Column
-
Contains the other element.
- contains(String) - Method in class org.apache.spark.SparkConf
-
Does the configuration contain a given parameter?
- contains(String) - Method in class org.apache.spark.sql.types.Metadata
-
Tests whether this Metadata contains a binding for a key.
- contains(Param<?>) - Method in class org.apache.spark.ml.param.ParamMap
-
Checks whether a parameter is explicitly specified.
- contains(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a boolean.
- contains(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
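For the Column and functions variants above, a sketch assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.{col, contains, lit}
    import spark.implicits._

    val df = Seq("spark", "hadoop").toDF("name")
    df.filter(col("name").contains("par")).show()           // Column.contains
    df.select(contains(col("name"), lit("par"))).show()     // functions.contains -> boolean column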
- containsAttributes(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- containsAttributes(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> attributes = 27;
- containsAttributes(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> attributes = 27;
- containsCustomMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- containsCustomMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
map<string, int64> custom_metrics = 12;
- containsCustomMetrics(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
map<string, int64> custom_metrics = 12;
- containsDurationMs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- containsDurationMs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, int64> duration_ms = 7;
- containsDurationMs(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, int64> duration_ms = 7;
- containsEventTime(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- containsEventTime(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> event_time = 8;
- containsEventTime(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> event_time = 8;
- containsExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- containsExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> executor_logs = 23;
- containsExecutorLogs(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> executor_logs = 23;
- containsExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- containsExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
map<string, string> executor_logs = 16;
- containsExecutorLogs(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
map<string, string> executor_logs = 16;
- containsExecutorResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- containsExecutorResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- containsExecutorResources(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- containsExecutorSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- containsExecutorSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- containsExecutorSummary(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- containsJobs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- containsJobs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- containsJobs(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- containsKey(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
- containsKey(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- containsKey(K) - Method in interface org.apache.spark.sql.streaming.MapState
-
Check if the user key is contained in the map
- containsKilledTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- containsKilledTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, int32> killed_tasks_summary = 48;
- containsKilledTasksSummary(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, int32> killed_tasks_summary = 48;
- containsKillTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- containsKillTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
map<string, int32> kill_tasks_summary = 20;
- containsKillTasksSummary(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
map<string, int32> kill_tasks_summary = 20;
- containsLocality(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- containsLocality(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
map<string, int64> locality = 3;
- containsLocality(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
map<string, int64> locality = 3;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
-
map<string, int64> metrics = 1;
- containsMetrics(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
map<string, int64> metrics = 1;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
map<string, string> metrics = 3;
- containsMetrics(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
map<string, string> metrics = 3;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- containsMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
map<string, string> metrics = 8;
- containsMetrics(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
map<string, string> metrics = 8;
- containsMetricValues(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- containsMetricValues(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, string> metric_values = 14;
- containsMetricValues(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, string> metric_values = 14;
- containsModifiedConfigs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- containsModifiedConfigs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<string, string> modified_configs = 6;
- containsModifiedConfigs(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<string, string> modified_configs = 6;
- containsNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
- containsNull() - Method in class org.apache.spark.sql.types.ArrayType
- containsNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
- containsObservedMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- containsObservedMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> observed_metrics = 12;
- containsObservedMetrics(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> observed_metrics = 12;
- containsProcessLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- containsProcessLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
map<string, string> process_logs = 7;
- containsProcessLogs(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
map<string, string> process_logs = 7;
- containsResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- containsResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- containsResources(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- containsTaskResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- containsTaskResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- containsTaskResources(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- containsTasks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- containsTasks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- containsTasks(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- containsValue(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- contentType() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
- context() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The
SparkContext
that this RDD was created on. - context() - Method in class org.apache.spark.ContextAwareIterator
-
Deprecated.
- context() - Method in class org.apache.spark.InterruptibleIterator
- context() - Method in class org.apache.spark.rdd.RDD
-
The
SparkContext
that this RDD was created on. - context() - Method in exception org.apache.spark.sql.AnalysisException
- context() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return the
StreamingContext
associated with this DStream - context() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return the StreamingContext associated with this DStream
- ContextAwareIterator<T> - Class in org.apache.spark
-
Deprecated. Since 4.0.0, as its only usage for Python evaluation is now extinct.
- ContextAwareIterator(TaskContext, Iterator<T>) - Constructor for class org.apache.spark.ContextAwareIterator
-
Deprecated.
- ContextBarrierId - Class in org.apache.spark
-
For each barrier stage attempt, only at most one barrier() call can be active at any time, thus we can use (stageId, stageAttemptId) to identify the stage attempt where the barrier() call is from.
- ContextBarrierId(int, int) - Constructor for class org.apache.spark.ContextBarrierId
- contextType() - Method in interface org.apache.spark.QueryContext
- Continuous() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- Continuous(long) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
- Continuous(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
- Continuous(String) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
- Continuous(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
-
(Scala-friendly) A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
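The overloads are equivalent ways to express the checkpoint interval; a sketch:

    import java.util.concurrent.TimeUnit
    import org.apache.spark.sql.streaming.Trigger

    val t1 = Trigger.Continuous(1000L)                // interval in milliseconds
    val t2 = Trigger.Continuous(1, TimeUnit.SECONDS)
    val t3 = Trigger.Continuous("1 second")
    // used as: df.writeStream.trigger(t3).format("console").start()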
- CONTINUOUS_READ - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports reads in continuous streaming execution mode.
- ContinuousPartitionReader<T> - Interface in org.apache.spark.sql.connector.read.streaming
-
A variation on
PartitionReader
for use with continuous streaming processing. - ContinuousPartitionReaderFactory - Interface in org.apache.spark.sql.connector.read.streaming
-
A variation on
PartitionReaderFactory
that returns
ContinuousPartitionReader
instead of
PartitionReader
. - continuousProcessingUnsupportedByDataSourceError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ContinuousSplit - Class in org.apache.spark.ml.tree
-
Split which tests a continuous feature.
- ContinuousStream - Interface in org.apache.spark.sql.connector.read.streaming
-
A
SparkDataStream
for streaming queries with continuous mode. - conv(Column, int, int) - Static method in class org.apache.spark.sql.functions
-
Convert a number in a string column from one base to another.
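For example, assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.{col, conv}
    import spark.implicits._

    val df = Seq("ff", "10").toDF("hex")
    df.select(conv(col("hex"), 16, 10)).show()   // "255", "16"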
- convert_timezone(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Converts the timestamp without time zone
sourceTs
from the current time zone to targetTz
. - convert_timezone(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Converts the timestamp without time zone
sourceTs
from the sourceTz
time zone to targetTz
. - convertCachedBatchToColumnarBatch(RDD<CachedBatch>, Seq<Attribute>, Seq<Attribute>, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Convert the cached data into a ColumnarBatch.
- convertCachedBatchToInternalRow(RDD<CachedBatch>, Seq<Attribute>, Seq<Attribute>, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Convert the cached batch into
InternalRow
s. - convertColumnarBatchToCachedBatch(RDD<ColumnarBatch>, Seq<Attribute>, StorageLevel, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Convert an
RDD[ColumnarBatch]
into an RDD[CachedBatch]
in preparation for caching the data. - Converter$() - Constructor for class org.apache.spark.sql.SparkSession.Converter$
- convertHiveTableToCatalogTableError(SparkException, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- convertInternalRowToCachedBatch(RDD<InternalRow>, Seq<Attribute>, StorageLevel, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Convert an
RDD[InternalRow]
into an RDD[CachedBatch]
in preparation for caching the data. - convertJavaDateToDate(Date) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts an instance of
java.sql.Date
to a customjava.sql.Date
value. - convertJavaDateToDate(Date) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- convertJavaDateToDate(Date) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- convertJavaTimestampToTimestamp(Timestamp) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts an instance of
java.sql.Timestamp
to a customjava.sql.Timestamp
value. - convertJavaTimestampToTimestamp(Timestamp) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- convertJavaTimestampToTimestamp(Timestamp) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
-
java.sql timestamps are measured with millisecond accuracy (from Long.MinValue milliseconds to Long.MaxValue milliseconds), while Spark timestamps are measured at microsecond accuracy.
- convertJavaTimestampToTimestampNTZ(Timestamp) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Convert java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database.
- convertJavaTimestampToTimestampNTZ(Timestamp) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- convertJavaTimestampToTimestampNTZ(Timestamp) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- convertMatrixColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertMatrixColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertMatrixColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertMatrixColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertTableProperties(TableSpec) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- convertTimestampNTZToJavaTimestamp(LocalDateTime) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts a LocalDateTime representing a TimestampNTZ type to an instance of
java.sql.Timestamp
. - convertTimestampNTZToJavaTimestamp(LocalDateTime) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- convertTimestampNTZToJavaTimestamp(LocalDateTime) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- convertToCanonicalEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.GraphOps
-
Convert bi-directional edges into uni-directional ones.
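A minimal sketch of convertToCanonicalEdges, assuming a SparkContext named sc; the merge function decides how the attributes of a collapsed edge pair are combined:
  import org.apache.spark.graphx.{Edge, Graph}
  // Two edges between the same vertex pair, one in each direction.
  val edges = sc.parallelize(Seq(Edge(1L, 2L, 10), Edge(2L, 1L, 5)))
  val graph = Graph.fromEdges(edges, defaultValue = 0)
  // Canonicalize so srcId < dstId, summing attributes of merged edges.
  val canonical = graph.convertToCanonicalEdges(_ + _)
  canonical.edges.collect()  // Array(Edge(1, 2, 15))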
- convertToOldLossType(String) - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
- convertToTimeUnit(long, TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
Convert
milliseconds
to the specified unit
. - convertTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper
- convertVectorColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertVectorColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertVectorColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
- convertVectorColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
- Coord() - Constructor for class org.apache.spark.sql.util.NumericHistogram.Coord
- CoordinateMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a matrix in coordinate format.
- CoordinateMatrix(RDD<MatrixEntry>) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Alternative constructor leaving matrix dimensions to be determined automatically.
- CoordinateMatrix(RDD<MatrixEntry>, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
- copy() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- copy() - Method in class org.apache.spark.ml.linalg.DenseVector
- copy() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Get a deep copy of the matrix.
- copy() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- copy() - Method in class org.apache.spark.ml.linalg.SparseVector
- copy() - Method in interface org.apache.spark.ml.linalg.Vector
-
Makes a deep copy of this vector.
- copy() - Method in class org.apache.spark.ml.param.ParamMap
-
Creates a copy of this param map.
- copy() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- copy() - Method in class org.apache.spark.mllib.linalg.DenseVector
- copy() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Get a deep copy of the matrix.
- copy() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- copy() - Method in class org.apache.spark.mllib.linalg.SparseVector
- copy() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Makes a deep copy of this vector.
- copy() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
- copy() - Method in class org.apache.spark.mllib.random.GammaGenerator
- copy() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
- copy() - Method in class org.apache.spark.mllib.random.PoissonGenerator
- copy() - Method in interface org.apache.spark.mllib.random.RandomDataGenerator
-
Returns a copy of the RandomDataGenerator with a new instance of the rng object used in the class, where applicable, to allow non-locking concurrent usage.
- copy() - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
- copy() - Method in class org.apache.spark.mllib.random.UniformGenerator
- copy() - Method in class org.apache.spark.mllib.random.WeibullGenerator
- copy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Returns a shallow copy of this instance.
- copy() - Method in interface org.apache.spark.sql.Row
-
Make a copy of the current
Row
object. - copy() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- copy() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- copy() - Method in class org.apache.spark.sql.util.MapperRowCounter
- copy() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- copy() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- copy() - Method in class org.apache.spark.sql.vectorized.ColumnarMap
- copy() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
-
Revisit this.
- copy() - Method in class org.apache.spark.util.AccumulatorV2
-
Creates a new copy of this accumulator.
- copy() - Method in class org.apache.spark.util.CollectionAccumulator
- copy() - Method in class org.apache.spark.util.DoubleAccumulator
- copy() - Method in class org.apache.spark.util.LongAccumulator
- copy() - Method in class org.apache.spark.util.StatCounter
-
Clone this StatCounter
- copy(String, Option<Object>, Option<Object>, Option<Throwable>, Option<String>, Map<String, String>, QueryContext[]) - Method in exception org.apache.spark.sql.AnalysisException
- copy(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y = x
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.FMClassificationModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.FMClassifier
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassifier
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVC
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVCModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegression
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayes
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRest
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeans
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeansModel
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LDA
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LocalLDAModel
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- copy(ParamMap) - Method in class org.apache.spark.ml.Estimator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Binarizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Bucketizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ColumnPruner
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.FeatureHasher
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.HashingTF
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDF
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDFModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Imputer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ImputerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IndexToString
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Interaction
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSH
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCA
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCAModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RegexTokenizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormula
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormulaModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScaler
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScalerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.SQLTransformer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScaler
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScalerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Tokenizer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAssembler
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorSlicer
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2Vec
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2VecModel
- copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowth
- copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- copy(ParamMap) - Method in class org.apache.spark.ml.fpm.PrefixSpan
- copy(ParamMap) - Method in class org.apache.spark.ml.Model
- copy(ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Creates a copy of this instance with the same UID and some extra params.
- copy(ParamMap) - Method in class org.apache.spark.ml.Pipeline
- copy(ParamMap) - Method in class org.apache.spark.ml.PipelineModel
- copy(ParamMap) - Method in class org.apache.spark.ml.PipelineStage
- copy(ParamMap) - Method in class org.apache.spark.ml.Predictor
- copy(ParamMap) - Method in class org.apache.spark.ml.recommendation.ALS
- copy(ParamMap) - Method in class org.apache.spark.ml.recommendation.ALSModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.FMRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.FMRegressor
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GBTRegressor
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.LinearRegression
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- copy(ParamMap) - Method in class org.apache.spark.ml.Transformer
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.CrossValidator
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- copy(ParamMap) - Method in class org.apache.spark.ml.UnaryTransformer
- copy(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y = x
- copyAndReset() - Method in class org.apache.spark.sql.util.MapperRowCounter
- copyAndReset() - Method in class org.apache.spark.util.AccumulatorV2
-
Creates a new copy of this accumulator with a zero value.
- copyAndReset() - Method in class org.apache.spark.util.CollectionAccumulator
- copyFileStreamNIO(FileChannel, WritableByteChannel, long, long) - Method in interface org.apache.spark.util.SparkStreamUtils
- copyFileStreamNIO(FileChannel, WritableByteChannel, long, long) - Static method in class org.apache.spark.util.Utils
- copyStream(InputStream, OutputStream, boolean, boolean) - Method in interface org.apache.spark.util.SparkStreamUtils
-
Copy all data from an InputStream to an OutputStream.
- copyStream(InputStream, OutputStream, boolean, boolean) - Static method in class org.apache.spark.util.Utils
- copyStream$default$3() - Static method in class org.apache.spark.util.Utils
- copyStream$default$4() - Static method in class org.apache.spark.util.Utils
- copyStreamUpTo(InputStream, long) - Static method in class org.apache.spark.util.Utils
-
Copy the first
maxSize
bytes of data from the InputStream to an in-memory buffer, primarily to check for corruption. - copyValues(T, ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Copies param values from this instance to another instance for params shared by them.
- cores() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- cores() - Method in class org.apache.spark.scheduler.MiscellaneousProcessDetails
- cores(int) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Specify number of cores per Executor.
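A minimal sketch of combining these requests into a stage-level resource profile, assuming an existing RDD named rdd and a cluster manager that supports stage-level scheduling (both assumptions):
  import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}
  // Ask for 4 cores and 8g of heap per executor, and 2 cpus per task.
  val execReqs = new ExecutorResourceRequests().cores(4).memory("8g")
  val taskReqs = new TaskResourceRequests().cpus(2)
  val profile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()
  // Stages computing this RDD will be scheduled with the requested resources.
  val tuned = rdd.withResources(profile)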
- CORES() - Static method in class org.apache.spark.resource.ResourceProfile
-
built-in executor resource: cores
- CORES_GRANTED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- CORES_PER_EXECUTOR_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- coresGranted() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- coresPerExecutor() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- corr(String, String) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Calculates the Pearson Correlation Coefficient of two columns of a DataFrame.
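A minimal sketch, assuming a DataFrame df with numeric columns "x" and "y" (illustrative names):
  // Pearson is currently the only supported method, so both calls agree.
  val r  = df.stat.corr("x", "y")
  val r2 = df.stat.corr("x", "y", "pearson")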
- corr(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
- corr(String, String, String) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Calculates the correlation of two columns of a DataFrame.
- corr(String, String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- corr(JavaRDD<Double>, JavaRDD<Double>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of
corr()
- corr(JavaRDD<Double>, JavaRDD<Double>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of
corr()
- corr(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the Pearson correlation for the input RDDs.
- corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
- corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the correlation for the input RDDs using the specified method.
- corr(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the Pearson correlation matrix for the input RDD of Vectors.
- corr(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the correlation matrix for the input RDD of Vectors using the specified method.
- corr(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
- corr(Dataset<?>, String) - Static method in class org.apache.spark.ml.stat.Correlation
-
Compute the Pearson correlation matrix for the input Dataset of Vectors.
- corr(Dataset<?>, String, String) - Static method in class org.apache.spark.ml.stat.Correlation
-
Compute the correlation matrix for the input Dataset of Vectors using the specified method.
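A minimal sketch of the Dataset-based API, assuming a SparkSession named spark:
  import org.apache.spark.ml.linalg.{Matrix, Vectors}
  import org.apache.spark.ml.stat.Correlation
  import org.apache.spark.sql.Row
  val data = Seq(
    Vectors.dense(1.0, 0.0, 3.0),
    Vectors.dense(4.0, 5.0, 0.0),
    Vectors.dense(6.0, 7.0, 8.0)
  ).map(Tuple1.apply)
  val df = spark.createDataFrame(data).toDF("features")
  val Row(pearson: Matrix)  = Correlation.corr(df, "features").head
  val Row(spearman: Matrix) = Correlation.corr(df, "features", "spearman").head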
- Correlation - Class in org.apache.spark.ml.stat
-
API for correlation functions in MLlib, compatible with DataFrames and Datasets.
- Correlation - Interface in org.apache.spark.mllib.stat.correlation
-
Trait for correlation algorithms.
- Correlation() - Constructor for class org.apache.spark.ml.stat.Correlation
- CorrelationNames - Class in org.apache.spark.mllib.stat.correlation
-
Maintains supported and default correlation names.
- CorrelationNames() - Constructor for class org.apache.spark.mllib.stat.correlation.CorrelationNames
- Correlations - Class in org.apache.spark.mllib.stat.correlation
-
Delegates computation to the specific correlation object based on the input method name.
- Correlations() - Constructor for class org.apache.spark.mllib.stat.correlation.Correlations
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedEqualTo
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedFilter
-
The corresponding non-collation aware filter.
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedGreaterThan
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedIn
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedLessThan
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedStringContains
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- correspondingFilter() - Method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- corrMatrix(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
- CORRUPT_MERGED_BLOCK_CHUNKS() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- CORRUPT_MERGED_BLOCK_CHUNKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- CORRUPT_MERGED_BLOCK_CHUNKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- corruptedTableNameContextInCatalogError(int, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- corruptedViewQueryOutputColumnsInCatalogError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- corruptedViewReferredTempFunctionsInCatalogError(Exception) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- corruptedViewReferredTempViewInCatalogError(Exception) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- corruptedViewSQLConfigsInCatalogError(Exception) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- corruptMergedBlockChunks() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- corruptMergedBlockChunks() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- cos(String) - Static method in class org.apache.spark.sql.functions
- cos(Column) - Static method in class org.apache.spark.sql.functions
- cosh(String) - Static method in class org.apache.spark.sql.functions
- cosh(Column) - Static method in class org.apache.spark.sql.functions
- CosineSilhouette - Class in org.apache.spark.ml.evaluation
-
This object provides an efficient, parallel implementation of the Silhouette using the cosine distance measure.
- CosineSilhouette() - Constructor for class org.apache.spark.ml.evaluation.CosineSilhouette
- costSum() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- cot(Column) - Static method in class org.apache.spark.sql.functions
- count() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the number of elements in the RDD.
- count() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
The number of edges in the RDD.
- count() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
The number of vertices in the RDD.
- count() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
- count() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- count() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Sample size.
- count() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sample size.
- count() - Method in class org.apache.spark.rdd.RDD
-
Return the number of elements in the RDD.
- count() - Method in class org.apache.spark.sql.api.Dataset
-
Returns the number of rows in the Dataset.
- count() - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Returns a
Dataset
that contains a tuple with each key and the number of items present for that key. - count() - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Count the number of rows for each group.
- count() - Method in class org.apache.spark.sql.Dataset
- count() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- count() - Method in class org.apache.spark.sql.RelationalGroupedDataset
- count() - Method in class org.apache.spark.status.RDDPartitionSeq
- count() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
- count() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
- count() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
- count() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the number of elements added to the accumulator.
- count() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the number of elements added to the accumulator.
- count() - Method in class org.apache.spark.util.StatCounter
- count(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of items in a group.
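A minimal sketch of the count-style aggregates over a hypothetical orders DataFrame (all names illustrative):
  import org.apache.spark.sql.functions.{col, count, count_distinct, count_if}
  orders.groupBy("region").agg(
    count(col("order_id")),              // rows with a non-null order_id
    count_distinct(col("customer_id")),  // distinct customers per region
    count_if(col("amount") > 100)        // rows where the predicate holds
  ).show()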
- count(MapFunction<T, Object>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated. Count aggregate function.
- count(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- count(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of items in a group.
- count(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- count(KVStoreView<T>, Function1<T, Object>) - Static method in class org.apache.spark.status.KVUtils
-
Counts the number of elements in the KVStoreView which satisfy a predicate.
- count(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated. Count aggregate function.
- Count - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the number of rows in a group for the given expression.
- Count(Expression, boolean) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Count
- count_distinct(Column, Column...) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- count_distinct(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- count_if(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of
TRUE
values for the expression. - count_min_sketch(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a count-min sketch of a column with the given eps, confidence and seed.
- count_min_sketch(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a count-min sketch of a column with the given eps, confidence and seed.
- countApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
- countApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
- countApprox(long, double) - Method in class org.apache.spark.rdd.RDD
-
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
- countApproxDistinct(double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return approximate number of distinct elements in the RDD.
- countApproxDistinct(double) - Method in class org.apache.spark.rdd.RDD
-
Return approximate number of distinct elements in the RDD.
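A minimal sketch, assuming a SparkContext named sc; relativeSD trades accuracy for sketch size:
  val ids = sc.parallelize(1 to 1000000).map(_ % 10000)
  // Approximate distinct count with ~5% relative standard deviation.
  val approx = ids.countApproxDistinct(relativeSD = 0.05)  // close to 10000
  val exact  = ids.distinct().count()                      // exactly 10000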
- countApproxDistinct(int, int) - Method in class org.apache.spark.rdd.RDD
-
Return approximate number of distinct elements in the RDD.
- countApproxDistinctByKey(double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(int, int, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of
count
, which returns a future for counting the number of elements in this RDD. - countAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for counting the number of elements in the RDD.
- countByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Count the number of elements for each key, and return the result to the master as a Map.
- countByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Count the number of elements for each key, collecting the results to a local Map.
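A minimal sketch, assuming a SparkContext named sc; the result is materialized on the driver, so this is only safe when the set of distinct keys is small:
  val pairs  = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 2)))
  val counts = pairs.countByKey()  // Map(a -> 2, b -> 1)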
- countByKeyApprox(long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
- countByKeyApprox(long, double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
- countByKeyApprox(long, double) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
- countByValue() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the count of each unique value in this RDD as a map of (value, count) pairs.
- countByValue() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
- countByValue(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
- countByValue(int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
- countByValue(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
- countByValueAndWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
- countByValueAndWindow(Duration, Duration, int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
- countByValueAndWindow(Duration, Duration, int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
- countByValueApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of countByValue().
- countByValueApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of countByValue().
- countByValueApprox(long, double, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Approximate version of countByValue().
- countByValueApproxNotSupportArraysError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- countByWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a window over this DStream.
- countByWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a sliding window over this DStream.
- countDistinct(String, String...) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(Column, Column...) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- COUNTER() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType
- CountingWritableChannel - Class in org.apache.spark.storage
- CountingWritableChannel(WritableByteChannel) - Constructor for class org.apache.spark.storage.CountingWritableChannel
- countMinSketch(String, double, double, int) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(String, int, int, int) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(Column, double, double, int) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(Column, int, int, int) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- CountMinSketch - Class in org.apache.spark.util.sketch
-
A Count-min sketch is a probabilistic data structure used for approximate frequency estimation in sub-linear space.
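A minimal sketch tying this class to the DataFrameStatFunctions.countMinSketch entries above, assuming a DataFrame df with a string column "word" (illustrative):
  // Build the sketch over a column (eps, confidence, seed), then query it.
  val sketch = df.stat.countMinSketch("word", 0.001, 0.99, 42)
  val freq = sketch.estimateCount("spark")  // estimate; may overcount, never undercounts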
- CountMinSketch() - Constructor for class org.apache.spark.util.sketch.CountMinSketch
- CountMinSketch.Version - Enum Class in org.apache.spark.util.sketch
- CountStar - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the number of rows in a group.
- CountStar() - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.CountStar
- countTowardsTaskFailures() - Method in class org.apache.spark.ExecutorLostFailure
- countTowardsTaskFailures() - Method in class org.apache.spark.FetchFailed
-
Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures, instead we immediately go back to the stage which generated the map output, and regenerate the missing data.
- countTowardsTaskFailures() - Static method in class org.apache.spark.Resubmitted
- countTowardsTaskFailures() - Method in class org.apache.spark.TaskCommitDenied
-
If a task failed because its attempt to commit was denied, do not count this failure towards failing the stage.
- countTowardsTaskFailures() - Method in interface org.apache.spark.TaskFailedReason
-
Whether this task failure should be counted towards the maximum number of times the task is allowed to fail before the stage is aborted.
- countTowardsTaskFailures() - Method in class org.apache.spark.TaskKilled
- countTowardsTaskFailures() - Static method in class org.apache.spark.TaskResultLost
- countTowardsTaskFailures() - Static method in class org.apache.spark.UnknownReason
- CountVectorizer - Class in org.apache.spark.ml.feature
-
Extracts a vocabulary from document collections and generates a
CountVectorizerModel
. - CountVectorizer() - Constructor for class org.apache.spark.ml.feature.CountVectorizer
- CountVectorizer(String) - Constructor for class org.apache.spark.ml.feature.CountVectorizer
- CountVectorizerModel - Class in org.apache.spark.ml.feature
-
Converts a text document to a sparse vector of token counts.
- CountVectorizerModel(String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel
- CountVectorizerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel
- CountVectorizerParams - Interface in org.apache.spark.ml.feature
-
Params for
CountVectorizer
and CountVectorizerModel
. - cov() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
- cov(String, String) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Calculate the sample covariance of two numerical columns of a DataFrame.
- cov(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- covar_pop(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population covariance for two columns.
- covar_pop(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population covariance for two columns.
- covar_samp(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample covariance for two columns.
- covar_samp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample covariance for two columns.
- covs() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
- cpus() - Method in class org.apache.spark.BarrierTaskContext
- cpus() - Method in class org.apache.spark.TaskContext
-
CPUs allocated to the task.
- cpus(int) - Method in class org.apache.spark.resource.TaskResourceRequests
-
Specify number of cpus per Task.
- CPUS() - Static method in class org.apache.spark.resource.ResourceProfile
-
built-in task resource: cpus
- crc32(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the cyclic redundancy check value (CRC32) of a binary column and returns the value as a bigint.
- CreatableRelationProvider - Interface in org.apache.spark.sql.sources
- create() - Method in interface org.apache.spark.sql.CreateTableWriter
- create(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.api.java.StorageLevels
-
Create a new StorageLevel object.
- create(double, double, int) - Static method in class org.apache.spark.util.sketch.CountMinSketch
- create(int, int, int) - Static method in class org.apache.spark.util.sketch.CountMinSketch
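The BloomFilter.create overloads indexed next follow the same builder pattern; a minimal sketch of the (expectedNumItems, fpp) variant:
  import org.apache.spark.util.sketch.BloomFilter
  // Size the filter for ~1M insertions at a 1% false-positive probability.
  val filter = BloomFilter.create(1000000L, 0.01)
  filter.put("spark")
  filter.mightContain("spark")  // true (no false negatives)
  filter.mightContain("flink")  // false with high probability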
- create(long) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a
BloomFilter
with the expected number of insertions and a default expected false positive probability of 3%. - create(long, double) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a
BloomFilter
with the expected number of insertions and expected false positive probability. - create(long, long) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a
BloomFilter
with given expectedNumItems
and numBits
, it will pick an optimal numHashFunctions
which can minimize fpp
for the bloom filter. - create(Object...) - Static method in class org.apache.spark.sql.RowFactory
-
Create a
Row
from the given arguments. - create(String, DataType) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(String, DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(String, DataType, boolean, String, String) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(String, DataType, boolean, String, String, String) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(String, DataType, boolean, String, ColumnDefaultValue, String) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(String, DataType, boolean, String, IdentityColumnSpec, String) - Static method in interface org.apache.spark.sql.connector.catalog.Column
- create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
Create an RDD that executes a SQL query on a JDBC connection and reads results.
- create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int, Function<ResultSet, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
Create an RDD that executes a SQL query on a JDBC connection and reads results.
- create(RDD<T>, Function1<Object, Object>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
Create a PartitionPruningRDD.
- createArrayType(String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- createArrayType(DataType) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates an ArrayType by specifying the data type of elements (
elementType
). - createArrayType(DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates an ArrayType by specifying the data type of elements (
elementType
) and whether the array contains null values (containsNull
). - createArrayWithElementsExceedLimitError(String, Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- createAttrGroupForAttrNames(String, int, boolean, boolean) - Static method in class org.apache.spark.ml.feature.OneHotEncoderCommon
-
Creates an AttributeGroup with the required number of BinaryAttribute attributes.
- createBatchWriterFactory(PhysicalWriteInfo) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
Creates a writer factory which will be serialized and sent to executors.
- createBatchWriterFactory(PhysicalWriteInfo) - Method in interface org.apache.spark.sql.connector.write.DeltaBatchWrite
- createCharType(int) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a CharType with the given length.
- createChunkBlockInfosFromMetaResponse(int, int, int, long, RoaringBitmap[]) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the
iterator.next()
is invoked and the iterator processes a response of typeShuffleBlockFetcherIterator.PushMergedRemoteMetaFetchResult
. - createColumnarReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.PartitionReaderFactory
-
Returns a columnar partition reader to read data from the given
InputPartition
. - createColumnarReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory
- createCombiner() - Method in class org.apache.spark.Aggregator
- createCompiledClass(String, File, String, String, Seq<URL>, Seq<String>, String, Option<String>) - Static method in class org.apache.spark.TestUtils
- createCompiledClass(String, File, String, String, Seq<URL>, Seq<String>, String, Option<String>) - Method in interface org.apache.spark.util.SparkTestUtils
-
Creates a compiled class with the given name.
- createCompiledClass(String, File, SparkTestUtils.JavaSourceFromString, Seq<URL>) - Static method in class org.apache.spark.TestUtils
- createCompiledClass(String, File, SparkTestUtils.JavaSourceFromString, Seq<URL>) - Method in interface org.apache.spark.util.SparkTestUtils
-
Creates a compiled class with the source file.
- createCompiledClass$default$3() - Static method in class org.apache.spark.TestUtils
- createCompiledClass$default$4() - Static method in class org.apache.spark.TestUtils
- createCompiledClass$default$5() - Static method in class org.apache.spark.TestUtils
- createCompiledClass$default$6() - Static method in class org.apache.spark.TestUtils
- createCompiledClass$default$7() - Static method in class org.apache.spark.TestUtils
- createCompiledClass$default$8() - Static method in class org.apache.spark.TestUtils
- createConnectionFactory(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns a factory for creating connections to the given JDBC URL.
- createConnectionFactory(JDBCOptions) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- createContinuousReaderFactory() - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
-
Returns a factory to create a
ContinuousPartitionReader
for each InputPartition
. - createCryptoInputStream(InputStream, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Helper method to wrap
InputStream
with CryptoInputStream
for decryption. - createCryptoOutputStream(OutputStream, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Helper method to wrap
OutputStream
with CryptoOutputStream
for encryption. - createDataFrame(List<?>, Class<?>) - Method in class org.apache.spark.sql.api.SparkSession
-
Applies a schema to a List of Java Beans.
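A minimal sketch, assuming a SparkSession named spark; the Point bean is illustrative, and the schema is derived from its getters and setters:
  import scala.beans.BeanProperty
  import scala.jdk.CollectionConverters._
  class Point {
    @BeanProperty var x: Double = 0.0
    @BeanProperty var y: Double = 0.0
  }
  val p = new Point
  p.setX(1.0); p.setY(2.0)
  val df = spark.createDataFrame(Seq(p).asJava, classOf[Point])
  df.printSchema()  // x: double, y: double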
- createDataFrame(List<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
- createDataFrame(List<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(List<Row>, StructType) - Method in class org.apache.spark.sql.api.SparkSession
- createDataFrame(List<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
- createDataFrame(List<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
-
Applies a schema to an RDD of Java Beans.
- createDataFrame(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
- createDataFrame(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
-
Applies a schema to an RDD of Java Beans.
- createDataFrame(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SparkSession
-
Creates a
DataFrame
from an RDD of Product (e.g. case classes, tuples). - createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
- createDataFrame(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
- createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a
DataFrame
from a local Seq of Product. - createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SparkSession
- createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
- createDataset(List<T>, Encoder<T>) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a
Dataset
from a java.util.List
of a given type. - createDataset(List<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
- createDataset(List<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
- createDataset(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
-
Creates a
Dataset
from an RDD of a given type. - createDataset(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
- createDataset(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a
Dataset
from a local Seq of data of a given type. - createDataset(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
- createDataset(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
- createDayTimeIntervalType() - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DayTimeIntervalType with default start and end fields: interval day to second.
- createDayTimeIntervalType(byte, byte) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DayTimeIntervalType by specifying the start and end fields.
- createDecimalType() - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DecimalType with default precision and scale, which are 10 and 0.
- createDecimalType(int, int) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DecimalType by specifying the precision and scale.
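A minimal sketch of the DataTypes factory methods, including both createDecimalType variants described above:
  import org.apache.spark.sql.types.DataTypes
  val defaultDec = DataTypes.createDecimalType()       // DECIMAL(10, 0)
  val price      = DataTypes.createDecimalType(10, 2)  // DECIMAL(10, 2)
  val tags       = DataTypes.createArrayType(DataTypes.StringType)
  val schema     = DataTypes.createStructType(Array(
    DataTypes.createStructField("price", price, false),
    DataTypes.createStructField("tags", tags, true)))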
- createDF(RDD<byte[]>, StructType, SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- createDirectory(File) - Method in interface org.apache.spark.util.SparkFileUtils
-
Create a directory given the abstract pathname
- createDirectory(File) - Static method in class org.apache.spark.util.Utils
- createDirectory(String, String) - Method in interface org.apache.spark.util.SparkFileUtils
-
Create a directory inside the given parent directory.
- createDirectory(String, String) - Static method in class org.apache.spark.util.Utils
- createDirectory$default$2() - Static method in class org.apache.spark.util.Utils
- createEvaluator() - Method in interface org.apache.spark.PartitionEvaluatorFactory
-
Creates a partition evaluator.
- createExternalTable(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTable(String, String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String, String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Deprecated. Use createTable instead. Since 2.2.0.
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use sparkSession.catalog.createTable instead. Since 2.2.0.
- createExternalTableWithoutLocationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- createFailedToGetTokenMessage(String, Throwable) - Static method in class org.apache.spark.util.Utils
-
Returns a string message about delegation token generation failure
- createFuncWithBothIfNotExistsAndReplaceError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- createGlobalTempView(String) - Method in class org.apache.spark.sql.api.Dataset
-
Creates a global temporary view using the given name.
- createIndex(String, Identifier, NamedReference[], Map<NamedReference, Map<String, String>>, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Build a create index SQL statement.
- createIndex(String, Identifier, NamedReference[], Map<NamedReference, Map<String, String>>, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- createIndex(String, Identifier, NamedReference[], Map<NamedReference, Map<String, String>>, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- createIndex(String, Identifier, NamedReference[], Map<NamedReference, Map<String, String>>, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- createIndex(String, NamedReference[], Map<NamedReference, Map<String, String>>, Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.index.SupportsIndex
-
Creates an index.
- createJar(Seq<File>, File, Option<String>, Option<String>) - Static method in class org.apache.spark.TestUtils
-
Create a jar file that contains this set of files.
- createJarWithClasses(Seq<String>, String, Seq<Tuple2<String, String>>, Seq<URL>) - Static method in class org.apache.spark.TestUtils
-
Create a jar that defines classes with the given names.
- createJarWithFiles(Map<String, String>, File) - Static method in class org.apache.spark.TestUtils
-
Create a jar file containing multiple files.
- createKey(SparkConf) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Creates a new encryption key.
- createKVStore(Option<File>, boolean, SparkConf) - Static method in class org.apache.spark.status.KVUtils
- createListeners(SparkConf, ElementTrackingStore) - Method in interface org.apache.spark.status.AppHistoryServerPlugin
-
Creates listeners to replay the event logs.
- createLogForDriver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
Create a WriteAheadLog for the driver.
- createLogForReceiver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
Create a WriteAheadLog for the receiver.
- createMapOutputWriter(int, long, int) - Method in interface org.apache.spark.shuffle.api.ShuffleExecutorComponents
-
Called once per map task to create a writer that will be responsible for persisting all the partitioned bytes written by that map task.
- createMapType(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a MapType by specifying the data type of keys (
keyType
) and values (valueType
). - createMapType(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a MapType by specifying the data type of keys (keyType), the data type of values (valueType), and whether values contain any null value (valueContainsNull).
- createMetrics(long) - Static method in class org.apache.spark.status.LiveEntityHelpers
- createMetrics(long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long, long) - Static method in class org.apache.spark.status.LiveEntityHelpers
- createModel(DenseVector<Object>) - Method in interface org.apache.spark.ml.ann.Layer
-
Returns the instance of the layer based on weights provided.
- createNamespace(String[], Map<String, String>) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- createNamespace(String[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Create a namespace in the catalog.
- createOrReplace() - Method in interface org.apache.spark.sql.CreateTableWriter
-
Create a new table or replace an existing table with the contents of the data frame.
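A minimal Java sketch (not part of the index) of the v2 writer's createOrReplace(); the table name catalog.db.events, the input path, and the event_time column are hypothetical, and spark is assumed to be an existing SparkSession:
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.days;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    Dataset<Row> df = spark.read().parquet("/tmp/events");
    // Create the table, or replace it if it already exists,
    // partitioned by day on the event_time column.
    df.writeTo("catalog.db.events")
      .partitionedBy(days(col("event_time")))
      .createOrReplace();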
- createOrReplaceGlobalTempView(String) - Method in class org.apache.spark.sql.api.Dataset
-
Creates or replaces a global temporary view using the given name.
- createOrReplaceTempView(String) - Method in class org.apache.spark.sql.api.Dataset
-
Creates or replaces a local temporary view using the given name.
- createOutputOperationFailureForUI(String) - Static method in class org.apache.spark.streaming.ui.UIUtils
- createPartition(InternalRow, Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
- createPartition(InternalRow, Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Create a partition in the table.
- createPartitions(InternalRow[], Map<String, String>[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
-
Create an array of partitions atomically in the table.
- createPersistedViewFromDatasetAPINotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- createPMMLModelExport(Object) - Static method in class org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
-
Factory object to help create the necessary PMMLModelExport implementation, taking as input the machine learning model (for example, KMeansModel).
- createProxyHandler(Function1<String, Option<String>>) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler for proxying requests to Workers and Application Drivers.
- createProxyLocationHeader(String, HttpServletRequest, URI) - Static method in class org.apache.spark.ui.JettyUtils
- createProxyURI(String, String, String, String) - Static method in class org.apache.spark.ui.JettyUtils
- createRDDFromArray(JavaSparkContext, byte[][]) - Static method in class org.apache.spark.api.r.RRDD
-
Create an RRDD given a sequence of byte arrays.
- createRDDFromFile(JavaSparkContext, String, int) - Static method in class org.apache.spark.api.r.RRDD
-
Create an RRDD given a temporary file name.
- createReadableChannel(ReadableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Wrap a ReadableByteChannel for decryption.
- createReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.PartitionReaderFactory
-
Returns a row-based partition reader to read data from the given InputPartition.
- createReader(InputPartition) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory
- createReaderFactory() - Method in interface org.apache.spark.sql.connector.read.Batch
-
Returns a factory to create a PartitionReader for each InputPartition.
- createReaderFactory() - Method in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
-
Returns a factory to create a PartitionReader for each InputPartition.
- createRedirectHandler(String, String, Function1<HttpServletRequest, BoxedUnit>, String, Set<String>) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler that always redirects the user to the given path
- createRelation(SQLContext, SaveMode, Map<String, String>, Dataset<Row>) - Method in interface org.apache.spark.sql.sources.CreatableRelationProvider
-
Saves a DataFrame to a destination (using data source-specific parameters)
- createRelation(SQLContext, Map<String, String>) - Method in interface org.apache.spark.sql.sources.RelationProvider
-
Returns a new base relation with the given parameters.
- createRelation(SQLContext, Map<String, String>, StructType) - Method in interface org.apache.spark.sql.sources.SchemaRelationProvider
-
Returns a new base relation with the given parameters and a user-defined schema.
- createSchedulerBackend(SparkContext, String, TaskScheduler) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
-
Create a scheduler backend for the given SparkContext and scheduler.
- createSchema(Statement, String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Create a schema with an optional comment.
- createSchema(Statement, String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- createSecret(SparkConf) - Static method in class org.apache.spark.util.Utils
- createServletHandler(String, HttpServlet, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a context handler that responds to a request with the given path prefix
- createServletHandler(String, JettyUtils.ServletParams<T>, SparkConf, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a context handler that responds to a request with the given path prefix
- createSingleFileMapOutputWriter(int, long) - Method in interface org.apache.spark.shuffle.api.ShuffleExecutorComponents
-
An optional extension for creating a map output writer that can optimize the transfer of a single partition file, as the entire result of a map task, to the backing store.
- createSink(SQLContext, Map<String, String>, Seq<String>, OutputMode) - Method in interface org.apache.spark.sql.sources.StreamSinkProvider
- createSource(SQLContext, String, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider
- createSparkContext(String, String, String, String[], Map<Object, Object>, Map<Object, Object>) - Static method in class org.apache.spark.api.r.RRDD
- createStaticHandler(String, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler for serving files from a static directory
- createStream(JavaStreamingContext, String, String, String, String, int, Duration, int, StorageLevel, String, String, String, String, String) - Method in class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper
- createStreamingSourceNotSpecifySchemaError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- createStreamingWriterFactory(PhysicalWriteInfo) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
-
Creates a writer factory which will be serialized and sent to executors.
- createStructField(String, String, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- createStructField(String, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructField with empty metadata.
- createStructField(String, DataType, boolean, Metadata) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructField by specifying the name (name), the data type (dataType), and whether values of this field can be null (nullable).
- createStructType(List<StructField>) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructType with the given list of StructFields (fields).
- createStructType(StructField[]) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructType with the given StructField array (fields).
- createStructType(Seq<StructField>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
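A small Java sketch of the DataTypes factory methods indexed above (createStructField, createStructType, createMapType); the field names are hypothetical:
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    StructType schema = DataTypes.createStructType(new StructField[] {
        // createStructField(name, dataType, nullable)
        DataTypes.createStructField("id", DataTypes.LongType, false),
        // createMapType(keyType, valueType, valueContainsNull)
        DataTypes.createStructField("attrs",
            DataTypes.createMapType(DataTypes.StringType, DataTypes.IntegerType, true),
            true)
    });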
- createTable(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Creates a table from the given path and returns the corresponding DataFrame.
- createTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Creates a table from the given path based on a data source and returns the corresponding DataFrame.
- createTable(String, String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
(Scala-specific) Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, StructType, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Create a table based on the dataset in a data source, a schema and a set of options.
- createTable(String, String, StructType, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, StructType, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
(Scala-specific) Create a table based on the dataset in a data source, a schema and a set of options.
- createTable(String, String, StructType, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
Create a table based on the dataset in a data source, a schema and a set of options.
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
(Scala-specific) Create a table based on the dataset in a data source, a schema and a set of options.
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.api.Catalog
-
(Scala-specific) Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
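A hedged Java sketch of the Catalog.createTable overloads indexed above; the table name, source format, and path are illustrative only, and spark is an assumed SparkSession:
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // Registers an external Parquet dataset as the table "events" and
    // returns the corresponding DataFrame.
    Dataset<Row> events = spark.catalog().createTable(
        "events", "parquet", java.util.Map.of("path", "/data/events"));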
- createTable(Statement, String, String, JdbcOptionsInWrite) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Create the table if the table does not exist.
- createTable(Statement, String, String, JdbcOptionsInWrite) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- createTable(Identifier, Column[], Transform[], Map<String, String>) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- createTable(Identifier, Column[], Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Create a table in the catalog.
- createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Deprecated. Please override TableCatalog.createTable(Identifier, Column[], Transform[], Map) instead.
- createTableAsSelectWithNonEmptyDirectoryError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- createTableColumnTypesOptionColumnNotFoundInSchemaError(String, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- createTableDeprecatedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- CreateTableWriter<T> - Interface in org.apache.spark.sql
-
Trait to restrict calls to create and replace operations.
- createTaskScheduler(SparkContext, String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
-
Create a task scheduler instance for the given SparkContext
- createTempDir() - Method in interface org.apache.spark.util.SparkFileUtils
-
Create a temporary directory inside the java.io.tmpdir directory, prefixed with spark.
- createTempDir(String, String) - Method in interface org.apache.spark.util.SparkFileUtils
-
Create a temporary directory inside the given parent directory.
- createTempDir(String, String) - Static method in class org.apache.spark.util.Utils
-
Create a temporary directory inside the given parent directory.
- createTempDir$default$1() - Static method in class org.apache.spark.util.Utils
- createTempDir$default$2() - Static method in class org.apache.spark.util.Utils
- createTempJsonFile(File, String, JValue) - Static method in class org.apache.spark.TestUtils
-
Creates a temp JSON file that contains the input JSON record.
- createTempScriptWithExpectedOutput(File, String, String) - Static method in class org.apache.spark.TestUtils
-
Creates a temp bash script that prints the given output.
- createTempTableNotSpecifyProviderError(SqlBaseParser.CreateTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- createTempView(String) - Method in class org.apache.spark.sql.api.Dataset
-
Creates a local temporary view using the given name.
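A brief Java sketch of the temporary-view methods indexed above (createTempView, createOrReplaceTempView, createGlobalTempView); df and spark are assumed to exist, and the view names are hypothetical:
    // Session-scoped view; createTempView would instead fail if the
    // name is already taken.
    df.createOrReplaceTempView("people");
    // Cross-session view, resolved via the reserved global_temp database.
    df.createGlobalTempView("people_global");
    Dataset<Row> adults = spark.sql("SELECT * FROM people WHERE age >= 18");
    spark.sql("SELECT * FROM global_temp.people_global");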
- createUnsafe(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
Creates a decimal from an unscaled value, precision, and scale, without checking the bounds.
- createURI(String) - Method in interface org.apache.spark.util.SparkTestUtils
- createVarcharType(int) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a VarcharType with the given length.
- createView(ViewInfo) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Create a view in the catalog.
- createViewWithBothIfNotExistsAndReplaceError(SqlBaseParser.CreateViewContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- createWorkspace(int) - Static method in class org.apache.spark.mllib.optimization.NNLS
- createWritableChannel(WritableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Wrap a WritableByteChannel for encryption.
- createWriter(int, long) - Method in interface org.apache.spark.sql.connector.write.DataWriterFactory
-
Returns a data writer to do the actual writing work.
- createWriter(int, long) - Method in interface org.apache.spark.sql.connector.write.DeltaWriterFactory
- createWriter(int, long, long) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingDataWriterFactory
-
Returns a data writer to do the actual writing work.
- createYearMonthIntervalType() - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a YearMonthIntervalType with default start and end fields: interval year to month.
- createYearMonthIntervalType(byte, byte) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a YearMonthIntervalType by specifying the start and end fields.
- crossJoin(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Explicit Cartesian join with another DataFrame.
- crossJoin(Dataset<?>) - Method in class org.apache.spark.sql.Dataset
- crosstab(String, String) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Computes a pair-wise frequency table of the given columns.
- crosstab(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
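A short Java sketch of crossJoin and crosstab as indexed above; the datasets and column names are hypothetical:
    // Explicit Cartesian product: every row of users paired with
    // every row of products.
    Dataset<Row> pairs = users.crossJoin(products);
    // Pair-wise frequency table: one row per distinct city, one
    // column per distinct browser value, cells holding counts.
    Dataset<Row> freq = df.stat().crosstab("city", "browser");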
- CrossValidator - Class in org.apache.spark.ml.tuning
-
K-fold cross validation performs model selection by splitting the dataset into a set of non-overlapping, randomly partitioned folds, which are used as separate training and test datasets; e.g., with k=3 folds, K-fold cross validation generates 3 (training, test) dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing.
- CrossValidator() - Constructor for class org.apache.spark.ml.tuning.CrossValidator
- CrossValidator(String) - Constructor for class org.apache.spark.ml.tuning.CrossValidator
- CrossValidatorModel - Class in org.apache.spark.ml.tuning
-
CrossValidatorModel contains the model with the highest average cross-validation metric across folds and uses this model to transform input data.
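A compact Java sketch of CrossValidator as described above, assuming a Dataset<Row> named training with the usual label/features columns:
    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.CrossValidator;
    import org.apache.spark.ml.tuning.CrossValidatorModel;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    LogisticRegression lr = new LogisticRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[] {0.01, 0.1})
        .build();
    CrossValidator cv = new CrossValidator()
        .setEstimator(lr)
        .setEvaluator(new BinaryClassificationEvaluator())
        .setEstimatorParamMaps(grid)
        .setNumFolds(3);  // k = 3 (training, test) splits
    CrossValidatorModel best = cv.fit(training);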
- CrossValidatorModel.CrossValidatorModelWriter - Class in org.apache.spark.ml.tuning
-
Writer for CrossValidatorModel.
- CrossValidatorParams - Interface in org.apache.spark.ml.tuning
-
Params for CrossValidator and CrossValidatorModel.
- CryptoStreamUtils - Class in org.apache.spark.security
-
A util class for manipulating IO encryption and decryption streams.
- CryptoStreamUtils() - Constructor for class org.apache.spark.security.CryptoStreamUtils
- csc(Column) - Static method in class org.apache.spark.sql.functions
- csv(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a CSV file and returns the result as a DataFrame.
- csv(String) - Method in class org.apache.spark.sql.DataFrameReader
- csv(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in CSV format at the specified path.
- csv(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads a CSV file stream and returns the result as a DataFrame.
- csv(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads CSV files and returns the result as a DataFrame.
- csv(String...) - Method in class org.apache.spark.sql.DataFrameReader
- csv(Dataset) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
- csv(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
- csv(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads CSV files and returns the result as a DataFrame.
- csv(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
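A minimal Java sketch of the csv reader/writer methods above; the paths and options are illustrative only, and spark is an assumed SparkSession:
    Dataset<Row> df = spark.read()
        .option("header", "true")       // first line holds column names
        .option("inferSchema", "true")  // sample the data to pick types
        .csv("/data/in.csv");
    df.write().option("header", "true").csv("/data/out");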
- cube(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
- cube(String, String...) - Method in class org.apache.spark.sql.Dataset
- cube(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
- cube(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- cube(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
- cube(Column...) - Method in class org.apache.spark.sql.Dataset
- cube(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns, so we can run aggregation on them.
- cube(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
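A small Java sketch of cube; the sales dataset and the city, year, and amount columns are hypothetical:
    import static org.apache.spark.sql.functions.sum;

    // Aggregates over every grouping combination: (city, year),
    // (city), (year), and the grand total.
    Dataset<Row> cubed = sales.cube("city", "year").agg(sum("amount"));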
- CubeType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.CubeType$
- cume_dist() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the cumulative distribution of values within a window partition, i.e. the fraction of rows that are below the current row.
- curdate() - Static method in class org.apache.spark.sql.functions
-
Returns the current date at the start of query evaluation as a date column.
- curId() - Static method in class org.apache.spark.sql.Dataset
- current_catalog() - Static method in class org.apache.spark.sql.functions
-
Returns the current catalog.
- current_database() - Static method in class org.apache.spark.sql.functions
-
Returns the current database.
- current_date() - Static method in class org.apache.spark.sql.functions
-
Returns the current date at the start of query evaluation as a date column.
- current_schema() - Static method in class org.apache.spark.sql.functions
-
Returns the current schema.
- current_timestamp() - Static method in class org.apache.spark.sql.functions
-
Returns the current timestamp at the start of query evaluation as a timestamp column.
- current_timezone() - Static method in class org.apache.spark.sql.functions
-
Returns the current session local timezone.
- current_user() - Static method in class org.apache.spark.sql.functions
-
Returns the user name of the current execution context.
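A one-line Java sketch exercising the current_* functions indexed above (spark is an assumed SparkSession):
    import static org.apache.spark.sql.functions.*;

    spark.range(1)
        .select(current_catalog(), current_database(),
                current_timestamp(), current_user())
        .show(false);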
- currentAttemptId() - Method in interface org.apache.spark.SparkStageInfo
- currentAttemptId() - Method in class org.apache.spark.SparkStageInfoImpl
- currentCatalog() - Method in class org.apache.spark.sql.api.Catalog
-
Returns the current catalog in this session.
- currentCatalog() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
-
Returns the current catalog set.
- currentCatalog() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The current catalog when the view is created.
- currentCatalog() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- currentDatabase() - Method in class org.apache.spark.sql.api.Catalog
-
Returns the current database (namespace) in this session.
- currentMetricsValues() - Method in interface org.apache.spark.sql.connector.read.PartitionReader
-
Returns an array of custom task metrics.
- currentMetricsValues() - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Returns an array of custom task metrics.
- currentNamespace() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The current namespace when the view is created.
- currentNamespace() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- currentResult() - Method in interface org.apache.spark.partial.ApproximateEvaluator
- currentRow() - Static method in class org.apache.spark.sql.expressions.Window
-
Value representing the current row.
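A brief Java sketch combining cume_dist() with a window frame ending at Window.currentRow(); df and the dept and salary columns are hypothetical:
    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.*;

    WindowSpec byDept = Window.partitionBy("dept").orderBy("salary");
    df.withColumn("cume", cume_dist().over(byDept))
      // Running maximum over rows from the partition start up to the current row.
      .withColumn("running_max",
          max("salary").over(byDept.rowsBetween(
              Window.unboundedPreceding(), Window.currentRow())));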
- currPrefLocs(Partition, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- CUSTOM_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- CustomAvgMetric - Class in org.apache.spark.sql.connector.metric
-
Built-in `CustomMetric` that computes average of metric values.
- CustomAvgMetric() - Constructor for class org.apache.spark.sql.connector.metric.CustomAvgMetric
- customCollectionClsNotResolvedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- CustomMetric - Interface in org.apache.spark.sql.connector.metric
-
A custom metric.
- customMetrics() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- CustomSumMetric - Class in org.apache.spark.sql.connector.metric
-
Built-in `CustomMetric` that sums up metric values.
- CustomSumMetric() - Constructor for class org.apache.spark.sql.connector.metric.CustomSumMetric
- CustomTaskMetric - Interface in org.apache.spark.sql.connector.metric
-
A custom task metric.
D
- DAGSchedulerEvent - Interface in org.apache.spark.scheduler
-
Types of events that can be handled by the DAGScheduler.
- dapply(Dataset<Row>, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
The helper function for dapply() on the R side.
- data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
- data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- data() - Method in class org.apache.spark.storage.ShuffleFetchCompletionListener
- Data(double[], double[], double[][]) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- Data(double[], double[], double[][], String) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- Data(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
- Data(Vector, double) - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
- Data(Vector, double, Option<Object>) - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- DATA_DISTRIBUTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- Data$() - Constructor for class org.apache.spark.ml.feature.Word2VecModel.Data$
- Data$() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data$
- Data$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data$
- Data$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data$
- Data$() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data$
- Data$() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data$
- database() - Method in class org.apache.spark.sql.catalog.Function
- database() - Method in class org.apache.spark.sql.catalog.Table
- Database - Class in org.apache.spark.sql.catalog
-
A database in Spark, as returned by the listDatabases method defined in Catalog.
- Database(String, String, String) - Constructor for class org.apache.spark.sql.catalog.Database
- Database(String, String, String, String) - Constructor for class org.apache.spark.sql.catalog.Database
- databaseExists(String) - Method in class org.apache.spark.sql.api.Catalog
-
Check if the database (namespace) with the specified name exists (the name can be qualified with catalog).
- databaseFromV1SessionCatalogNotSpecifiedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- databaseNameConflictWithSystemPreservedDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- databaseTypeDefinition() - Method in class org.apache.spark.sql.jdbc.JdbcType
- DatabricksDialect - Class in org.apache.spark.sql.jdbc
- DatabricksDialect() - Constructor for class org.apache.spark.sql.jdbc.DatabricksDialect
- dataDistribution() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- DataFrame - Enum constant in enum class org.apache.spark.QueryContextType
- DATAFRAME_DAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes
- DATAFRAME_GAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes
- DataFrameNaFunctions - Class in org.apache.spark.sql.api
-
Functionality for working with missing data in DataFrames.
- DataFrameNaFunctions - Class in org.apache.spark.sql
-
Functionality for working with missing data in DataFrames.
- DataFrameNaFunctions() - Constructor for class org.apache.spark.sql.api.DataFrameNaFunctions
- DataFrameReader - Class in org.apache.spark.sql.api
-
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc.).
- DataFrameReader - Class in org.apache.spark.sql
-
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc.).
- DataFrameReader() - Constructor for class org.apache.spark.sql.api.DataFrameReader
- DataFrameStatFunctions - Class in org.apache.spark.sql.api
-
Statistic functions for DataFrames.
- DataFrameStatFunctions - Class in org.apache.spark.sql
-
Statistic functions for DataFrames.
- DataFrameStatFunctions() - Constructor for class org.apache.spark.sql.api.DataFrameStatFunctions
- DataFrameWriter<T> - Class in org.apache.spark.sql
-
Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores, etc.).
- DataFrameWriter() - Constructor for class org.apache.spark.sql.DataFrameWriter
- DataFrameWriterV2<T> - Class in org.apache.spark.sql
-
Interface used to write a Dataset to external storage using the v2 API.
- DataFrameWriterV2() - Constructor for class org.apache.spark.sql.DataFrameWriterV2
- dataPathNotExistError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataPathNotSpecifiedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- dataSchemaNotSpecifiedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataSchemaNotSpecifiedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataset() - Method in class org.apache.spark.ml.FitStart
- Dataset<T> - Class in org.apache.spark.sql.api
-
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
- Dataset<T> - Class in org.apache.spark.sql
-
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
- Dataset() - Constructor for class org.apache.spark.sql.api.Dataset
- Dataset(SparkSession, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset
- Dataset(SQLContext, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset
- DATASET_ID_KEY() - Static method in class org.apache.spark.sql.Dataset
- DATASET_ID_TAG() - Static method in class org.apache.spark.sql.Dataset
- DatasetHolder<T> - Class in org.apache.spark.sql
-
A container for a Dataset, used for implicit conversions in Scala.
- DatasetUtils - Class in org.apache.spark.ml.util
- DatasetUtils() - Constructor for class org.apache.spark.ml.util.DatasetUtils
- dataSource() - Method in class org.apache.spark.sql.SparkSession
-
A collection of methods for registering user-defined data sources.
- dataSource() - Method in interface org.apache.spark.ui.PagedTable
- dataSourceAlreadyExists(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataSourceDoesNotExist(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataSourceNotFoundError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- dataSourceOutputModeUnsupportedError(String, OutputMode) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- DataSourceRegister - Interface in org.apache.spark.sql.sources
-
Data sources should implement this trait so that they can register an alias to their data source.
- DataSourceRegistration - Class in org.apache.spark.sql
-
Functions for registering user-defined data sources.
- dataSourceTableSchemaMismatchError(StructType, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- DataStreamReader - Class in org.apache.spark.sql.streaming
-
Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores, etc.).
- DataStreamWriter<T> - Class in org.apache.spark.sql.streaming
-
Interface used to write a streaming Dataset to external storage systems (e.g. file systems, key-value stores, etc.).
- dataTablesHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
- dataType() - Method in class org.apache.spark.sql.avro.SchemaConverters.SchemaType
- dataType() - Method in class org.apache.spark.sql.catalog.Column
- dataType() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the data type of this table column.
- dataType() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
-
The data type of values in this metadata column.
- dataType() - Method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Returns the data type of this parameter.
- dataType() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- dataType() - Method in class org.apache.spark.sql.connector.expressions.Cast
- dataType() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
- dataType() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
- dataType() - Method in interface org.apache.spark.sql.connector.expressions.Literal
-
Returns the SQL data type of the literal.
- dataType() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. The DataType of the returned value of this UserDefinedAggregateFunction.
- dataType() - Method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- dataType() - Method in class org.apache.spark.sql.sources.CollatedEqualTo
- dataType() - Method in class org.apache.spark.sql.sources.CollatedFilter
- dataType() - Method in class org.apache.spark.sql.sources.CollatedGreaterThan
- dataType() - Method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- dataType() - Method in class org.apache.spark.sql.sources.CollatedIn
- dataType() - Method in class org.apache.spark.sql.sources.CollatedLessThan
- dataType() - Method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- dataType() - Method in class org.apache.spark.sql.sources.CollatedStringContains
- dataType() - Method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- dataType() - Method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- dataType() - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.DateTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.LongTypeExpression
- dataType() - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- dataType() - Method in class org.apache.spark.sql.types.StructField
- dataType() - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- dataType() - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the data type of this column vector.
- DataType - Class in org.apache.spark.sql.types
-
The base type of all Spark SQL data types.
- DataType() - Constructor for class org.apache.spark.sql.types.DataType
- DataTypeErrors - Class in org.apache.spark.sql.errors
-
Object for grouping error messages from (most) exceptions thrown during query execution.
- DataTypeErrors() - Constructor for class org.apache.spark.sql.errors.DataTypeErrors
- DataTypeErrorsBase - Interface in org.apache.spark.sql.errors
- dataTypeMismatchForDeserializerError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataTypeOperationUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- DataTypes - Class in org.apache.spark.sql.types
-
To get or create a specific data type, users should use the singleton objects and factory methods provided by this class.
- DataTypes() - Constructor for class org.apache.spark.sql.types.DataTypes
- dataTypeUnsupportedByClassError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataTypeUnsupportedByDataSourceError(String, StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataTypeUnsupportedByExtractValueError(DataType, Expression, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dataTypeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- dataTypeUnsupportedError(String, SqlBaseParser.PrimitiveDataTypeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- DataValidators - Class in org.apache.spark.mllib.util
-
A collection of methods used to validate data before applying ML algorithms.
- DataValidators() - Constructor for class org.apache.spark.mllib.util.DataValidators
- DataWriter<T> - Interface in org.apache.spark.sql.connector.write
-
A data writer returned by DataWriterFactory.createWriter(int, long), responsible for writing data for an input RDD partition.
- DataWriterFactory - Interface in org.apache.spark.sql.connector.write
-
A factory of DataWriter returned by BatchWrite.createBatchWriterFactory(PhysicalWriteInfo), which is responsible for creating and initializing the actual data writer on the executor side.
- date() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type date.
- DATE - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- DATE - Static variable in class org.apache.spark.types.variant.VariantUtil
- DATE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable date type.
- date_add(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days after start.
- date_add(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days after start.
- date_diff(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of days from start to end.
- date_format(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts a date/timestamp/string to a value of string in the format specified by the date format given by the second argument.
- date_from_unix_date(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a date from the number of days since 1970-01-01.
- date_part(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extracts a part of the date/timestamp or interval source.
- date_sub(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days before start.
- date_sub(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days before start.
- date_trunc(String, Column) - Static method in class org.apache.spark.sql.functions
-
Returns timestamp truncated to the unit specified by the format.
- dateadd(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days after start.
- datediff(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of days from start to end.
- datepart(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extracts a part of the date/timestamp or interval source.
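A condensed Java sketch of the date functions indexed above, assuming a DataFrame df with a date column d:
    import static org.apache.spark.sql.functions.*;

    df.select(
        date_add(col("d"), 7),                   // one week after d
        date_sub(col("d"), 7),                   // one week before d
        datediff(col("d"), lit("2024-01-01")),   // days from the literal date to d
        date_format(col("d"), "yyyy-MM"),        // format as a year-month string
        date_trunc("month", col("d")));          // truncate to the start of the month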
- DateType - Class in org.apache.spark.sql.types
-
The date type represents a valid date in the proleptic Gregorian calendar.
- DateType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the DateType object.
- DateType() - Constructor for class org.apache.spark.sql.types.DateType
- DateTypeExpression - Class in org.apache.spark.sql.types
- DateTypeExpression() - Constructor for class org.apache.spark.sql.types.DateTypeExpression
- day(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the month as an integer from a given date/timestamp/string.
- DAY() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- DAY_TIME_INTERVAL - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- DAY_TIME_INTERVAL - Static variable in class org.apache.spark.types.variant.VariantUtil
- dayname(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the three-letter abbreviated day name from a given date/timestamp/string.
- dayofmonth(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the month as an integer from a given date/timestamp/string.
- dayofweek(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the week as an integer from a given date/timestamp/string.
- dayofyear(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the year as an integer from a given date/timestamp/string.
- days - Variable in class org.apache.spark.unsafe.types.CalendarInterval
- days(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a daily transform for a timestamp or date column.
- days(Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for timestamps and dates to partition data into days.
- days(Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for timestamps and dates to partition data into days.
- days(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- dayTimeFields() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- DayTimeIntervalType - Class in org.apache.spark.sql.types
-
The type represents day-time intervals of the SQL standard.
- DayTimeIntervalType(byte, byte) - Constructor for class org.apache.spark.sql.types.DayTimeIntervalType
- DayTimeIntervalUtils - Class in org.apache.spark.util
- DayTimeIntervalUtils() - Constructor for class org.apache.spark.util.DayTimeIntervalUtils
- DB2Dialect - Class in org.apache.spark.sql.jdbc
- DB2Dialect() - Constructor for class org.apache.spark.sql.jdbc.DB2Dialect
- DB2Dialect.DB2SQLBuilder - Class in org.apache.spark.sql.jdbc
- DB2Dialect.DB2SQLQueryBuilder - Class in org.apache.spark.sql.jdbc
- DB2SQLBuilder() - Constructor for class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- DB2SQLQueryBuilder(JdbcDialect, JDBCOptions) - Constructor for class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLQueryBuilder
- DCT - Class in org.apache.spark.ml.feature
-
A feature transformer that takes the 1D discrete cosine transform of a real vector.
- DCT() - Constructor for class org.apache.spark.ml.feature.DCT
- DCT(String) - Constructor for class org.apache.spark.ml.feature.DCT
- ddlUnsupportedTemporarilyError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ddlWithoutHiveSupportEnabledError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- deallocate() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
- decayFactor() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
- decimal() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type decimal.
- decimal(int, int) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type decimal.
- Decimal - Class in org.apache.spark.sql.types
-
A mutable implementation of BigDecimal that can hold a Long if values are small enough.
- Decimal() - Constructor for class org.apache.spark.sql.types.Decimal
- DECIMAL - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- DECIMAL() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable decimal type.
- Decimal.DecimalAsIfIntegral$ - Class in org.apache.spark.sql.types
-
An Integral evidence parameter for Decimals.
- Decimal.DecimalIsConflicted - Interface in org.apache.spark.sql.types
-
Common methods for Decimal evidence parameters
- Decimal.DecimalIsFractional$ - Class in org.apache.spark.sql.types
-
A Fractional evidence parameter for Decimals.
- DECIMAL16 - Static variable in class org.apache.spark.types.variant.VariantUtil
- DECIMAL4 - Static variable in class org.apache.spark.types.variant.VariantUtil
- DECIMAL8 - Static variable in class org.apache.spark.types.variant.VariantUtil
- DecimalAsIfIntegral$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
- decimalCannotGreaterThanPrecisionError(int, int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- DecimalExactNumeric - Class in org.apache.spark.sql.types
- DecimalExactNumeric() - Constructor for class org.apache.spark.sql.types.DecimalExactNumeric
- DecimalExpression - Class in org.apache.spark.sql.types
- DecimalExpression() - Constructor for class org.apache.spark.sql.types.DecimalExpression
- DecimalIsFractional$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
- decimalPrecisionExceedsMaxPrecisionError(int, int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- DecimalType - Class in org.apache.spark.sql.types
-
The data type representing java.math.BigDecimal values.
- DecimalType() - Constructor for class org.apache.spark.sql.types.DecimalType
- DecimalType(int) - Constructor for class org.apache.spark.sql.types.DecimalType
- DecimalType(int, int) - Constructor for class org.apache.spark.sql.types.DecimalType
- DecimalType.Fixed$ - Class in org.apache.spark.sql.types
- DecisionTree - Class in org.apache.spark.mllib.tree
-
A class which implements a decision tree learning algorithm for classification and regression.
- DecisionTree(Strategy) - Constructor for class org.apache.spark.mllib.tree.DecisionTree
- DecisionTreeClassificationModel - Class in org.apache.spark.ml.classification
-
Decision tree model (http://en.wikipedia.org/wiki/Decision_tree_learning) for classification.
- DecisionTreeClassifier - Class in org.apache.spark.ml.classification
-
Decision tree learning algorithm (http://en.wikipedia.org/wiki/Decision_tree_learning) for classification.
- DecisionTreeClassifier() - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
- DecisionTreeClassifier(String) - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
- DecisionTreeClassifierParams - Interface in org.apache.spark.ml.tree
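A minimal Java sketch of DecisionTreeClassifier; training is an assumed Dataset<Row> with the conventional features and label columns:
    import org.apache.spark.ml.classification.DecisionTreeClassificationModel;
    import org.apache.spark.ml.classification.DecisionTreeClassifier;

    DecisionTreeClassifier dt = new DecisionTreeClassifier()
        .setLabelCol("label")
        .setFeaturesCol("features")
        .setMaxDepth(5);  // limit tree depth to curb overfitting
    DecisionTreeClassificationModel model = dt.fit(training);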
- DecisionTreeModel - Class in org.apache.spark.mllib.tree.model
-
Decision tree model for classification or regression.
- DecisionTreeModel - Interface in org.apache.spark.ml.tree
-
Abstraction for Decision Tree models.
- DecisionTreeModel(Node, Enumeration.Value) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel
- DecisionTreeModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModel.SaveLoadV1_0$.NodeData - Class in org.apache.spark.mllib.tree.model
-
Model data for model import/export
- DecisionTreeModel.SaveLoadV1_0$.NodeData$ - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModel.SaveLoadV1_0$.PredictData - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModel.SaveLoadV1_0$.PredictData$ - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModel.SaveLoadV1_0$.SplitData - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModel.SaveLoadV1_0$.SplitData$ - Class in org.apache.spark.mllib.tree.model
- DecisionTreeModelReadWrite - Class in org.apache.spark.ml.tree
-
Helper classes for tree model persistence
- DecisionTreeModelReadWrite() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
- DecisionTreeModelReadWrite.NodeData - Class in org.apache.spark.ml.tree
-
Info for a Node
- DecisionTreeModelReadWrite.NodeData$ - Class in org.apache.spark.ml.tree
- DecisionTreeModelReadWrite.SplitData - Class in org.apache.spark.ml.tree
-
Info for a Split
- DecisionTreeModelReadWrite.SplitData$ - Class in org.apache.spark.ml.tree
- DecisionTreeParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based algorithms.
- DecisionTreeRegressionModel - Class in org.apache.spark.ml.regression
-
Decision tree (Wikipedia) model for regression.
- DecisionTreeRegressor - Class in org.apache.spark.ml.regression
-
Decision tree learning algorithm for regression.
- DecisionTreeRegressor() - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
- DecisionTreeRegressor(String) - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
- DecisionTreeRegressorParams - Interface in org.apache.spark.ml.tree
- decode(Column, String) - Static method in class org.apache.spark.sql.functions
-
Computes the first argument into a string from a binary using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16', 'UTF-32').
- decodeFileNameInURI(URI) - Static method in class org.apache.spark.util.Utils
-
Get the file name from uri's raw path and decode it.
- decodeStructField(StructField, boolean) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
-
Creates an Attribute from a StructField instance, optionally preserving the name.
- decodeURLParameter(MultivaluedMap<String, String>) - Static method in class org.apache.spark.ui.UIUtils
-
Decode URLParameter if URL is encoded by YARN-WebAppProxyServlet.
- decodeURLParameter(String) - Static method in class org.apache.spark.ui.UIUtils
-
Decode URLParameter if URL is encoded by YARN-WebAppProxyServlet.
- DecommissionBlockManager$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManager$
- DecommissionBlockManagers(Seq<String>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManagers
- DecommissionBlockManagers$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManagers$
- DecommissionExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutor$
- DecommissionExecutorsOnHost(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutorsOnHost
- DecommissionExecutorsOnHost$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutorsOnHost$
- decommissionFinished() - Static method in class org.apache.spark.scheduler.ExecutorLossMessage
- decorrelateInnerQueryThroughPlanUnsupportedError(LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- DECREASING_RUNTIME - Enum constant in enum class org.apache.spark.status.api.v1.TaskSorting
- DEFAULT() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- DEFAULT() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- DEFAULT_DRIVER_MEM_MB() - Static method in class org.apache.spark.util.Utils
-
Define a default value for driver memory here, since this value is referenced across the code base and nearly all files already use Utils.scala.
- DEFAULT_NUM_OUTPUT_ROWS() - Static method in class org.apache.spark.sql.streaming.SinkProgress
- DEFAULT_NUMBER_EXECUTORS() - Static method in class org.apache.spark.scheduler.cluster.SchedulerBackendUtils
- DEFAULT_RESOURCE_PROFILE_ID() - Static method in class org.apache.spark.resource.ResourceProfile
- DEFAULT_SASL_KERBEROS_SERVICE_NAME() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- DEFAULT_SASL_TOKEN_MECHANISM() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- DEFAULT_SCALE() - Static method in class org.apache.spark.sql.types.DecimalType
- DEFAULT_SECURITY_PROTOCOL_CONFIG() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- DEFAULT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
- DEFAULT_TARGET_SERVERS_REGEX() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
The default binary attribute.
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
The default nominal attribute.
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
The default numeric attribute.
- defaultColumnNotEnabledError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- defaultColumnNotImplementedYetError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- defaultColumnReferencesNotAllowedInPartitionSpec(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- defaultCopy(ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Default implementation of copy with extra params.
- defaultCorrName() - Static method in class org.apache.spark.mllib.stat.correlation.CorrelationNames
- DefaultCredentials - Class in org.apache.spark.streaming.kinesis
-
Returns DefaultAWSCredentialsProviderChain for authentication.
- DefaultCredentials() - Constructor for class org.apache.spark.streaming.kinesis.DefaultCredentials
- defaultDatabaseNotExistsError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- defaultMinPartitions() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Default min number of partitions for Hadoop RDDs when not given by user
- defaultMinPartitions() - Method in class org.apache.spark.SparkContext
-
Default min number of partitions for Hadoop RDDs when not given by user. Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
- defaultModuleOptionArray() - Static method in class org.apache.spark.launcher.JavaModuleOptions
-
Returns the default JVM runtime option array used by Spark.
- defaultModuleOptions() - Static method in class org.apache.spark.launcher.JavaModuleOptions
-
Returns the default JVM runtime options used by Spark.
- defaultNamespace() - Method in interface org.apache.spark.sql.connector.catalog.CatalogPlugin
-
Return a default namespace for the catalog.
- defaultNamespace() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- defaultNullOrdering() - Method in enum class org.apache.spark.sql.connector.expressions.SortDirection
-
Returns the default null ordering to use if no null ordering is specified explicitly.
- defaultParallelism() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
- defaultParallelism() - Method in interface org.apache.spark.scheduler.SchedulerBackend
- defaultParallelism() - Method in interface org.apache.spark.scheduler.TaskScheduler
- defaultParallelism() - Method in class org.apache.spark.SparkContext
-
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
- defaultParamMap() - Method in interface org.apache.spark.ml.param.Params
-
Internal param map for default values.
- defaultParams(String) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
Returns default configuration for the boosting algorithm
- defaultParams(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
Returns default configuration for the boosting algorithm
- DefaultParamsReadable<T> - Interface in org.apache.spark.ml.util
-
Helper trait for making simple Params types readable.
- DefaultParamsWritable - Interface in org.apache.spark.ml.util
-
Helper trait for making simple Params types writable.
- DefaultPartitionCoalescer - Class in org.apache.spark.rdd
-
Coalesce the partitions of a parent RDD (prev) into fewer partitions, so that each partition of this RDD computes one or more of the parent ones.
- DefaultPartitionCoalescer(double) - Constructor for class org.apache.spark.rdd.DefaultPartitionCoalescer
- DefaultPartitionCoalescer.partitionGroupOrdering$ - Class in org.apache.spark.rdd
- defaultPartitioner(RDD<?>, Seq<RDD<?>>) - Static method in class org.apache.spark.Partitioner
-
Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
- DefaultProfileExecutorResources$() - Constructor for class org.apache.spark.resource.ResourceProfile.DefaultProfileExecutorResources$
- defaultReferencesNotAllowedInComplexExpressionsInInsertValuesList() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultReferencesNotAllowedInComplexExpressionsInMergeInsertsOrUpdates() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultReferencesNotAllowedInComplexExpressionsInUpdateSetClause() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultReferencesNotAllowedInDataSource(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultSize() - Method in class org.apache.spark.sql.types.ArrayType
-
The default size of a value of the ArrayType is the default size of the element type.
- defaultSize() - Method in class org.apache.spark.sql.types.BinaryType
-
The default size of a value of the BinaryType is 100 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.BooleanType
-
The default size of a value of the BooleanType is 1 byte.
- defaultSize() - Method in class org.apache.spark.sql.types.ByteType
-
The default size of a value of the ByteType is 1 byte.
- defaultSize() - Method in class org.apache.spark.sql.types.CalendarIntervalType
- defaultSize() - Method in class org.apache.spark.sql.types.CharType
- defaultSize() - Method in class org.apache.spark.sql.types.DataType
-
The default size of a value of this data type, used internally for size estimation.
- defaultSize() - Method in class org.apache.spark.sql.types.DateType
-
The default size of a value of the DateType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.DayTimeIntervalType
-
The day-time interval type has constant precision.
- defaultSize() - Method in class org.apache.spark.sql.types.DecimalType
-
The default size of a value of the DecimalType is 8 bytes when precision is at most 18, and 16 bytes otherwise.
- defaultSize() - Method in class org.apache.spark.sql.types.DoubleType
-
The default size of a value of the DoubleType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.FloatType
-
The default size of a value of the FloatType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.IntegerType
-
The default size of a value of the IntegerType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.LongType
-
The default size of a value of the LongType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.MapType
-
The default size of a value of the MapType is (the default size of the key type + the default size of the value type).
- defaultSize() - Method in class org.apache.spark.sql.types.NullType
- defaultSize() - Method in class org.apache.spark.sql.types.ObjectType
- defaultSize() - Method in class org.apache.spark.sql.types.ShortType
-
The default size of a value of the ShortType is 2 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.StringType
-
The default size of a value of the StringType is 20 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.StructType
-
The default size of a value of the StructType is the total default sizes of all field types.
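As a worked example of the sizes listed above (4 bytes for IntegerType, 20 bytes for StringType):
  import org.apache.spark.sql.types._
  val schema = StructType(Seq(
    StructField("id", IntegerType),    // default size 4 bytes
    StructField("name", StringType)))  // default size 20 bytes
  println(schema.defaultSize)          // 24 = 4 + 20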
- defaultSize() - Method in class org.apache.spark.sql.types.TimestampNTZType
-
The default size of a value of the TimestampNTZType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.TimestampType
-
The default size of a value of the TimestampType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.UserDefinedType
- defaultSize() - Method in class org.apache.spark.sql.types.VarcharType
- defaultSize() - Method in class org.apache.spark.sql.types.VariantType
- defaultSize() - Method in class org.apache.spark.sql.types.YearMonthIntervalType
-
Year-month interval values always occupy 4 bytes.
- defaultStrategy(String) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Construct a default set of parameters for
DecisionTree
- defaultStrategy(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Construct a default set of parameters for
DecisionTree
- DefaultTopologyMapper - Class in org.apache.spark.storage
-
A TopologyMapper that assumes all nodes are in the same rack
- DefaultTopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.DefaultTopologyMapper
- defaultValue() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the default value of this table column.
- defaultValue() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- defaultValue(String) - Method in class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Builder
-
Sets the default value expression of the parameter.
- defaultValueExpression() - Method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Returns the SQL string (Spark SQL dialect) of the default value expression of this parameter or null if not provided.
- defaultValueNotConstantError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultValuesDataTypeError(String, String, String, DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultValuesMayNotContainSubQueryExpressions(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defaultValuesUnresolvedExprError(String, String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- defineTempFuncWithIfNotExistsError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- defineTempViewWithIfNotExistsError(SqlBaseParser.CreateViewContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- DEFLATE - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- degree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
-
The polynomial degree to expand, which should be greater than or equal to 1.
- degrees() - Method in class org.apache.spark.graphx.GraphOps
- degrees(String) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
- degrees(Column) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
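A minimal sketch, assuming an existing SparkSession spark:
  import org.apache.spark.sql.functions.{degrees, lit}
  // math.Pi radians converts to approximately 180 degrees.
  spark.range(1).select(degrees(lit(math.Pi))).show()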
- degreesOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- degreesOfFreedom() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Degrees of freedom
- degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
- degreesOfFreedom() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
Returns the degree(s) of freedom of the hypothesis test.
- delegate() - Method in class org.apache.spark.ContextAwareIterator
-
Deprecated.
- delegate() - Method in class org.apache.spark.InterruptibleIterator
- DelegatingCatalogExtension - Class in org.apache.spark.sql.connector.catalog
-
A simple implementation of
CatalogExtension
, which implements all the catalog functions by calling the built-in session catalog directly. - DelegatingCatalogExtension() - Constructor for class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- delegationTokensRequired(SparkConf, Configuration) - Method in interface org.apache.spark.security.HadoopDelegationTokenProvider
-
Returns true if delegation tokens are required for this service.
- delete() - Method in class org.apache.spark.sql.WhenMatched
-
Specifies an action to delete matched rows from the DataFrame.
- delete() - Method in class org.apache.spark.sql.WhenNotMatchedBySource
-
Specifies an action to delete non-matched rows from the target DataFrame when not matched by the source.
- delete(T, T) - Method in interface org.apache.spark.sql.connector.write.DeltaWriter
-
Deletes a row.
- DELETE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableWritePrivilege
-
The privilege for deleting rows from the table.
- DELETE - Enum constant in enum class org.apache.spark.sql.connector.write.RowLevelOperation.Command
- deleteCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
Remove any remaining checkpoint files from training.
- deleteColumn(String[], Boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for deleting a field.
- deleteIfExists(String) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to delete and purge the state variable if it was defined previously
- deleteRecursively(File) - Method in interface org.apache.spark.util.SparkFileUtils
-
Delete a file or directory and its contents recursively.
- deleteRecursively(File) - Static method in class org.apache.spark.util.Utils
-
Delete a file or directory and its contents recursively.
- deleteTimer(long) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to delete a processing/event-time-based timer for the given implicit grouping key and provided timestamp
- deleteWhere(Predicate[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
- deleteWhere(Predicate[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDeleteV2
-
Delete data from a data source table that matches filter expressions.
- deleteWhere(Filter[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
-
Delete data from a data source table that matches filter expressions.
- delta() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
-
Constant used in initialization and deviance to avoid numerical issues.
- DeltaBatchWrite - Interface in org.apache.spark.sql.connector.write
-
An interface that defines how to write a delta of rows during batch processing.
- DeltaWrite - Interface in org.apache.spark.sql.connector.write
-
A logical representation of a data source write that handles a delta of rows.
- DeltaWriteBuilder - Interface in org.apache.spark.sql.connector.write
-
An interface for building a
DeltaWrite
. - DeltaWriter<T> - Interface in org.apache.spark.sql.connector.write
-
A data writer returned by
DeltaWriterFactory.createWriter(int, long)
and is responsible for writing a delta of rows. - DeltaWriterFactory - Interface in org.apache.spark.sql.connector.write
-
A factory for creating
DeltaWriter
s returned byDeltaBatchWrite.createBatchWriterFactory(PhysicalWriteInfo)
, which is responsible for creating and initializing writers at the executor side. - dense(double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from a double array.
- dense(double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from a double array.
- dense(double, double...) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double, double...) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double, Seq<Object>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double, Seq<Object>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from its values.
- dense(int, int, double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Creates a column-major dense matrix.
- dense(int, int, double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Creates a column-major dense matrix.
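A short sketch of both factories; note the column-major ordering of the matrix values:
  import org.apache.spark.ml.linalg.{Matrices, Vectors}
  val v = Vectors.dense(1.0, 2.0, 3.0)
  // Values fill column by column: this 2x2 matrix is [[1.0, 3.0], [2.0, 4.0]].
  val m = Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0))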
- dense_rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the rank of rows within a window partition, without any gaps.
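A hedged sketch (df, dept, and salary are hypothetical); unlike rank(), ties leave no gaps, so the ranks run 1, 1, 2 rather than 1, 1, 3:
  import org.apache.spark.sql.expressions.Window
  import org.apache.spark.sql.functions.{col, dense_rank}
  val w = Window.partitionBy(col("dept")).orderBy(col("salary").desc)
  val ranked = df.withColumn("salary_rank", dense_rank().over(w))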
- DenseMatrix - Class in org.apache.spark.ml.linalg
-
Column-major dense matrix.
- DenseMatrix - Class in org.apache.spark.mllib.linalg
-
Column-major dense matrix.
- DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
-
Column-major dense matrix.
- DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
-
Column-major dense matrix.
- DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
- DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
- DenseVector - Class in org.apache.spark.ml.linalg
-
A dense vector represented by a value array.
- DenseVector - Class in org.apache.spark.mllib.linalg
-
A dense vector represented by a value array.
- DenseVector(double[]) - Constructor for class org.apache.spark.ml.linalg.DenseVector
- DenseVector(double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseVector
- dependencies() - Method in class org.apache.spark.rdd.RDD
-
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
- dependencies() - Method in class org.apache.spark.streaming.dstream.DStream
-
List of parent DStreams on which this DStream depends
- dependencies() - Method in class org.apache.spark.streaming.dstream.InputDStream
- Dependency<T> - Class in org.apache.spark
-
:: DeveloperApi :: Base class for dependencies.
- Dependency() - Constructor for class org.apache.spark.Dependency
- DependencyUtils - Class in org.apache.spark.util
- DependencyUtils() - Constructor for class org.apache.spark.util.DependencyUtils
- DEPLOY_MODE - Static variable in class org.apache.spark.launcher.SparkLauncher
-
The Spark deploy mode.
- deployMode() - Method in class org.apache.spark.SparkContext
- DEPRECATED_CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Deprecated. Use `CHILD_CONNECTION_TIMEOUT`.
- depth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- depth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- depth() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Depth of the tree.
- depth() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Get depth of tree.
- depth() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Depth of this
CountMinSketch
. - DerbyDialect - Class in org.apache.spark.sql.jdbc
- DerbyDialect() - Constructor for class org.apache.spark.sql.jdbc.DerbyDialect
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- derivative() - Method in interface org.apache.spark.ml.ann.ActivationFunction
-
Implements a derivative of a function (needed for the back propagation)
- desc() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- desc() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on the descending order of the column.
- desc() - Method in class org.apache.spark.util.MethodIdentifier
- desc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column.
- DESC_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- DESC_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- desc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on the descending order of the column, and null values appear before non-null values.
- desc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column, and null values appear before non-null values.
- desc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on the descending order of the column, and null values appear after non-null values.
- desc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column, and null values appear after non-null values.
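A short sketch of the descending variants (df and score are hypothetical):
  import org.apache.spark.sql.functions.col
  df.orderBy(col("score").desc)              // descending, default null placement
  df.orderBy(col("score").desc_nulls_first)  // nulls before non-null values
  df.orderBy(col("score").desc_nulls_last)   // nulls after non-null values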
- descColumnForPartitionUnsupportedError(SqlBaseParser.DescribeRelationContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- DESCENDING - Enum constant in enum class org.apache.spark.sql.connector.expressions.SortDirection
- descPartitionNotAllowedOnTempView(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- descPartitionNotAllowedOnView(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- describe() - Method in interface org.apache.spark.sql.connector.expressions.Expression
-
Format the expression as a human-readable SQL-like string.
- describe(String...) - Method in class org.apache.spark.sql.api.Dataset
-
Computes basic statistics for numeric and string columns, including count, mean, stddev, min, and max.
- describe(String...) - Method in class org.apache.spark.sql.Dataset
- describe(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Computes basic statistics for numeric and string columns, including count, mean, stddev, min, and max.
- describe(Seq<String>) - Method in class org.apache.spark.sql.Dataset
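A minimal usage sketch (df and its column names are hypothetical):
  // Statistics for the named columns; with no arguments, all numeric
  // and string columns are summarized.
  df.describe("age", "height").show()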
- describeDoesNotSupportPartitionForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- describeTopics() - Method in class org.apache.spark.ml.clustering.LDAModel
- describeTopics() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Return the topics described by weighted terms.
- describeTopics(int) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Return the topics described by their top-weighted terms.
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Return the topics described by weighted terms.
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- description() - Method in class org.apache.spark.ErrorStateInfo
- description() - Method in class org.apache.spark.ExceptionFailure
- description() - Method in class org.apache.spark.sql.catalog.CatalogMetadata
- description() - Method in class org.apache.spark.sql.catalog.Column
- description() - Method in class org.apache.spark.sql.catalog.Database
- description() - Method in class org.apache.spark.sql.catalog.Function
- description() - Method in class org.apache.spark.sql.catalog.Table
- description() - Method in interface org.apache.spark.sql.connector.catalog.functions.UnboundFunction
-
Returns Function documentation.
- description() - Method in interface org.apache.spark.sql.connector.catalog.procedures.Procedure
-
Returns the description of this procedure.
- description() - Method in interface org.apache.spark.sql.connector.metric.CustomMetric
-
Returns the description of custom metric.
- description() - Method in interface org.apache.spark.sql.connector.read.Scan
-
A description string of this scan, which may include information such as which filters are configured for this scan and the values of important options like path.
- description() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
-
Returns the description associated with this row-level operation.
- description() - Method in interface org.apache.spark.sql.connector.write.Write
-
Returns the description associated with this write.
- description() - Method in class org.apache.spark.sql.streaming.SinkProgress
- description() - Method in class org.apache.spark.sql.streaming.SourceProgress
- description() - Method in class org.apache.spark.status.api.v1.JobData
- description() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- description() - Method in class org.apache.spark.status.api.v1.StageData
- description() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- description() - Method in class org.apache.spark.status.LiveStage
- description() - Method in class org.apache.spark.storage.StorageLevel
- description() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- descriptorParseError(Throwable) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- descriptorParseError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- DESER_CPU_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- DESER_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- DeserializationStream - Class in org.apache.spark.serializer
-
:: DeveloperApi :: A stream for reading serialized objects.
- DeserializationStream() - Constructor for class org.apache.spark.serializer.DeserializationStream
- deserialize(byte[]) - Method in interface org.apache.spark.status.protobuf.ProtobufSerDe
-
Deserialize the input
Array[Byte]
to an object of the typeT
. - deserialize(byte[]) - Method in interface org.apache.spark.util.SparkSerDeUtils
-
Deserialize an object using Java serialization
- deserialize(byte[]) - Static method in class org.apache.spark.util.Utils
- deserialize(byte[], ClassLoader) - Method in interface org.apache.spark.util.SparkSerDeUtils
-
Deserialize an object using Java serialization and the given ClassLoader
- deserialize(byte[], ClassLoader) - Static method in class org.apache.spark.util.Utils
- deserialize(Object) - Method in class org.apache.spark.mllib.linalg.VectorUDT
- deserialize(Object) - Method in class org.apache.spark.sql.types.UserDefinedType
-
Convert a SQL datum to the user type
- deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
- deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
- deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
- deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
- deserialize(List<StoreTypes.AccumulableInfo>) - Static method in class org.apache.spark.status.protobuf.AccumulableInfoSerializer
- deserialize(StoreTypes.DeterministicLevel) - Static method in class org.apache.spark.status.protobuf.DeterministicLevelSerializer
- deserialize(StoreTypes.ExecutorMetrics) - Static method in class org.apache.spark.status.protobuf.ExecutorMetricsSerializer
- deserialize(StoreTypes.ExecutorStageSummary) - Static method in class org.apache.spark.status.protobuf.ExecutorStageSummarySerializer
- deserialize(StoreTypes.JobExecutionStatus) - Static method in class org.apache.spark.status.protobuf.JobExecutionStatusSerializer
- deserialize(StoreTypes.SinkProgress) - Static method in class org.apache.spark.status.protobuf.sql.SinkProgressSerializer
- deserialize(StoreTypes.SQLPlanMetric) - Static method in class org.apache.spark.status.protobuf.sql.SQLPlanMetricSerializer
- deserialize(StoreTypes.StageStatus) - Static method in class org.apache.spark.status.protobuf.StageStatusSerializer
- deserialize(StoreTypes.StreamingQueryProgress) - Static method in class org.apache.spark.status.protobuf.sql.StreamingQueryProgressSerializer
- deserialized() - Method in class org.apache.spark.storage.StorageLevel
- DESERIALIZED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- DeserializedMemoryEntry<T> - Class in org.apache.spark.storage.memory
- DeserializedMemoryEntry(Object, long, MemoryMode, ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.DeserializedMemoryEntry
- DeserializedValuesHolder<T> - Class in org.apache.spark.storage.memory
-
A holder for storing the deserialized values.
- DeserializedValuesHolder(ClassTag<T>, MemoryMode) - Constructor for class org.apache.spark.storage.memory.DeserializedValuesHolder
- deserializeFromChunkedBuffer(SerializerInstance, ChunkedByteBuffer, ClassTag<T>) - Static method in class org.apache.spark.serializer.SerializerHelper
- deserializeLongValue(byte[]) - Static method in class org.apache.spark.util.Utils
-
Deserialize a Long value (used for
PythonPartitioner
) - deserializeOffset(String) - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
-
Deserialize a JSON string into an Offset of the implementation-defined offset type.
- deserializer() - Method in interface org.apache.spark.sql.avro.AvroUtils.RowReader
- deserializeStream(InputStream) - Method in class org.apache.spark.serializer.DummySerializerInstance
- deserializeStream(InputStream) - Method in class org.apache.spark.serializer.SerializerInstance
- deserializeToArray(List<StoreTypes.SourceProgress>) - Static method in class org.apache.spark.status.protobuf.sql.SourceProgressSerializer
- deserializeToArray(List<StoreTypes.StateOperatorProgress>) - Static method in class org.apache.spark.status.protobuf.sql.StateOperatorProgressSerializer
- deserializeViaNestedStream(InputStream, SerializerInstance, Function1<DeserializationStream, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Deserialize via a nested stream using a specific serializer
- destroy() - Method in class org.apache.spark.broadcast.Broadcast
-
Destroy all data and metadata related to this broadcast variable.
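A short sketch, assuming an existing SparkContext sc:
  val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
  val hits = sc.parallelize(Seq("a", "b", "c")).filter(lookup.value.contains).count()
  // Once no further use is planned, eagerly release all copies of the data.
  lookup.destroy()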
- destroy() - Method in class org.apache.spark.ui.HttpSecurityFilter
- details() - Method in class org.apache.spark.scheduler.StageInfo
- details() - Method in class org.apache.spark.status.api.v1.StageData
- DETAILS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- DETAILS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- detailsUINode(boolean, String) - Static method in class org.apache.spark.ui.UIUtils
- DETERMINATE() - Static method in class org.apache.spark.rdd.DeterministicLevel
- determineBounds(ArrayBuffer<Tuple2<K, Object>>, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
-
Determines the bounds for range partitioning from candidates with weights indicating how many items each represents.
- deterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Returns true iff this function is deterministic, i.e. given the same input it always returns the same output.
- deterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns true iff the UDF is deterministic, i.e. it produces the same output given the same input.
- DETERMINISTIC_LEVEL_DETERMINATE - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_DETERMINATE = 1;
- DETERMINISTIC_LEVEL_DETERMINATE_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_DETERMINATE = 1;
- DETERMINISTIC_LEVEL_INDETERMINATE - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_INDETERMINATE = 3;
- DETERMINISTIC_LEVEL_INDETERMINATE_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_INDETERMINATE = 3;
- DETERMINISTIC_LEVEL_UNORDERED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_UNORDERED = 2;
- DETERMINISTIC_LEVEL_UNORDERED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_UNORDERED = 2;
- DETERMINISTIC_LEVEL_UNSPECIFIED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_UNSPECIFIED = 0;
- DETERMINISTIC_LEVEL_UNSPECIFIED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
DETERMINISTIC_LEVEL_UNSPECIFIED = 0;
- DeterministicLevel - Class in org.apache.spark.rdd
-
The deterministic level of RDD's output (i.e. what RDD#compute returns).
- DeterministicLevel() - Constructor for class org.apache.spark.rdd.DeterministicLevel
- DeterministicLevelSerializer - Class in org.apache.spark.status.protobuf
- DeterministicLevelSerializer() - Constructor for class org.apache.spark.status.protobuf.DeterministicLevelSerializer
- deviance() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- devianceResiduals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- dfToCols(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- dfToRowRDD(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- dgemm(double, DenseMatrix<Object>, DenseMatrix<Object>, double, DenseMatrix<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
-
DGEMM: C := alpha * A * B + beta * C
- dgemv(double, DenseMatrix<Object>, DenseVector<Object>, double, DenseVector<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
-
DGEMV: y := alpha * A * x + beta * y
- diag(Vector) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a diagonal matrix in
DenseMatrix
format from the supplied values. - diag(Vector) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a diagonal matrix in
Matrix
format from the supplied values. - diag(Vector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a diagonal matrix in
DenseMatrix
format from the supplied values. - diag(Vector) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a diagonal matrix in
Matrix
format from the supplied values. - dialectFunctionName(String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- dialectFunctionName(String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
- diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each vertex present in both
this
andother
,diff
returns only those vertices with differing values; for values that are different, keeps the values fromother
. - diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each vertex present in both
this
andother
,diff
returns only those vertices with differing values; for values that are different, keeps the values fromother
. - DifferentiableLossAggregator<Datum,
Agg extends DifferentiableLossAggregator<Datum, Agg>> - Interface in org.apache.spark.ml.optim.aggregator -
A parent trait for aggregators used in fitting MLlib models.
- DifferentiableRegularization<T> - Interface in org.apache.spark.ml.optim.loss
-
A Breeze diff function which represents a cost function for differentiable regularization of parameters.
- dim() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
The dimension of the gradient array.
- dir() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- direction() - Method in interface org.apache.spark.sql.connector.expressions.SortOrder
-
Returns the sort direction.
- directory(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Sets the working directory of spark-submit.
- directoryPathAndOptionsPathBothSpecifiedError(SqlBaseParser.InsertOverwriteDirContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- DirectPoolMemory - Class in org.apache.spark.metrics
- DirectPoolMemory() - Constructor for class org.apache.spark.metrics.DirectPoolMemory
- disconnect() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Disconnects the handle from the application, without stopping it.
- discoverResource(ResourceRequest, SparkConf) - Method in interface org.apache.spark.api.resource.ResourceDiscoveryPlugin
-
Discover the addresses of the requested resource.
- discoverResource(ResourceRequest, SparkConf) - Method in class org.apache.spark.resource.ResourceDiscoveryScriptPlugin
- DISCOVERY_SCRIPT() - Static method in class org.apache.spark.resource.ResourceUtils
- DISCOVERY_SCRIPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- discoveryScript() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- discoveryScript() - Method in class org.apache.spark.resource.ResourceRequest
- DISK_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- DISK_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- DISK_ONLY - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- DISK_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
- DISK_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
- DISK_ONLY_2 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- DISK_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
- DISK_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
- DISK_ONLY_3 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- DISK_ONLY_3 - Static variable in class org.apache.spark.api.java.StorageLevels
- DISK_ONLY_3() - Static method in class org.apache.spark.storage.StorageLevel
- DISK_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- DISK_SPILL() - Static method in class org.apache.spark.status.TaskIndexNames
- DISK_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- DISK_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- DISK_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- DISK_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- DiskBlockData - Class in org.apache.spark.storage
- DiskBlockData(long, long, File, long) - Constructor for class org.apache.spark.storage.DiskBlockData
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- diskSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- diskSize() - Method in class org.apache.spark.storage.BlockStatus
- diskSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo
- diskSize() - Method in class org.apache.spark.storage.RDDInfo
- diskUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- diskUsed() - Method in class org.apache.spark.status.LiveRDD
- diskUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
- diskUsed() - Method in class org.apache.spark.status.LiveRDDPartition
- dispersion() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- displayOrder() - Method in interface org.apache.spark.status.AppHistoryServerPlugin
-
The position of a plugin tab relative to the other plugin tabs in the history UI.
- dispose() - Method in interface org.apache.spark.storage.BlockData
- dispose() - Method in class org.apache.spark.storage.DiskBlockData
- dispose(ByteBuffer) - Static method in class org.apache.spark.storage.StorageUtils
-
Attempt to clean up a ByteBuffer if it is direct or memory-mapped.
- distanceMeasure() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- distanceMeasure() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- distanceMeasure() - Method in class org.apache.spark.ml.clustering.KMeans
- distanceMeasure() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- distanceMeasure() - Method in class org.apache.spark.ml.clustering.KMeansModel
- distanceMeasure() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
-
Param for the distance measure to be used in evaluation (supports
"squaredEuclidean"
(default),"cosine"
) - distanceMeasure() - Method in interface org.apache.spark.ml.param.shared.HasDistanceMeasure
-
Param for the distance measure.
- distanceMeasure() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
- distanceMeasure() - Method in class org.apache.spark.mllib.clustering.KMeansModel
- distinct() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset that contains only the unique rows from this Dataset.
- distinct() - Method in class org.apache.spark.sql.Dataset
- distinct(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing the distinct elements in this RDD.
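A minimal sketch, assuming an existing SparkContext sc:
  val nums = sc.parallelize(Seq(1, 1, 2, 3, 3, 3))
  nums.distinct().collect()        // Array(1, 2, 3), in no particular order
  val twoParts = nums.distinct(2)  // same elements, shuffled into 2 partitions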
- distinct(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Creates a
Column
for this UDAF using the distinct values of the givenColumn
s as input arguments. - distinct(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Creates a
Column
for this UDAF using the distinct values of the givenColumn
s as input arguments. - distinctCount() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- distinctInverseDistributionFunctionUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- distributeByUnsupportedError(SqlBaseParser.QueryOrganizationContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- DistributedLDAModel - Class in org.apache.spark.ml.clustering
-
Distributed model fitted by
LDA
. - DistributedLDAModel - Class in org.apache.spark.mllib.clustering
-
Distributed LDA model.
- DistributedMatrix - Interface in org.apache.spark.mllib.linalg.distributed
-
Represents a distributively stored matrix backed by one or more RDDs.
- distribution(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD
- Distribution - Interface in org.apache.spark.sql.connector.distributions
-
An interface that defines how data is distributed across partitions.
- distributionOpt(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD
- Distributions - Class in org.apache.spark.sql.connector.distributions
-
Helper methods to create distributions to pass into Spark.
- distributionStrictlyRequired() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns whether the distribution required by this write is strictly required or best-effort only.
- div(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- div(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- div(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
- div(Duration) - Method in class org.apache.spark.streaming.Duration
- divide(Object) - Method in class org.apache.spark.sql.Column
-
Divide this expression by another expression.
- divideByZeroError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- doc() - Method in class org.apache.spark.ml.param.Param
- docConcentration() - Method in class org.apache.spark.ml.clustering.LDA
- docConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
- docConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
- docConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- docConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
- docConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- docFreq() - Method in class org.apache.spark.ml.feature.IDFModel
-
Returns the document frequency
- docFreq() - Method in class org.apache.spark.mllib.feature.IDFModel
- DocumentFrequencyAggregator() - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
- DocumentFrequencyAggregator(int) - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
- doesDirectoryContainAnyNewFiles(File, long) - Static method in class org.apache.spark.util.Utils
-
Determines if a directory contains any files newer than cutoff seconds.
- doExecuteBroadcastNotImplementedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- doFetchFile(String, File, String, SparkConf, Configuration) - Static method in class org.apache.spark.util.Utils
-
Download a file or directory to target directory.
- doFilter(ServletRequest, ServletResponse, FilterChain) - Method in class org.apache.spark.ui.HttpSecurityFilter
- doFilter(ServletRequest, ServletResponse, FilterChain) - Method in class org.apache.spark.ui.JWSFilter
- doPostEvent(L, E) - Method in interface org.apache.spark.util.ListenerBus
-
Post an event to the specified listener.
- doPostEvent(SparkListenerInterface, SparkListenerEvent) - Method in interface org.apache.spark.scheduler.SparkListenerBus
- dot(Vector) - Method in interface org.apache.spark.ml.linalg.Vector
-
Calculate the dot product of this vector with another.
- dot(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
dot(x, y)
- dot(Vector) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Calculate the dot product of this vector with another.
- dot(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
dot(x, y)
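A worked example using the Vector.dot entry above:
  import org.apache.spark.ml.linalg.Vectors
  val x = Vectors.dense(1.0, 2.0)
  val y = Vectors.dense(3.0, 4.0)
  x.dot(y)  // 1.0 * 3.0 + 2.0 * 4.0 = 11.0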
- Dot - Class in org.apache.spark.ml.feature
- Dot() - Constructor for class org.apache.spark.ml.feature.Dot
- doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod
-
Perform streaming 2-sample statistical significance testing.
- doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- DOUBLE - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- DOUBLE - Static variable in class org.apache.spark.types.variant.VariantUtil
- DOUBLE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable double type.
- doubleAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a double accumulator, which starts with 0 and accumulates inputs by
add
. - doubleAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a double accumulator, which starts with 0 and accumulates inputs by
add
. - DoubleAccumulator - Class in org.apache.spark.util
-
An
accumulator
for computing sum, count, and averages for double precision floating-point numbers. - DoubleAccumulator() - Constructor for class org.apache.spark.util.DoubleAccumulator
- DoubleAccumulatorSource - Class in org.apache.spark.metrics.source
- DoubleAccumulatorSource() - Constructor for class org.apache.spark.metrics.source.DoubleAccumulatorSource
- DoubleArrayArrayParam - Class in org.apache.spark.ml.param
-
Specialized version of
Param[Array[Array[Double]]]
for Java. - DoubleArrayArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.DoubleArrayArrayParam
- DoubleArrayArrayParam(Params, String, String, Function1<double[][], Object>) - Constructor for class org.apache.spark.ml.param.DoubleArrayArrayParam
- DoubleArrayParam - Class in org.apache.spark.ml.param
-
Specialized version of
Param[Array[Double]]
for Java. - DoubleArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
- DoubleArrayParam(Params, String, String, Function1<double[], Object>) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
- DoubleAsIfIntegral$() - Constructor for class org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral$
- doubleColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- DoubleExactNumeric - Class in org.apache.spark.sql.types
- DoubleExactNumeric() - Constructor for class org.apache.spark.sql.types.DoubleExactNumeric
- DoubleFlatMapFunction<T> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more records of type Double from each input record.
- DoubleFunction<T> - Interface in org.apache.spark.api.java.function
-
A function that returns Doubles, and can be used to construct DoubleRDDs.
- doubleNamedArgumentReference(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- DoubleParam - Class in org.apache.spark.ml.param
-
Specialized version of
Param[Double]
for Java. - DoubleParam(String, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
- DoubleParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
- DoubleParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
- DoubleParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
- DoubleRDDFunctions - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of Doubles through an implicit conversion.
- DoubleRDDFunctions(RDD<Object>) - Constructor for class org.apache.spark.rdd.DoubleRDDFunctions
- doubleRDDToDoubleRDDFunctions(RDD<Object>) - Static method in class org.apache.spark.rdd.RDD
- DoubleType - Class in org.apache.spark.sql.types
-
The data type representing
Double
values. - DoubleType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the DoubleType object.
- DoubleType() - Constructor for class org.apache.spark.sql.types.DoubleType
- DoubleType.DoubleAsIfIntegral - Interface in org.apache.spark.sql.types
- DoubleType.DoubleAsIfIntegral$ - Class in org.apache.spark.sql.types
- DoubleType.DoubleIsConflicted - Interface in org.apache.spark.sql.types
- DoubleTypeExpression - Class in org.apache.spark.sql.types
- DoubleTypeExpression() - Constructor for class org.apache.spark.sql.types.DoubleTypeExpression
- downloadFile(String, File, SparkConf, Configuration) - Static method in class org.apache.spark.util.DependencyUtils
-
Download a file from the remote to a local temporary directory.
- downloadFileList(String, File, SparkConf, Configuration) - Static method in class org.apache.spark.util.DependencyUtils
-
Download a list of remote files to temp local files.
- driver() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
- driver() - Method in interface org.apache.spark.shuffle.api.ShuffleDataIO
-
Called once on driver process to bootstrap the shuffle metadata modules that are maintained by the driver.
- DRIVER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- DRIVER_DEFAULT_EXTRA_CLASS_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver default extra class path.
- DRIVER_DEFAULT_EXTRA_CLASS_PATH_VALUE - Static variable in class org.apache.spark.launcher.SparkLauncher
- DRIVER_DEFAULT_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the default driver VM options.
- DRIVER_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver class path.
- DRIVER_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver VM options.
- DRIVER_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver native library path.
- DRIVER_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver memory.
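A hedged sketch of how these configuration keys are typically supplied (paths and class names are hypothetical):
  import org.apache.spark.launcher.SparkLauncher
  val handle = new SparkLauncher()
    .setAppResource("/path/to/app.jar")
    .setMainClass("com.example.Main")
    .setMaster("local[*]")
    .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
    .setConf(SparkLauncher.DRIVER_EXTRA_JAVA_OPTIONS, "-XX:+UseG1GC")
    .startApplication()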
- DRIVER_TIMEOUT() - Static method in class org.apache.spark.util.SparkExitCode
-
Exit because the driver is running over the given threshold.
- driverAttributes() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- driverLogs() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- driverPlugin() - Method in interface org.apache.spark.api.plugin.SparkPlugin
-
Return the plugin's driver-side component.
- DriverPlugin - Interface in org.apache.spark.api.plugin
-
:: DeveloperApi :: Driver component of a
SparkPlugin
. - drop() - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing any null or NaN values. - drop() - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(int) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing less thanminNonNulls
non-null and non-NaN values. - drop(int) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(int, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing less thanminNonNulls
non-null and non-NaN values in the specified columns. - drop(int, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(int, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new
DataFrame
that drops rows containing less thanminNonNulls
non-null and non-NaN values in the specified columns. - drop(int, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(String) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing null or NaN values. - drop(String) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with a column dropped.
- drop(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(String) - Method in class org.apache.spark.sql.Dataset
- drop(String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing any null or NaN values in the specified columns. - drop(String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with columns dropped.
- drop(String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(String...) - Method in class org.apache.spark.sql.Dataset
- drop(String, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new
DataFrame
that drops rows containing null or NaN values in the specified columns. - drop(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(String, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new
DataFrame
that drops rows containing null or NaN values in the specified columns. - drop(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(Column) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with column dropped.
- drop(Column) - Method in class org.apache.spark.sql.Dataset
- drop(Column, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with columns dropped.
- drop(Column, Column...) - Method in class org.apache.spark.sql.Dataset
- drop(Column, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with columns dropped.
- drop(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- drop(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new
DataFrame
that drops rows containing any null or NaN values in the specified columns. - drop(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with columns dropped.
- drop(Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- drop(Seq<String>) - Method in class org.apache.spark.sql.Dataset
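A short sketch distinguishing the two drop families (df and the column names are hypothetical); Dataset.drop removes columns, while DataFrameNaFunctions.drop removes rows:
  val withoutCol   = df.drop("debug_info")            // drop a column
  val withoutNulls = df.na.drop("any", Seq("a", "b")) // drop rows with a null/NaN in a or b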
- dropDuplicates() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset that contains only the unique rows from this Dataset.
- dropDuplicates() - Method in class org.apache.spark.sql.Dataset
- dropDuplicates(String[]) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
- dropDuplicates(String[]) - Method in class org.apache.spark.sql.Dataset
- dropDuplicates(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new
Dataset
with duplicate rows removed, considering only the subset of columns. - dropDuplicates(String, String...) - Method in class org.apache.spark.sql.Dataset
- dropDuplicates(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new
Dataset
with duplicate rows removed, considering only the subset of columns. - dropDuplicates(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- dropDuplicates(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
- dropDuplicates(Seq<String>) - Method in class org.apache.spark.sql.Dataset
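A short Scala sketch of the difference between the no-argument and column-subset variants (sample data assumed); note that which row survives per key is not defined:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val events = Seq(("u1", "click"), ("u1", "click"), ("u1", "view"), ("u2", "click"))
  .toDF("user", "action")

events.dropDuplicates().show()       // removes only fully identical rows
events.dropDuplicates("user").show() // keeps one (arbitrary) row per user
```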
- dropDuplicatesWithinWatermark() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, within watermark.
- dropDuplicatesWithinWatermark() - Method in class org.apache.spark.sql.Dataset
- dropDuplicatesWithinWatermark(String[]) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns, within watermark.
- dropDuplicatesWithinWatermark(String[]) - Method in class org.apache.spark.sql.Dataset
- dropDuplicatesWithinWatermark(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns, within watermark.
- dropDuplicatesWithinWatermark(String, String...) - Method in class org.apache.spark.sql.Dataset
- dropDuplicatesWithinWatermark(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns, within watermark.
- dropDuplicatesWithinWatermark(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- dropDuplicatesWithinWatermark(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns, within watermark.
- dropDuplicatesWithinWatermark(Seq<String>) - Method in class org.apache.spark.sql.Dataset
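The WithinWatermark variants apply to streaming Datasets and require an event-time watermark to be set first; a minimal sketch using the built-in rate source (the rows-per-second setting and key derivation are illustrative, and the API is available in recent Spark releases):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// The rate source emits (timestamp, value); derive a deduplication key from value.
val deduped = spark.readStream.format("rate").option("rowsPerSecond", "5").load()
  .withColumn("key", expr("value % 10"))
  .withWatermark("timestamp", "10 minutes")  // required before the call below
  .dropDuplicatesWithinWatermark("key")

val query = deduped.writeStream.format("console").start()
// query.awaitTermination() would block here in a real job
```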
- dropFields(Seq<String>) - Method in class org.apache.spark.sql.Column
-
An expression that drops fields in
StructType
by name. - dropFromMemory(BlockId, Function0<Either<Object, ChunkedByteBuffer>>, ClassTag<T>) - Method in interface org.apache.spark.storage.memory.BlockEvictionHandler
-
Drop a block from memory, possibly putting it on disk if applicable.
- dropGlobalTempView(String) - Method in class org.apache.spark.sql.api.Catalog
-
Drops the global temporary view with the given view name in the catalog.
- dropIndex(String) - Method in interface org.apache.spark.sql.connector.catalog.index.SupportsIndex
-
Drops the index with the given name.
- dropIndex(String, Identifier) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Build a drop index SQL statement.
- dropIndex(String, Identifier) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- dropIndex(String, Identifier) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- dropIndex(String, Identifier) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- dropLast() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
-
Whether to drop the last category in the encoded vector (default: true)
- dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
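A small sketch of the dropLast parameter on OneHotEncoder (the sample data is assumed): with dropLast = true, the default, the last category is encoded as the all-zeros vector.

```scala
import org.apache.spark.ml.feature.OneHotEncoder
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(0.0, 1.0, 2.0).toDF("categoryIndex")

val encoder = new OneHotEncoder()
  .setInputCols(Array("categoryIndex"))
  .setOutputCols(Array("categoryVec"))
  .setDropLast(true) // category 2.0 becomes the all-zeros vector

encoder.fit(df).transform(df).show(false)
```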
- dropNamespace(String[], boolean) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- dropNamespace(String[], boolean) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Drop a namespace from the catalog; if cascade is true, all objects within the namespace are recursively dropped.
- dropNonExistentColumnsNotSupportedError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- dropPartition(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
- dropPartition(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Drop a partition from the table.
- dropPartitions(InternalRow[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
-
Drop an array of partitions atomically from the table.
- dropSchema(String, boolean) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- dropSchema(String, boolean) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- dropSchema(String, boolean) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- dropSchema(String, boolean) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- dropTable(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Build a SQL statement to drop the given table.
- dropTable(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- dropTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- dropTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Drop a table in the catalog.
- dropTempTable(String) - Method in class org.apache.spark.sql.SQLContext
- dropTempView(String) - Method in class org.apache.spark.sql.api.Catalog
-
Drops the local temporary view with the given view name in the catalog.
- dropView(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Drop a view in the catalog.
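For the Catalog view-dropping methods, a brief sketch (the view names are illustrative); both calls return true only if a view of that name existed and was dropped:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

Seq(1, 2, 3).toDF("n").createOrReplaceTempView("numbers")
Seq(4, 5).toDF("n").createOrReplaceGlobalTempView("more_numbers")

val droppedLocal  = spark.catalog.dropTempView("numbers")            // true
val droppedGlobal = spark.catalog.dropGlobalTempView("more_numbers") // true
```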
- dspmv(int, double, DenseVector, DenseVector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y := alpha*A*x + beta*y
- Dst - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose the destination and edge fields but not the source field.
- dstAttr() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex attribute of the edge's destination vertex.
- dstAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
-
The destination vertex attribute
- dstAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- dstCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- dstCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
-
Name of the input column for destination vertex IDs.
- dstId() - Method in class org.apache.spark.graphx.Edge
- dstId() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex id of the edge's destination vertex.
- dstId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- DstOnly - Enum constant in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
The destination vertex must be active.
- dstream() - Method in class org.apache.spark.streaming.api.java.JavaDStream
- dstream() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
- dstream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
- DStream<T> - Class in org.apache.spark.streaming.dstream
-
A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous sequence of RDDs (of the same type) representing a continuous stream of data (see org.apache.spark.rdd.RDD in the Spark core documentation for more details on RDDs).
- DStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.DStream
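A classic DStream word count, for orientation (the host and port are placeholders; DStreams are the legacy streaming API, with Structured Streaming preferred for new code):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamExample")
val ssc = new StreamingContext(conf, Seconds(5))

// Each RDD in the DStream holds five seconds' worth of lines from the socket.
val lines = ssc.socketTextStream("localhost", 9999)
lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _).print()

ssc.start()
ssc.awaitTermination()
```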
- dtypes() - Method in class org.apache.spark.sql.api.Dataset
-
Returns all column names and their data types as an array.
- DummyInvocationHandler - Class in org.apache.spark.serializer
- DummyInvocationHandler() - Constructor for class org.apache.spark.serializer.DummyInvocationHandler
- DummySerializerInstance - Class in org.apache.spark.serializer
-
Unfortunately, we need a serializer instance in order to construct a DiskBlockObjectWriter.
- duplicateArgumentNamesError(Seq<String>, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duplicateClausesError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duplicateCteDefinitionNamesError(String, SqlBaseParser.CtesContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duplicatedFieldNameInArrowStructError(Seq<String>) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- duplicatedFieldNameInArrowStructError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- duplicatedTablePathsFoundError(String, String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duplicateKeysError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duplicateMapKeyFoundError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- duplicateTableColumnDescriptor(ParserRuleContext, String, String, boolean, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- duration() - Method in class org.apache.spark.scheduler.TaskInfo
- duration() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- duration() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- duration() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- duration() - Method in class org.apache.spark.status.api.v1.TaskData
- duration() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- duration() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
Return the duration of this output operation.
- Duration - Class in org.apache.spark.streaming
- Duration(long) - Constructor for class org.apache.spark.streaming.Duration
- DURATION() - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes instances of the
java.time.Duration
class to the internal representation of nullable Catalyst's DayTimeIntervalType. - DURATION() - Static method in class org.apache.spark.status.TaskIndexNames
- DURATION() - Static method in class org.apache.spark.ui.ToolTips
- DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- DURATION_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- durationCalledOnUnfinishedTaskError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- durationDataPadding(Tuple2<Object, Map<String, Long>>[]) - Static method in class org.apache.spark.ui.UIUtils
-
There may be different duration labels in each batch.
- durationMs() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- Durations - Class in org.apache.spark.streaming
- Durations() - Constructor for class org.apache.spark.streaming.Durations
- dynamicPartitionKeyNotAmongWrittenPartitionPathsError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- dynamicPartitionOverwriteUnsupportedByTableError(Table) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
E
- e() - Static method in class org.apache.spark.sql.functions
-
Returns Euler's number.
- Edge<ED> - Class in org.apache.spark.graphx
-
A single directed edge consisting of a source id, target id, and the data associated with the edge.
- Edge(long, long, ED) - Constructor for class org.apache.spark.graphx.Edge
- EdgeActiveness - Enum Class in org.apache.spark.graphx.impl
-
Criteria for filtering edges based on activeness.
- EdgeContext<VD, ED, A> - Class in org.apache.spark.graphx
-
Represents an edge along with its neighboring vertices and allows sending messages along the edge.
- EdgeContext() - Constructor for class org.apache.spark.graphx.EdgeContext
- EdgeDirection - Class in org.apache.spark.graphx
-
The direction of a directed edge relative to a vertex.
- EdgeDirection() - Constructor for class org.apache.spark.graphx.EdgeDirection
- edgeListFile(SparkContext, String, boolean, int, StorageLevel, StorageLevel) - Static method in class org.apache.spark.graphx.GraphLoader
-
Loads a graph from an edge list formatted file where each line contains two integers: a source id and a target id.
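A sketch of loading such a file (the SparkContext and path are assumed; the loader skips comment lines beginning with #):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx.GraphLoader

// `sc` and the file path are placeholders; each line holds "srcId dstId".
def summarize(sc: SparkContext, path: String): Unit = {
  val graph = GraphLoader.edgeListFile(sc, path)
  println(s"vertices = ${graph.numVertices}, edges = ${graph.numEdges}")
}
```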
- EdgeOnly - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose only the edge field and not the source or destination field.
- EdgePartition1D$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
- EdgePartition2D$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
- EdgeRDD<ED> - Class in org.apache.spark.graphx
-
EdgeRDD[ED, VD]
extends RDD[Edge[ED]]
by storing the edges in columnar format on each partition for performance. - EdgeRDD(SparkContext, Seq<Dependency<?>>) - Constructor for class org.apache.spark.graphx.EdgeRDD
- EdgeRDDImpl<ED, VD> - Class in org.apache.spark.graphx.impl
- edges() - Method in class org.apache.spark.graphx.Graph
-
An RDD containing the edges and their associated attributes.
- edges() - Method in class org.apache.spark.graphx.impl.GraphImpl
- edges() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- EDGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- EDGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- EdgeTriplet<VD, ED> - Class in org.apache.spark.graphx
-
An edge triplet represents an edge along with the vertex attributes of its neighboring vertices.
- EdgeTriplet() - Constructor for class org.apache.spark.graphx.EdgeTriplet
- EigenValueDecomposition - Class in org.apache.spark.mllib.linalg
-
Compute eigen-decomposition.
- EigenValueDecomposition() - Constructor for class org.apache.spark.mllib.linalg.EigenValueDecomposition
- Either - Enum constant in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
At least one vertex must be active.
- Either() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges originating from *or* arriving at a vertex of interest.
- elasticNetParam() - Method in class org.apache.spark.ml.classification.LogisticRegression
- elasticNetParam() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- elasticNetParam() - Method in interface org.apache.spark.ml.param.shared.HasElasticNetParam
-
Param for the ElasticNet mixing parameter, in range [0, 1].
- elasticNetParam() - Method in class org.apache.spark.ml.regression.LinearRegression
- elasticNetParam() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- elem(String, Function1<Object, Object>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- elem(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- element_at(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns element of array at given index in value if column is array.
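A quick sketch of element_at on an array column (sample data assumed); array indexing is 1-based, and negative indices count from the end:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, element_at}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(Seq("a", "b", "c")).toDF("letters")

df.select(
  element_at(col("letters"), 1).as("first"), // "a" (1-based)
  element_at(col("letters"), -1).as("last")  // "c" (counts from the end)
).show()
```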
- elementsOfTupleExceedLimitError() - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- elementsOfTupleExceedLimitError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- elementType() - Method in class org.apache.spark.sql.types.ArrayType
- ElementwiseProduct - Class in org.apache.spark.ml.feature
-
Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
- ElementwiseProduct - Class in org.apache.spark.mllib.feature
-
Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
- ElementwiseProduct() - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct
- ElementwiseProduct(String) - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct
- ElementwiseProduct(Vector) - Constructor for class org.apache.spark.mllib.feature.ElementwiseProduct
- elems() - Method in class org.apache.spark.status.api.v1.StackTrace
- elt(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the
n
-th input, e.g., returnsinput2
whenn
is 2. - elt(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the
n
-th input, e.g., returnsinput2
whenn
is 2. - emittedRowsAreOlderThanWatermark(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- EMLDAOptimizer - Class in org.apache.spark.mllib.clustering
-
Optimizer for the EM algorithm, which stores the data and parameter graph along with the algorithm parameters.
- EMLDAOptimizer() - Constructor for class org.apache.spark.mllib.clustering.EMLDAOptimizer
- empty() - Static method in class org.apache.spark.api.java.Optional
- empty() - Static method in class org.apache.spark.ml.param.ParamMap
-
Returns an empty param map.
- empty() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
-
An empty
PrefixSpan.Prefix
instance. - empty() - Static method in class org.apache.spark.sql.types.Metadata
-
Returns an empty Metadata.
- empty() - Static method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- empty() - Static method in class org.apache.spark.storage.BlockStatus
- EMPTY_EXPRESSION - Static variable in interface org.apache.spark.sql.connector.expressions.Expression
- EMPTY_NAMED_REFERENCE - Static variable in interface org.apache.spark.sql.connector.expressions.Expression
-
`EMPTY_EXPRESSION` is only used as an input when the default `references` method builds the result array to avoid repeatedly allocating an empty array.
- EMPTY_USER_GROUPS() - Static method in class org.apache.spark.util.Utils
- emptyCollectionError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- emptyDataFrame() - Method in class org.apache.spark.sql.api.SparkSession
-
Returns a
DataFrame
with no rows or columns. - emptyDataFrame() - Method in class org.apache.spark.sql.SparkSession
- emptyDataFrame() - Method in class org.apache.spark.sql.SQLContext
-
Returns a
DataFrame
with no rows or columns. - emptyDataset(Encoder<T>) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a new
Dataset
of type T containing zero elements. - emptyDataset(Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
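A sketch of the two empty factories (a local session is assumed): emptyDataFrame has no rows and no columns, while emptyDataset is typed through the supplied Encoder.

```scala
import org.apache.spark.sql.{Encoders, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()

val emptyDf = spark.emptyDataFrame                // zero rows, zero columns
val emptyDs = spark.emptyDataset(Encoders.STRING) // zero rows, typed as String

println(s"${emptyDf.count()} rows, ${emptyDs.count()} rows") // 0 rows, 0 rows
```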
- emptyInputForTableSampleError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- emptyJsonFieldValueError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- emptyMultipartIdentifierError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- emptyNode(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return a node with the given node id (but nothing else set).
- emptyOptionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- emptyPartitionKeyError(String, SqlBaseParser.PartitionSpecContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- emptyRDD() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD that has no partitions or elements.
- emptyRDD(ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Get an RDD that has no partitions or elements.
- emptyRDDError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- emptySourceForMergeError(SqlBaseParser.MergeIntoTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- EmptyTerm - Class in org.apache.spark.ml.feature
-
Placeholder term for the result of undefined interactions.
- EmptyTerm() - Constructor for class org.apache.spark.ml.feature.EmptyTerm
- emptyWindowExpressionError(Window) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- enableHiveSupport() - Method in class org.apache.spark.sql.SparkSession.Builder
-
Enables Hive support, including connectivity to a persistent Hive metastore, support for Hive serdes, and Hive user-defined functions.
- enableReceiverLog(SparkConf) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- encode(Column, String) - Static method in class org.apache.spark.sql.functions
-
Computes the first argument into a binary from a string using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16', 'UTF-32').
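A round-trip sketch pairing encode with its inverse decode (sample data assumed):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, decode, encode}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq("Spark").toDF("s")

// String -> UTF-8 bytes -> string; the round trip is the identity here.
df.select(decode(encode(col("s"), "UTF-8"), "UTF-8").as("roundTrip")).show()
```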
- encodeFileNameToURIRawPath(String) - Static method in class org.apache.spark.util.Utils
-
A file name may contain some invalid URI characters, such as " ".
- encoder() - Method in class org.apache.spark.sql.api.Dataset
- encoder() - Method in class org.apache.spark.sql.Dataset
- Encoder<T> - Interface in org.apache.spark.sql
-
Used to convert a JVM object of type
T
to and from the internal Spark SQL representation. - encodeRelativeUnixPathToURIRawPath(String) - Static method in class org.apache.spark.util.Utils
-
Same as
Utils.encodeFileNameToURIRawPath(java.lang.String)
but returns the relative UNIX path. - Encoders - Class in org.apache.spark.sql
-
Methods for creating an
Encoder
. - Encoders() - Constructor for class org.apache.spark.sql.Encoders
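A short sketch of obtaining encoders (the case class is illustrative): Encoders supplies instances for common JVM types, and Encoders.product covers case classes.

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()

case class Point(x: Double, y: Double)
implicit val pointEnc: Encoder[Point] = Encoders.product[Point]

val ds = spark.createDataset(Seq(Point(0.0, 1.0), Point(2.0, 3.0)))
ds.show()
```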
- END_OFFSET_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- END_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- END_TIMESTAMP_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- endField - Variable in class org.apache.spark.types.variant.VariantUtil.IntervalFields
- endField() - Method in class org.apache.spark.sql.types.DayTimeIntervalType
- endField() - Method in class org.apache.spark.sql.types.YearMonthIntervalType
- endLabelWithoutBeginLabel(Origin, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- endOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress
- endOffset() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- endOfIteratorError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- endOfStreamError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- endOfStreamError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- endReduceId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
- endswith(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a boolean that is true if the string expression ends with the given suffix.
- endsWith(String) - Method in class org.apache.spark.sql.Column
-
String ends with another string literal.
- endsWith(Column) - Method in class org.apache.spark.sql.Column
-
String ends with.
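Both Column.endsWith overloads in one sketch (sample data assumed): one takes a string literal, the other another Column.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq("report.csv", "notes.txt").toDF("file")

df.filter(col("file").endsWith(".csv")).show()      // literal overload
df.filter(col("file").endsWith(lit(".txt"))).show() // Column overload
```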
- endTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- endTime() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- endTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- EnsembleCombiningStrategy - Class in org.apache.spark.mllib.tree.configuration
-
Enum to select the ensemble combining strategy for base learners.
- EnsembleCombiningStrategy() - Constructor for class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- EnsembleModelReadWrite - Class in org.apache.spark.ml.tree
- EnsembleModelReadWrite() - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite
- EnsembleModelReadWrite.EnsembleNodeData - Class in org.apache.spark.ml.tree
-
Info for one
Node
in a tree ensemble - EnsembleModelReadWrite.EnsembleNodeData$ - Class in org.apache.spark.ml.tree
- EnsembleNodeData(int, DecisionTreeModelReadWrite.NodeData) - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
- EnsembleNodeData$() - Constructor for class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
- entries() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
- Entropy - Class in org.apache.spark.mllib.tree.impurity
-
Class for calculating entropy during multiclass classification.
- Entropy() - Constructor for class org.apache.spark.mllib.tree.impurity.Entropy
- entrySet() - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
- entrySet() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- EnumUtil - Class in org.apache.spark.util
- EnumUtil() - Constructor for class org.apache.spark.util.EnumUtil
- environmentDetails() - Method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
- environmentUpdateFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- environmentUpdateToJson(SparkListenerEnvironmentUpdate, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- eofExceptionWhileReadPortNumberError(String, Option<Object>) - Static method in class org.apache.spark.errors.SparkCoreErrors
- eps() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
param for eps.
- epsilon() - Method in class org.apache.spark.ml.regression.LinearRegression
- epsilon() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- epsilon() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
-
The shape parameter to control the amount of robustness.
- EPSILON() - Static method in class org.apache.spark.ml.impl.Utils
- eqNullSafe(Object) - Method in class org.apache.spark.sql.Column
-
Equality test that is safe for null values.
- equal_null(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the same result as the EQUAL(=) operator for non-null operands, but returns true if both are null, false if one of them is null.
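To contrast null-safe equality with the plain EQUAL operator, a small sketch (sample data assumed): === yields null when either side is null, whereas eqNullSafe (and the equal_null function) yield a definite boolean.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("a", Some("x")), ("b", None)).toDF("k", "v")

df.select(
  col("k"),
  (col("v") === "x").as("equalTo"),       // null on the row where v is null
  col("v").eqNullSafe("x").as("nullSafe") // false on that row instead of null
).show()
```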
- EqualNullSafe - Class in org.apache.spark.sql.sources
-
Performs equality comparison, similar to
EqualTo
. - EqualNullSafe(String, Object) - Constructor for class org.apache.spark.sql.sources.EqualNullSafe
- equals(Object) - Method in class org.apache.spark.api.java.Optional
- equals(Object) - Static method in class org.apache.spark.ExpireDeadHosts
- equals(Object) - Method in class org.apache.spark.graphx.EdgeDirection
- equals(Object) - Method in class org.apache.spark.HashPartitioner
- equals(Object) - Static method in class org.apache.spark.metrics.DirectPoolMemory
- equals(Object) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- equals(Object) - Static method in class org.apache.spark.metrics.JVMHeapMemory
- equals(Object) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- equals(Object) - Static method in class org.apache.spark.metrics.MappedPoolMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- equals(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- equals(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- equals(Object) - Method in class org.apache.spark.ml.attribute.AttributeGroup
- equals(Object) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- equals(Object) - Method in class org.apache.spark.ml.attribute.NominalAttribute
- equals(Object) - Method in class org.apache.spark.ml.attribute.NumericAttribute
- equals(Object) - Static method in class org.apache.spark.ml.feature.Dot
- equals(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- equals(Object) - Method in class org.apache.spark.ml.linalg.DenseMatrix
- equals(Object) - Method in class org.apache.spark.ml.linalg.DenseVector
- equals(Object) - Method in class org.apache.spark.ml.linalg.SparseMatrix
- equals(Object) - Method in class org.apache.spark.ml.linalg.SparseVector
- equals(Object) - Method in interface org.apache.spark.ml.linalg.Vector
- equals(Object) - Method in class org.apache.spark.ml.param.Param
- equals(Object) - Method in class org.apache.spark.ml.tree.CategoricalSplit
- equals(Object) - Method in class org.apache.spark.ml.tree.ContinuousSplit
- equals(Object) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- equals(Object) - Method in class org.apache.spark.mllib.linalg.DenseVector
- equals(Object) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- equals(Object) - Method in class org.apache.spark.mllib.linalg.SparseVector
- equals(Object) - Method in interface org.apache.spark.mllib.linalg.Vector
- equals(Object) - Method in class org.apache.spark.mllib.linalg.VectorUDT
- equals(Object) - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- equals(Object) - Method in class org.apache.spark.mllib.tree.model.Predict
- equals(Object) - Method in class org.apache.spark.partial.BoundedDouble
- equals(Object) - Method in interface org.apache.spark.Partition
- equals(Object) - Method in class org.apache.spark.RangePartitioner
- equals(Object) - Method in class org.apache.spark.resource.ExecutorResourceRequest
- equals(Object) - Method in class org.apache.spark.resource.ResourceID
- equals(Object) - Method in class org.apache.spark.resource.ResourceInformation
- equals(Object) - Method in class org.apache.spark.resource.ResourceProfile
- equals(Object) - Method in class org.apache.spark.resource.ResourceRequest
- equals(Object) - Method in class org.apache.spark.resource.TaskResourceRequest
- equals(Object) - Static method in class org.apache.spark.Resubmitted
- equals(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- equals(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- equals(Object) - Method in class org.apache.spark.scheduler.InputFormatInfo
- equals(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
- equals(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- equals(Object) - Method in class org.apache.spark.scheduler.SplitInfo
- equals(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
- equals(Object) - Method in class org.apache.spark.sql.Column
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.After
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.ClusterBy
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnDefaultValue
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnNullability
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnPosition
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType
- equals(Object) - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- equals(Object) - Method in record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Indicates whether some other object is "equal to" this one.
- equals(Object) - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- equals(Object) - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- equals(Object) - Method in class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
- equals(Object) - Method in class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.CompositeReadLimit
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.Offset
-
Equality based on JSON string representation.
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxBytes
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxFiles
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxRows
- equals(Object) - Method in class org.apache.spark.sql.connector.read.streaming.ReadMinRows
- equals(Object) - Method in interface org.apache.spark.sql.Row
- equals(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- equals(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- equals(Object) - Method in class org.apache.spark.sql.sources.In
- equals(Object) - Static method in class org.apache.spark.sql.types.BinaryType
- equals(Object) - Static method in class org.apache.spark.sql.types.BooleanType
- equals(Object) - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.ByteType
- equals(Object) - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- equals(Object) - Static method in class org.apache.spark.sql.types.DateType
- equals(Object) - Static method in class org.apache.spark.sql.types.DateTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- equals(Object) - Method in class org.apache.spark.sql.types.Decimal
- equals(Object) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.DoubleType
- equals(Object) - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.FloatType
- equals(Object) - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.IntegerType
- equals(Object) - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- equals(Object) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.LongType
- equals(Object) - Static method in class org.apache.spark.sql.types.LongTypeExpression
- equals(Object) - Method in class org.apache.spark.sql.types.Metadata
- equals(Object) - Static method in class org.apache.spark.sql.types.NullType
- equals(Object) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- equals(Object) - Static method in class org.apache.spark.sql.types.ShortType
- equals(Object) - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- equals(Object) - Method in class org.apache.spark.sql.types.StringType
- equals(Object) - Static method in class org.apache.spark.sql.types.StringTypeExpression
- equals(Object) - Method in class org.apache.spark.sql.types.StructType
- equals(Object) - Static method in class org.apache.spark.sql.types.TimestampNTZType
- equals(Object) - Static method in class org.apache.spark.sql.types.TimestampType
- equals(Object) - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- equals(Object) - Method in class org.apache.spark.sql.types.UserDefinedType
- equals(Object) - Static method in class org.apache.spark.sql.types.VariantType
- equals(Object) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- equals(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- equals(Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- equals(Object) - Static method in class org.apache.spark.StopMapOutputTracker
- equals(Object) - Method in class org.apache.spark.storage.BlockManagerId
- equals(Object) - Method in class org.apache.spark.storage.StorageLevel
- equals(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- equals(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- equals(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- equals(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- equals(Object) - Static method in class org.apache.spark.Success
- equals(Object) - Static method in class org.apache.spark.TaskResultLost
- equals(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
- equals(Object) - Static method in class org.apache.spark.UnknownReason
- equals(Object) - Method in class org.apache.spark.unsafe.types.CalendarInterval
- equalsIgnoreCaseAndNullability(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataType
-
Compares two types, ignoring nullability of ArrayType, MapType, StructType, and ignoring case sensitivity of field names in StructType.
- equalsIgnoreNullability(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataType
-
Compares two types, ignoring nullability of ArrayType, MapType, StructType.
- equalsStructurally(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataType
-
Returns true if the two data types share the same "shape", i.e. only the names of fields can be different.
- equalsStructurallyByName(DataType, DataType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.types.DataType
-
Returns true if the two data types have the same field names in order recursively.
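A sketch contrasting the structural comparisons (the schemas are illustrative, and the helpers are invoked exactly as the static signatures above suggest):

```scala
import org.apache.spark.sql.types._

val a = StructType(Seq(StructField("x", IntegerType, nullable = true)))
val b = StructType(Seq(StructField("y", IntegerType, nullable = false)))

// Same "shape" (a single integer field) although names and nullability differ.
val sameShape = DataType.equalsStructurally(a, b, true)    // true
val ignoringNulls = DataType.equalsIgnoreNullability(a, b) // false: names differ
println(s"$sameShape $ignoringNulls")
```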
- equalTo(Object) - Method in class org.apache.spark.sql.Column
-
Equality test.
- EqualTo - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the column evaluates to a value equal tovalue
. - EqualTo(String, Object) - Constructor for class org.apache.spark.sql.sources.EqualTo
- equiv(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- equiv(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- err(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- Error() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- ERROR() - Static method in class org.apache.spark.status.TaskIndexNames
- ERROR_COMMAND_NOT_FOUND() - Static method in class org.apache.spark.util.SparkExitCode
-
Exit code indicating that the command was not found.
- ERROR_MESSAGE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- ERROR_MESSAGE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- ERROR_MESSAGE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- ERROR_MISUSE_SHELL_BUILTIN() - Static method in class org.apache.spark.util.SparkExitCode
-
Exit code indicating invalid usage of a shell built-in command.
- ERROR_PATH_NOT_FOUND() - Static method in class org.apache.spark.util.SparkExitCode
-
Exit code indicating that the specified path could not be found.
- errorClass() - Method in exception org.apache.spark.sql.AnalysisException
- ErrorClassesJsonReader - Class in org.apache.spark
-
A reader to load error information from one or more JSON files.
- ErrorClassesJsonReader(Seq<URL>) - Constructor for class org.apache.spark.ErrorClassesJsonReader
- errorClassOnException() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- ErrorIfExists - Enum constant in enum class org.apache.spark.sql.SaveMode
-
ErrorIfExists mode means that when saving a DataFrame to a data source, if data already exists, an exception is expected to be thrown.
- ErrorInfo - Class in org.apache.spark
-
Information associated with an error class.
- ErrorInfo(Seq<String>, Option<Map<String, ErrorSubInfo>>, Option<String>) - Constructor for class org.apache.spark.ErrorInfo
- errorMessage() - Method in class org.apache.spark.status.api.v1.TaskData
- errorMessage() - Method in class org.apache.spark.status.LiveTask
- errorMessageCell(String) - Static method in class org.apache.spark.ui.UIUtils
- ErrorMessageFormat - Class in org.apache.spark
- ErrorMessageFormat() - Constructor for class org.apache.spark.ErrorMessageFormat
- errorReader() - Static method in class org.apache.spark.SparkThrowableHelper
- ErrorStateInfo - Class in org.apache.spark
-
Information associated with an error state / SQLSTATE.
- ErrorStateInfo(String, String, String, List<String>) - Constructor for class org.apache.spark.ErrorStateInfo
- ErrorSubInfo - Class in org.apache.spark
-
Information associated with an error subclass.
- ErrorSubInfo(Seq<String>) - Constructor for class org.apache.spark.ErrorSubInfo
- errorSummary(String) - Static method in class org.apache.spark.ui.UIUtils
-
This function works exactly the same as utils.errorSummary (JavaScript); the two should be kept in sync.
- escapeCharacterAtTheEndError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- escapeCharacterInTheMiddleError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- escapeMetaCharacters(String) - Static method in class org.apache.spark.sql.util.SchemaUtils
- escapeMetaCharacters(String) - Static method in class org.apache.spark.util.SparkSchemaUtils
- estimate(double[]) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Estimates probability density function at the given array of points.
- estimate(Object) - Static method in class org.apache.spark.util.SizeEstimator
-
Estimate the number of bytes that the given object takes up on the JVM heap.
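A one-liner sketch of SizeEstimator (the sample object is arbitrary); this is useful when sizing broadcast variables or caches:

```scala
import org.apache.spark.util.SizeEstimator

// Rough JVM-heap footprint of an object graph, in bytes.
val bytes = SizeEstimator.estimate(Array.fill(1000)("spark"))
println(s"estimated size: $bytes bytes")
```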
- estimateCount(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Returns the estimated frequency of
item
. - estimatedDocConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
-
Value for
LDAModel.docConcentration()
estimated from data. - estimatedSize() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
- estimatedSize() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- estimatedSize() - Method in interface org.apache.spark.storage.memory.ValuesHolder
- estimatedSize() - Method in interface org.apache.spark.util.KnownSizeEstimation
- estimateStatistics() - Method in interface org.apache.spark.sql.connector.read.SupportsReportStatistics
-
Returns the estimated statistics of this data source scan.
- estimator() - Method in class org.apache.spark.ml.FitEnd
- estimator() - Method in class org.apache.spark.ml.FitStart
- estimator() - Method in class org.apache.spark.ml.tuning.CrossValidator
- estimator() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- estimator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- estimator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- estimator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
-
param for the estimator to be validated
- Estimator<M extends Model<M>> - Class in org.apache.spark.ml
-
Abstract class for estimators that fit models to data.
- Estimator() - Constructor for class org.apache.spark.ml.Estimator
- estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.CrossValidator
- estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- estimatorParamMaps() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- estimatorParamMaps() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
-
param for estimator param maps
- eval() - Method in interface org.apache.spark.ml.ann.ActivationFunction
-
Implements a function
- eval(int, Seq<Iterator<T>>) - Method in interface org.apache.spark.PartitionEvaluator
-
Evaluates the RDD partition at the given index.
- eval(DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
-
Evaluates the data (process the data through the layer).
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.classification.FMClassificationModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.Evaluator
-
Evaluates model output and returns a scalar metric.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Evaluate the model on the given dataset, returning a summary of the results.
- evaluate(Dataset<?>) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
Evaluates the model on a test dataset.
- evaluate(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
-
Evaluates model output and returns a scalar metric.
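A minimal sketch of the Evaluator contract using RegressionEvaluator (the sample predictions are assumed); the default label and prediction column names are used:

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val predictions = Seq((3.0, 2.5), (1.0, 1.5), (2.0, 2.0))
  .toDF("label", "prediction")

val rmse = new RegressionEvaluator().setMetricName("rmse").evaluate(predictions)
println(s"RMSE = $rmse")
```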
- evaluate(Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Calculates the final result of this
UserDefinedAggregateFunction
based on the given aggregation buffer. - evaluateBooleanCondition(SparkSession, LeafStatementExec) - Method in interface org.apache.spark.sql.scripting.NonLeafStatementExec
-
Evaluate the boolean condition represented by the statement.
- evaluateEachIteration(RDD<org.apache.spark.ml.feature.Instance>, DecisionTreeRegressionModel[], double[], Loss, Enumeration.Value) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to compute error or loss for every iteration of gradient boosting.
- evaluateEachIteration(RDD<LabeledPoint>, Loss) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Method to compute error or loss for every iteration of gradient boosting.
- evaluateEachIteration(Dataset<?>) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
Method to compute error or loss for every iteration of gradient boosting.
- evaluateEachIteration(Dataset<?>, String) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
Method to compute error or loss for every iteration of gradient boosting.
- evaluator() - Method in class org.apache.spark.ml.tuning.CrossValidator
- evaluator() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- evaluator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- evaluator() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- evaluator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
-
param for the evaluator used to select hyper-parameters that maximize the validated metric
- Evaluator - Class in org.apache.spark.ml.evaluation
-
Abstract class for evaluators that compute metrics from predictions.
- Evaluator() - Constructor for class org.apache.spark.ml.evaluation.Evaluator
- EVENT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- eventRates() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- eventTime() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- EventTime() - Static method in class org.apache.spark.sql.streaming.TimeMode
-
Stateful processor that uses event time to register timers.
- EventTimeTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
-
Timeout based on event-time.
- every(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if all values of
e
are true. - EXCEED_MAX_EXECUTOR_FAILURES() - Static method in class org.apache.spark.util.SparkExitCode
-
Exit because the number of executor failures exceeded the threshold.
- exceedMapSizeLimitError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- exceedMaxLimit(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- except(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing rows in this Dataset but not in another Dataset.
- except(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- exceptAll(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing rows in this Dataset but not in another Dataset while preserving the duplicates.
- exceptAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
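The difference between the two variants above, in a small sketch:

    val a = spark.range(1, 6).union(spark.range(1, 6))  // 1..5, each twice
    val b = spark.range(3, 4)                           // just 3
    a.except(b).show()     // 1, 2, 4, 5 (result is deduplicated)
    a.exceptAll(b).show()  // removes one matching 3, keeps all other duplicates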
- exception() - Method in class org.apache.spark.ExceptionFailure
- exception() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the StreamingQueryException if the query was terminated by an exception.
- exception() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- EXCEPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- ExceptionFailure - Class in org.apache.spark
-
:: DeveloperApi :: Task failed due to a runtime exception.
- ExceptionFailure(String, String, StackTraceElement[], String, Option<ThrowableSerializationWrapper>, Seq<AccumulableInfo>, Seq<AccumulatorV2<?, ?>>, Seq<Object>) - Constructor for class org.apache.spark.ExceptionFailure
- exceptionFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- exceptionString(Throwable) - Static method in class org.apache.spark.util.Utils
-
Return a nice string representation of the exception.
- exceptionToJson(Exception, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- EXCLUDED_IN_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- ExcludedExecutor - Class in org.apache.spark.scheduler
- ExcludedExecutor(String, long) - Constructor for class org.apache.spark.scheduler.ExcludedExecutor
- excludedExecutors() - Method in class org.apache.spark.status.LiveStage
- excludedInStages() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- excludedNodes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
- EXEC_CPU_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- EXEC_RUN_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- execId() - Method in class org.apache.spark.ExecutorLostFailure
- execId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- execId() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- execId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
- executeAndGetOutput(Seq<String>, File, Map<String, String>, boolean) - Static method in class org.apache.spark.util.Utils
-
Execute a command and get its output, throwing an exception if it yields a code other than 0.
- executeBroadcastTimeoutError(long, Option<TimeoutException>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- executeCodePathUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- executeCommand(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SparkSession
-
Execute an arbitrary string command inside an external execution engine rather than Spark.
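A hedged sketch of executeCommand; whether a given command is accepted depends entirely on the source's ExternalCommandRunner, and the runner name, command text, and options below are illustrative:

    val result = spark.executeCommand(
      "jdbc",
      "GRANT SELECT ON reports TO analyst",
      Map("url" -> "jdbc:postgresql://host/db"))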
- executeCommand(String, CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.ExternalCommandRunner
-
Execute the given command.
- executeCommand(Seq<String>, File, Map<String, String>, boolean) - Static method in class org.apache.spark.util.Utils
-
Execute a command and return the process running the command.
- EXECUTION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- EXECUTION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- ExecutionData - Class in org.apache.spark.status.api.v1.sql
- ExecutionErrors - Interface in org.apache.spark.sql.errors
- ExecutionListenerManager - Class in org.apache.spark.sql.util
-
Manager for QueryExecutionListener.
- executor() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
- executor() - Method in interface org.apache.spark.shuffle.api.ShuffleDataIO
-
Called once on executor processes to bootstrap the shuffle data storage modules that are only invoked on the executors.
- EXECUTOR() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- EXECUTOR() - Static method in class org.apache.spark.status.TaskIndexNames
- EXECUTOR_CORES - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the number of executor CPU cores.
- EXECUTOR_CPU_TIME() - Static method in class org.apache.spark.InternalAccumulator
- EXECUTOR_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- EXECUTOR_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- EXECUTOR_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- EXECUTOR_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- EXECUTOR_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- EXECUTOR_DEFAULT_EXTRA_CLASS_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the executor default extra class path.
- EXECUTOR_DEFAULT_EXTRA_CLASS_PATH_VALUE - Static variable in class org.apache.spark.launcher.SparkLauncher
- EXECUTOR_DEFAULT_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the default executor VM options.
- EXECUTOR_DESERIALIZE_CPU_TIME() - Static method in class org.apache.spark.InternalAccumulator
- EXECUTOR_DESERIALIZE_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- EXECUTOR_DESERIALIZE_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- EXECUTOR_DESERIALIZE_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- EXECUTOR_DESERIALIZE_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- EXECUTOR_DESERIALIZE_CPU_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- EXECUTOR_DESERIALIZE_TIME() - Static method in class org.apache.spark.InternalAccumulator
- EXECUTOR_DESERIALIZE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- EXECUTOR_DESERIALIZE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- EXECUTOR_DESERIALIZE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- EXECUTOR_DESERIALIZE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- EXECUTOR_DESERIALIZE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- EXECUTOR_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the executor class path.
- EXECUTOR_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the executor VM options.
- EXECUTOR_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the executor native library path.
- EXECUTOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- EXECUTOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- EXECUTOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- EXECUTOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- EXECUTOR_LOGS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- EXECUTOR_LOGS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- EXECUTOR_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the executor memory.
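The launcher configuration keys listed here are passed through setConf; a minimal sketch (paths and class names are placeholders):

    import org.apache.spark.launcher.SparkLauncher

    val handle = new SparkLauncher()
      .setAppResource("/path/to/app.jar")
      .setMainClass("com.example.Main")
      .setConf(SparkLauncher.EXECUTOR_MEMORY, "4g")
      .setConf(SparkLauncher.EXECUTOR_CORES, "2")
      .startApplication()  // returns a SparkAppHandle for monitoring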
- EXECUTOR_METRICS_DISTRIBUTIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- EXECUTOR_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- EXECUTOR_RESOURCES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- EXECUTOR_RUN_TIME() - Static method in class org.apache.spark.InternalAccumulator
- EXECUTOR_RUN_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- EXECUTOR_RUN_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- EXECUTOR_RUN_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- EXECUTOR_RUN_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- EXECUTOR_RUN_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- EXECUTOR_SUMMARY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- executorAddedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorAddedToJson(SparkListenerExecutorAdded, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- executorCpuTime() - Method in class org.apache.spark.status.api.v1.StageData
- executorCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- executorCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- executorDecommission(String) - Method in interface org.apache.spark.scheduler.Schedulable
- executorDecommission(String, ExecutorDecommissionInfo) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Process a decommissioning executor.
- ExecutorDecommissioning(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissioning
- ExecutorDecommissioning$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissioning$
- ExecutorDecommissionSigReceived$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissionSigReceived$
- executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.StageData
- executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- executorDeserializeCpuTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.StageData
- executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
- executorFailures() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- executorHeartbeatReceived(String, Tuple2<Object, Seq<AccumulatorV2<?, ?>>>[], BlockManagerId, Map<Tuple2<Object, Object>, ExecutorMetrics>) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive.
- executorHost() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- executorHost() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- executorId() - Method in class org.apache.spark.ExecutorRegistered
- executorId() - Method in class org.apache.spark.ExecutorRemoved
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissioning
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.IsExecutorAlive
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
- executorId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
- executorId() - Method in class org.apache.spark.scheduler.TaskInfo
- executorId() - Method in class org.apache.spark.SparkEnv
- executorId() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- executorId() - Method in class org.apache.spark.status.api.v1.TaskData
- executorId() - Method in class org.apache.spark.status.LiveRDDDistribution
- executorId() - Method in class org.apache.spark.storage.BlockManagerId
- executorId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
- executorId() - Method in class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive
- executorId() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- executorId() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
- executorID() - Method in interface org.apache.spark.api.plugin.PluginContext
-
Executor ID of the process.
- executorIds() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
- executorIds() - Method in class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManagers
- executorInfo() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
- ExecutorInfo - Class in org.apache.spark.scheduler.cluster
-
:: DeveloperApi :: Stores information about an executor to pass from the scheduler to SparkListeners.
- ExecutorInfo(String, int, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
- ExecutorInfo(String, int, Map<String, String>, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
- ExecutorInfo(String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
- ExecutorInfo(String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>, int) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
- ExecutorInfo(String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>, int, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
- executorInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorInfoToJson(ExecutorInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ExecutorKilled - Class in org.apache.spark.scheduler
- ExecutorKilled() - Constructor for class org.apache.spark.scheduler.ExecutorKilled
- executorLogs() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- executorLogs() - Method in class org.apache.spark.status.api.v1.TaskData
- ExecutorLossMessage - Class in org.apache.spark.scheduler
- ExecutorLossMessage() - Constructor for class org.apache.spark.scheduler.ExecutorLossMessage
- executorLost(String, String, ExecutorLossReason) - Method in interface org.apache.spark.scheduler.Schedulable
- executorLost(String, ExecutorLossReason) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Process a lost executor.
- ExecutorLostFailure - Class in org.apache.spark
-
:: DeveloperApi :: The task failed because the executor that it was running on was lost.
- ExecutorLostFailure(String, boolean, Option<String>) - Constructor for class org.apache.spark.ExecutorLostFailure
- executorMetrics() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- executorMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorPeakMetricsDistributions
- executorMetricsDistributions() - Method in class org.apache.spark.status.api.v1.StageData
- ExecutorMetricsDistributions - Class in org.apache.spark.status.api.v1
- executorMetricsFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
Extract the executor metrics from JSON.
- ExecutorMetricsSerializer - Class in org.apache.spark.status.protobuf
- ExecutorMetricsSerializer() - Constructor for class org.apache.spark.status.protobuf.ExecutorMetricsSerializer
- executorMetricsToJson(ExecutorMetrics, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
Convert executor metrics to JSON.
- executorMetricsUpdateFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorMetricsUpdateToJson(SparkListenerExecutorMetricsUpdate, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ExecutorMetricType - Interface in org.apache.spark.metrics
-
Executor metric types for executor-level metrics stored in ExecutorMetrics.
- executorOffHeapMemorySizeAsMb(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Convert MEMORY_OFFHEAP_SIZE to MB, returning 0 if MEMORY_OFFHEAP_ENABLED is false.
- executorPct() - Method in class org.apache.spark.scheduler.RuntimePercentage
- ExecutorPeakMetricsDistributions - Class in org.apache.spark.status.api.v1
- executorPlugin() - Method in interface org.apache.spark.api.plugin.SparkPlugin
-
Return the plugin's executor-side component.
- ExecutorPlugin - Interface in org.apache.spark.api.plugin
-
:: DeveloperApi :: Executor component of a SparkPlugin.
- executorRef() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- ExecutorRegistered - Class in org.apache.spark
- ExecutorRegistered(String) - Constructor for class org.apache.spark.ExecutorRegistered
- ExecutorRemoved - Class in org.apache.spark
- ExecutorRemoved(String) - Constructor for class org.apache.spark.ExecutorRemoved
- executorRemovedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorRemovedToJson(SparkListenerExecutorRemoved, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ExecutorResourceRequest - Class in org.apache.spark.resource
-
An Executor resource request.
- ExecutorResourceRequest(String, long, String, String) - Constructor for class org.apache.spark.resource.ExecutorResourceRequest
- executorResourceRequestFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorResourceRequestMapFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- executorResourceRequestMapToJson(Map<String, ExecutorResourceRequest>, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ExecutorResourceRequests - Class in org.apache.spark.resource
-
A set of Executor resource requests.
- ExecutorResourceRequests() - Constructor for class org.apache.spark.resource.ExecutorResourceRequests
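A sketch of composing these requests into a ResourceProfile; the resource amounts and the GPU discovery script path are illustrative:

    import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

    val execReqs = new ExecutorResourceRequests()
      .cores(4)
      .memory("8g")
      .resource("gpu", 1, "/opt/spark/scripts/getGpus.sh")
    val taskReqs = new TaskResourceRequests().cpus(1).resource("gpu", 1)
    val profile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()
    // rdd.withResources(profile) then applies it at the stage level.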
- executorResourceRequestToJson(ExecutorResourceRequest, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- executorResourceRequestToRequirement(Seq<ExecutorResourceRequest>) - Static method in class org.apache.spark.resource.ResourceUtils
- executorResources() - Method in class org.apache.spark.resource.ResourceProfile
- executorResources() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- executorResources() - Method in class org.apache.spark.status.api.v1.ResourceProfileInfo
- executorResources() - Method in class org.apache.spark.status.LiveResourceProfile
- executorResourcesJMap() - Method in class org.apache.spark.resource.ResourceProfile
-
(Java-specific) Gets a Java Map of resources to ExecutorResourceRequest.
- executorResourcesJMap() - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
(Java-specific) Gets a Java Map of resources to ExecutorResourceRequest.
- ExecutorResourcesOrDefaults$() - Constructor for class org.apache.spark.resource.ResourceProfile.ExecutorResourcesOrDefaults$
- executorRunTime() - Method in class org.apache.spark.status.api.v1.StageData
- executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- executors() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
- executors() - Method in class org.apache.spark.status.LiveRDDPartition
- EXECUTORS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- ExecutorStageSummary - Class in org.apache.spark.status.api.v1
- ExecutorStageSummarySerializer - Class in org.apache.spark.status.protobuf
- ExecutorStageSummarySerializer() - Constructor for class org.apache.spark.status.protobuf.ExecutorStageSummarySerializer
- ExecutorStreamSummary - Class in org.apache.spark.ui.storage
- ExecutorStreamSummary(Seq<org.apache.spark.status.StreamBlockData>) - Constructor for class org.apache.spark.ui.storage.ExecutorStreamSummary
- executorSummaries() - Method in class org.apache.spark.status.LiveStage
- executorSummary() - Method in class org.apache.spark.status.api.v1.StageData
- executorSummary(String) - Method in class org.apache.spark.status.LiveStage
- ExecutorSummary - Class in org.apache.spark.status.api.v1
- executorUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- exists() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Whether state exists or not.
- exists() - Method in interface org.apache.spark.sql.streaming.ListState
-
Whether state exists or not.
- exists() - Method in interface org.apache.spark.sql.streaming.MapState
-
Whether state exists or not.
- exists() - Method in interface org.apache.spark.sql.streaming.ValueState
-
Whether state exists or not.
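For the state-store variants above, a sketch from inside a stateful operator; `countState` is a hypothetical ValueState[Long] assumed to have been created during the processor's initialization:

    // Read the current value if present, then write back an incremented count.
    val current = if (countState.exists()) countState.get() else 0L
    countState.update(current + 1)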
- exists() - Method in class org.apache.spark.streaming.State
-
Whether the state already exists.
- exists(String) - Static method in class org.apache.spark.sql.types.UDTRegistration
-
Queries if a given user class is already registered or not.
- exists(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns whether a predicate holds for one or more elements in the array.
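A small illustration of the higher-order form, assuming df has an array column "xs":

    import org.apache.spark.sql.functions._

    df.select(exists(col("xs"), x => x > 10).as("hasBig"))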
- EXIT_FAILURE() - Static method in class org.apache.spark.util.SparkExitCode
-
Failed termination.
- EXIT_SUCCESS() - Static method in class org.apache.spark.util.SparkExitCode
-
Successful termination.
- exitCausedByApp() - Method in class org.apache.spark.ExecutorLostFailure
- exitCode() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
- exitCode() - Method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
- exitFn() - Method in interface org.apache.spark.util.CommandLineLoggingUtils
- exp(String) - Static method in class org.apache.spark.sql.functions
-
Computes the exponential of the given column.
- exp(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the exponential of the given value.
- ExpectationAggregator - Class in org.apache.spark.ml.clustering
-
ExpectationAggregator computes the partial expectation results.
- ExpectationAggregator(int, Broadcast<double[]>, Broadcast<Tuple2<DenseVector, DenseVector>[]>) - Constructor for class org.apache.spark.ml.clustering.ExpectationAggregator
- ExpectationSum - Class in org.apache.spark.mllib.clustering
- ExpectationSum(double, double[], DenseVector<Object>[], DenseMatrix<Object>[]) - Constructor for class org.apache.spark.mllib.clustering.ExpectationSum
- expectedFpp() - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns the probability that BloomFilter.mightContain(Object) erroneously returns true for an object that has not actually been put in the BloomFilter.
- expectPermanentViewNotTempViewError(Seq<String>, String, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
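To make the expectedFpp entry above concrete, a sketch with illustrative sizes:

    import org.apache.spark.util.sketch.BloomFilter

    val bf = BloomFilter.create(1000000L, 0.03)  // ~1M expected items, 3% target FPP
    (1L to 500L).foreach(bf.putLong)
    bf.mightContainLong(42L)  // true (it was inserted)
    bf.expectedFpp()          // FPP implied by the current fill level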
- expectPersistentFuncError(String, String, Option<String>, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- expectTableNotViewError(Seq<String>, String, boolean, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- expectViewNotTableError(Seq<String>, String, boolean, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- experimental() - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
- experimental() - Method in class org.apache.spark.sql.SQLContext
-
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
- ExperimentalMethods - Class in org.apache.spark.sql
-
:: Experimental :: Holder for experimental methods for the bravest.
- ExpireDeadHosts - Class in org.apache.spark
- ExpireDeadHosts() - Constructor for class org.apache.spark.ExpireDeadHosts
- ExpiredTimerInfo - Interface in org.apache.spark.sql.streaming
-
Class used to provide access to an expired timer's expiry time.
- expiryTime() - Method in class org.apache.spark.scheduler.ExcludedExecutor
- explain() - Method in class org.apache.spark.sql.api.Dataset
-
Prints the physical plan to the console for debugging purposes.
- explain() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Prints the physical plan to the console for debugging purposes.
- explain(boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Prints the plans (logical and physical) to the console for debugging purposes.
- explain(boolean) - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Prints the physical plan to the console for debugging purposes.
- explain(boolean) - Method in class org.apache.spark.sql.Column
-
Prints the expression to the console for debugging purposes.
- explain(String) - Method in class org.apache.spark.sql.api.Dataset
-
Prints the plans (logical and physical) with a format specified by a given explain mode.
- explain(String) - Method in class org.apache.spark.sql.Dataset
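The explain variants above, in brief:

    df.explain()            // physical plan only
    df.explain(true)        // logical and physical plans
    df.explain("formatted") // modes include "simple", "extended", "codegen", "cost", "formatted"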
- explainedVariance() - Method in class org.apache.spark.ml.feature.PCAModel
- explainedVariance() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the explained variance regression score.
- explainedVariance() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the variance explained by regression.
- explainedVariance() - Method in class org.apache.spark.mllib.feature.PCAModel
- explainParam(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Explains a param.
- explainParams() - Method in interface org.apache.spark.ml.param.Params
-
Explains all params of this instance.
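For example, printing the param docs of any Params instance:

    val lr = new org.apache.spark.ml.regression.LinearRegression()
    println(lr.explainParams())  // one line per param, with docs and defaults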
- explicitCollationMismatchError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- explode(String, String, Function1<A, IterableOnce<B>>, TypeTags.TypeTag<B>) - Method in class org.apache.spark.sql.api.Dataset
-
Deprecated. Use flatMap() or select() with functions.explode() instead. Since 2.0.0.
- explode(String, String, Function1<A, IterableOnce<B>>, TypeTags.TypeTag<B>) - Method in class org.apache.spark.sql.Dataset
-
Deprecated. Use flatMap() or select() with functions.explode() instead. Since 2.0.0.
- explode(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element in the given array or map column.
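A quick sketch, assuming df carries a column "id" and an array column "items":

    import org.apache.spark.sql.functions._

    df.select(col("id"), explode(col("items")).as("item"))  // one output row per element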
- explode(Seq<Column>, Function1<Row, IterableOnce<A>>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.api.Dataset
-
Deprecated. Use flatMap() or select() with functions.explode() instead. Since 2.0.0.
- explode(Seq<Column>, Function1<Row, IterableOnce<A>>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.Dataset
-
Deprecated. Use flatMap() or select() with functions.explode() instead. Since 2.0.0.
- explode_outer(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element in the given array or map column.
- explodeNestedFieldNames(StructType) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Returns all column names in this schema as a flat list.
- expm1(String) - Static method in class org.apache.spark.sql.functions
-
Computes the exponential of the given column minus one.
- expm1(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the exponential of the given value minus one.
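Note that expm1(x) preserves precision for x near zero, where computing exp(x) - 1 directly loses digits; for instance (column name assumed):

    import org.apache.spark.sql.functions._

    df.select(exp(col("x")), expm1(col("x")))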
- ExponentialGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d. samples from the exponential distribution with the given mean.
- ExponentialGenerator(double) - Constructor for class org.apache.spark.mllib.random.ExponentialGenerator
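A sketch of the RDD-producing entry points that follow; the mean, size, partition count, and seed are illustrative:

    import org.apache.spark.mllib.random.RandomRDDs

    // 10,000 i.i.d. exponential samples with mean 2.0 across 4 partitions, fixed seed.
    val samples = RandomRDDs.exponentialRDD(sc, 2.0, 10000L, 4, 11L)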
- exponentialJavaRDD(JavaSparkContext, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.exponentialJavaRDD with the default number of partitions and the default seed.
- exponentialJavaRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.exponentialJavaRDD with the default seed.
- exponentialJavaRDD(JavaSparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.exponentialRDD.
- exponentialJavaVectorRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.exponentialJavaVectorRDD with the default number of partitions and the default seed.
- exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.exponentialJavaVectorRDD with the default seed.
- exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.exponentialVectorRDD.
- exponentialRDD(SparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the exponential distribution with the input mean.
- exponentialVectorRDD(SparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the exponential distribution with the input mean.
- expr(String) - Static method in class org.apache.spark.sql.functions
-
Parses the expression string into the column that it represents, similar to Dataset.selectExpr(java.lang.String...).
- expression() - Method in class org.apache.spark.sql.connector.expressions.Cast
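The expr(String) entry above is handy for SQL fragments inside a DataFrame pipeline; the column names here are assumptions:

    import org.apache.spark.sql.functions.expr

    df.select(expr("price * quantity").as("total"), expr("upper(name)"))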
- expression() - Method in interface org.apache.spark.sql.connector.expressions.SortOrder
-
Returns the sort expression.
- Expression - Interface in org.apache.spark.sql.connector.expressions
-
Base class of the public logical expression API.
- expressionDataType() - Method in class org.apache.spark.sql.connector.expressions.Cast
- expressionDecodingError(Exception, Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- expressionEncodingError(Exception, Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Expressions - Class in org.apache.spark.sql.connector.expressions
-
Helper methods to create logical transforms to pass into Spark.
- expressionWithMultiWindowExpressionsError(NamedExpression, Seq<WindowSpecDefinition>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- expressionWithoutWindowExpressionError(NamedExpression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ExtendedExplainGenerator - Interface in org.apache.spark.sql
-
A trait for a session extension to implement that provides additional explain plan information.
- ExternalClusterManager - Interface in org.apache.spark.scheduler
-
A cluster manager interface to plug in an external scheduler.
- ExternalCommandRunner - Interface in org.apache.spark.sql.connector
-
An interface to execute an arbitrary string command inside an external execution engine rather than Spark.
- externalDataSourceException(Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- externalShuffleServicePort(SparkConf) - Static method in class org.apache.spark.storage.StorageUtils
-
Get the port used by the external shuffle service.
- extract(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extracts a part of the date/timestamp or interval source.
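For the extract function above, the field is passed as a literal column; the timestamp column name is assumed:

    import org.apache.spark.sql.functions._

    df.select(extract(lit("YEAR"), col("ts")), extract(lit("MONTH"), col("ts")))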
- Extract - Class in org.apache.spark.sql.connector.expressions
-
Represent an extract function, which extracts and returns the value of a specified datetime field from a datetime or interval value expression.
- Extract(String, Expression) - Constructor for class org.apache.spark.sql.connector.expressions.Extract
- ExtractableLiteral - Class in org.apache.spark.sql.columnar
- ExtractableLiteral() - Constructor for class org.apache.spark.sql.columnar.ExtractableLiteral
- extractAsDuration() - Method in class org.apache.spark.unsafe.types.CalendarInterval
-
Extracts the time part of the interval.
- extractAsPeriod() - Method in class org.apache.spark.unsafe.types.CalendarInterval
-
Extracts the date part of the interval.
- extractCatalog(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
-
Return the name of a catalog that can be used to check the existence of, load, and create a table for this DataSource given the identifier that will be extracted by extractIdentifier.
- extractDistribution(Function1<BatchInfo, Option<Object>>) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
- extractDoubleDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Object>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- extractFn() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
- extractHostPortFromSparkUrl(String) - Static method in class org.apache.spark.util.Utils
-
Return a pair of host and port extracted from the sparkUrl.
- extractIdentifier(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
-
Return an Identifier instance that can identify a table for a DataSource given DataFrame[Reader|Writer] options.
- extractLongDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Object>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- extractMavenCoordinates(String) - Static method in class org.apache.spark.util.MavenUtils
-
Extracts maven coordinates from a comma-delimited string.
- extractParamMap() - Method in interface org.apache.spark.ml.param.Params
-
extractParamMap with no extra values.
- extractParamMap(ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Extracts the embedded default param values and user-supplied values, then merges them with the extra values from the input into a flat param map, where the latter value is used if there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- extractTimeTravelTimestamp(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
-
Extracts the timestamp string for time travel from the given options.
- extractTimeTravelVersion(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
-
Extracts the version string for time travel from the given options.
- extractWeightedLabeledPoints(Dataset<?>) - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
-
Extracts (label, feature, weight) from input dataset.
- extraOptimizations() - Method in class org.apache.spark.sql.ExperimentalMethods
- extraStrategies() - Method in class org.apache.spark.sql.ExperimentalMethods
-
Allows extra strategies to be injected into the query planner at runtime.
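A sketch of injecting a strategy at runtime; MyStrategy stands in for a hypothetical user-defined org.apache.spark.sql.execution.SparkStrategy:

    // MyStrategy is a placeholder for a custom planning strategy.
    spark.experimental.extraStrategies = Seq(MyStrategy)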
- eye(int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate an Identity Matrix in DenseMatrix format.
- eye(int) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a dense Identity Matrix in Matrix format.
- eye(int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate an Identity Matrix in DenseMatrix format.
- eye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a dense Identity Matrix in Matrix format.
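The eye variants above, side by side:

    import org.apache.spark.ml.linalg.{DenseMatrix, Matrices}

    val i3 = DenseMatrix.eye(3)  // 3 x 3 identity, concretely a DenseMatrix
    val m3 = Matrices.eye(3)     // same values, typed as Matrix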
F
- f1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the document-based f1-measure, averaged over all documents.
- f1Measure(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the f1-measure for a given label (category).
- factorial(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the factorial of the given value.
- FactorizationMachines - Interface in org.apache.spark.ml.regression
- FactorizationMachinesParams - Interface in org.apache.spark.ml.regression
-
Params for Factorization Machines.
- factors() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- factors() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- factorSize() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- factorSize() - Method in class org.apache.spark.ml.classification.FMClassifier
- factorSize() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
-
Param for the dimensionality of the factors (>= 0).
- factorSize() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- factorSize() - Method in class org.apache.spark.ml.regression.FMRegressor
- failed() - Method in class org.apache.spark.scheduler.TaskInfo
- FAILED - Enum constant in enum class org.apache.spark.JobExecutionStatus
- FAILED - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application finished with a failed status.
- FAILED - Enum constant in enum class org.apache.spark.status.api.v1.StageStatus
- FAILED - Enum constant in enum class org.apache.spark.status.api.v1.TaskStatus
- FAILED() - Static method in class org.apache.spark.TaskState
- FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- failedExecuteUserDefinedFunctionError(String, String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedJobIds() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- failedMergingSchemaError(StructType, StructType, SparkException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedParsingStructTypeError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- failedRenameTempFileError(File, File) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failedRenameTempFileError(Path, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedStages() - Method in class org.apache.spark.status.LiveJob
- failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- failedTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- failedTasks() - Method in class org.apache.spark.status.LiveJob
- failedTasks() - Method in class org.apache.spark.status.LiveStage
- failedToCastValueToDataTypeForPartitionColumnError(String, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToCommitStateFileError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToCompileMsg(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToExecuteQueryError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToFindAvroDataSourceError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failedToFindKafkaDataSourceError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failedToGenerateEpochMarkerError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToInstantiateConstructorForCatalogError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToLoadRoutineError(Seq<String>, Exception) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failedToMergeIncompatibleSchemasError(StructType, StructType, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToPushRowIntoRowQueueError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadDataError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadDeltaFileKeySizeError(Path, String, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadDeltaFileNotExistsError(Path, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadSnapshotFileKeySizeError(Path, String, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadSnapshotFileValueSizeError(Path, String, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToReadStreamingStateFileError(Path, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failedToRebuildExpressionError(Filter) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failToConvertValueToJsonError(Object, Class<?>, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failToCreateCheckpointPathError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToCreateDirectoryError(String, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToGetBlockWithLockError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToGetNonShuffleBlockError(BlockId, Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToParseDateTimeInNewParserError(String, Throwable) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- failToParseDateTimeInNewParserError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failToRecognizePatternAfterUpgradeError(String, Throwable) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- failToRecognizePatternAfterUpgradeError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failToRecognizePatternError(String, Throwable) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- failToRecognizePatternError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failToResolveDataSourceForTableError(CatalogTable, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failToSerializeTaskError(Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToSetOriginalACLBackError(String, Path, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- failToStoreBlockOnBlockManagerError(BlockManagerId, BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- failToTruncateTableWhenRemovingDataError(String, Path, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- failure(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- Failure() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- FAILURE_REASON_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- failureReason() - Method in class org.apache.spark.scheduler.StageInfo
-
If the stage failed, the reason why.
- failureReason() - Method in class org.apache.spark.status.api.v1.StageData
- failureReason() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- failureReason() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- failureReasonCell(String, int, boolean) - Static method in class org.apache.spark.streaming.ui.UIUtils
- FAIR() - Static method in class org.apache.spark.scheduler.SchedulingMode
- fallbackV1RelationReportsInconsistentSchemaError(StructType, StructType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- FALSE - Static variable in class org.apache.spark.types.variant.VariantUtil
- falsePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the false positive rate for a given label (category).
- FalsePositiveRate - Class in org.apache.spark.mllib.evaluation.binary
-
False positive rate.
- FalsePositiveRate() - Constructor for class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
- falsePositiveRateByLabel() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns false positive rate for each label (category).
- family() - Method in class org.apache.spark.ml.classification.LogisticRegression
- family() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- family() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
Param for the name of family which is a description of the label distribution to be used in the model.
- family() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- family() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for the name of family which is a description of the error distribution to be used in the model.
- family() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- Family$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
- FamilyAndLink$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
- fdr() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- fdr() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- fdr() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
The upper bound of the expected false discovery rate.
- fdr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- feature() - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
- feature() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- feature() - Method in class org.apache.spark.mllib.tree.model.Split
- FeatureHasher - Class in org.apache.spark.ml.feature
-
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space).
- FeatureHasher() - Constructor for class org.apache.spark.ml.feature.FeatureHasher
- FeatureHasher(String) - Constructor for class org.apache.spark.ml.feature.FeatureHasher
- featureImportances() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- featureImportances() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- featureImportances() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- featureImportances() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- featureImportances() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- featureImportances() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- featureIndex() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- featureIndex() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
-
Param for the index of the feature if featuresCol is a vector column (default: 0), no effect otherwise.
- featureIndex() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- featureIndex() - Method in class org.apache.spark.ml.tree.CategoricalSplit
- featureIndex() - Method in class org.apache.spark.ml.tree.ContinuousSplit
- featureIndex() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
- featureIndex() - Method in interface org.apache.spark.ml.tree.Split
-
Index of the feature that this split tests.
- features() - Method in class org.apache.spark.ml.feature.LabeledPoint
- features() - Method in class org.apache.spark.mllib.regression.LabeledPoint
- featuresCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Field in "predictions" which gives the features of each instance as a vector.
- featuresCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- featuresCol() - Method in class org.apache.spark.ml.classification.OneVsRest
- featuresCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- featuresCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- featuresCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- featuresCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- featuresCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- featuresCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- featuresCol() - Method in class org.apache.spark.ml.clustering.KMeans
- featuresCol() - Method in class org.apache.spark.ml.clustering.KMeansModel
- featuresCol() - Method in class org.apache.spark.ml.clustering.LDA
- featuresCol() - Method in class org.apache.spark.ml.clustering.LDAModel
- featuresCol() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- featuresCol() - Method in class org.apache.spark.ml.feature.RFormula
- featuresCol() - Method in class org.apache.spark.ml.feature.RFormulaModel
- featuresCol() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- featuresCol() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- featuresCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- featuresCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- featuresCol() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- featuresCol() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- featuresCol() - Method in interface org.apache.spark.ml.param.shared.HasFeaturesCol
-
Param for features column name.
- featuresCol() - Method in class org.apache.spark.ml.PredictionModel
- featuresCol() - Method in class org.apache.spark.ml.Predictor
- featuresCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- featuresCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- featuresCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.GBTClassifier
- featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- featureSubsetStrategy() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.GBTRegressor
- featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- featureSubsetStrategy() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- featureSubsetStrategy() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
-
The number of features to consider for splits at each tree node.
- featureSum() - Method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
- featureType() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- featureType() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- featureType() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
-
The feature type.
- featureType() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- featureType() - Method in class org.apache.spark.mllib.tree.model.Split
- FeatureType - Class in org.apache.spark.mllib.tree.configuration
-
Enum to describe whether a feature is "continuous" or "categorical".
- FeatureType() - Constructor for class org.apache.spark.mllib.tree.configuration.FeatureType
- FETCH_WAIT_TIME() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- FETCH_WAIT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- FETCH_WAIT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- fetchAllPushMergedLocalBlocks(LinkedHashSet<BlockId>) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the iterator is initialized.
- FetchFailed - Class in org.apache.spark
-
:: DeveloperApi :: Task failed to fetch shuffle data from a remote node.
- FetchFailed(BlockManagerId, int, long, int, int, String) - Constructor for class org.apache.spark.FetchFailed
- fetchFailedError(BlockManagerId, int, long, int, int, String, Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- fetchFile(String, File, SparkConf, Configuration, long, boolean, boolean) - Static method in class org.apache.spark.util.Utils
-
Download a file or directory to target directory.
- fetchPct() - Method in class org.apache.spark.scheduler.RuntimePercentage
- fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- field() - Method in class org.apache.spark.sql.connector.expressions.Extract
- field() - Method in class org.apache.spark.storage.BroadcastBlockId
- fieldCannotBeNullError(int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- fieldCannotBeNullMsg(int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- fieldDiffersFromDerivedLocalDateError(ChronoField, int, int, LocalDate) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- fieldDiffersFromDerivedLocalDateError(ChronoField, int, int, LocalDate) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- FieldEntry(String, int, int) - Constructor for class org.apache.spark.types.variant.VariantBuilder.FieldEntry
- fieldIndex(String) - Method in interface org.apache.spark.sql.Row
-
Returns the index of a given field name.
- fieldIndex(String) - Method in class org.apache.spark.sql.types.StructType
-
Returns the index of a given field.
- fieldIndexOnRowWithoutSchemaError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- fieldNames() - Method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnChange
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnDefaultValue
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnNullability
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnPosition
- fieldNames() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType
- fieldNames() - Method in interface org.apache.spark.sql.connector.expressions.NamedReference
-
Returns the referenced field name as an array of String parts.
- fieldNames() - Method in class org.apache.spark.sql.types.StructType
-
Returns all field names in an array.
- fieldNumberMismatchForDeserializerError(StructType, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- fields() - Method in class org.apache.spark.sql.types.StructType
- fieldToString(byte) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- fieldToString(byte) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- fieldToString(byte) - Static method in class org.apache.spark.util.DayTimeIntervalUtils
- FIFO() - Static method in class org.apache.spark.scheduler.SchedulingMode
- FileBasedTopologyMapper - Class in org.apache.spark.storage
-
A simple file based topology mapper.
- FileBasedTopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.FileBasedTopologyMapper
- fileLengthExceedsMaxLengthError(FileStatus, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- fileNotExistError(String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- fileReader() - Method in interface org.apache.spark.sql.avro.AvroUtils.RowReader
- files() - Method in class org.apache.spark.SparkContext
- filesCount() - Method in interface org.apache.spark.sql.connector.read.HasPartitionStatistics
-
Returns the count of files in the partition statistics associated with this partition.
- fileStream(String, Class<K>, Class<V>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
- fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
- fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
- fileStream(String, Function1<Path, Object>, boolean, Configuration, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
- fileStream(String, Function1<Path, Object>, boolean, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
- fileStream(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
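A minimal Scala sketch of the deprecated StreamingContext.fileStream overloads above; the SparkContext sc, batch interval, and monitored directory are hypothetical:

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // assuming an existing SparkContext `sc`; fileStream is deprecated with the DStream API
    val ssc = new StreamingContext(sc, Seconds(10))
    val lines = ssc
      .fileStream[LongWritable, Text, TextInputFormat]("hdfs:///incoming")
      .map { case (_, text) => text.toString } // keep only the line content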
- fill(boolean) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in boolean columns with value.
- fill(boolean) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(boolean, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in specified boolean columns.
- fill(boolean, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(boolean, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null values in specified boolean columns.
- fill(boolean, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(double) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
- fill(double) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(double, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(double, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(double, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(double, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(long) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
- fill(long) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(long, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(long, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(long, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(long, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(String) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in string columns with value.
- fill(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(String, String[]) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in specified string columns.
- fill(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(String, Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null values in specified string columns.
- fill(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values.
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null values.
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
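A minimal Scala sketch of the fill overloads above, reached through Dataset.na; the DataFrame df and its column names are hypothetical:

    // assuming a DataFrame `df` with numeric column "age" and string column "name"
    val cleaned = df.na
      .fill(0.0, Seq("age"))           // null/NaN -> 0.0 in "age" only
      .na.fill("unknown", Seq("name")) // null -> "unknown" in "name" only

    // or give per-column replacements in one call
    val cleaned2 = df.na.fill(Map("age" -> 0.0, "name" -> "unknown"))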
- filter() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
- filter(String) - Method in class org.apache.spark.sql.api.Dataset
-
Filters rows using the given SQL expression.
- filter(String) - Method in class org.apache.spark.sql.Dataset
- filter(FilterFunction<T>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset that only contains elements where func returns true.
- filter(FilterFunction<T>) - Method in class org.apache.spark.sql.Dataset
- filter(Function<Double, Boolean>) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream containing only the elements that satisfy a predicate.
- filter(Function<T, Boolean>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function<T, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream containing only the elements that satisfy a predicate.
- filter(Params) - Method in class org.apache.spark.ml.param.ParamMap
-
Filters this param map for the given parent.
- filter(Column) - Method in class org.apache.spark.sql.api.Dataset
-
Filters rows using the given condition.
- filter(Column) - Method in class org.apache.spark.sql.Dataset
- filter(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns an array of elements for which a predicate holds in a given array.
- filter(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns an array of elements for which a predicate holds in a given array.
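A small Scala sketch of the array higher-order filter variants above; the DataFrame df and its array column are hypothetical:

    import org.apache.spark.sql.functions.{col, filter}

    // keep only positive values in a hypothetical array<int> column "xs"
    val positives = df.select(filter(col("xs"), x => x > 0).as("positive_xs"))
    // the two-argument variant also receives the element index
    val evens = df.select(filter(col("xs"), (x, i) => i % 2 === 0).as("even_positions"))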
- filter(Predicate[]) - Method in interface org.apache.spark.sql.connector.read.SupportsRuntimeFiltering
- filter(Predicate[]) - Method in interface org.apache.spark.sql.connector.read.SupportsRuntimeV2Filtering
-
Filters this scan using runtime predicates.
- filter(Filter[]) - Method in interface org.apache.spark.sql.connector.read.SupportsRuntimeFiltering
-
Filters this scan using runtime filters.
- filter(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- filter(Function1<Graph<VD, ED>, Graph<VD2, ED2>>, Function1<EdgeTriplet<VD2, ED2>, Object>, Function2<Object, VD2, Object>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.GraphOps
-
Filter the graph by computing some values to filter on, and applying the predicates.
- filter(Function1<Tuple2<Object, VD>, Object>) - Method in class org.apache.spark.graphx.VertexRDD
-
Restricts the vertex set to the set of vertices satisfying the given predicate.
- filter(Function1<T, Object>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function1<T, Object>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset that only contains elements where func returns true.
- filter(Function1<T, Object>) - Method in class org.apache.spark.sql.Dataset
- filter(Function1<T, Object>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream containing only the elements that satisfy a predicate.
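A minimal Scala sketch contrasting the Dataset.filter variants indexed above (SQL expression, Column condition, typed predicate); the Dataset and its element type are hypothetical:

    // assuming `ds` is a Dataset[Person] with an integer `age` field and spark.implicits._ in scope
    val adults1 = ds.filter("age >= 18")     // SQL expression
    val adults2 = ds.filter($"age" >= 18)    // Column condition
    val adults3 = ds.filter(_.age >= 18)     // (Scala-specific) typed predicate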
- Filter - Class in org.apache.spark.sql.sources
-
A filter predicate for data sources.
- Filter() - Constructor for class org.apache.spark.sql.sources.Filter
- filterAttributes() - Method in interface org.apache.spark.sql.connector.read.SupportsRuntimeFiltering
-
Returns attributes this scan can be filtered by at runtime.
- filterAttributes() - Method in interface org.apache.spark.sql.connector.read.SupportsRuntimeV2Filtering
-
Returns attributes this scan can be filtered by at runtime.
- filterByRange(Comparator<K>, K, K) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD containing only the elements in the inclusive range lower to upper.
- filterByRange(K, K) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD containing only the elements in the inclusive range lower to upper.
- filterByRange(K, K) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Returns an RDD containing only the elements in the inclusive range lower to upper.
- FilterFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's filter function.
- filterName() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
- filterParams() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
- finalStorageLevel() - Method in class org.apache.spark.ml.recommendation.ALS
- finalStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for StorageLevel for ALS model factors.
- find_in_set(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the index (1-based) of the given string (str) in the comma-delimited list (strArray).
- findAccessedFields(SerializedLambda, ClassLoader, Map<Class<?>, Set<String>>, Map<Class<?>, Set<String>>, Map<Class<?>, Object>, boolean) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
-
Scans an indylambda Scala closure, along with its lexically nested closures, and populates the accessed-fields info recording which fields of the outer object are accessed.
- findClass(String) - Method in class org.apache.spark.util.ParentClassLoader
- findColumnPosition(Seq<String>, StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Returns the given column's ordinal within the given schema.
- findFrequentSequentialPatterns(Dataset<?>) - Method in class org.apache.spark.ml.fpm.PrefixSpan
-
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
- findListenersByClass(ClassTag<T>) - Method in interface org.apache.spark.util.ListenerBus
- findMatchingTokenClusterConfig(SparkConf, String) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- findMissingFields(StructType, StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
Returns a StructType that contains missing fields recursively from source to target.
- findMissingPartitions() - Method in class org.apache.spark.ShuffleStatus
-
Returns the sequence of partition ids that are missing (i.e. needs to be computed).
- findMultipleDataSourceError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- findSynonyms(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words closest in similarity to the given word, not including the word itself.
- findSynonyms(String, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Find synonyms of a word; do not include the word itself in results.
- findSynonyms(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words whose vector representation is most similar to the supplied vector.
- findSynonyms(Vector, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Find synonyms of the vector representation of a word, possibly including any words in the model vocabulary whose vector representation is the supplied vector.
- findSynonymsArray(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words closest in similarity to the given word, not including the word itself.
- findSynonymsArray(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words whose vector representation is most similar to the supplied vector.
- finish(BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Transform the output of the reduction.
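To illustrate finish, the final step of an Aggregator as indexed above, a minimal sketch of a hypothetical mean aggregator in Scala:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator

    // hypothetical mean aggregator: finish() transforms the (sum, count) buffer into the result
    object Mean extends Aggregator[Double, (Double, Long), Double] {
      def zero: (Double, Long) = (0.0, 0L)
      def reduce(b: (Double, Long), a: Double): (Double, Long) = (b._1 + a, b._2 + 1)
      def merge(x: (Double, Long), y: (Double, Long)): (Double, Long) = (x._1 + y._1, x._2 + y._2)
      def finish(b: (Double, Long)): Double = if (b._2 == 0L) Double.NaN else b._1 / b._2
      def bufferEncoder: Encoder[(Double, Long)] = Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
      def outputEncoder: Encoder[Double] = Encoders.scalaDouble
    }
    // usage: ds.select(Mean.toColumn) on a Dataset[Double]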
- finished() - Method in class org.apache.spark.scheduler.TaskInfo
- FINISHED - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application finished with a successful status.
- FINISHED() - Static method in class org.apache.spark.TaskState
- finishTime() - Method in class org.apache.spark.scheduler.TaskInfo
-
The time when the task has completed successfully (including the time to remotely fetch results, if necessary).
- finishWritingArray(int, ArrayList<Integer>) - Method in class org.apache.spark.types.variant.VariantBuilder
- finishWritingObject(int, ArrayList<VariantBuilder.FieldEntry>) - Method in class org.apache.spark.types.variant.VariantBuilder
- first() - Method in class org.apache.spark.api.java.JavaDoubleRDD
- first() - Method in class org.apache.spark.api.java.JavaPairRDD
- first() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the first element in this RDD.
- first() - Method in class org.apache.spark.rdd.RDD
-
Return the first element in this RDD.
- first() - Method in class org.apache.spark.sql.api.Dataset
-
Returns the first row.
- first() - Static method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnPosition
- first(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value of a column in a group.
- first(String, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value of a column in a group.
- first(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- first(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- FIRST_TASK_LAUNCHED_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- first_value(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- first_value(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- firstFailureReason() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- firstLaunchTime() - Method in class org.apache.spark.status.LiveStage
- firstTaskLaunchedTime() - Method in class org.apache.spark.status.api.v1.StageData
- fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
-
Computes the inverse document frequency.
- fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
-
Java-friendly version of fit().
- fit(JavaRDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Computes the vector representation of each word in vocabulary (Java version).
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
-
Computes the inverse document frequency.
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
-
Computes a PCAModel that contains the principal components of the input vectors.
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.StandardScaler
-
Computes the mean and variance and stores as a model to be used for later scaling.
- fit(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
Returns a ChiSquared feature selector.
- fit(RDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Computes the vector representation of each word in vocabulary.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRest
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeans
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDA
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Estimator
-
Fits a model to the input data.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizer
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDF
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Imputer
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCA
-
Computes a PCAModel that contains the principal components of the input vectors.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormula
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.RobustScaler
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScaler
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexer
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexer
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2Vec
- fit(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowth
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Pipeline
-
Fits the pipeline to the input dataset with additional parameters.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Predictor
- fit(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALS
- fit(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
- fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- fit(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with provided parameter map.
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with optional parameters.
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with optional parameters.
- fit(Dataset<?>, Seq<ParamMap>) - Method in class org.apache.spark.ml.Estimator
-
Fits multiple models to the input data with multiple sets of parameters.
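A minimal Scala sketch of the Estimator.fit pattern indexed above, using KMeans as a representative estimator; the training DataFrame is hypothetical:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.param.ParamMap

    // assuming a DataFrame `training` with a vector column "features"
    val kmeans = new KMeans().setK(3).setSeed(1L)
    val model = kmeans.fit(training)                            // single model
    val model5 = kmeans.fit(training, ParamMap(kmeans.k -> 5))  // override k just for this fit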
- FitEnd<M extends Model<M>> - Class in org.apache.spark.ml
-
Event fired after Estimator.fit.
- FitEnd() - Constructor for class org.apache.spark.ml.FitEnd
- fitIntercept() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- fitIntercept() - Method in class org.apache.spark.ml.classification.FMClassifier
- fitIntercept() - Method in class org.apache.spark.ml.classification.LinearSVC
- fitIntercept() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- fitIntercept() - Method in class org.apache.spark.ml.classification.LogisticRegression
- fitIntercept() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- fitIntercept() - Method in interface org.apache.spark.ml.param.shared.HasFitIntercept
-
Param for whether to fit an intercept term.
- fitIntercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- fitIntercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- fitIntercept() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- fitIntercept() - Method in class org.apache.spark.ml.regression.FMRegressor
- fitIntercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- fitIntercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- fitIntercept() - Method in class org.apache.spark.ml.regression.LinearRegression
- fitIntercept() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- fitLinear() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- fitLinear() - Method in class org.apache.spark.ml.classification.FMClassifier
- fitLinear() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
-
Param for whether to fit linear term (aka 1-way term)
- fitLinear() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- fitLinear() - Method in class org.apache.spark.ml.regression.FMRegressor
- FitStart<M extends Model<M>> - Class in org.apache.spark.ml
-
Event fired before Estimator.fit.
- FitStart() - Constructor for class org.apache.spark.ml.FitStart
- Fixed$() - Constructor for class org.apache.spark.sql.types.DecimalType.Fixed$
- FlamegraphNode - Class in org.apache.spark.ui.flamegraph
- FlamegraphNode(String) - Constructor for class org.apache.spark.ui.flamegraph.FlamegraphNode
- flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
- flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
- flatMap(FlatMapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.
- flatMap(FlatMapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
- flatMap(Function1<T, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.
- flatMap(Function1<T, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
- flatMap(Function1<T, IterableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
- flatMap(Function1<T, IterableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
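A small Scala sketch of flatMap on an RDD and a Dataset, as indexed above; the inputs are hypothetical:

    // assuming an RDD[String] `lines` and a Dataset[String] `sentences` (spark.implicits._ in scope)
    val rddWords = lines.flatMap(_.split(" "))      // RDD: one output record per word
    val dsWords  = sentences.flatMap(_.split(" "))  // Dataset: encoder resolved implicitly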
- FlatMapFunction<T, R> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more output records from each input record.
- FlatMapFunction2<T1, T2, R> - Interface in org.apache.spark.api.java.function
-
A function that takes two inputs and returns zero or more output records.
- flatMapGroups(FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data.
- flatMapGroups(FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapGroups(Function2<K, Iterator<V>, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data.
- flatMapGroups(Function2<K, Iterator<V>, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- FlatMapGroupsFunction<K, V, R> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more output records from each grouping key and its values.
- flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout, KeyValueGroupedDataset) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout, KeyValueGroupedDataset<K, S>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapGroupsWithState(OutputMode, GroupStateTimeout, KeyValueGroupedDataset, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- flatMapGroupsWithState(OutputMode, GroupStateTimeout, KeyValueGroupedDataset<K, S>, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapGroupsWithState(OutputMode, GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- flatMapGroupsWithState(OutputMode, GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
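A minimal Scala sketch of the (Scala-specific) flatMapGroupsWithState variant above, keeping a running count per key; the streaming Dataset and its shape are hypothetical:

    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

    // assuming a streaming Dataset[(String, Long)] `events` and spark.implicits._ in scope
    val counts = events
      .groupByKey { case (key, _) => key }
      .flatMapGroupsWithState[Long, (String, Long)](
          OutputMode.Update, GroupStateTimeout.NoTimeout) {
        (key: String, rows: Iterator[(String, Long)], state: GroupState[Long]) =>
          val total = state.getOption.getOrElse(0L) + rows.size  // fold this batch into the count
          state.update(total)                                    // persist the per-key state
          Iterator(key -> total)
      }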
- FlatMapGroupsWithStateFunction<K, V, S, R> - Interface in org.apache.spark.api.java.function
-
::Experimental:: Base interface for a map function used in
org.apache.spark.sql.KeyValueGroupedDataset.flatMapGroupsWithState( FlatMapGroupsWithStateFunction, org.apache.spark.sql.streaming.OutputMode, org.apache.spark.sql.Encoder, org.apache.spark.sql.Encoder)
- flatMapSortedGroups(Column[], FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data.
- flatMapSortedGroups(Column[], FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapSortedGroups(Seq<Column>, Function2<K, Iterator<V>, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data.
- flatMapSortedGroups(Seq<Column>, Function2<K, Iterator<V>, IterableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- flatMapToDouble(DoubleFlatMapFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
- flatMapValues(FlatMapFunction<V, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
- flatMapValues(FlatMapFunction<V, U>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
- flatMapValues(Function1<V, IterableOnce<U>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
- flatMapValues(Function1<V, IterableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
- flatten(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a single array from an array of arrays.
- FLOAT - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- FLOAT - Static variable in class org.apache.spark.types.variant.VariantUtil
- FLOAT() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable float type.
- FloatAsIfIntegral$() - Constructor for class org.apache.spark.sql.types.FloatType.FloatAsIfIntegral$
- floatColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- FloatExactNumeric - Class in org.apache.spark.sql.types
- FloatExactNumeric() - Constructor for class org.apache.spark.sql.types.FloatExactNumeric
- FloatParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Float] for Java.
- FloatParam(String, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
- FloatParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
- FloatParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
- FloatParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
- FloatType - Class in org.apache.spark.sql.types
-
The data type representing Float values.
- FloatType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the FloatType object.
- FloatType() - Constructor for class org.apache.spark.sql.types.FloatType
- FloatType.FloatAsIfIntegral - Interface in org.apache.spark.sql.types
- FloatType.FloatAsIfIntegral$ - Class in org.apache.spark.sql.types
- FloatType.FloatIsConflicted - Interface in org.apache.spark.sql.types
- FloatTypeExpression - Class in org.apache.spark.sql.types
- FloatTypeExpression() - Constructor for class org.apache.spark.sql.types.FloatTypeExpression
- floor() - Method in class org.apache.spark.sql.types.Decimal
- floor(String) - Static method in class org.apache.spark.sql.functions
-
Computes the floor of the given column value to 0 decimal places.
- floor(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the floor of the given value of e to 0 decimal places.
- floor(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the floor of the given value of e to scale decimal places.
- floor(Duration) - Method in class org.apache.spark.streaming.Time
- floor(Duration, Time) - Method in class org.apache.spark.streaming.Time
- flush() - Method in class org.apache.spark.serializer.SerializationStream
- flush() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
- FMClassificationModel - Class in org.apache.spark.ml.classification
-
Model produced by
FMClassifier
- FMClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for FMClassifier results for a given model.
- FMClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
FMClassifier results for a given model.
- FMClassificationSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- FMClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for FMClassifier training results.
- FMClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
FMClassifier training results.
- FMClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.FMClassificationTrainingSummaryImpl
- FMClassifier - Class in org.apache.spark.ml.classification
-
Factorization Machines learning algorithm for classification.
- FMClassifier() - Constructor for class org.apache.spark.ml.classification.FMClassifier
- FMClassifier(String) - Constructor for class org.apache.spark.ml.classification.FMClassifier
- FMClassifierParams - Interface in org.apache.spark.ml.classification
-
Params for FMClassifier.
- fMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns f1-measure for a given label (category)
- fMeasure(double, double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns f-measure for a given label (category)
- fMeasureByLabel() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns f1-measure for each label (category).
- fMeasureByLabel(double) - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns f-measure for each label (category).
- fMeasureByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Returns a dataframe with two fields (threshold, F-Measure) curve with beta = 1.0.
- fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- fMeasureByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, F-Measure) curve with beta = 1.0.
- fMeasureByThreshold(double) - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, F-Measure) curve.
- FMRegressionModel - Class in org.apache.spark.ml.regression
-
Model produced by FMRegressor.
- FMRegressor - Class in org.apache.spark.ml.regression
-
Factorization Machines learning algorithm for regression.
- FMRegressor() - Constructor for class org.apache.spark.ml.regression.FMRegressor
- FMRegressor(String) - Constructor for class org.apache.spark.ml.regression.FMRegressor
- FMRegressorParams - Interface in org.apache.spark.ml.regression
-
Params for FMRegressor
- fold(T, Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
- fold(T, Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
- foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
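A small Scala sketch of fold and foldByKey with a neutral zero value, as described above; the RDDs are hypothetical:

    // assuming an existing SparkContext `sc`
    val nums = sc.parallelize(Seq(1, 2, 3, 4))
    val total = nums.fold(0)(_ + _)                  // 10; 0 is the neutral "zero value"

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    val perKey = pairs.foldByKey(0)(_ + _).collect() // ("a", 3), ("b", 4); order may vary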
- foldCol() - Method in class org.apache.spark.ml.tuning.CrossValidator
- foldCol() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- foldCol() - Method in interface org.apache.spark.ml.tuning.CrossValidatorParams
-
Param for the column name of user specified fold number.
- forall(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns whether a predicate holds for every element in the array.
- forceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormula
- forceIndexLabel() - Method in interface org.apache.spark.ml.feature.RFormulaBase
-
Forces the label to be indexed whether it is numeric or string type.
- forceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormulaModel
- foreach(ForeachFunction<T>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Runs func on each element of this Dataset.
- foreach(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Applies a function f to all elements of this RDD.
- foreach(ForeachWriter<T>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Sets the output of the streaming query to be processed using the provided writer object.
- foreach(KVStoreView<T>, Function1<T, BoxedUnit>) - Static method in class org.apache.spark.status.KVUtils
-
Applies a function f to all values produced by KVStoreView.
- foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
-
Applies a function f to all elements of this RDD.
- foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.sql.api.Dataset
-
Applies a function f to all rows.
- foreach(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Vector
-
Applies a function f to all the elements of dense and sparse vector.
- foreach(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Applies a function f to all the elements of dense and sparse vector.
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Vector
-
Applies a function f to all the active elements of dense and sparse vector.
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Applies a function f to all the active elements of dense and sparse vector.
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.DenseMatrix
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Applies a function f to all the active elements of dense and sparse matrix.
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
Applies a function f to all the active elements of dense and sparse matrix.
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Applies a function f to all the active elements of dense and sparse matrix.
- foreachAsync(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.
- foreachAsync(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Applies a function f to all elements of this RDD.
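A tiny Scala sketch of the foreachActive methods indexed above, which visit only the stored entries of a sparse vector:

    import org.apache.spark.ml.linalg.Vectors

    // only the stored entries (indices 1 and 3) are visited
    val v = Vectors.sparse(5, Array(1, 3), Array(10.0, 30.0))
    v.foreachActive((i, value) => println(s"index $i -> $value"))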
- foreachBatch(VoidFunction2<Dataset<T>, Long>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
:: Experimental ::
- foreachBatch(Function2<Dataset<T>, Object, BoxedUnit>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
:: Experimental ::
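A minimal Scala sketch of DataStreamWriter.foreachBatch above; the streaming DataFrame and sink path are hypothetical:

    // assuming a streaming DataFrame `stream`; each micro-batch arrives as a regular DataFrame
    val query = stream.writeStream
      .foreachBatch { (batch: org.apache.spark.sql.DataFrame, batchId: Long) =>
        batch.write.mode("append").parquet("/tmp/out")  // hypothetical sink path
      }
      .start()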
- ForeachFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's foreach function.
- foreachNonZero(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Vector
-
Applies a function f to all the non-zero elements of dense and sparse vector.
- foreachNonZero(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Applies a function f to all the non-zero elements of dense and sparse vector.
- foreachPartition(ForeachPartitionFunction<T>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Runs func on each partition of this Dataset.
- foreachPartition(ForeachPartitionFunction<T>) - Method in class org.apache.spark.sql.Dataset
- foreachPartition(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Applies a function f to each partition of this RDD.
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
-
Applies a function f to each partition of this RDD.
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.sql.api.Dataset
-
Applies a function f to each partition of this Dataset.
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.sql.Dataset
- foreachPartitionAsync(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.
- foreachPartitionAsync(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Applies a function f to each partition of this RDD.
- ForeachPartitionFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's foreachPartition function.
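A small Scala sketch of foreachPartition, which amortizes per-partition setup cost such as opening a connection; the client helper is hypothetical:

    // assuming a Dataset[String] `ds` and a hypothetical `openConnection()` client helper
    ds.foreachPartition { rows: Iterator[String] =>
      val conn = openConnection()     // one connection per partition, not per row
      try rows.foreach(conn.send)     // `send` belongs to the hypothetical client
      finally conn.close()
    }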
- foreachRDD(VoidFunction<R>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Apply a function to each RDD in this DStream.
- foreachRDD(VoidFunction2<R, Time>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Apply a function to each RDD in this DStream.
- foreachRDD(Function1<RDD<T>, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Apply a function to each RDD in this DStream.
- foreachRDD(Function2<RDD<T>, Time, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Apply a function to each RDD in this DStream.
- ForeachWriter<T> - Class in org.apache.spark.sql
-
The abstract class for writing custom logic to process data generated by a query.
- ForeachWriter() - Constructor for class org.apache.spark.sql.ForeachWriter
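A minimal Scala sketch of a custom ForeachWriter used with DataStreamWriter.foreach (both indexed above); the row handling is hypothetical:

    import org.apache.spark.sql.{ForeachWriter, Row}

    val writer = new ForeachWriter[Row] {
      def open(partitionId: Long, epochId: Long): Boolean = true  // accept all partitions/epochs
      def process(row: Row): Unit = println(row)                  // hypothetical per-row handling
      def close(errorOrNull: Throwable): Unit = ()                // release resources here
    }
    // assuming a streaming DataFrame `stream`
    val query = stream.writeStream.foreach(writer).start()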
- foreachWriterAbortedDueToTaskFailureError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- format() - Method in class org.apache.spark.ml.clustering.InternalKMeansModelWriter
- format() - Method in class org.apache.spark.ml.clustering.PMMLKMeansModelWriter
- format() - Method in class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
- format() - Method in class org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
- format() - Method in interface org.apache.spark.ml.util.MLFormatRegister
-
The string that represents the format that this format provider uses.
- format(String) - Method in class org.apache.spark.ml.util.GeneralMLWriter
-
Specifies the format of ML export (e.g. "pmml", "internal", or the fully qualified class name for export).
- format(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Specifies the input data source format.
- format(String) - Method in class org.apache.spark.sql.DataFrameReader
- format(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the underlying output data source.
- format(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Specifies the input data source format.
- format(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Specifies the underlying output data source.
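A small Scala sketch of format on the batch reader and writer indexed above; the SparkSession and paths are hypothetical:

    // assuming an existing SparkSession `spark`; paths are hypothetical
    val df = spark.read.format("csv").option("header", "true").load("/data/in.csv")
    df.write.format("parquet").mode("overwrite").save("/data/out")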
- format_number(Column, int) - Static method in class org.apache.spark.sql.functions
-
Formats numeric column x to a format like '#,###,###.##', rounded to d decimal places with HALF_EVEN round mode, and returns the result as a string column.
- format_string(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Formats the arguments in printf-style and returns the result as a string column.
- format_string(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Formats the arguments in printf-style and returns the result as a string column.
- formatBatchTime(long, long, boolean, TimeZone) - Static method in class org.apache.spark.ui.UIUtils
-
If batchInterval is less than 1 second, format batchTime with milliseconds.
- formatDate(long) - Static method in class org.apache.spark.ui.UIUtils
- formatDate(Date) - Static method in class org.apache.spark.ui.UIUtils
- formatDuration(long) - Static method in class org.apache.spark.ui.UIUtils
- formatDurationVerbose(long) - Static method in class org.apache.spark.ui.UIUtils
-
Generate a verbose human-readable string representing a duration such as "5 second 35 ms"
- formatImportJavaScript(HttpServletRequest, String, Seq<String>) - Static method in class org.apache.spark.ui.UIUtils
- formatNumber(double) - Static method in class org.apache.spark.ui.UIUtils
-
Generate a human-readable string representing a number (e.g. 100 K).
- formula() - Method in class org.apache.spark.ml.feature.RFormula
- formula() - Method in interface org.apache.spark.ml.feature.RFormulaBase
-
R formula parameter.
- formula() - Method in class org.apache.spark.ml.feature.RFormulaModel
- forNumber(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- forNumber(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- forNumber(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
- forNumber(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- forward(DenseMatrix<Object>, boolean) - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Forward propagation
- forwardToFSPrefix() - Static method in class org.apache.spark.sql.artifact.ArtifactManager
- foundDifferentWindowFunctionTypeError(Seq<NamedExpression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- foundDuplicateFieldInCaseInsensitiveModeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- foundDuplicateFieldInFieldIdLookupModeError(int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- foundMultipleDataSources(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- foundMultipleXMLDataSourceError(String, Seq<String>, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- foundNullValueForNotNullableFieldError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- foundRecursionInProtobufSchema(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- FPGA() - Static method in class org.apache.spark.resource.ResourceUtils
- FPGrowth - Class in org.apache.spark.ml.fpm
-
A parallel FP-growth algorithm to mine frequent itemsets.
- FPGrowth - Class in org.apache.spark.mllib.fpm
-
A parallel FP-growth algorithm to mine frequent itemsets.
- FPGrowth() - Constructor for class org.apache.spark.ml.fpm.FPGrowth
- FPGrowth() - Constructor for class org.apache.spark.mllib.fpm.FPGrowth
-
Constructs a default instance with default parameters {minSupport: 0.3, numPartitions: same as the input data}.
- FPGrowth(String) - Constructor for class org.apache.spark.ml.fpm.FPGrowth
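A minimal Scala sketch of the ml FPGrowth API indexed above; the transactions DataFrame and column name are hypothetical:

    import org.apache.spark.ml.fpm.FPGrowth

    // assuming a DataFrame `transactions` with an array<string> column "items"
    val fp = new FPGrowth().setItemsCol("items").setMinSupport(0.3).setMinConfidence(0.6)
    val model = fp.fit(transactions)
    model.freqItemsets.show()       // frequent itemsets and their counts
    model.associationRules.show()   // rules derived from those itemsets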
- FPGrowth.FreqItemset<Item> - Class in org.apache.spark.mllib.fpm
-
Frequent itemset.
- FPGrowthModel - Class in org.apache.spark.ml.fpm
-
Model fitted by FPGrowth.
- FPGrowthModel<Item> - Class in org.apache.spark.mllib.fpm
-
Model trained by FPGrowth, which holds frequent itemsets.
- FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, Map<Item, Object>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel
- FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel
- FPGrowthModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm
- FPGrowthParams - Interface in org.apache.spark.ml.fpm
-
Common params for FPGrowth and FPGrowthModel
- fpr() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- fpr() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- fpr() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
The highest p-value for features to be kept.
- fpr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- FRACTION_CACHED() - Static method in class org.apache.spark.ui.storage.ToolTips
- fragment() - Method in interface org.apache.spark.QueryContext
- freq() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
- freq() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
- freqItems(String[]) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Finding frequent items for columns, possibly with false positives.
- freqItems(String[]) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- freqItems(String[], double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Finding frequent items for columns, possibly with false positives.
- freqItems(String[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- freqItems(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
(Scala-specific) Finding frequent items for columns, possibly with false positives.
- freqItems(Seq<String>) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- freqItems(Seq<String>, double) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
(Scala-specific) Finding frequent items for columns, possibly with false positives.
- freqItems(Seq<String>, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- FreqItemset(Object, long) - Constructor for class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
- freqItemsets() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- freqItemsets() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
- FreqSequence(Object[], long) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
- freqSequences() - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel
- from_avro(Column, String) - Static method in class org.apache.spark.sql.avro.functions
-
Converts a binary column of Avro format into its corresponding catalyst value.
- from_avro(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.avro.functions
-
Converts a binary column of Avro format into its corresponding catalyst value.
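A minimal sketch of from_avro (the DataFrame df, its binary column "value", and the Avro schema are hypothetical):

    import org.apache.spark.sql.avro.functions.from_avro
    import org.apache.spark.sql.functions.col

    // JSON-format Avro schema describing the binary payload.
    val jsonFormatSchema = """{"type":"record","name":"rec","fields":[{"name":"id","type":"long"}]}"""
    val parsed = df.select(from_avro(col("value"), jsonFormatSchema).as("rec"))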
- from_csv(Column, Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a CSV string into a StructType with the specified schema.
- from_csv(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a CSV string into a StructType with the specified schema.
- FROM_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- FROM_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
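A minimal sketch of the from_csv variants listed above (the DataFrame df and its CSV string column "csv" are hypothetical):

    import org.apache.spark.sql.functions.{col, from_csv}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

    val schema = new StructType().add("a", IntegerType).add("b", StringType)
    // Parse rows such as "1,foo" into a struct column.
    val parsed = df.select(from_csv(col("csv"), schema, Map.empty[String, String]).as("parsed"))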
- from_json(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
- from_json(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
- from_json(Column, Column) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType of StructTypes with the specified schema.
- from_json(Column, Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType of StructTypes with the specified schema.
- from_json(Column, DataType) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
- from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
- from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a MapType with StringType as keys type, StructType or ArrayType with the specified schema.
- from_json(Column, StructType) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a JSON string into a StructType with the specified schema.
- from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a StructType with the specified schema.
- from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a StructType with the specified schema.
- from_unixtime(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the yyyy-MM-dd HH:mm:ss format.
- from_unixtime(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the given format.
- from_utc_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a time in UTC, and renders that time as a timestamp in the given time zone.
- from_utc_timestamp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a time in UTC, and renders that time as a timestamp in the given time zone.
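A minimal sketch of the time conversions above (the DataFrame df and its columns "epoch" and "ts" are hypothetical):

    import org.apache.spark.sql.functions.{col, from_unixtime, from_utc_timestamp}

    df.select(
      from_unixtime(col("epoch")).as("ts_string"),  // seconds since the epoch -> "yyyy-MM-dd HH:mm:ss"
      from_utc_timestamp(col("ts"), "America/Los_Angeles").as("local")  // render a UTC timestamp in the given zone
    )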
- from_xml(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing an XML string into a StructType with the specified schema.
- from_xml(Column, Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing an XML string into a StructType with the specified schema.
- from_xml(Column, Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing an XML string into a StructType with the specified schema.
- from_xml(Column, StructType) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing an XML string into the data type corresponding to the specified schema.
- from_xml(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing an XML string into the data type corresponding to the specified schema.
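The from_json and from_xml families above all follow the same pattern; a minimal sketch with from_json (the DataFrame df and its JSON string column "json" are hypothetical):

    import org.apache.spark.sql.functions.{col, from_json}
    import org.apache.spark.sql.types.{LongType, StringType, StructType}

    val schema = new StructType().add("id", LongType).add("name", StringType)
    // Parse strings such as {"id":1,"name":"a"} into a struct column.
    val parsed = df.select(from_json(col("json"), schema).as("parsed"))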
- fromArrowField(Field) - Static method in class org.apache.spark.sql.util.ArrowUtils
- fromArrowSchema(Schema) - Static method in class org.apache.spark.sql.util.ArrowUtils
- fromArrowType(ArrowType) - Static method in class org.apache.spark.sql.util.ArrowUtils
- fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix from Coordinate List (COO) format.
- fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix from Coordinate List (COO) format.
- fromDDL(String) - Static method in class org.apache.spark.sql.types.DataType
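A minimal sketch of fromCOO, listed above (values are arbitrary):

    import org.apache.spark.ml.linalg.SparseMatrix

    // 3x3 sparse matrix from (rowIndex, colIndex, value) triples.
    val m = SparseMatrix.fromCOO(3, 3, Seq((0, 0, 1.0), (1, 2, 2.0), (2, 1, 3.0)))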
- fromDDL(String) - Static method in class org.apache.spark.sql.types.StructType
-
Creates StructType for a given DDL-formatted string, which is a comma separated list of field definitions, e.g., a INT, b STRING.
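For example, a sketch of StructType.fromDDL:

    import org.apache.spark.sql.types.StructType

    val schema = StructType.fromDDL("a INT, b STRING")
    // schema.fieldNames sameElements Array("a", "b")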
- fromDecimal(Object) - Static method in class org.apache.spark.sql.types.Decimal
- fromDStream(DStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
Convert a Scala DStream to a Java-friendly JavaDStream.
- fromEdgePartitions(RDD<Tuple2<Object, EdgePartition<ED, VD>>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from EdgePartitions, setting referenced vertices to defaultVertexAttr.
- fromEdges(EdgeRDD<?>, int, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD containing all vertices referred to in edges.
- fromEdges(RDD<Edge<ED>>, ClassTag<ED>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
Creates an EdgeRDD from a set of edges.
- fromEdges(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of edges.
- fromEdgeTuples(RDD<Tuple2<Object, Object>>, VD, Option<PartitionStrategy>, StorageLevel, StorageLevel, ClassTag<VD>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of edges encoded as vertex id pairs.
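A minimal sketch of Graph.fromEdgeTuples (an active SparkContext sc is assumed):

    import org.apache.spark.graphx.Graph

    // Edges as (srcId, dstId) pairs; vertices not given explicitly get the default attribute 1.
    val rawEdges = sc.parallelize(Seq((1L, 2L), (2L, 3L), (3L, 1L)))
    val graph = Graph.fromEdgeTuples(rawEdges, defaultValue = 1)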
- fromExistingRDDs(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with the same replicated vertex type as the vertices.
- fromFileStatus(FileStatus) - Static method in class org.apache.spark.paths.SparkPath
- fromInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
Convert a Scala InputDStream of pairs to a Java-friendly JavaPairInputDStream.
- fromInputDStream(InputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
Convert a Scala InputDStream to a Java-friendly JavaInputDStream.
- fromInt(int) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- fromInt(int) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- fromInt(int) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- fromInt(int) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- fromInt(int) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- fromInt(int) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- fromInt(int) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- fromInt(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- fromInt(int) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- fromInt(int) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- fromJavaDStream(JavaDStream<Tuple2<K, V>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
- fromJavaRDD(JavaRDD<Tuple2<K, V>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
Convert a JavaRDD of key-value pairs to JavaPairRDD.
- fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Parses the JSON representation of a Matrix into a Matrix.
- fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
-
Parses the JSON representation of a vector into a Vector.
- fromJson(String) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Parses the JSON representation of a vector into a Vector.
- fromJson(String) - Static method in class org.apache.spark.sql.types.DataType
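For example, Vectors.fromJson round-trips the JSON produced by Vector.toJson:

    import org.apache.spark.mllib.linalg.Vectors

    val v = Vectors.dense(1.0, 2.0, 3.0)
    val restored = Vectors.fromJson(v.toJson)  // restored == v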
- fromJson(String) - Static method in class org.apache.spark.sql.types.Metadata
-
Creates a Metadata instance from JSON.
- fromKinesisInitialPosition(InitialPositionInStream) - Static method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions
-
Returns an instance of KinesisInitialPosition based on the passed InitialPositionInStream.
- fromMetadata(Metadata) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
-
Creates an Attribute from a Metadata instance.
- fromML(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Convert new linalg type to spark.mllib type.
- fromML(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseVector
-
Convert new linalg type to spark.mllib type.
- fromML(Matrix) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Convert new linalg type to spark.mllib type.
- fromML(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Convert new linalg type to spark.mllib type.
- fromML(SparseVector) - Static method in class org.apache.spark.mllib.linalg.SparseVector
-
Convert new linalg type to spark.mllib type.
- fromML(Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Convert new linalg type to spark.mllib type.
- fromName(String) - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Gets the AttributeType object from its name.
- fromNullable(T) - Static method in class org.apache.spark.api.java.Optional
- fromOld(Node, Map<Object, Object>) - Static method in class org.apache.spark.ml.tree.Node
-
Create a new Node from the old Node format, recursively creating child nodes as needed.
- fromPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
- fromPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
-
Implicit conversion from a pair RDD to MLPairRDDFunctions.
- fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
-
Gets the GeneralizedLinearRegression.Family object based on param family and variancePower.
- fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
-
Gets the GeneralizedLinearRegression.Link object based on param family, link and linkPower.
- fromPath(Path) - Static method in class org.apache.spark.paths.SparkPath
- fromPathString(String) - Static method in class org.apache.spark.paths.SparkPath
-
Creates a SparkPath from a hadoop Path string.
- fromRdd(RDD<?>) - Static method in class org.apache.spark.storage.RDDInfo
- fromRDD(RDD<Object>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
- fromRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
- fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.api.java.JavaRDD
- fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.mllib.rdd.RDDFunctions
-
Implicit conversion from an RDD to RDDFunctions.
- fromReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
Convert a Scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
- fromReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
Convert a Scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
- fromSparkContext(SparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
- fromStage(Stage, int, Option<Object>, TaskMetrics, Seq<Seq<TaskLocation>>, int) - Static method in class org.apache.spark.scheduler.StageInfo
-
Construct a StageInfo from a Stage.
- fromString(String) - Static method in enum class org.apache.spark.JobExecutionStatus
- fromString(String) - Static method in class org.apache.spark.mllib.tree.impurity.Impurities
- fromString(String) - Static method in class org.apache.spark.mllib.tree.loss.Losses
- fromString(String) - Static method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- fromString(String) - Static method in enum class org.apache.spark.status.api.v1.ApplicationStatus
- fromString(String) - Static method in enum class org.apache.spark.status.api.v1.StageStatus
- fromString(String) - Static method in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
- fromString(String) - Static method in enum class org.apache.spark.status.api.v1.TaskSorting
- fromString(String) - Static method in enum class org.apache.spark.status.api.v1.TaskStatus
- fromString(String) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi :: Return the StorageLevel object with the specified name.
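For example (the RDD rdd is hypothetical):

    import org.apache.spark.storage.StorageLevel

    val level = StorageLevel.fromString("MEMORY_AND_DISK")
    rdd.persist(level)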
- fromString(String) - Static method in enum class org.apache.spark.storage.StorageLevelMapper
- fromString(UTF8String) - Static method in class org.apache.spark.sql.types.Decimal
- fromStringANSI(UTF8String, DecimalType, QueryContext) - Static method in class org.apache.spark.sql.types.Decimal
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.Attribute
- fromStructField(StructField) - Method in interface org.apache.spark.ml.attribute.AttributeFactory
-
Creates an Attribute from a StructField instance.
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group from a StructField instance.
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.NominalAttribute
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.NumericAttribute
- fromToIntervalUnsupportedError(String, String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- fromUri(URI) - Static method in class org.apache.spark.paths.SparkPath
- fromUrlString(String) - Static method in class org.apache.spark.paths.SparkPath
-
Creates a SparkPath from a url-encoded string.
- fullOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
- fullOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullStackTrace() - Method in class org.apache.spark.ExceptionFailure
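A minimal sketch of fullOuterJoin on pair RDDs (an active SparkContext sc is assumed):

    val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
    val right = sc.parallelize(Seq(("b", "x"), ("c", "y")))
    // Keys from either side are kept; each side's value is wrapped in Option:
    // ("a", (Some(1), None)), ("b", (Some(2), Some("x"))), ("c", (None, Some("y")))
    val joined = left.fullOuterJoin(right)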
- fullyQuoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- funcBuildError(String, Exception) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- function(Function3<KeyType, Optional<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
-
Create a StateSpec for setting all the specifications of the mapWithState operation on a JavaPairDStream.
- function(Function4<Time, KeyType, Optional<ValueType>, State<StateType>, Optional<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
-
Create a StateSpec for setting all the specifications of the mapWithState operation on a JavaPairDStream.
- function(Function3<KeyType, Option<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
-
Create a StateSpec for setting all the specifications of the mapWithState operation on a pair DStream.
- function(Function4<Time, KeyType, Option<ValueType>, State<StateType>, Option<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
-
Create a StateSpec for setting all the specifications of the mapWithState operation on a pair DStream.
- Function - Class in org.apache.spark.sql.catalog
-
A user-defined function in Spark, as returned by listFunctions method in Catalog.
- Function<T1, R> - Interface in org.apache.spark.api.java.function
-
Base interface for functions whose return types do not create special RDDs.
- Function - Interface in org.apache.spark.sql.connector.catalog.functions
-
Base class for user-defined functions.
- Function(String, String, String[], String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Function
- Function(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Function
- Function0<R> - Interface in org.apache.spark.api.java.function
-
A zero-argument function that returns an R.
- Function2<T1, T2, R> - Interface in org.apache.spark.api.java.function
-
A two-argument function that takes arguments of type T1 and T2 and returns an R.
- Function3<T1, T2, T3, R> - Interface in org.apache.spark.api.java.function
-
A three-argument function that takes arguments of type T1, T2 and T3 and returns an R.
- Function4<T1, T2, T3, T4, R> - Interface in org.apache.spark.api.java.function
-
A four-argument function that takes arguments of type T1, T2, T3 and T4 and returns an R.
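The StateSpec.function entries above take mapping functions of these arities; a minimal Scala sketch of a running count via mapWithState (the DStream pairs of (String, Int) is hypothetical):

    import org.apache.spark.streaming.{State, StateSpec}

    val spec = StateSpec.function((word: String, one: Option[Int], state: State[Long]) => {
      val sum = one.getOrElse(0) + state.getOption.getOrElse(0L)
      state.update(sum)   // persist the new running count for this key
      (word, sum)
    })
    val counts = pairs.mapWithState(spec)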
- functionAlreadyExistsError(FunctionIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- functionCannotProcessInputError(UnboundFunction, Seq<Expression>, UnsupportedOperationException) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- FunctionCatalog - Interface in org.apache.spark.sql.connector.catalog
-
Catalog methods for working with Functions.
- functionExists(String) - Method in class org.apache.spark.sql.api.Catalog
-
Check if the function with the specified name exists.
- functionExists(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Check if the function with the specified name exists in the specified database under the Hive Metastore.
- functionExists(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- functionExists(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.FunctionCatalog
-
Returns true if the function exists, false otherwise.
- FunctionIdentifierHelper(FunctionIdentifier) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.FunctionIdentifierHelper
- functionNameUnsupportedError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- functions - Class in org.apache.spark.ml
- functions - Class in org.apache.spark.sql.avro
- functions - Class in org.apache.spark.sql
-
Commonly used functions available for DataFrame operations.
- functions() - Constructor for class org.apache.spark.ml.functions
- functions() - Constructor for class org.apache.spark.sql.avro.functions
- functions() - Constructor for class org.apache.spark.sql.functions
- functions() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
List the user-defined functions in jdbc dialect.
- functions() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- functions.partitioning$ - Class in org.apache.spark.sql
- functionWithUnsupportedSyntaxError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- future() - Method in class org.apache.spark.sql.Observation
-
Future holding the (yet to be completed) observation.
- FutureAction<T> - Interface in org.apache.spark
-
A future for the result of an action to support cancellation.
- futureExecutionContext() - Static method in class org.apache.spark.rdd.AsyncRDDActions
- FValueTest - Class in org.apache.spark.ml.stat
-
FValue test for continuous data.
- FValueTest() - Constructor for class org.apache.spark.ml.stat.FValueTest
- fwe() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- fwe() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- fwe() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
The upper bound of the expected family-wise error rate.
- fwe() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
G
- gain() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- gain() - Method in class org.apache.spark.ml.tree.InternalNode
- gain() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- Gamma$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- gamma1() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- gamma2() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- gamma6() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- gamma7() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- GammaGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d.
- GammaGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.GammaGenerator
- gammaJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaRDD with the default number of partitions and the default seed.
- gammaJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaRDD with the default seed.
- gammaJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.gammaRDD.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaVectorRDD with the default number of partitions and the default seed.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaVectorRDD with the default seed.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.gammaVectorRDD.
- gammaRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the gamma distribution with the input shape and scale.
- gammaVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the gamma distribution with the input shape and scale.
- gapply(RelationalGroupedDataset, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
The helper function for gapply() on R side.
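A minimal sketch of the RandomRDDs gamma generators listed above (an active SparkContext sc is assumed):

    import org.apache.spark.mllib.random.RandomRDDs

    // 1000 i.i.d. samples from Gamma(shape = 2.0, scale = 3.0) in 4 partitions with a fixed seed.
    val g = RandomRDDs.gammaRDD(sc, 2.0, 3.0, 1000L, 4, 11L)
    println(g.mean())  // should be close to shape * scale = 6.0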
- gaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Indicates whether regex splits on gaps (true) or matches tokens (false).
- GarbageCollectionMetrics - Class in org.apache.spark.metrics
- GarbageCollectionMetrics() - Constructor for class org.apache.spark.metrics.GarbageCollectionMetrics
- GAUGE() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType
- Gaussian$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- GaussianMixture - Class in org.apache.spark.ml.clustering
-
Gaussian Mixture clustering.
- GaussianMixture - Class in org.apache.spark.mllib.clustering
-
This class performs expectation maximization for multivariate Gaussian Mixture Models (GMMs).
- GaussianMixture() - Constructor for class org.apache.spark.ml.clustering.GaussianMixture
- GaussianMixture() - Constructor for class org.apache.spark.mllib.clustering.GaussianMixture
-
Constructs a default instance.
- GaussianMixture(String) - Constructor for class org.apache.spark.ml.clustering.GaussianMixture
- GaussianMixtureModel - Class in org.apache.spark.ml.clustering
-
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points are drawn from each Gaussian i with probability weights(i).
- GaussianMixtureModel - Class in org.apache.spark.mllib.clustering
-
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points are drawn from each Gaussian i=1..k with probability w(i); mu(i) and sigma(i) are the respective mean and covariance for each Gaussian distribution i=1..k.
- GaussianMixtureModel(double[], MultivariateGaussian[]) - Constructor for class org.apache.spark.mllib.clustering.GaussianMixtureModel
- GaussianMixtureParams - Interface in org.apache.spark.ml.clustering
-
Common params for GaussianMixture and GaussianMixtureModel
- GaussianMixtureSummary - Class in org.apache.spark.ml.clustering
-
Summary of GaussianMixture.
- gaussians() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- gaussians() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
- gaussiansDF() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
Retrieve Gaussian distributions as a DataFrame.
- GBTClassificationModel - Class in org.apache.spark.ml.classification
-
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting) model for classification.
- GBTClassificationModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.classification.GBTClassificationModel
-
Construct a GBTClassificationModel
- GBTClassifier - Class in org.apache.spark.ml.classification
-
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting) learning algorithm for classification.
- GBTClassifier() - Constructor for class org.apache.spark.ml.classification.GBTClassifier
- GBTClassifier(String) - Constructor for class org.apache.spark.ml.classification.GBTClassifier
- GBTClassifierParams - Interface in org.apache.spark.ml.tree
- GBTParams - Interface in org.apache.spark.ml.tree
-
Parameters for Gradient-Boosted Tree algorithms.
- GBTRegressionModel - Class in org.apache.spark.ml.regression
-
Gradient-Boosted Trees (GBTs) model for regression.
- GBTRegressionModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.regression.GBTRegressionModel
-
Construct a GBTRegressionModel
- GBTRegressor - Class in org.apache.spark.ml.regression
-
Gradient-Boosted Trees (GBTs) learning algorithm for regression.
- GBTRegressor() - Constructor for class org.apache.spark.ml.regression.GBTRegressor
- GBTRegressor(String) - Constructor for class org.apache.spark.ml.regression.GBTRegressor
- GBTRegressorParams - Interface in org.apache.spark.ml.tree
- GC_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- GC_TIME() - Static method in class org.apache.spark.ui.ToolTips
- gemm(double, Matrix, DenseMatrix, double, double[]) - Static method in class org.apache.spark.ml.linalg.BLAS
-
CValues[0: A.numRows * B.numCols] := alpha * A * B + beta * CValues[0: A.numRows * B.numCols]
- gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
-
C := alpha * A * B + beta * C
- gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
C := alpha * A * B + beta * C
- gemv(double, Matrix, double[], double, double[]) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y[0: A.numRows] := alpha * A * x[0: A.numCols] + beta * y[0: A.numRows]
- gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y := alpha * A * x + beta * y
- gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y := alpha * A * x + beta * y
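A sketch of the gemv contract above (note these BLAS helpers may not be accessible outside Spark's own packages in some versions):

    import org.apache.spark.ml.linalg.{BLAS, DenseVector, Matrices}

    // y := alpha * A * x + beta * y, updating y in place.
    val A = Matrices.dense(2, 2, Array(1.0, 0.0, 0.0, 1.0))  // 2x2 identity, column-major values
    val x = new DenseVector(Array(1.0, 2.0))
    val y = new DenseVector(Array(0.0, 0.0))
    BLAS.gemv(1.0, A, x, 0.0, y)  // y is now (1.0, 2.0)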
- GeneralAggregateFunc - Class in org.apache.spark.sql.connector.expressions.aggregate
-
The general implementation of AggregateFunc, which contains the upper-cased function name, the `isDistinct` flag and all the inputs.
- GeneralAggregateFunc(String, boolean, Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- GeneralAggregateFunc(String, boolean, Expression[], SortValue[]) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- GeneralizedLinearAlgorithm<M extends GeneralizedLinearModel> - Class in org.apache.spark.mllib.regression
-
GeneralizedLinearAlgorithm implements methods to train a Generalized Linear Model (GLM).
- GeneralizedLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
- GeneralizedLinearModel - Class in org.apache.spark.mllib.regression
-
GeneralizedLinearModel (GLM) represents a model trained using GeneralizedLinearAlgorithm.
- GeneralizedLinearModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearModel
- GeneralizedLinearRegression - Class in org.apache.spark.ml.regression
-
Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family).
- GeneralizedLinearRegression() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression
- GeneralizedLinearRegression(String) - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression
- GeneralizedLinearRegression.Binomial$ - Class in org.apache.spark.ml.regression
-
Binomial exponential family distribution.
- GeneralizedLinearRegression.CLogLog$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Family$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.FamilyAndLink$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Gamma$ - Class in org.apache.spark.ml.regression
-
Gamma exponential family distribution.
- GeneralizedLinearRegression.Gaussian$ - Class in org.apache.spark.ml.regression
-
Gaussian exponential family distribution.
- GeneralizedLinearRegression.Identity$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Inverse$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Link$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Log$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Logit$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Poisson$ - Class in org.apache.spark.ml.regression
-
Poisson exponential family distribution.
- GeneralizedLinearRegression.Probit$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Sqrt$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegression.Tweedie$ - Class in org.apache.spark.ml.regression
- GeneralizedLinearRegressionBase - Interface in org.apache.spark.ml.regression
-
Params for Generalized Linear Regression.
- GeneralizedLinearRegressionModel - Class in org.apache.spark.ml.regression
-
Model produced by GeneralizedLinearRegression.
- GeneralizedLinearRegressionSummary - Class in org.apache.spark.ml.regression
-
Summary of GeneralizedLinearRegression model and predictions.
- GeneralizedLinearRegressionTrainingSummary - Class in org.apache.spark.ml.regression
-
Summary of GeneralizedLinearRegression fitting and model.
- GeneralMLWritable - Interface in org.apache.spark.ml.util
-
Trait for classes that provide GeneralMLWriter.
- GeneralMLWriter - Class in org.apache.spark.ml.util
-
An ML Writer which delegates based on the requested format.
- GeneralMLWriter(PipelineStage) - Constructor for class org.apache.spark.ml.util.GeneralMLWriter
- GeneralScalarExpression - Class in org.apache.spark.sql.connector.expressions
-
The general representation of SQL scalar expressions, which contains the upper-cased expression name and all the children expressions.
- GeneralScalarExpression(String, Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
- generateAssociationRules(double) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
Generates association rules for the Items in FPGrowthModel.freqItemsets().
- generateExtendedInfo(SparkPlan) - Method in interface org.apache.spark.sql.ExtendedExplainGenerator
- generateKMeansRDD(SparkContext, int, int, int, double, int) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
-
Generate an RDD containing test data for KMeans.
- generateLinearInput(double, double[], double[], double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
- generateLinearInput(double, double[], double[], double[], int, int, double, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
- generateLinearInput(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
For compatibility, the generated data without specifying the mean and variance will have zero mean and variance of (1.0/3.0), since the original output range is [-1, 1] with uniform distribution, and the variance of the uniform distribution is (b - a)^2 / 12, which will be (1.0/3.0).
- generateLinearInputAsList(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
Return a Java List of synthetic data randomly generated according to a multi collinear model.
- generateLinearRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
Generate an RDD containing sample data for Linear Regression models - including Ridge, Lasso, and unregularized variants.
- generateLogisticRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
-
Generate an RDD containing test data for LogisticRegression.
- generateRandomEdges(int, int, int, long) - Static method in class org.apache.spark.graphx.util.GraphGenerators
- generateRolledOverFileSuffix() - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Get the desired name of the rollover file
- generationExpression() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the generation expression of this table column.
- generatorNotExpectedError(FunctionIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- generatorOutsideSelectError(LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- GEOGRAPHY() - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- GEOMETRY() - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- geq(Object) - Method in class org.apache.spark.sql.Column
-
Greater than or equal to an expression.
- get() - Method in class org.apache.spark.api.java.Optional
- get() - Static method in class org.apache.spark.BarrierTaskContext
-
:: Experimental :: Returns the currently active BarrierTaskContext.
- get() - Method in interface org.apache.spark.FutureAction
-
Blocks and returns the result of this job.
- get() - Static method in class org.apache.spark.SparkEnv
-
Returns the SparkEnv.
- get() - Method in interface org.apache.spark.sql.connector.read.PartitionReader
-
Return the current record.
- get() - Method in class org.apache.spark.sql.Observation
-
(Scala-specific) Get the observed metrics.
- get() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the state value if it exists, or throw NoSuchElementException.
- get() - Method in interface org.apache.spark.sql.streaming.ListState
-
Get the state value.
- get() - Method in interface org.apache.spark.sql.streaming.ValueState
-
Get the state value if it exists
- get() - Method in class org.apache.spark.streaming.State
-
Get the state if it exists, otherwise it will throw java.util.NoSuchElementException.
- get() - Static method in class org.apache.spark.TaskContext
-
Return the currently active TaskContext.
- get(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- get(int, DataType) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- get(int, DataType) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- get(int, DataType) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- get(long) - Static method in class org.apache.spark.util.AccumulatorContext
-
Returns the AccumulatorV2 registered with the given ID, if any.
- get(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
- get(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- get(String) - Method in class org.apache.spark.SparkConf
-
Get a parameter; throws a NoSuchElementException if it's not set
- get(String) - Static method in class org.apache.spark.SparkFiles
-
Get the absolute path of a file added through SparkContext.addFile().
- get(String) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
-
Fetch the JdbcDialect class corresponding to a given database url.
- get(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of Spark runtime configuration property for the given key.
- get(String) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns the query if there is an active query with the given id, or null.
- get(String, String) - Method in class org.apache.spark.SparkConf
-
Get a parameter, falling back to a default if not set
- get(String, String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of Spark runtime configuration property for the given key.
- get(UUID) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns the query if there is an active query with the given id, or null.
- get(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Optionally returns the value associated with a param.
- get(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Optionally returns the user-supplied value of a param.
- get(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns element of array at given (0-based) index.
- get_json_object(Column, String) - Static method in class org.apache.spark.sql.functions
-
Extracts a JSON object from a JSON string based on the specified JSON path, and returns the JSON string of the extracted object.
- getAbsolutePathFromExecutable(String) - Static method in class org.apache.spark.TestUtils
-
Get the absolute path from the executable.
- getAcceptanceResults(RDD<Tuple2<K, V>>, boolean, Map<K, Object>, Option<Map<K, Object>>, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Count the number of items instantly accepted and generate the waitlist for each stratum.
- getAccumulatorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
int64 accumulator_id = 2;
- getAccumulatorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
int64 accumulator_id = 2;
- getAccumulatorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
int64 accumulator_id = 2;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdates(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdates(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdates(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getAccumulatorUpdatesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- getActive() - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Get the currently active context, if there is one.
- getActiveJobIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns an array containing the ids of all active jobs.
- getActiveJobIds() - Method in class org.apache.spark.SparkStatusTracker
-
Returns an array containing the ids of all active jobs.
- getActiveOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Either get the currently active StreamingContext (that is, started but not stopped), OR recreate a StreamingContext from checkpoint data in the given path.
- getActiveOrCreate(Function0<StreamingContext>) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.Either return the "active" StreamingContext (that is, started but not stopped), or create a new StreamingContext that is
- getActiveSession() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the active SparkSession for the current thread, returned by the builder.
- getActiveStageIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns an array containing the ids of all active stages.
- getActiveStageIds() - Method in class org.apache.spark.SparkStatusTracker
-
Returns an array containing the ids of all active stages.
- getActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 active_tasks = 9;
- getActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 active_tasks = 9;
- getActiveTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 active_tasks = 9;
- getAddColumnQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getAddColumnQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getAddColumnQuery(String, String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getAddColumnQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getAddedJars() - Method in class org.apache.spark.sql.artifact.ArtifactManager
-
Get the URLs of all jar artifacts.
- getAddress() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- getAddress() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional string address = 1;
- getAddress() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional string address = 1;
- getAddressBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- getAddressBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional string address = 1;
- getAddressBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional string address = 1;
- getAddresses(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- getAddresses(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
repeated string addresses = 2;
- getAddresses(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
repeated string addresses = 2;
- getAddressesBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- getAddressesBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
repeated string addresses = 2;
- getAddressesBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
repeated string addresses = 2;
- getAddressesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- getAddressesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
repeated string addresses = 2;
- getAddressesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
repeated string addresses = 2;
- getAddressesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- getAddressesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
repeated string addresses = 2;
- getAddressesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
repeated string addresses = 2;
- getAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 add_time = 20;
- getAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 add_time = 20;
- getAddTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 add_time = 20;
- getAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int64 add_time = 5;
- getAddTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
int64 add_time = 5;
- getAddTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
int64 add_time = 5;
- getAggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
- getAlgo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getAll() - Method in class org.apache.spark.SparkConf
-
Get all parameters as a list of pairs
- getAll() - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns all properties set in this conf.
- getAllClusterConfigs(SparkConf) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- getAllConfs() - Method in class org.apache.spark.sql.SQLContext
-
Return all the configuration properties that have been set (i.e. not the default values).
- getAllPools() - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Return pools for fair scheduler
- GetAllReceiverInfo - Class in org.apache.spark.streaming.scheduler
- GetAllReceiverInfo() - Constructor for class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- getAllRemovalsTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_removals_time_ms = 6;
- getAllRemovalsTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 all_removals_time_ms = 6;
- getAllRemovalsTimeMs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 all_removals_time_ms = 6;
- getAllUpdatesTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_updates_time_ms = 4;
- getAllUpdatesTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 all_updates_time_ms = 4;
- getAllUpdatesTimeMs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 all_updates_time_ms = 4;
- getAllWithPrefix(String) - Method in class org.apache.spark.SparkConf
-
Get all parameters that start with prefix
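As a minimal sketch, both SparkConf.getAll() (listed above) and getAllWithPrefix(...) return key/value pairs as Scala tuples when called from Java; the keys and prefix below are illustrative assumptions:

    import org.apache.spark.SparkConf;
    import scala.Tuple2;

    public class ConfDump {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().set("spark.ui.port", "4040");
        for (Tuple2<String, String> kv : conf.getAll()) {
          System.out.println(kv._1() + " = " + kv._2());
        }
        // Only entries whose key starts with the prefix, with the prefix stripped.
        for (Tuple2<String, String> kv : conf.getAllWithPrefix("spark.ui.")) {
          System.out.println(kv._1() + " = " + kv._2());
        }
      }
    }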
- getAlpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for LDA.getDocConcentration()
- getAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
int64 amount = 2;
- getAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
int64 amount = 2;
- getAmount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
int64 amount = 2;
- getAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
double amount = 2;
- getAmount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
double amount = 2;
- getAmount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequestOrBuilder
-
double amount = 2;
- getAnyValAs(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- getAppId() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Returns the application ID, or null if not yet known.
- getAppId() - Method in class org.apache.spark.SparkConf
-
Returns the Spark application id, valid in the Driver after TaskScheduler registration and from the start in the Executor.
- getApplicationInfo(String) - Method in interface org.apache.spark.status.api.v1.UIRoot
- getApplicationInfoList() - Method in interface org.apache.spark.status.api.v1.UIRoot
- getAppSparkVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- getAppSparkVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string app_spark_version = 8;
- getAppSparkVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string app_spark_version = 8;
- getAppSparkVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- getAppSparkVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string app_spark_version = 8;
- getAppSparkVersionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string app_spark_version = 8;
- getArray(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getArray(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the array type value for rowId.
- getAs(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- getAs(String) - Method in interface org.apache.spark.sql.Row
-
Returns the value of a given fieldName.
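A minimal sketch of positional and name-based access on Row; the literal query is an assumption for illustration:

    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class RowAccess {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("row-demo").master("local[1]").getOrCreate();
        Row first = spark.sql("SELECT 'alice' AS name, 42 AS age").first();
        String name = first.getAs("name");        // lookup by field name
        Integer age = first.<Integer>getAs(1);    // lookup by position
        System.out.println(name + " is " + age);
        spark.stop();
      }
    }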
- getAsJava() - Method in class org.apache.spark.sql.Observation
-
(Java-specific) Get the observed metrics.
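A hedged sketch of collecting observed metrics from Java; the observation name and metric expressions are illustrative assumptions:

    import static org.apache.spark.sql.functions.*;

    import java.util.Map;
    import org.apache.spark.sql.*;

    public class ObserveDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("observe-demo").master("local[1]").getOrCreate();
        Observation obs = new Observation("stats");
        Dataset<Row> df = spark.range(100).toDF("id")
            .observe(obs, count(lit(1)).as("rows"), max(col("id")).as("max_id"));
        df.collect();                     // metrics are available only after an action
        Map<String, Object> metrics = obs.getAsJava();
        System.out.println(metrics);      // e.g. {rows=100, max_id=99}
        spark.stop();
      }
    }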
- getAssociationRulesFromFP(Dataset<?>, String, String, double, Map<T, Object>, long, ClassTag<T>) - Static method in class org.apache.spark.ml.fpm.AssociationRules
-
Computes the association rules with confidence above minConfidence.
- getAsymmetricAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for LDA.getAsymmetricDocConcentration()
- getAsymmetricDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
- getAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 attempt = 3;
- getAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int32 attempt = 3;
- getAttempt() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int32 attempt = 3;
- getAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 attempt = 3;
- getAttempt() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int32 attempt = 3;
- getAttempt() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int32 attempt = 3;
- getAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- getAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string attempt_id = 1;
- getAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string attempt_id = 1;
- getAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 attempt_id = 3;
- getAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 attempt_id = 3;
- getAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 attempt_id = 3;
- getAttemptIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- getAttemptIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string attempt_id = 1;
- getAttemptIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string attempt_id = 1;
- getAttempts(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttempts(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttempts(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttemptsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- getAttr(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- getAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- getAttributes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getAttributes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
Deprecated.
- getAttributes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
Deprecated.
- getAttributesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- getAttributesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getAttributesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> attributes = 27;
- getAttributesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- getAttributesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> attributes = 27;
- getAttributesMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> attributes = 27;
- getAttributesOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- getAttributesOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> attributes = 27;
- getAttributesOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> attributes = 27;
- getAttributesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- getAttributesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> attributes = 27;
- getAttributesOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> attributes = 27;
- getAvroField(String, int) - Method in class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
Get the Avro field corresponding to the provided Catalyst field name/position, if any.
- getAvroSchema() - Method in class org.apache.spark.SparkConf
-
Gets all the Avro schemas in the configuration used in the generic Avro record serializer
- getBarrier() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool barrier = 4;
- getBarrier() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
bool barrier = 4;
- getBarrier() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
bool barrier = 4;
- getBatchDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_duration = 6;
- getBatchDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
int64 batch_duration = 6;
- getBatchDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
int64 batch_duration = 6;
- getBatchId() - Method in interface org.apache.spark.sql.streaming.QueryInfo
-
Returns the batch id associated with the stateful operator
- getBatchId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_id = 5;
- getBatchId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
int64 batch_id = 5;
- getBatchId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
int64 batch_id = 5;
- getBatchingTimeout(SparkConf) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
How long we will wait for the wrappedLog in the BatchedWriteAheadLog to write the records before we fail the write attempt to unblock receivers.
- getBernoulliSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Return the per partition sampling function used for sampling without replacement.
- getBeta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- getBeta() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for LDA.getTopicConcentration()
- getBin(int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Returns a particular histogram bin.
- getBinary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
- getBinary() - Method in class org.apache.spark.ml.feature.HashingTF
- getBinary() - Method in class org.apache.spark.types.variant.Variant
- getBinary(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getBinary(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getBinary(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the binary type value for rowId.
- getbit(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the bit (0 or 1) at the specified position.
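A small sketch of getbit from Java: 5 is 0b101, so bit 0 is 1 and bit 1 is 0. The column aliases are illustrative:

    import static org.apache.spark.sql.functions.*;

    import org.apache.spark.sql.*;

    public class GetBitDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("getbit-demo").master("local[1]").getOrCreate();
        Dataset<Row> df = spark.range(1).select(
            getbit(lit(5), lit(0)).alias("bit0"),   // 1
            getbit(lit(5), lit(1)).alias("bit1"));  // 0
        df.show();
        spark.stop();
      }
    }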
- getBlacklistedInStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStages(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 blacklisted_in_stages = 25;
- getBlacklistedInStagesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 blacklisted_in_stages = 25;
- getBlockName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- getBlockName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string block_name = 1;
- getBlockName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string block_name = 1;
- getBlockNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- getBlockNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string block_name = 1;
- getBlockNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string block_name = 1;
- getBlockSize() - Method in interface org.apache.spark.ml.param.shared.HasBlockSize
- GetBlockStatus(BlockId, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
- GetBlockStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$
- getBoolean() - Method in class org.apache.spark.types.variant.Variant
- getBoolean(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getBoolean(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive boolean.
- getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getBoolean(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the boolean type value for rowId.
- getBoolean(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Boolean.
- getBoolean(String, boolean) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a boolean, falling back to a default if not set
- getBoolean(String, boolean) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the boolean value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
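A minimal sketch of both boolean lookups above; the keys shown are illustrative, and the CaseInsensitiveStringMap lookup ignores key case as its name suggests:

    import java.util.Collections;
    import org.apache.spark.SparkConf;
    import org.apache.spark.sql.util.CaseInsensitiveStringMap;

    public class BooleanLookups {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        // Falls back to the default because the key is unset.
        boolean eventLog = conf.getBoolean("spark.eventLog.enabled", false);

        CaseInsensitiveStringMap options = new CaseInsensitiveStringMap(
            Collections.singletonMap("mergeSchema", "true"));
        boolean merge = options.getBoolean("MERGESCHEMA", false); // true
        System.out.println(eventLog + " " + merge);
      }
    }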
- getBooleanArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Boolean array.
- getBooleans(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets boolean type values from [rowId, rowId + count).
- getBootstrap() - Method in interface org.apache.spark.ml.tree.RandomForestParams
- getBucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
- getBuilder() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
- getBuilder() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- getBuilder() - Method in interface org.apache.spark.storage.memory.ValuesHolder
-
Note: After this method is called, the ValuesHolder is invalid; we can't store data or get the estimated size again.
- getByte(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive byte.
- getByte(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getByte(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the byte type value for rowId.
- getBytes(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets byte type values from [rowId, rowId + count).
- getBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_read = 18;
- getBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double bytes_read = 18;
- getBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double bytes_read = 18;
- getBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 bytes_read = 1;
- getBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
-
int64 bytes_read = 1;
- getBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricsOrBuilder
-
int64 bytes_read = 1;
- getBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- getBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double bytes_read = 1;
- getBytesRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double bytes_read = 1;
- getBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- getBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double bytes_read = 1;
- getBytesReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double bytes_read = 1;
- getBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- getBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double bytes_read = 1;
- getBytesReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double bytes_read = 1;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_written = 20;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double bytes_written = 20;
- getBytesWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double bytes_written = 20;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 bytes_written = 1;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
-
int64 bytes_written = 1;
- getBytesWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricsOrBuilder
-
int64 bytes_written = 1;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 bytes_written = 1;
- getBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
-
int64 bytes_written = 1;
- getBytesWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricsOrBuilder
-
int64 bytes_written = 1;
- getBytesWritten(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- getBytesWritten(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double bytes_written = 1;
- getBytesWritten(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double bytes_written = 1;
- getBytesWrittenCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- getBytesWrittenCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double bytes_written = 1;
- getBytesWrittenCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double bytes_written = 1;
- getBytesWrittenList() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- getBytesWrittenList() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double bytes_written = 1;
- getBytesWrittenList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double bytes_written = 1;
- getCached() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool cached = 3;
- getCached() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
bool cached = 3;
- getCached() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
bool cached = 3;
- getCachedBlockManagerId(BlockManagerId) - Static method in class org.apache.spark.storage.BlockManagerId
- getCachedMetadata(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
The three methods below are helpers for accessing the local map, a property of the SparkEnv of the local process.
- getCacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getCallsite() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- getCallsite() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string callsite = 5;
- getCallsite() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string callsite = 5;
- getCallSite(Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
-
When called inside a class in the spark package, returns the name of the user code class (outside the spark package) that called into Spark, as well as which Spark method they called.
- getCallsiteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- getCallsiteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string callsite = 5;
- getCallsiteBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string callsite = 5;
- getCaseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Get the custom datatype mapping for the given jdbc meta information.
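As a hedged sketch, a custom dialect can override this hook to map a vendor type the built-in dialects do not recognize; the URL prefix, type name, and chosen Catalyst mapping below are illustrative assumptions:

    import org.apache.spark.sql.jdbc.JdbcDialect;
    import org.apache.spark.sql.jdbc.JdbcDialects;
    import org.apache.spark.sql.types.DataType;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.MetadataBuilder;
    import scala.Option;

    public class MoneyDialect extends JdbcDialect {
      @Override
      public boolean canHandle(String url) {
        return url.startsWith("jdbc:mydb:");            // hypothetical driver prefix
      }

      @Override
      public Option<DataType> getCatalystType(
          int sqlType, String typeName, int size, MetadataBuilder md) {
        if ("MONEY".equalsIgnoreCase(typeName)) {       // hypothetical vendor type
          return Option.apply(DataTypes.createDecimalType(19, 4));
        }
        return Option.empty();                          // defer to the default mapping
      }
    }
    // Registered once at startup: JdbcDialects.registerDialect(new MoneyDialect());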
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- getCategoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
- getCategoricalFeatures(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Examine a schema to identify categorical (Binary and Nominal) features.
- getCategoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getCensorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
- getCharContent(boolean) - Method in class org.apache.spark.util.SparkTestUtils.JavaSourceFromString
- getCheckpointDir() - Method in class org.apache.spark.api.java.JavaSparkContext
- getCheckpointDir() - Method in class org.apache.spark.SparkContext
- getCheckpointFile() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Gets the name of the file to which this RDD was checkpointed.
- getCheckpointFile() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- getCheckpointFile() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- getCheckpointFile() - Method in class org.apache.spark.rdd.RDD
-
Gets the name of the directory to which this RDD was checkpointed.
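A minimal sketch from the Java API; the checkpoint directory is an illustrative assumption, and the file is only known after an action materializes the RDD:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.Optional;

    public class CheckpointDemo {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setAppName("ckpt-demo").setMaster("local[2]"));
        jsc.setCheckpointDir("/tmp/spark-checkpoints");   // illustrative path
        JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
        rdd.checkpoint();
        rdd.count();                                      // action triggers the checkpoint write
        Optional<String> file = rdd.getCheckpointFile();
        System.out.println(file.isPresent() ? file.get() : "not checkpointed");
        jsc.stop();
      }
    }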
- getCheckpointFiles() - Method in class org.apache.spark.graphx.Graph
-
Gets the name of the files to which this Graph was checkpointed.
- getCheckpointFiles() - Method in class org.apache.spark.graphx.impl.GraphImpl
- getCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
If using checkpointing and LDA.keepLastCheckpoint is set to true, then there may be saved checkpoint files.
- getCheckpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval
- getCheckpointInterval() - Method in class org.apache.spark.mllib.clustering.LDA
-
Period (in iterations) between checkpoints.
- getCheckpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getChild(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getChild(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
- getChildClusters(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClusters(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClusters(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildClustersOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- getChildNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getChildNodesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- getClassifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams
- getClasspathEntries(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntries(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntries(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getClasspathEntriesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- getCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getCluster() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getClusterBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getClusterConfig(SparkConf, String) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- getClusterOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getClusterOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getClusterOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- getCodecName() - Method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- getColdStartStrategy() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
- getCollectSubModels() - Method in interface org.apache.spark.ml.param.shared.HasCollectSubModels
- getColumnName(Seq<Object>, StructType) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Gets the name of the column at the given position.
- getCombOp() - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Returns the function used to combine results returned by seqOp from different partitions.
- getComment() - Method in class org.apache.spark.sql.types.StructField
-
Return the comment of this StructField.
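A small sketch: withComment returns a copy of the field carrying the comment, and getComment exposes it to Java as a scala.Option; the field name and comment text are illustrative:

    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.StructField;

    public class CommentDemo {
      public static void main(String[] args) {
        StructField price = new StructField(
                "price", DataTypes.DoubleType, true, Metadata.empty())
            .withComment("unit price in USD");
        scala.Option<String> comment = price.getComment();  // Some(unit price in USD)
        System.out.println(comment.isDefined() ? comment.get() : "<none>");
      }
    }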
- getCommitTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 commit_time_ms = 7;
- getCommitTimeMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 commit_time_ms = 7;
- getCommitTimeMs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 commit_time_ms = 7;
- getCompleted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
bool completed = 7;
- getCompleted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
bool completed = 7;
- getCompleted() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
bool completed = 7;
- getCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 completed_tasks = 11;
- getCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 completed_tasks = 11;
- getCompletedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 completed_tasks = 11;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 completion_time = 5;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional int64 completion_time = 5;
- getCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional int64 completion_time = 5;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional int64 completion_time = 9;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional int64 completion_time = 9;
- getCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional int64 completion_time = 9;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 completion_time = 12;
- getCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 completion_time = 12;
- getCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 completion_time = 12;
- getConf() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Return a copy of this JavaSparkContext's configuration.
- getConf() - Method in interface org.apache.spark.input.Configurable
- getConf() - Method in class org.apache.spark.rdd.HadoopRDD
- getConf() - Method in class org.apache.spark.rdd.NewHadoopRDD
- getConf() - Method in class org.apache.spark.SparkContext
-
Return a copy of this SparkContext's configuration.
- getConf(String) - Method in class org.apache.spark.sql.SQLContext
-
Return the value of Spark SQL configuration property for the given key.
- getConf(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Return the value of Spark SQL configuration property for the given key.
- getConfiguration() - Method in class org.apache.spark.input.PortableDataStream
- getConfiguredLocalDirs(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the configured local directories where Spark can write files.
- getConnection() - Method in interface org.apache.spark.rdd.JdbcRDD.ConnectionFactory
- getConnection(Driver, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
-
Opens a connection to the database.
- getContextOrSparkClassLoader() - Method in interface org.apache.spark.util.SparkClassUtils
- getContextOrSparkClassLoader() - Static method in class org.apache.spark.util.Utils
- getConvergenceTol() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the largest change in log-likelihood at which convergence is considered to have occurred.
- getCoresGranted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_granted = 3;
- getCoresGranted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 cores_granted = 3;
- getCoresGranted() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 cores_granted = 3;
- getCoresPerExecutor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_per_executor = 5;
- getCoresPerExecutor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 cores_per_executor = 5;
- getCoresPerExecutor() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 cores_per_executor = 5;
- getCorrelationFromName(String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
- getCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunks(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double corrupt_merged_block_chunks = 1;
- getCorruptMergedBlockChunksList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double corrupt_merged_block_chunks = 1;
- getCount() - Method in class org.apache.spark.storage.CountingWritableChannel
- getCurrentDefaultValue() - Method in class org.apache.spark.sql.types.StructField
-
Return the current default value of this StructField.
- getCurrentProcessingTimeInMs() - Method in interface org.apache.spark.sql.streaming.TimerValues
-
Get the current processing time as milliseconds in epoch time.
- getCurrentProcessingTimeMs() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the current processing time as milliseconds in epoch time.
- getCurrentUserGroups(SparkConf, String) - Static method in class org.apache.spark.util.Utils
- getCurrentUserName() - Static method in class org.apache.spark.util.Utils
-
Returns the current user name.
- getCurrentWatermarkInMs() - Method in interface org.apache.spark.sql.streaming.TimerValues
-
Get the current event time watermark as milliseconds in epoch time.
- getCurrentWatermarkMs() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the current event time watermark as milliseconds in epoch time.
- getCustomMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
Deprecated.
- getCustomMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
Deprecated.
- getCustomMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
Deprecated.
- getCustomMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- getCustomMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getCustomMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrDefault(String, long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
map<string, int64> custom_metrics = 12;
- getCustomMetricsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
map<string, int64> custom_metrics = 12;
- getData(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the image data.
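A short Scala sketch using the built-in image source (the path is illustrative, and spark is an existing SparkSession):

    import org.apache.spark.ml.image.ImageSchema

    // Load images, then extract the raw bytes of the first one; the "image"
    // struct column follows ImageSchema's layout.
    val images = spark.read.format("image").load("/path/to/images")
    val imageRow = images.select("image").head().getStruct(0)
    val bytes: Array[Byte] = ImageSchema.getData(imageRow)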
- getDatabase(String) - Method in class org.apache.spark.sql.api.Catalog
-
Get the database (namespace) with the specified name (can be qualified with catalog).
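For example (Scala; spark is an existing SparkSession):

    // Look up a namespace; the name may be catalog-qualified,
    // e.g. "spark_catalog.default".
    val db = spark.catalog.getDatabase("default")
    println(s"${db.name} at ${db.locationUri}")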
- getDataDistribution(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistribution(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistribution(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDataDistributionOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- getDate(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of date type as java.sql.Date.
- getDayTimeIntervalAsMicros(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts a day-time interval string to a long value micros.
- getDayTimeIntervalAsMicros(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
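A hedged Scala sketch of a custom dialect touching this hook, assuming the signature shown here (interval string in, microseconds out); MyDialect and its URL prefix are hypothetical:

    import org.apache.spark.sql.jdbc.JdbcDialect

    object MyDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb:")
      // Delegate to the default day-time interval parsing.
      override def getDayTimeIntervalAsMicros(v: String): Long =
        super.getDayTimeIntervalAsMicros(v)
    }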
- getDayTimeIntervalFields() - Method in class org.apache.spark.types.variant.Variant
- getDayTimeIntervalFields(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getDecimal() - Method in class org.apache.spark.types.variant.Variant
- getDecimal(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getDecimal(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of decimal type as java.math.BigDecimal.
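Both positional Row accessors, getDate and getDecimal, in one Scala sketch (positions are 0-based; spark is an existing SparkSession):

    import java.sql.Date
    import java.math.BigDecimal

    val row = spark.sql(
      "SELECT DATE'2024-01-01' AS d, CAST(1.50 AS DECIMAL(10,2)) AS m").head()
    val d: Date = row.getDate(0)           // java.sql.Date
    val m: BigDecimal = row.getDecimal(1)  // java.math.BigDecimal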
- getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getDecimal(int, int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the decimal type value for rowId.
- getDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Gets the default value of a parameter.
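A small Scala example with an ML estimator (LogisticRegression's regParam defaults to 0.0):

    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression()
    // Some(0.0) here; None for params without a default.
    val defaultReg: Option[Double] = lr.getDefault(lr.regParam)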
- getDefaultCompressionLevel() - Method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- getDefaultInstance() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- getDefaultInstanceForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- getDefaultPropertiesFile(Map<String, String>) - Static method in class org.apache.spark.util.Utils
-
Return the path of the default Spark properties file.
- getDefaultReadLimit() - Method in interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
-
Returns the read limits potentially passed to the data source through options when creating the data source.
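A hedged Scala sketch of a source opting into admission control; the trait name and the fixed cap are illustrative:

    import org.apache.spark.sql.connector.read.streaming.{ReadLimit, SupportsAdmissionControl}

    trait RateCappedSource extends SupportsAdmissionControl {
      // Cap each micro-batch at 10,000 rows; the interface's default is
      // ReadLimit.allAvailable(). A real source would derive this from options.
      override def getDefaultReadLimit(): ReadLimit = ReadLimit.maxRows(10000L)
    }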
- getDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the default SparkSession that is returned by the builder.
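For example (Scala):

    import org.apache.spark.sql.SparkSession

    // None until a session has been created (or after it is cleared).
    val maybeSession: Option[SparkSession] = SparkSession.getDefaultSession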
- getDegree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
- getDeleteColumnQuery(String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getDeleteColumnQuery(String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getDenseSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the size of the dense representation of this Matrix.
- getDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
- getDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
- getDependencies() - Method in class org.apache.spark.rdd.UnionRDD
- getDeprecatedConfig(String, Map<String, String>) - Static method in class org.apache.spark.SparkConf
-
Looks for available deprecated keys for the given config option, and returns the first value available.
- getDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- getDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string desc = 3;
- getDesc() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string desc = 3;
- getDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- getDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string desc = 3;
- getDesc() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string desc = 3;
- getDescBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- getDescBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string desc = 3;
- getDescBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string desc = 3;
- getDescBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- getDescBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string desc = 3;
- getDescBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string desc = 3;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string description = 3;
- getDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string description = 3;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
optional string description = 1;
- getDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
optional string description = 1;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string description = 1;
- getDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string description = 1;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string description = 3;
- getDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string description = 3;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- getDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string description = 40;
- getDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string description = 40;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string description = 3;
- getDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string description = 3;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
optional string description = 1;
- getDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
optional string description = 1;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string description = 1;
- getDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string description = 1;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string description = 3;
- getDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string description = 3;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- getDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string description = 40;
- getDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string description = 40;
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- getDescriptor() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- getDescriptor() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getDescriptor() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- getDescriptor() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- getDescriptorForType() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- getDescriptorForType() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- getDescriptorForType() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- getDescriptorForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- getDeserialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool deserialized = 7;
- getDeserialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
bool deserialized = 7;
- getDeserialized() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
bool deserialized = 7;
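The StoreTypes accessors above follow the standard protobuf-generated Java pattern: the message class, its mutable Builder, and the read-only OrBuilder interface all expose the same getter. A minimal sketch, assuming only the conventional generated newBuilder/set/build API:

    import org.apache.spark.status.protobuf.StoreTypes;

    // Build a StreamBlockData message, then read the `deserialized` field back.
    StoreTypes.StreamBlockData block = StoreTypes.StreamBlockData.newBuilder()
        .setDeserialized(true)   // bool deserialized = 7;
        .build();
    boolean deserialized = block.getDeserialized();  // true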
- getDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- getDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string details = 4;
- getDetails() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string details = 4;
- getDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- getDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string details = 41;
- getDetails() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string details = 41;
- getDetailsBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- getDetailsBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string details = 4;
- getDetailsBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string details = 4;
- getDetailsBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- getDetailsBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string details = 41;
- getDetailsBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string details = 41;
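For proto3 optional string fields such as details, codegen pairs each getXxx() with a getXxxBytes() variant returning the raw ByteString, plus a hasXxx() presence check. A sketch under those standard-codegen assumptions:

    import com.google.protobuf.ByteString;
    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.StageData stage = StoreTypes.StageData.newBuilder()
        .setDetails("physical plan details")  // optional string details = 41;
        .build();
    if (stage.hasDetails()) {                 // presence check for the optional field
        String details = stage.getDetails();       // as a String
        ByteString raw = stage.getDetailsBytes();  // as UTF-8 bytes
    }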
- getDiscoveryScript() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- getDiscoveryScript() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string discovery_script = 3;
- getDiscoveryScript() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string discovery_script = 3;
- getDiscoveryScriptBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- getDiscoveryScriptBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string discovery_script = 3;
- getDiscoveryScriptBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string discovery_script = 3;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double disk_bytes_spilled = 17;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double disk_bytes_spilled = 17;
- getDiskBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double disk_bytes_spilled = 17;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 disk_bytes_spilled = 14;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 disk_bytes_spilled = 14;
- getDiskBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 disk_bytes_spilled = 14;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 disk_bytes_spilled = 22;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 disk_bytes_spilled = 22;
- getDiskBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 disk_bytes_spilled = 22;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 disk_bytes_spilled = 24;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 disk_bytes_spilled = 24;
- getDiskBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 disk_bytes_spilled = 24;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 disk_bytes_spilled = 9;
- getDiskBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 disk_bytes_spilled = 9;
- getDiskBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 disk_bytes_spilled = 9;
- getDiskBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilled(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilled(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 15;
- getDiskBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double disk_bytes_spilled = 14;
- getDiskBytesSpilledList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double disk_bytes_spilled = 14;
- getDiskSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 disk_size = 9;
- getDiskSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
int64 disk_size = 9;
- getDiskSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
int64 disk_size = 9;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 disk_used = 6;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 disk_used = 6;
- getDiskUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 disk_used = 6;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 disk_used = 4;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
int64 disk_used = 4;
- getDiskUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
int64 disk_used = 4;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 disk_used = 4;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
int64 disk_used = 4;
- getDiskUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
int64 disk_used = 4;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 disk_used = 7;
- getDiskUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
int64 disk_used = 7;
- getDiskUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
int64 disk_used = 7;
- getDistanceMeasure() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- getDistanceMeasure() - Method in class org.apache.spark.ml.evaluation.ClusteringMetrics
- getDistanceMeasure() - Method in interface org.apache.spark.ml.param.shared.HasDistanceMeasure
- getDistanceMeasure() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
The distance measure used by the algorithm.
- getDistanceMeasure() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The distance measure used by the algorithm.
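A usage sketch for the mllib getter above; the companion setter setDistanceMeasure(String) and the "cosine" value are assumptions based on the supported measures:

    import org.apache.spark.mllib.clustering.KMeans;

    KMeans kmeans = new KMeans()
        .setK(3)
        .setDistanceMeasure("cosine");             // default is "euclidean"
    String measure = kmeans.getDistanceMeasure();  // "cosine"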
- getDistributions() - Method in class org.apache.spark.status.LiveRDD
- getDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
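A minimal sketch of the mllib LDA accessor; the symmetric value 1.1 is illustrative:

    import org.apache.spark.mllib.clustering.LDA;

    LDA lda = new LDA()
        .setK(10)                   // number of topics
        .setDocConcentration(1.1);  // symmetric Dirichlet prior ("alpha")
    double alpha = lda.getDocConcentration();  // 1.1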
- getDouble() - Method in class org.apache.spark.types.variant.Variant
- getDouble(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getDouble(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive double.
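For example, reading a primitive double back out of a Row (RowFactory is the Java-friendly way to construct one):

    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;

    Row row = RowFactory.create(3.14, "label");
    double x = row.getDouble(0);  // 3.14; throws if the value at position 0 is null or not a double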
- getDouble(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getDouble(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the double type value for rowId.
- getDouble(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Double.
- getDouble(String, double) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a double, falling back to a default if not set.
- getDouble(String, double) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the double value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
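A quick sketch of the SparkConf form with a fallback default (the key below is hypothetical):

    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf();
    // Falls back to 0.5 because "spark.example.sampleRate" was never set.
    double rate = conf.getDouble("spark.example.sampleRate", 0.5);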
- getDoubleArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Double array.
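Metadata instances are built with MetadataBuilder; for instance:

    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.MetadataBuilder;

    Metadata meta = new MetadataBuilder()
        .putDouble("scale", 2.5)
        .putDoubleArray("bins", new double[] {0.0, 1.0, 2.0})
        .build();
    double scale = meta.getDouble("scale");       // 2.5
    double[] bins = meta.getDoubleArray("bins");  // {0.0, 1.0, 2.0}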
- getDoubles(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets double type values from [rowId, rowId + count).
- getDriverAttributes() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the attributes on driver.
- getDriverLogUrls() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the URLs for the driver logs.
- getDropLast() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
- getDstCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 duration = 5;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
int64 duration = 5;
- getDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
int64 duration = 5;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double duration = 5;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double duration = 5;
- getDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double duration = 5;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 duration = 7;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional int64 duration = 7;
- getDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional int64 duration = 7;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 duration = 7;
- getDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 duration = 7;
- getDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 duration = 7;
- getDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- getDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double duration = 2;
- getDuration(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double duration = 2;
- getDuration(JobData) - Static method in class org.apache.spark.ui.jobs.JobDataUtil
- getDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- getDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double duration = 2;
- getDurationCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double duration = 2;
- getDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- getDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double duration = 2;
- getDurationList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double duration = 2;
- getDurationMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getDurationMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
Deprecated.
- getDurationMs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
Deprecated.
- getDurationMsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getDurationMsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getDurationMsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, int64> duration_ms = 7;
- getDurationMsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- getDurationMsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, int64> duration_ms = 7;
- getDurationMsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, int64> duration_ms = 7;
- getDurationMsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- getDurationMsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, int64> duration_ms = 7;
- getDurationMsOrDefault(String, long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, int64> duration_ms = 7;
- getDurationMsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- getDurationMsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, int64> duration_ms = 7;
- getDurationMsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, int64> duration_ms = 7;
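For map<string, int64> fields such as duration_ms, codegen produces the Map/OrDefault/OrThrow/Count family listed above, plus a putXxx builder method (assumed here from the standard naming):

    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.StreamingQueryProgress progress =
        StoreTypes.StreamingQueryProgress.newBuilder()
            .putDurationMs("triggerExecution", 42L)  // map<string, int64> duration_ms = 7;
            .build();
    long t = progress.getDurationMsOrDefault("triggerExecution", 0L);  // 42
    int n = progress.getDurationMsCount();                             // 1
    // getDurationMsOrThrow("missing") would throw IllegalArgumentException.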
- getDynamicAllocationInitialExecutors(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the initial number of executors for dynamic allocation.
- getEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdges(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdges(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- getEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- getEdgesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
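Repeated message fields such as edges get the indexed/count/list accessor family above. A sketch assuming the standard addXxx builder method; the from_id/to_id field names on SparkPlanGraphEdge are an assumption, not taken from this index:

    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.SparkPlanGraphWrapper graph = StoreTypes.SparkPlanGraphWrapper.newBuilder()
        .addEdges(StoreTypes.SparkPlanGraphEdge.newBuilder()
            .setFromId(1L)   // assumed field name
            .setToId(2L))    // assumed field name
        .build();
    int count = graph.getEdgesCount();                        // 1
    StoreTypes.SparkPlanGraphEdge first = graph.getEdges(0);  // indexed access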
- getElasticNetParam() - Method in interface org.apache.spark.ml.param.shared.HasElasticNetParam
- getElementAtIndex(int) - Method in class org.apache.spark.types.variant.Variant
- getEndOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- getEndOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string end_offset = 3;
- getEndOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string end_offset = 3;
- getEndOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- getEndOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string end_offset = 3;
- getEndOffsetBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string end_offset = 3;
- getEndTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 end_time = 3;
- getEndTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
int64 end_time = 3;
- getEndTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
int64 end_time = 3;
- getEndTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- getEndTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional int64 end_timestamp = 7;
- getEndTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional int64 end_timestamp = 7;
- getEndTimestamp() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional int64 end_timestamp = 7;
- getEps() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- getEpsilon() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
- getEpsilon() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The distance threshold within which we consider centers to have converged.
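A corresponding setter sketch (1e-6 is an arbitrary illustrative threshold):

    import org.apache.spark.mllib.clustering.KMeans;

    KMeans kmeans = new KMeans().setEpsilon(1e-6);
    double eps = kmeans.getEpsilon();  // 1e-6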
- getError() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
If the application failed due to an error, return the underlying error.
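A launcher-side sketch; the jar path and class name are placeholders:

    import java.util.Optional;
    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    SparkAppHandle handle = new SparkLauncher()
        .setAppResource("/path/to/app.jar")  // placeholder
        .setMainClass("com.example.MyApp")   // placeholder
        .setMaster("local[*]")
        .startApplication();
    // ... once the handle reports a final state ...
    Optional<Throwable> error = handle.getError();
    error.ifPresent(t -> System.err.println("Application failed: " + t));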
- getErrorClass() - Method in exception org.apache.spark.SparkException
- getErrorClass() - Method in interface org.apache.spark.SparkThrowable
- getErrorClass() - Method in exception org.apache.spark.sql.AnalysisException
- getErrorClass() - Method in exception org.apache.spark.sql.exceptions.SqlScriptingException
- getErrorClass() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string error_message = 10;
- getErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string error_message = 10;
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string error_message = 14;
- getErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string error_message = 14;
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- getErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string error_message = 14;
- getErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string error_message = 14;
- getErrorMessage(String, Map<String, Object>) - Method in class org.apache.spark.ErrorClassesJsonReader
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string error_message = 10;
- getErrorMessageBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string error_message = 10;
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string error_message = 14;
- getErrorMessageBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string error_message = 14;
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- getErrorMessageBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string error_message = 14;
- getErrorMessageBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string error_message = 14;
- getEstimator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
- getEstimatorParamMaps() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
- getEvaluator() - Method in interface org.apache.spark.ml.tuning.ValidatorParams
- getEventTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getEventTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
Deprecated.
- getEventTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
Deprecated.
- getEventTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getEventTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getEventTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> event_time = 8;
- getEventTimeMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- getEventTimeMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> event_time = 8;
- getEventTimeMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> event_time = 8;
- getEventTimeOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- getEventTimeOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> event_time = 8;
- getEventTimeOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> event_time = 8;
- getEventTimeOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- getEventTimeOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> event_time = 8;
- getEventTimeOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> event_time = 8;
- getException() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- getException() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string exception = 5;
- getException() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string exception = 5;
- getExceptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- getExceptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string exception = 5;
- getExceptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string exception = 5;
- getExcludedInStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStages(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
repeated int64 excluded_in_stages = 31;
- getExcludedInStagesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
repeated int64 excluded_in_stages = 31;
- getExecutionContext() - Method in interface org.apache.spark.ml.param.shared.HasParallelism
-
Create a new execution context with a thread-pool that has a maximum number of threads set to the value of HasParallelism.parallelism().
- getExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
int64 execution_id = 1;
- getExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
int64 execution_id = 1;
- getExecutionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
int64 execution_id = 1;
- getExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 execution_id = 1;
- getExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
int64 execution_id = 1;
- getExecutionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
int64 execution_id = 1;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_cpu_time = 9;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double executor_cpu_time = 9;
- getExecutorCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double executor_cpu_time = 9;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_cpu_time = 17;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 executor_cpu_time = 17;
- getExecutorCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 executor_cpu_time = 17;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_cpu_time = 19;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 executor_cpu_time = 19;
- getExecutorCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 executor_cpu_time = 19;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_cpu_time = 4;
- getExecutorCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 executor_cpu_time = 4;
- getExecutorCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 executor_cpu_time = 4;
- getExecutorCpuTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_cpu_time = 6;
- getExecutorCpuTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_cpu_time = 6;
- getExecutorDecommissionState(String) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
If an executor is decommissioned, return its corresponding decommission info.
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_cpu_time = 7;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double executor_deserialize_cpu_time = 7;
- getExecutorDeserializeCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double executor_deserialize_cpu_time = 7;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_cpu_time = 15;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 executor_deserialize_cpu_time = 15;
- getExecutorDeserializeCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 executor_deserialize_cpu_time = 15;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_cpu_time = 17;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 executor_deserialize_cpu_time = 17;
- getExecutorDeserializeCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 executor_deserialize_cpu_time = 17;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_cpu_time = 2;
- getExecutorDeserializeCpuTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 executor_deserialize_cpu_time = 2;
- getExecutorDeserializeCpuTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 executor_deserialize_cpu_time = 2;
- getExecutorDeserializeCpuTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeCpuTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_cpu_time = 4;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_time = 6;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double executor_deserialize_time = 6;
- getExecutorDeserializeTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double executor_deserialize_time = 6;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_time = 14;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 executor_deserialize_time = 14;
- getExecutorDeserializeTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 executor_deserialize_time = 14;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_time = 16;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 executor_deserialize_time = 16;
- getExecutorDeserializeTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 executor_deserialize_time = 16;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_time = 1;
- getExecutorDeserializeTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 executor_deserialize_time = 1;
- getExecutorDeserializeTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 executor_deserialize_time = 1;
- getExecutorDeserializeTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_deserialize_time = 3;
- getExecutorDeserializeTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_deserialize_time = 3;
- GetExecutorEndpointRef(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
- GetExecutorEndpointRef$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$
- getExecutorEnv() - Method in class org.apache.spark.SparkConf
-
Get all executor environment variables set on this SparkConf.
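For example (the variable name is illustrative; the getter returns Scala (name, value) pairs):

    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf()
        .setExecutorEnv("EXAMPLE_VAR", "42");   // stored as spark.executorEnv.EXAMPLE_VAR
    System.out.println(conf.getExecutorEnv());  // Scala Seq of (name, value) tuples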
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
optional string executor_id = 3;
- getExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
optional string executor_id = 3;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string executor_id = 2;
- getExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string executor_id = 2;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string executor_id = 8;
- getExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string executor_id = 8;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- getExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string executor_id = 8;
- getExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
optional string executor_id = 3;
- getExecutorIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
optional string executor_id = 3;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string executor_id = 2;
- getExecutorIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string executor_id = 2;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string executor_id = 8;
- getExecutorIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string executor_id = 8;
- getExecutorInfos() - Method in class org.apache.spark.SparkStatusTracker
-
Returns information of all known executors, including host, port, cacheSize, numRunningTasks and memory metrics.
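Accessed from a live SparkContext; a sketch using a few of the SparkExecutorInfo accessors:

    import org.apache.spark.SparkContext;
    import org.apache.spark.SparkExecutorInfo;

    SparkContext sc = SparkContext.getOrCreate();
    for (SparkExecutorInfo info : sc.statusTracker().getExecutorInfos()) {
        System.out.println(info.host() + ":" + info.port()
            + " running tasks: " + info.numRunningTasks());
    }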
- getExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
Deprecated.
- getExecutorLogs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
Deprecated.
- getExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
Deprecated.
- getExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
Deprecated.
- getExecutorLogs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
Deprecated.
- getExecutorLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- getExecutorLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getExecutorLogsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> executor_logs = 23;
- getExecutorLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- getExecutorLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getExecutorLogsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
map<string, string> executor_logs = 16;
- getExecutorLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- getExecutorLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> executor_logs = 23;
- getExecutorLogsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> executor_logs = 23;
- getExecutorLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- getExecutorLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
map<string, string> executor_logs = 16;
- getExecutorLogsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, string> executor_logs = 23;
- getExecutorLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
map<string, string> executor_logs = 16;
- getExecutorLogsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
map<string, string> executor_logs = 16;
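The five accessor shapes above (get, getCount, getMap, getOrDefault, getOrThrow) are the standard methods protobuf generates for a map field. A minimal Scala sketch of how they behave on a built ExecutorSummary; the putExecutorLogs builder method and the log URL are assumptions based on standard protobuf codegen:
  import org.apache.spark.status.protobuf.StoreTypes
  val summary = StoreTypes.ExecutorSummary.newBuilder()
    .putExecutorLogs("stdout", "http://host:8042/stdout") // key and URL are illustrative
    .build()
  summary.getExecutorLogsCount                           // 1
  summary.getExecutorLogsMap.get("stdout")               // the URL above
  summary.getExecutorLogsOrDefault("stderr", "<none>")   // absent key -> supplied default
  // summary.getExecutorLogsOrThrow("stderr")            // absent key -> IllegalArgumentException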
- GetExecutorLossReason(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
- GetExecutorLossReason$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$
- getExecutorMemoryStatus() - Method in class org.apache.spark.SparkContext
-
Return a map from the block manager to the max memory available for caching and the remaining memory available for caching.
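A minimal Scala sketch of reading this map; keys are block manager host:port strings and values are (max, remaining) cache memory in bytes (an active SparkContext named sc is assumed):
  // assumes an active SparkContext named sc
  val status: scala.collection.Map[String, (Long, Long)] = sc.getExecutorMemoryStatus
  status.foreach { case (blockManager, (maxMem, remainingMem)) =>
    println(s"$blockManager: max=$maxMem B, remaining=$remainingMem B")
  }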
- getExecutorMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetrics(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributionsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributionsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributionsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsDistributionsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- getExecutorMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorMetricsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- getExecutorResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
Deprecated.
- getExecutorResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
Deprecated.
- getExecutorResources() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
Deprecated.
- getExecutorResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- getExecutorResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getExecutorResourcesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrDefault(String, StoreTypes.ExecutorResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrDefault(String, StoreTypes.ExecutorResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrDefault(String, StoreTypes.ExecutorResourceRequest) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorResourcesOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_run_time = 8;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double executor_run_time = 8;
- getExecutorRunTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double executor_run_time = 8;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_run_time = 16;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 executor_run_time = 16;
- getExecutorRunTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 executor_run_time = 16;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_run_time = 18;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 executor_run_time = 18;
- getExecutorRunTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 executor_run_time = 18;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_run_time = 3;
- getExecutorRunTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 executor_run_time = 3;
- getExecutorRunTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 executor_run_time = 3;
- getExecutorRunTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- getExecutorRunTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_run_time = 5;
- getExecutorRunTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_run_time = 5;
- getExecutorRunTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- getExecutorRunTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_run_time = 5;
- getExecutorRunTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_run_time = 5;
- getExecutorRunTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- getExecutorRunTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double executor_run_time = 5;
- getExecutorRunTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double executor_run_time = 5;
- getExecutors(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- getExecutors(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
repeated string executors = 5;
- getExecutors(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
repeated string executors = 5;
- getExecutorsBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- getExecutorsBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
repeated string executors = 5;
- getExecutorsBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
repeated string executors = 5;
- getExecutorsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- getExecutorsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
repeated string executors = 5;
- getExecutorsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
repeated string executors = 5;
- getExecutorsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- getExecutorsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
repeated string executors = 5;
- getExecutorsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
repeated string executors = 5;
- getExecutorSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getExecutorSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
Deprecated.
- getExecutorSummary() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
Deprecated.
- getExecutorSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getExecutorSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getExecutorSummaryCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrDefault(String, StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrDefault(String, StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrDefault(String, StoreTypes.ExecutorStageSummary) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExecutorSummaryOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- getExpiryTimeInMs() - Method in interface org.apache.spark.sql.streaming.ExpiredTimerInfo
-
Get the expired timer's expiry time as milliseconds in epoch time.
- getFactorSize() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
- getFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 failed_tasks = 2;
- getFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int32 failed_tasks = 2;
- getFailedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int32 failed_tasks = 2;
- getFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 failed_tasks = 10;
- getFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 failed_tasks = 10;
- getFailedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 failed_tasks = 10;
- getFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- getFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double failed_tasks = 3;
- getFailedTasks(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double failed_tasks = 3;
- getFailedTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- getFailedTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double failed_tasks = 3;
- getFailedTasksCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double failed_tasks = 3;
- getFailedTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- getFailedTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double failed_tasks = 3;
- getFailedTasksList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double failed_tasks = 3;
- getFailureReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- getFailureReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string failure_reason = 13;
- getFailureReason() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string failure_reason = 13;
- getFailureReasonBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- getFailureReasonBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string failure_reason = 13;
- getFailureReasonBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string failure_reason = 13;
- getFamily() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- getFamily() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getFdr() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getFeatureIndex() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
- getFeatureIndicesFromNames(StructField, String[]) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Takes a Vector column and a list of feature names, and returns the corresponding list of feature indices in the column, in order.
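A hedged sketch of the call shape; the DataFrame, column name, and feature names are assumptions, and the Vector column must carry named ML attribute metadata:
  import org.apache.spark.ml.util.MetadataUtils
  // assumes a DataFrame df whose "features" Vector column has named ML attributes
  val field = df.schema("features")
  val indices = MetadataUtils.getFeatureIndicesFromNames(field, Array("age", "income"))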
- getFeatures() - Method in class org.apache.spark.ml.feature.LabeledPoint
- getFeatures() - Method in class org.apache.spark.mllib.regression.LabeledPoint
- getFeaturesAndLabels(RFormulaModel, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
-
Get the feature names and original labels from the schema of a DataFrame transformed by RFormulaModel.
- getFeaturesCol() - Method in interface org.apache.spark.ml.param.shared.HasFeaturesCol
- getFeatureSubsetStrategy() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
- getFeatureType() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
- getFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 fetch_wait_time = 3;
- getFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 fetch_wait_time = 3;
- getFetchWaitTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 fetch_wait_time = 3;
- getFetchWaitTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- getFetchWaitTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double fetch_wait_time = 5;
- getFetchWaitTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double fetch_wait_time = 5;
- getFetchWaitTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double fetch_wait_time = 5;
- getField(String) - Method in class org.apache.spark.sql.Column
-
An expression that gets a field by name in a StructType.
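A minimal Scala sketch of getField on a struct column; the DataFrame, column, and field names are illustrative:
  import org.apache.spark.sql.functions.col
  // assumes a DataFrame df with a struct column "person" containing a "name" field
  val names = df.select(col("person").getField("name").alias("name"))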
- getFieldAtIndex(int) - Method in class org.apache.spark.types.variant.Variant
- getFieldByKey(String) - Method in class org.apache.spark.types.variant.Variant
- getFileLength(File, SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the file length; if the file is compressed, return the uncompressed file length.
- getFileSegmentLocations(String, long, long, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
Get the locations of the HDFS blocks containing the given file segment.
- getFileSystemForPath(Path, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
- getFinalStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getFinalValue() - Method in class org.apache.spark.partial.PartialResult
-
Blocking method to wait for and return the final value.
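For example, RDD.countApprox returns a PartialResult whose getFinalValue() blocks until the exact result is available (the rdd name is assumed):
  // assumes an RDD named rdd
  val partial = rdd.countApprox(timeout = 2000L) // PartialResult[BoundedDouble]
  val bounded = partial.getFinalValue()          // blocks until the final value is ready
  println(bounded.mean)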
- getFirstTaskLaunchedTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 first_task_launched_time = 11;
- getFirstTaskLaunchedTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 first_task_launched_time = 11;
- getFirstTaskLaunchedTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 first_task_launched_time = 11;
- getFitIntercept() - Method in interface org.apache.spark.ml.param.shared.HasFitIntercept
- getFitLinear() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
- getFloat() - Method in class org.apache.spark.types.variant.Variant
- getFloat(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getFloat(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive float.
- getFloat(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getFloat(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the float type value for rowId.
- getFloats(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets float type values from [rowId, rowId + count).
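A hedged sketch of the batch accessor using the internal OnHeapColumnVector implementation (an execution-package class, not part of the stable public surface):
  import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
  import org.apache.spark.sql.types.FloatType
  val vec = new OnHeapColumnVector(4, FloatType)
  (0 until 4).foreach(i => vec.putFloat(i, i * 1.5f))
  val floats: Array[Float] = vec.getFloats(0, 4) // values from [0, 4)
  vec.close()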
- getFoldCol() - Method in interface org.apache.spark.ml.tuning.CrossValidatorParams
- getForceIndexLabel() - Method in interface org.apache.spark.ml.feature.RFormulaBase
- getFormattedClassName(Object) - Method in interface org.apache.spark.util.SparkClassUtils
-
Return the class name of the given object, removing all dollar signs.
- getFormattedClassName(Object) - Static method in class org.apache.spark.util.Utils
- getFormattedDuration(JobData) - Static method in class org.apache.spark.ui.jobs.JobDataUtil
- getFormattedSubmissionTime(JobData) - Static method in class org.apache.spark.ui.jobs.JobDataUtil
- getFormula() - Method in interface org.apache.spark.ml.feature.RFormulaBase
- getFpr() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 from_id = 1;
- getFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
-
int32 from_id = 1;
- getFromId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdgeOrBuilder
-
int32 from_id = 1;
- getFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 from_id = 1;
- getFromId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
-
int64 from_id = 1;
- getFromId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdgeOrBuilder
-
int64 from_id = 1;
- getFullyQualifiedQuotedTableName(Identifier) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Return the DB-specific quoted and fully qualified table name.
- getFullyQualifiedQuotedTableName(Identifier) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
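A hedged sketch of resolving a dialect and quoting a table identifier; the JDBC URL and namespace are illustrative:
  import org.apache.spark.sql.connector.catalog.Identifier
  import org.apache.spark.sql.jdbc.JdbcDialects
  val dialect = JdbcDialects.get("jdbc:postgresql://host/db") // illustrative URL
  val quoted = dialect.getFullyQualifiedQuotedTableName(Identifier.of(Array("public"), "people"))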
- getFunction(String) - Method in class org.apache.spark.sql.api.Catalog
-
Get the function with the specified name.
- getFunction(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Get the function with the specified name in the specified database under the Hive Metastore.
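A minimal Scala sketch against an active SparkSession (the spark name is assumed):
  // assumes an active SparkSession named spark
  val fn = spark.catalog.getFunction("abs")
  println(s"${fn.name}: ${fn.description}")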
- getFwe() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getGaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- getGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double getting_result_time = 13;
- getGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double getting_result_time = 13;
- getGettingResultTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double getting_result_time = 13;
- getGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 getting_result_time = 18;
- getGettingResultTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int64 getting_result_time = 18;
- getGettingResultTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int64 getting_result_time = 18;
- getGettingResultTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- getGettingResultTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double getting_result_time = 10;
- getGettingResultTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double getting_result_time = 10;
- getGettingResultTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- getGettingResultTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double getting_result_time = 10;
- getGettingResultTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double getting_result_time = 10;
- getGettingResultTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- getGettingResultTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double getting_result_time = 10;
- getGettingResultTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double getting_result_time = 10;
- getGroups(String) - Method in interface org.apache.spark.security.GroupMappingServiceProvider
-
Get the groups the user belongs to.
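A hypothetical implementation sketch; the class name and group assignments are made up:
  import org.apache.spark.security.GroupMappingServiceProvider
  class StaticGroupProvider extends GroupMappingServiceProvider {
    override def getGroups(userName: String): Set[String] =
      if (userName == "admin") Set("admins", "users") else Set("users")
  }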
- getHadoopFileSystem(String, Configuration) - Static method in class org.apache.spark.util.Utils
-
Return a Hadoop FileSystem with the scheme encoded in the given path.
- getHadoopFileSystem(URI, Configuration) - Static method in class org.apache.spark.util.Utils
-
Return a Hadoop FileSystem with the scheme encoded in the given path.
- getHadoopProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopProperties(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHadoopPropertiesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- getHandleInvalid() - Method in interface org.apache.spark.ml.param.shared.HasHandleInvalid
- getHasMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool has_metrics = 15;
- getHasMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
bool has_metrics = 15;
- getHasMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
bool has_metrics = 15;
- getHeapHistogram() - Static method in class org.apache.spark.util.Utils
-
Return a heap histogram.
- getHeight(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the height of the image.
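A hedged sketch; the load path is illustrative, and the image struct is pulled from the first row (an active SparkSession named spark is assumed):
  import org.apache.spark.ml.image.ImageSchema
  // assumes an active SparkSession named spark; the path is illustrative
  val images = spark.read.format("image").load("/path/to/images")
  val imageStruct = images.select("image").head().getStruct(0)
  val height = ImageSchema.getHeight(imageStruct)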
- getHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- getHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string host = 9;
- getHost() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string host = 9;
- getHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- getHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string host = 9;
- getHost() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string host = 9;
- getHostBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- getHostBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string host = 9;
- getHostBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string host = 9;
- getHostBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- getHostBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string host = 9;
- getHostBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string host = 9;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string host_port = 2;
- getHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string host_port = 2;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string host_port = 2;
- getHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string host_port = 2;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- getHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string host_port = 3;
- getHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string host_port = 3;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string host_port = 2;
- getHostPortBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string host_port = 2;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string host_port = 2;
- getHostPortBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string host_port = 2;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- getHostPortBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string host_port = 3;
- getHostPortBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string host_port = 3;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
int64 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
int32 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
int32 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
int32 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
int32 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
int64 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
int64 id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
int64 id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string id = 2;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string id = 2;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- getId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string id = 1;
- getId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string id = 1;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string id = 1;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string id = 1;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string id = 1;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string id = 2;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string id = 2;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- getIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string id = 1;
- getIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string id = 1;
- getImplicitPrefs() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getImpurity() - Method in interface org.apache.spark.ml.tree.HasVarianceImpurity
- getImpurity() - Method in interface org.apache.spark.ml.tree.TreeClassifierParams
- getImpurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getIncomingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdges(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIncomingEdgesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- getIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 index = 2;
- getIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int32 index = 2;
- getIndex() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int32 index = 2;
- getIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 index = 2;
- getIndex() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int32 index = 2;
- getIndex() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int32 index = 2;
- getIndices() - Method in class org.apache.spark.ml.feature.VectorSlicer
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- getInitializationMode() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The initialization algorithm.
- getInitializationSteps() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Number of steps for the k-means|| initialization mode.
- getInitialModel() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the user-supplied initial GMM, if one was supplied.
- getInitialTargetExecutorNumber(SparkConf, int) - Static method in class org.apache.spark.scheduler.cluster.SchedulerBackendUtils
-
Getting the initial target number of executors depends on whether dynamic allocation is enabled.
- getInitialWeights() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
- getInitMode() - Method in interface org.apache.spark.ml.clustering.KMeansParams
- getInitMode() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
- getInitStd() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
- getInitSteps() - Method in interface org.apache.spark.ml.clustering.KMeansParams
- getInOutCols() - Method in interface org.apache.spark.ml.feature.ImputerParams
-
Returns the input and output column names as corresponding pairs.
- getInOutCols() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
-
Returns the input and output column names as corresponding pairs.
- getInOutCols() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
-
Returns the input and output column names as corresponding pairs.
- getInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_bytes = 5;
- getInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 input_bytes = 5;
- getInputBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 input_bytes = 5;
- getInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_bytes = 24;
- getInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 input_bytes = 24;
- getInputBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 input_bytes = 24;
- getInputBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- getInputBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_bytes = 6;
- getInputBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_bytes = 6;
- getInputBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- getInputBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_bytes = 6;
- getInputBytesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_bytes = 6;
- getInputBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- getInputBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_bytes = 6;
- getInputBytesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_bytes = 6;
- getInputBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_bytes_read = 26;
- getInputBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 input_bytes_read = 26;
- getInputBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 input_bytes_read = 26;
- getInputCol() - Method in interface org.apache.spark.ml.param.shared.HasInputCol
- getInputCols() - Method in interface org.apache.spark.ml.param.shared.HasInputCols
- getInputFilePath() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the name of the file currently being read, or an empty string if it is unknown.
- getInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- getInputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- getInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_records = 6;
- getInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 input_records = 6;
- getInputRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 input_records = 6;
- getInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_records = 25;
- getInputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 input_records = 25;
- getInputRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 input_records = 25;
- getInputRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- getInputRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_records = 7;
- getInputRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_records = 7;
- getInputRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- getInputRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_records = 7;
- getInputRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_records = 7;
- getInputRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- getInputRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double input_records = 7;
- getInputRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double input_records = 7;
- getInputRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_records_read = 27;
- getInputRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 input_records_read = 27;
- getInputRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 input_records_read = 27;
- getInputRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double input_rows_per_second = 6;
- getInputRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
double input_rows_per_second = 6;
- getInputRowsPerSecond() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
double input_rows_per_second = 6;
- getInputStream(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
- getInstant(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of date type as java.time.Instant.
- getInt(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive int.
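A minimal sketch of these Row accessors in use (values are illustrative; spark.sql.datetime.java8API.enabled must be true for getInstant to surface timestamps as java.time.Instant):

    import org.apache.spark.sql.{Row, SparkSession}

    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.sql.datetime.java8API.enabled", "true") // timestamps as java.time.Instant
      .getOrCreate()

    val row: Row = spark.sql("SELECT 42 AS id, TIMESTAMP'2024-01-01 00:00:00' AS ts").head()
    val id: Int = row.getInt(0)                   // position 0 as a primitive int
    val ts: java.time.Instant = row.getInstant(1) // position 1 as java.time.Instant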
- getInt(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getInt(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the int type value for rowId.
- getInt(String, int) - Method in class org.apache.spark.SparkConf
-
Get a parameter as an integer, falling back to a default if not set.
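A short sketch of the fallback behavior (the configuration keys are illustrative):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
    conf.set("spark.myapp.retries", "5")                 // illustrative key
    val retries = conf.getInt("spark.myapp.retries", 3)  // 5: parsed from the set value
    val timeout = conf.getInt("spark.myapp.timeout", 30) // 30: key absent, default returned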
- getInt(String, int) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the integer value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
- getIntermediateStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getInterval(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getInterval(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the calendar interval type value for rowId.
- getInts(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets int type values from [rowId, rowId + count).
- getInverse() - Method in class org.apache.spark.ml.feature.DCT
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_active = 3;
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
bool is_active = 3;
- getIsActive() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
bool is_active = 3;
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
bool is_active = 3;
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
bool is_active = 3;
- getIsActive() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
bool is_active = 3;
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
bool is_active = 4;
- getIsActive() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
bool is_active = 4;
- getIsActive() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
bool is_active = 4;
- getIsBlacklisted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_blacklisted = 18;
- getIsBlacklisted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
bool is_blacklisted = 18;
- getIsBlacklisted() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
bool is_blacklisted = 18;
- getIsBlacklistedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_blacklisted_for_stage = 15;
- getIsBlacklistedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
bool is_blacklisted_for_stage = 15;
- getIsBlacklistedForStage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
bool is_blacklisted_for_stage = 15;
- getIsExcluded() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_excluded = 30;
- getIsExcluded() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
bool is_excluded = 30;
- getIsExcluded() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
bool is_excluded = 30;
- getIsExcludedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_excluded_for_stage = 17;
- getIsExcludedForStage() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
bool is_excluded_for_stage = 17;
- getIsExcludedForStage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
bool is_excluded_for_stage = 17;
- getIsExperiment() - Method in class org.apache.spark.mllib.stat.test.BinarySample
- getIsotonic() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
- getIsShufflePushEnabled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
bool is_shuffle_push_enabled = 63;
- getIsShufflePushEnabled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
bool is_shuffle_push_enabled = 63;
- getIsShufflePushEnabled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
bool is_shuffle_push_enabled = 63;
- getItem(Object) - Method in class org.apache.spark.sql.Column
-
An expression that gets an item at position ordinal out of an array, or gets a value by key key in a MapType.
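A minimal sketch of getItem against an array column and a map column (data and column names are illustrative):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq((Seq(10, 20, 30), Map("a" -> 1))).toDF("arr", "m")
    df.select(
      col("arr").getItem(0).as("first"),  // element at ordinal 0 of the array
      col("m").getItem("a").as("a_value") // value for key "a" in the map
    ).show()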
- getItemCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
- getItemsCol() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
- getIteratorSize(Iterator<Object>) - Static method in class org.apache.spark.util.Utils
-
Counts the number of elements of an iterator.
- getIteratorZipWithIndex(Iterator<T>, long) - Static method in class org.apache.spark.util.Utils
-
Generate a zipWithIndex iterator, avoiding the index-value overflow problem in Scala's zipWithIndex.
- getIvyProperties() - Static method in class org.apache.spark.util.DependencyUtils
- getJavaHome() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- getJavaHome() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_home = 2;
- getJavaHome() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_home = 2;
- getJavaHomeBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- getJavaHomeBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_home = 2;
- getJavaHomeBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_home = 2;
- getJavaMap(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of array type as a java.util.Map.
- getJavaSparkContext(SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- getJavaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- getJavaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_version = 1;
- getJavaVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_version = 1;
- getJavaVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- getJavaVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_version = 1;
- getJavaVersionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_version = 1;
- getJdbcSQLQueryBuilder(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getJdbcSQLQueryBuilder(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns the SQL builder for the SELECT statement.
- getJdbcSQLQueryBuilder(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getJdbcSQLQueryBuilder(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getJdbcSQLQueryBuilder(JDBCOptions) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getJdbcSQLQueryBuilder(JDBCOptions) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Retrieve the JDBC/SQL type for a given data type.
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.SnowflakeDialect
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- getJobGroup() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- getJobGroup() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string job_group = 7;
- getJobGroup() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string job_group = 7;
- getJobGroupBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- getJobGroupBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string job_group = 7;
- getJobGroupBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string job_group = 7;
- getJobId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
All IDs are int64 for extendability, even when they are currently int32 in Spark.
- getJobId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
All IDs are int64 for extendability, even when they are currently int32 in Spark.
- getJobId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
All IDs are int64 for extendability, even when they are currently int32 in Spark.
- getJobIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- getJobIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
repeated int64 job_ids = 2;
- getJobIds(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
repeated int64 job_ids = 2;
- getJobIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- getJobIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
repeated int64 job_ids = 2;
- getJobIdsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
repeated int64 job_ids = 2;
- getJobIdsForGroup(String) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Return a list of all known jobs in a particular job group.
- getJobIdsForGroup(String) - Method in class org.apache.spark.SparkStatusTracker
-
Return a list of all known jobs in a particular job group.
- getJobIdsForTag(String) - Method in class org.apache.spark.SparkStatusTracker
-
Return a list of all known jobs with a particular tag.
- getJobIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- getJobIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
repeated int64 job_ids = 2;
- getJobIdsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
repeated int64 job_ids = 2;
- getJobInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns job information, or null if the job info could not be found or was garbage collected.
- getJobInfo(int) - Method in class org.apache.spark.SparkStatusTracker
-
Returns job information, or None if the job info could not be found or was garbage collected.
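A hedged sketch combining getJobIdsForGroup with getJobInfo (the group name and job are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("status-demo"))
    sc.setJobGroup("nightly-etl", "nightly ETL jobs") // illustrative group id/description
    sc.parallelize(1 to 1000).count()                 // runs a job inside the group

    val tracker = sc.statusTracker
    for (jobId <- tracker.getJobIdsForGroup("nightly-etl");
         info  <- tracker.getJobInfo(jobId)) {        // None if unknown or garbage collected
      println(s"job $jobId status=${info.status}")
    }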
- getJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
Deprecated.
- getJobs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
Deprecated.
- getJobsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getJobsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getJobsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrDefault(long, StoreTypes.JobExecutionStatus) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrDefault(long, StoreTypes.JobExecutionStatus) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrDefault(long, StoreTypes.JobExecutionStatus) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsOrThrow(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getJobsValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
Deprecated.
- getJobsValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
Deprecated.
- getJobsValueMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrDefault(long, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrDefault(long, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrDefault(long, int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobsValueOrThrow(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- getJobTags() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get the tags that are currently set to be assigned to all the jobs started by this thread.
- getJobTags() - Method in class org.apache.spark.SparkContext
-
Get the tags that are currently set to be assigned to all the jobs started by this thread.
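A short sketch of the thread-local tag API (the tag is illustrative; tags also feed SparkStatusTracker.getJobIdsForTag above):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("tags-demo"))
    sc.addJobTag("ad-hoc")          // illustrative tag
    println(sc.getJobTags())        // Set(ad-hoc): applied to jobs started by this thread
    sc.parallelize(1 to 10).count() // this job is submitted carrying the tag
    sc.removeJobTag("ad-hoc")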
- getJobTags(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- getJobTags(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated string job_tags = 21;
- getJobTags(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated string job_tags = 21;
- getJobTags(JavaSparkContext) - Static method in class org.apache.spark.api.r.RUtils
- getJobTagsBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- getJobTagsBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated string job_tags = 21;
- getJobTagsBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated string job_tags = 21;
- getJobTagsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- getJobTagsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated string job_tags = 21;
- getJobTagsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated string job_tags = 21;
- getJobTagsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- getJobTagsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated string job_tags = 21;
- getJobTagsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated string job_tags = 21;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double jvm_gc_time = 11;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double jvm_gc_time = 11;
- getJvmGcTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double jvm_gc_time = 11;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 jvm_gc_time = 19;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 jvm_gc_time = 19;
- getJvmGcTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 jvm_gc_time = 19;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 jvm_gc_time = 21;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 jvm_gc_time = 21;
- getJvmGcTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 jvm_gc_time = 21;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 jvm_gc_time = 6;
- getJvmGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 jvm_gc_time = 6;
- getJvmGcTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 jvm_gc_time = 6;
- getJvmGcTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- getJvmGcTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double jvm_gc_time = 8;
- getJvmGcTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double jvm_gc_time = 8;
- getJvmGcTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double jvm_gc_time = 8;
- getK() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
- getK() - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams
- getK() - Method in interface org.apache.spark.ml.clustering.KMeansParams
- getK() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getK() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
- getK() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- getK() - Method in interface org.apache.spark.ml.feature.PCAParams
- getK() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the desired number of leaf clusters.
- getK() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the number of Gaussians in the mixture model.
- getK() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Number of clusters to create (k).
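These k getters simply read back what the corresponding setter (or the default) established; a minimal spark.ml sketch with illustrative values:

    import org.apache.spark.ml.clustering.KMeans

    val kmeans = new KMeans()
      .setK(3)                  // number of clusters to create
      .setInitMode("k-means||") // initialization algorithm
    println(kmeans.getK)        // 3
    println(kmeans.getInitMode) // k-means||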
- getK() - Method in class org.apache.spark.mllib.clustering.LDA
-
Number of topics to infer, i.e., the number of soft cluster centers.
- getKappa() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Learning rate: exponential decay rate.
- getKeepLastCheckpoint() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getKeepLastCheckpoint() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
-
If using checkpointing, this indicates whether to keep the last checkpoint (vs clean up).
- getKeytabJaasParams(String, String, String) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- getKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 killed_tasks = 4;
- getKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int32 killed_tasks = 4;
- getKilledTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int32 killed_tasks = 4;
- getKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- getKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double killed_tasks = 5;
- getKilledTasks(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double killed_tasks = 5;
- getKilledTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- getKilledTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double killed_tasks = 5;
- getKilledTasksCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double killed_tasks = 5;
- getKilledTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- getKilledTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double killed_tasks = 5;
- getKilledTasksList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double killed_tasks = 5;
- getKilledTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getKilledTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
Deprecated.
- getKilledTasksSummary() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
Deprecated.
- getKilledTasksSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getKilledTasksSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getKilledTasksSummaryCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrDefault(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrDefault(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrDefault(String, int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<string, int32> killed_tasks_summary = 48;
- getKilledTasksSummaryOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<string, int32> killed_tasks_summary = 48;
- getKillTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
Deprecated.
- getKillTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
Deprecated.
- getKillTasksSummary() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
Deprecated.
- getKillTasksSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- getKillTasksSummaryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getKillTasksSummaryCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrDefault(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrDefault(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrDefault(String, int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
map<string, int32> kill_tasks_summary = 20;
- getKillTasksSummaryOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
map<string, int32> kill_tasks_summary = 20;
- getKrb5LoginModuleName() - Static method in class org.apache.spark.util.SecurityUtils
-
Krb5LoginModule package varies in different JVMs.
- getLabel() - Method in class org.apache.spark.ml.feature.LabeledPoint
- getLabel() - Method in class org.apache.spark.mllib.regression.LabeledPoint
- getLabelCol() - Method in interface org.apache.spark.ml.param.shared.HasLabelCol
- getLabels() - Method in class org.apache.spark.ml.feature.IndexToString
- getLabelType() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
- getLambda() - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Get the smoothing parameter.
- getLastUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 last_updated = 4;
- getLastUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
int64 last_updated = 4;
- getLastUpdated() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
int64 last_updated = 4;
- getLastUpdatedEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- getLatestOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- getLatestOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string latest_offset = 4;
- getLatestOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string latest_offset = 4;
- getLatestOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- getLatestOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string latest_offset = 4;
- getLatestOffsetBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string latest_offset = 4;
- getLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 launch_time = 5;
- getLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int64 launch_time = 5;
- getLaunchTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int64 launch_time = 5;
- getLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 launch_time = 5;
- getLaunchTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 launch_time = 5;
- getLaunchTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 launch_time = 5;
- getLayers() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
- getLDAModel(double[]) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
- getLeafCol() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getLeafField(String) - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
- getLeafField(String) - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
- getLearningDecay() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getLearningOffset() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getLearningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- getLeastGroupHash(String) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Gets the least element of the list associated with key in groupHash. The returned PartitionGroup is the least loaded of all groups that represent the machine "key".
- getLength() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the length of the block being read, or -1 if it is unknown.
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns the LIMIT clause for the SELECT statement.
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getLimitClause(Integer) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getLimitClause(Integer) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
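A hedged sketch of a custom dialect overriding getLimitClause (and getJDBCType from above); the URL prefix and type mapping are illustrative, not a real dialect:

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
    import org.apache.spark.sql.types.{BooleanType, DataType}

    object MyDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean =
        url.startsWith("jdbc:mydb") // illustrative URL prefix

      // LIMIT clause spliced into generated SELECT statements.
      override def getLimitClause(limit: Integer): String =
        if (limit > 0) s"FETCH FIRST $limit ROWS ONLY" else ""

      // Override the default Catalyst-to-SQL type mapping where it does not fit.
      override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
        case BooleanType => Some(JdbcType("NUMBER(1)", java.sql.Types.BOOLEAN))
        case _           => None
      }
    }

    JdbcDialects.registerDialect(MyDialect)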
- getLink() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getLinkPower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getLinkPredictionCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getList(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of array type as java.util.List.
- getListState(String, Encoder<T>) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Creates a new list state associated with stateName, or returns the existing one.
- getListState(String, Encoder<T>, TTLConfig) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Creates a new list state variable of the given type with a TTL, or returns the existing one.
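A hedged fragment, not a full StatefulProcessor: given a handle (normally obtained via getHandle inside StatefulProcessor.init), acquire the list state registered under an illustrative name:

    import org.apache.spark.sql.Encoders
    import org.apache.spark.sql.streaming.{ListState, StatefulProcessorHandle}

    // "events" is an illustrative state name; the encoder fixes the element type.
    def initState(handle: StatefulProcessorHandle): ListState[String] =
      handle.getListState[String]("events", Encoders.STRING)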
- getLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_blocks_fetched = 2;
- getLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 local_blocks_fetched = 2;
- getLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 local_blocks_fetched = 2;
- getLocalBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double local_blocks_fetched = 4;
- getLocalBlocksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double local_blocks_fetched = 4;
- getLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_bytes_read = 6;
- getLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 local_bytes_read = 6;
- getLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 local_bytes_read = 6;
- getLocalDate(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of date type as java.time.LocalDate.
- getLocalDir(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Get the path of a temporary directory.
- getLocale() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- getLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
Deprecated.
- getLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
Deprecated.
- getLocality() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
Deprecated.
- getLocalityCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- getLocalityCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getLocalityCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
map<string, int64> locality = 3;
- getLocalityMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- getLocalityMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
map<string, int64> locality = 3;
- getLocalityMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
map<string, int64> locality = 3;
- getLocalityOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- getLocalityOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
map<string, int64> locality = 3;
- getLocalityOrDefault(String, long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
map<string, int64> locality = 3;
- getLocalityOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- getLocalityOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
map<string, int64> locality = 3;
- getLocalityOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
map<string, int64> locality = 3;
- getLocalMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBlocksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_blocks_fetched = 4;
- getLocalMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_bytes_read = 8;
- getLocalMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 local_merged_bytes_read = 8;
- getLocalMergedBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 local_merged_bytes_read = 8;
- getLocalMergedBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedBytesReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_bytes_read = 8;
- getLocalMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double local_merged_chunks_fetched = 6;
- getLocalMergedChunksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double local_merged_chunks_fetched = 6;
- getLocalProperty(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get a local property set in this thread, or null if it is missing.
- getLocalProperty(String) - Method in class org.apache.spark.BarrierTaskContext
- getLocalProperty(String) - Method in class org.apache.spark.SparkContext
-
Get a local property set in this thread, or null if it is missing.
- getLocalProperty(String) - Method in class org.apache.spark.TaskContext
-
Get a local property set upstream in the driver, or null if it is missing.
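These getters pair with setLocalProperty on the context: properties set in a driver thread are propagated to the tasks it launches. A minimal sketch (the key "myapp.tag" is made up for illustration):

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.TaskContext;
    import org.apache.spark.api.java.JavaSparkContext;

    JavaSparkContext jsc = new JavaSparkContext(
        new SparkConf().setMaster("local[2]").setAppName("local-props"));
    jsc.setLocalProperty("myapp.tag", "batch-42"); // made-up key
    jsc.parallelize(Arrays.asList(1, 2, 3)).foreach(x -> {
      // Visible inside tasks launched from the same driver thread.
      String tag = TaskContext.get().getLocalProperty("myapp.tag"); // "batch-42"
    });
    String tag = jsc.getLocalProperty("myapp.tag"); // "batch-42" in the driver thread
    jsc.stop();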
- getLocalUserJarsForShell(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the local jar files that will be added to the REPL's classpath.
- GetLocations(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations
- GetLocations$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations$
- GetLocationsAndStatus(BlockId, String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
- GetLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus$
- GetLocationsMultipleBlockIds(BlockId[]) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
- GetLocationsMultipleBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$
- getLogLevel() - Static method in class org.apache.spark.util.Utils
-
Get the current log level.
- getLong() - Method in class org.apache.spark.types.variant.Variant
- getLong(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getLong(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive long.
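A minimal sketch of the primitive accessors on Row, using a RowFactory-built row for illustration:

    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;

    Row row = RowFactory.create(42L, "spark");
    long id = row.getLong(0);       // 42; throws ClassCastException if the slot is not a long
    String name = row.getString(1); // "spark"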
- getLong(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getLong(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the long type value for rowId.
- getLong(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Long.
- getLong(String, long) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a long, falling back to a default if not set.
- getLong(String, long) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the long value to which the specified key is mapped, or defaultValue if there is no mapping for the key.
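Both getLong(String, long) variants resolve a key with a fallback default. A minimal sketch with made-up keys:

    import java.util.Collections;
    import org.apache.spark.SparkConf;
    import org.apache.spark.sql.util.CaseInsensitiveStringMap;

    SparkConf conf = new SparkConf().set("spark.myapp.batchSize", "500"); // made-up key
    long batch = conf.getLong("spark.myapp.batchSize", 100L);  // 500
    long fallback = conf.getLong("spark.myapp.missing", 100L); // 100 (the default)

    CaseInsensitiveStringMap opts =
        new CaseInsensitiveStringMap(Collections.singletonMap("maxRows", "10"));
    long rows = opts.getLong("MAXROWS", -1L); // 10; keys are matched case-insensitively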
- getLongArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Long array.
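getLong and getLongArray read values previously stored through MetadataBuilder. A minimal sketch (the keys are made up):

    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.MetadataBuilder;

    Metadata meta = new MetadataBuilder()
        .putLong("maxLength", 128L)                 // made-up keys, for illustration
        .putLongArray("shape", new long[] {2L, 3L})
        .build();
    long maxLength = meta.getLong("maxLength"); // 128
    long[] shape = meta.getLongArray("shape");  // {2, 3}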
- getLongs(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets long type values from [rowId, rowId + count).
- getLoss() - Method in interface org.apache.spark.ml.param.shared.HasLoss
- getLoss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- getLossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams
- getLossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
- getLower() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
- getLowerBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
-
Returns a lambda such that Pr[X > s] is very small, where X ~ Pois(lambda).
- getLowerBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
-
Returns a threshold p such that if we conduct n Bernoulli trials with success rate = p, it is very unlikely to have more than fraction * n successes.
- getLowerBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- getLowerBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- getMap(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of map type as a Scala Map.
- getMap(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getMap(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the map type value for rowId.
- getMapOutputMetadata() - Method in class org.apache.spark.shuffle.api.metadata.MapOutputCommitMessage
- getMapState(String, Encoder<K>, Encoder<V>) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Creates a new map state associated with stateName, or returns the existing one.
- getMapState(String, Encoder<K>, Encoder<V>, TTLConfig) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Creates a new map state variable of the given type with a TTL, or returns the existing one.
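These handle methods are only callable from inside a StatefulProcessor (used with transformWithState). A fragmentary sketch, assuming it runs in a processor's init where getHandle() from the base class is in scope and "wordCounts" is a made-up state name:

    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.streaming.MapState;

    // Inside a StatefulProcessor subclass, e.g. in init().
    MapState<String, Long> counts =
        getHandle().getMapState("wordCounts", Encoders.STRING(), Encoders.LONG());
    counts.updateValue("spark", 1L); // MapState exposes per-key get/update operations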
- getMapStatus(long) - Method in class org.apache.spark.ShuffleStatus
-
Get the map output that corresponds to a given mapId.
- GetMatchingBlockIds(Function1<BlockId, Object>, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
- GetMatchingBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$
- getMax() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
- getMaxBins() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMaxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMaxBlockSizeInMB() - Method in interface org.apache.spark.ml.param.shared.HasMaxBlockSizeInMB
- getMaxCategories() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams
- getMaxCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 max_cores = 4;
- getMaxCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 max_cores = 4;
- getMaxCores() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 max_cores = 4;
- getMaxDepth() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMaxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMaxDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
- getMaxFailures(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- getMaxIter() - Method in interface org.apache.spark.ml.param.shared.HasMaxIter
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the max number of k-means iterations to split clusters.
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the maximum number of iterations allowed.
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Maximum number of iterations allowed.
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.LDA
-
Maximum number of iterations allowed.
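The mllib clustering getters above pair with chained setters on the same estimator. A minimal sketch:

    import org.apache.spark.mllib.clustering.KMeans;

    KMeans kmeans = new KMeans().setK(3).setMaxIterations(20); // setters return `this`
    int iters = kmeans.getMaxIterations(); // 20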
- getMaxLocalProjDBSize() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- getMaxLocalProjDBSize() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Gets the maximum number of items allowed in a projected database before local processing.
- getMaxMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 max_memory = 19;
- getMaxMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 max_memory = 19;
- getMaxMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 max_memory = 19;
- getMaxMemoryInMB() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMaxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMaxPatternLength() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- getMaxPatternLength() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Gets the maximal pattern length (i.e.
- getMaxSentenceLength() - Method in interface org.apache.spark.ml.feature.Word2VecBase
- getMaxTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 max_tasks = 8;
- getMaxTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 max_tasks = 8;
- getMaxTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 max_tasks = 8;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double memory_bytes_spilled = 16;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double memory_bytes_spilled = 16;
- getMemoryBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double memory_bytes_spilled = 16;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 memory_bytes_spilled = 13;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 memory_bytes_spilled = 13;
- getMemoryBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 memory_bytes_spilled = 13;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 memory_bytes_spilled = 21;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 memory_bytes_spilled = 21;
- getMemoryBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 memory_bytes_spilled = 21;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 memory_bytes_spilled = 23;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 memory_bytes_spilled = 23;
- getMemoryBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 memory_bytes_spilled = 23;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 memory_bytes_spilled = 8;
- getMemoryBytesSpilled() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 memory_bytes_spilled = 8;
- getMemoryBytesSpilled() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 memory_bytes_spilled = 8;
- getMemoryBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilled(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilled(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilled(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 14;
- getMemoryBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double memory_bytes_spilled = 13;
- getMemoryBytesSpilledList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double memory_bytes_spilled = 13;
- getMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- getMemoryPerExecutorMb() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 memory_per_executor_mb = 6;
- getMemoryPerExecutorMb() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 memory_per_executor_mb = 6;
- getMemoryPerExecutorMb() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 memory_per_executor_mb = 6;
- getMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_remaining = 3;
- getMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
int64 memory_remaining = 3;
- getMemoryRemaining() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
int64 memory_remaining = 3;
- GetMemoryStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 memory_used = 5;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 memory_used = 5;
- getMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 memory_used = 5;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_used = 2;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
int64 memory_used = 2;
- getMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
int64 memory_used = 2;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 memory_used = 3;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
int64 memory_used = 3;
- getMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
int64 memory_used = 3;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 memory_used = 6;
- getMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
int64 memory_used = 6;
- getMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
int64 memory_used = 6;
- getMemoryUsedBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 memory_used_bytes = 8;
- getMemoryUsedBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 memory_used_bytes = 8;
- getMemoryUsedBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 memory_used_bytes = 8;
- getMemSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 mem_size = 8;
- getMemSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
int64 mem_size = 8;
- getMemSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
int64 mem_size = 8;
- getMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCount(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCount(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCount(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double merged_fetch_fallback_count = 2;
- getMergedFetchFallbackCountList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double merged_fetch_fallback_count = 2;
- getMergerLocs() - Method in class org.apache.spark.ShuffleDependency
- getMessage() - Method in exception org.apache.spark.sql.AnalysisException
- getMessage() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- getMessage(String, Map<String, String>) - Static method in class org.apache.spark.SparkThrowableHelper
- getMessage(String, Map<String, String>, String) - Static method in class org.apache.spark.SparkThrowableHelper
- getMessage(SparkThrowable, Enumeration.Value) - Static method in class org.apache.spark.SparkThrowableHelper
- getMessageParameters() - Method in exception org.apache.spark.SparkException
- getMessageParameters() - Method in interface org.apache.spark.SparkThrowable
- getMessageParameters() - Method in exception org.apache.spark.sql.AnalysisException
- getMessageParameters() - Method in exception org.apache.spark.sql.exceptions.SqlScriptingException
- getMessageParameters() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- getMessageParameters(String) - Method in class org.apache.spark.ErrorClassesJsonReader
- getMessageParameters(String) - Static method in class org.apache.spark.SparkThrowableHelper
- getMessageTemplate(String) - Method in class org.apache.spark.ErrorClassesJsonReader
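The getMessage/getMessageParameters family backs Spark's error-class framework, in which exceptions carry a structured error class plus parameters that are rendered through a message template. A hedged sketch of reading them from an AnalysisException (the query and table name are made up):

    import java.util.Map;
    import org.apache.spark.sql.AnalysisException;
    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .master("local[2]").appName("error-demo").getOrCreate();
    try {
      spark.sql("SELECT * FROM no_such_table").collect(); // made-up table name
    } catch (AnalysisException e) {
      String rendered = e.getMessage();            // built from the error-class template
      Map<String, String> params = e.getMessageParameters();
      String errorClass = e.getErrorClass();       // structured error identifier
    }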
- getMetadata() - Method in class org.apache.spark.types.variant.Variant
- getMetadata(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Metadata.
- getMetadataArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Metadata array.
- getMetadataKey(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getMetadataOutput(SparkSession, Map<String, String>, Option<StructType>) - Method in interface org.apache.spark.sql.sources.SupportsStreamSourceMetadataColumns
-
Returns the metadata columns that should be added to the schema of the Stream Source.
- getMetricDistribution(String) - Method in class org.apache.spark.status.api.v1.ExecutorPeakMetricsDistributions
-
Returns the distributions for the specified metric.
- getMetricLabel() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- getMetricLabel() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- getMetricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
Deprecated.
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
-
Deprecated.
- getMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
Deprecated.
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
Deprecated.
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
Deprecated.
- getMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
Deprecated.
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
Deprecated.
- getMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
Deprecated.
- getMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
Deprecated.
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetrics(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetrics(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetrics(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
Get a BinaryClassificationMetrics, which can be used to get binary classification metrics such as areaUnderROC and areaUnderPR.
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
-
Get a ClusteringMetrics, which can be used to get clustering metrics such as silhouette score.
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
Get a MulticlassMetrics, which can be used to get multiclass classification metrics such as accuracy, weightedPrecision, etc.
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
-
Get a MultilabelMetrics, which can be used to get multilabel classification metrics such as accuracy, precision, precisionByLabel, etc.
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
-
Get a RankingMetrics, which can be used to get ranking metrics such as meanAveragePrecision, meanAveragePrecisionAtK, etc.
- getMetrics(Dataset<?>) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
Get a RegressionMetrics, which can be used to get regression metrics such as rootMeanSquaredError, meanSquaredError, etc.
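Each evaluator's getMetrics(Dataset<?>) exposes the underlying mllib metrics object directly, instead of the single number returned by evaluate. A minimal sketch for the binary case, on a tiny hand-built scored DataFrame (a double rawPrediction column is accepted alongside vectors):

    import java.util.Arrays;
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;
    import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    SparkSession spark = SparkSession.builder()
        .master("local[2]").appName("metrics-demo").getOrCreate();
    Dataset<Row> predictions = spark.createDataFrame(
        Arrays.asList(RowFactory.create(0.9, 1.0), RowFactory.create(0.2, 0.0)),
        new StructType()
            .add("rawPrediction", DataTypes.DoubleType)
            .add("label", DataTypes.DoubleType));
    BinaryClassificationMetrics metrics =
        new BinaryClassificationEvaluator().getMetrics(predictions);
    double auc = metrics.areaUnderROC(); // also available: metrics.areaUnderPR()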
- getMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
map<string, int64> metrics = 1;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
map<string, string> metrics = 3;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
map<string, string> metrics = 8;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
-
map<string, int64> metrics = 1;
- getMetricsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
map<string, int64> metrics = 1;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
map<string, string> metrics = 3;
- getMetricsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
map<string, string> metrics = 3;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- getMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
map<string, string> metrics = 8;
- getMetricsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
map<string, string> metrics = 8;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- getMetricsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- getMetricsOrDefault(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
-
map<string, int64> metrics = 1;
- getMetricsOrDefault(String, long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
map<string, int64> metrics = 1;
- getMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- getMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
map<string, string> metrics = 3;
- getMetricsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
map<string, string> metrics = 3;
- getMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- getMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
map<string, string> metrics = 8;
- getMetricsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
map<string, string> metrics = 8;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
-
map<string, int64> metrics = 1;
- getMetricsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsOrBuilder
-
map<string, int64> metrics = 1;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
map<string, string> metrics = 3;
- getMetricsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
map<string, string> metrics = 3;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- getMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
map<string, string> metrics = 8;
- getMetricsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
map<string, string> metrics = 8;
- getMetricsProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsProperties(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsPropertiesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- getMetricsSources(String) - Method in class org.apache.spark.BarrierTaskContext
- getMetricsSources(String) - Method in class org.apache.spark.TaskContext
-
::DeveloperApi:: Returns all metrics sources with the given name that are associated with the instance that runs the task.
- getMetricType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- getMetricType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string metric_type = 3;
- getMetricType() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string metric_type = 3;
- getMetricTypeBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- getMetricTypeBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string metric_type = 3;
- getMetricTypeBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string metric_type = 3;
- getMetricValue(MemoryManager) - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType
- getMetricValues() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getMetricValues() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
Deprecated.
- getMetricValues() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
Deprecated.
- getMetricValues(MemoryManager) - Method in interface org.apache.spark.metrics.ExecutorMetricType
- getMetricValues(MemoryManager) - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType
- getMetricValuesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getMetricValuesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getMetricValuesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, string> metric_values = 14;
- getMetricValuesIsNull() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
bool metric_values_is_null = 13;
- getMetricValuesIsNull() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
bool metric_values_is_null = 13;
- getMetricValuesIsNull() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
bool metric_values_is_null = 13;
- getMetricValuesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- getMetricValuesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, string> metric_values = 14;
- getMetricValuesMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, string> metric_values = 14;
- getMetricValuesOrDefault(long, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- getMetricValuesOrDefault(long, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, string> metric_values = 14;
- getMetricValuesOrDefault(long, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, string> metric_values = 14;
- getMetricValuesOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- getMetricValuesOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<int64, string> metric_values = 14;
- getMetricValuesOrThrow(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<int64, string> metric_values = 14;
- getMin() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
- getMinConfidence() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
- getMinCount() - Method in interface org.apache.spark.ml.feature.Word2VecBase
- getMinDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
- getMinDivisibleClusterSize() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
- getMinDivisibleClusterSize() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster.
- getMinDocFreq() - Method in interface org.apache.spark.ml.feature.IDFBase
- getMiniBatchFraction() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
- getMiniBatchFraction() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Mini-batch fraction, which sets the fraction of documents sampled and used in each iteration.
- getMinInfoGain() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMinInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMinInstancesPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMinInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMinSupport() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
- getMinSupport() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- getMinSupport() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Get the minimal support (i.e.
- getMinTF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
- getMinTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- getMinWeightFractionPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- getMinWeightFractionPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getMissingValue() - Method in interface org.apache.spark.ml.feature.ImputerParams
- getMode(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the OpenCV representation as an int.
- getModelType() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams
- getModelType() - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Get the model type.
- getModifiedConfigs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getModifiedConfigs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
Deprecated.
- getModifiedConfigs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
Deprecated.
- getModifiedConfigsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- getModifiedConfigsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getModifiedConfigsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<string, string> modified_configs = 6;
- getModifiedConfigsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
map<string, string> modified_configs = 6;
- getModifiedConfigsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
map<string, string> modified_configs = 6;
- getMutableAttributes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getMutableCustomMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
Deprecated.
- getMutableDurationMs() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getMutableEventTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getMutableExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getMutableExecutorLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
Deprecated.
- getMutableExecutorResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
Deprecated.
- getMutableExecutorSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getMutableJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getMutableJobsValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getMutableKilledTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getMutableKillTasksSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
Deprecated.
- getMutableLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
Deprecated.
- getMutableMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
Deprecated.
- getMutableMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
Deprecated.
- getMutableMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
Deprecated.
- getMutableMetricValues() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getMutableModifiedConfigs() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
Deprecated.
- getMutableObservedMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getMutableProcessLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
Deprecated.
- getMutableResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getMutableTaskResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
Deprecated.
- getMutableTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getN() - Method in class org.apache.spark.ml.feature.NGram
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
optional string name = 1;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
optional string name = 1;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string name = 2;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string name = 2;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string name = 1;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string name = 39;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string name = 39;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string name = 1;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string name = 1;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string name = 1;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- getName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string name = 3;
- getName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string name = 3;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
optional string name = 1;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
optional string name = 1;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string name = 2;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string name = 2;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string name = 1;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string name = 39;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string name = 39;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string name = 1;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string name = 1;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string name = 1;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- getNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string name = 3;
- getNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string name = 3;
- getNames() - Method in class org.apache.spark.ml.feature.VectorSlicer
- getNChannels(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the number of channels in the image
- getNode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNode() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNode(int, Node) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Traces down from a root node to get the node with the given node index.
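  Example (a minimal Scala sketch; assumes `model` is a trained org.apache.spark.mllib.tree.model.DecisionTreeModel):
    import org.apache.spark.mllib.tree.model.Node
    val root = model.topNode         // node indices are 1-based; the root is index 1
    val left = Node.getNode(2, root) // index 2 is the root's left child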
- getNodeBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNodeOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNodeOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNodeOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- getNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- getNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNodesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- getNonnegative() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getNumActiveStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_stages = 16;
- getNumActiveStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_active_stages = 16;
- getNumActiveStages() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_active_stages = 16;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_tasks = 10;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_active_tasks = 10;
- getNumActiveTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_active_tasks = 10;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_active_tasks = 2;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
-
int32 num_active_tasks = 2;
- getNumActiveTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryOrBuilder
-
int32 num_active_tasks = 2;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_active_tasks = 5;
- getNumActiveTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_active_tasks = 5;
- getNumActiveTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_active_tasks = 5;
- getNumber() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- getNumber() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- getNumber() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
- getNumber() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- getNumBins() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- getNumBins() - Method in class org.apache.spark.sql.util.NumericHistogram
-
Returns the number of bins.
- getNumBuckets() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
- getNumBucketsArray() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
- getNumBytesWritten() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
-
Returns the number of bytes written either by this writer's output stream opened by ShufflePartitionWriter.openStream() or the byte channel opened by ShufflePartitionWriter.openChannelWrapper().
- getNumCachedPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_cached_partitions = 4;
- getNumCachedPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
int32 num_cached_partitions = 4;
- getNumCachedPartitions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
int32 num_cached_partitions = 4;
- getNumClasses() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getNumClasses(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Examine a schema to identify the number of classes in a label column.
- getNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_indices = 15;
- getNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_completed_indices = 15;
- getNumCompletedIndices() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_completed_indices = 15;
- getNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_completed_indices = 9;
- getNumCompletedIndices() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_completed_indices = 9;
- getNumCompletedIndices() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_completed_indices = 9;
- getNumCompletedJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_jobs = 1;
- getNumCompletedJobs() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
-
int32 num_completed_jobs = 1;
- getNumCompletedJobs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AppSummaryOrBuilder
-
int32 num_completed_jobs = 1;
- getNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_stages = 2;
- getNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
-
int32 num_completed_stages = 2;
- getNumCompletedStages() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AppSummaryOrBuilder
-
int32 num_completed_stages = 2;
- getNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_stages = 17;
- getNumCompletedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_completed_stages = 17;
- getNumCompletedStages() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_completed_stages = 17;
- getNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_tasks = 11;
- getNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_completed_tasks = 11;
- getNumCompletedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_completed_tasks = 11;
- getNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_completed_tasks = 3;
- getNumCompletedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
-
int32 num_completed_tasks = 3;
- getNumCompletedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryOrBuilder
-
int32 num_completed_tasks = 3;
- getNumCompleteTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_complete_tasks = 6;
- getNumCompleteTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_complete_tasks = 6;
- getNumCompleteTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_complete_tasks = 6;
- getNumFailedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_stages = 19;
- getNumFailedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_failed_stages = 19;
- getNumFailedStages() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_failed_stages = 19;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_tasks = 13;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_failed_tasks = 13;
- getNumFailedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_failed_tasks = 13;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_failed_tasks = 4;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
-
int32 num_failed_tasks = 4;
- getNumFailedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryOrBuilder
-
int32 num_failed_tasks = 4;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_failed_tasks = 7;
- getNumFailedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_failed_tasks = 7;
- getNumFailedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_failed_tasks = 7;
- getNumFeatures() - Method in interface org.apache.spark.ml.param.shared.HasNumFeatures
- getNumFeatures() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
The dimension of training features.
- getNumFeatures(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Examine a schema to identify the number of features in a vector column.
- getNumFolds() - Method in interface org.apache.spark.ml.tuning.CrossValidatorParams
- getNumHashTables() - Method in interface org.apache.spark.ml.feature.LSHParams
- getNumInputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
int64 num_input_rows = 5;
- getNumInputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
int64 num_input_rows = 5;
- getNumInputRows() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
int64 num_input_rows = 5;
- getNumItemBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getNumIterations() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_killed_tasks = 14;
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_killed_tasks = 14;
- getNumKilledTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_killed_tasks = 14;
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_killed_tasks = 5;
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
-
int32 num_killed_tasks = 5;
- getNumKilledTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryOrBuilder
-
int32 num_killed_tasks = 5;
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_killed_tasks = 8;
- getNumKilledTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_killed_tasks = 8;
- getNumKilledTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_killed_tasks = 8;
- getNumObjFields() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- getNumOutputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
int64 num_output_rows = 2;
- getNumOutputRows() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
int64 num_output_rows = 2;
- getNumOutputRows() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
int64 num_output_rows = 2;
- getNumPartitions() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the number of partitions in this RDD.
- getNumPartitions() - Method in interface org.apache.spark.ml.feature.Word2VecBase
- getNumPartitions() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
- getNumPartitions() - Method in class org.apache.spark.rdd.RDD
-
Returns the number of partitions of this RDD.
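  Example (a minimal Scala sketch; assumes an active SparkContext named `sc`):
    val rdd = sc.parallelize(1 to 100, numSlices = 4)
    println(rdd.getNumPartitions) // prints 4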
- getNumPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_partitions = 3;
- getNumPartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
int32 num_partitions = 3;
- getNumPartitions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
int32 num_partitions = 3;
- getNumRowsDroppedByWatermark() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_dropped_by_watermark = 9;
- getNumRowsDroppedByWatermark() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_rows_dropped_by_watermark = 9;
- getNumRowsDroppedByWatermark() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_rows_dropped_by_watermark = 9;
- getNumRowsRemoved() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_removed = 5;
- getNumRowsRemoved() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_rows_removed = 5;
- getNumRowsRemoved() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_rows_removed = 5;
- getNumRowsTotal() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_total = 2;
- getNumRowsTotal() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_rows_total = 2;
- getNumRowsTotal() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_rows_total = 2;
- getNumRowsUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_updated = 3;
- getNumRowsUpdated() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_rows_updated = 3;
- getNumRowsUpdated() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_rows_updated = 3;
- getNumShufflePartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_shuffle_partitions = 10;
- getNumShufflePartitions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_shuffle_partitions = 10;
- getNumShufflePartitions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_shuffle_partitions = 10;
- getNumSkippedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_stages = 18;
- getNumSkippedStages() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_skipped_stages = 18;
- getNumSkippedStages() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_skipped_stages = 18;
- getNumSkippedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_tasks = 12;
- getNumSkippedTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_skipped_tasks = 12;
- getNumSkippedTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_skipped_tasks = 12;
- getNumStateStoreInstances() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_state_store_instances = 11;
- getNumStateStoreInstances() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
int64 num_state_store_instances = 11;
- getNumStateStoreInstances() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
int64 num_state_store_instances = 11;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_tasks = 9;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
int32 num_tasks = 9;
- getNumTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
int32 num_tasks = 9;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_tasks = 1;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
-
int32 num_tasks = 1;
- getNumTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryOrBuilder
-
int32 num_tasks = 1;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_tasks = 4;
- getNumTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 num_tasks = 4;
- getNumTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 num_tasks = 4;
- getNumTopFeatures() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getNumTrees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
Number of trees in ensemble
- getNumTrees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
Number of trees in ensemble
- getNumTrees() - Method in interface org.apache.spark.ml.tree.RandomForestParams
- getNumUserBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Get the number of values, either from numValues or from values.
- getObjFieldValues(Object, Object[]) - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- getObservedMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
Deprecated.
- getObservedMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
Deprecated.
- getObservedMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
Deprecated.
- getObservedMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- getObservedMetricsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getObservedMetricsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> observed_metrics = 12;
- getObservedMetricsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
map<string, string> observed_metrics = 12;
- getObservedMetricsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
map<string, string> observed_metrics = 12;
- getOffHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_remaining = 8;
- getOffHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 off_heap_memory_remaining = 8;
- getOffHeapMemoryRemaining() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 off_heap_memory_remaining = 8;
- getOffHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_used = 6;
- getOffHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 off_heap_memory_used = 6;
- getOffHeapMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 off_heap_memory_used = 6;
- getOffset() - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReader
-
Get the offset of the current record, or the start offset if no records have been read.
- getOffsetClause(Integer) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getOffsetClause(Integer) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns the OFFSET clause for the SELECT statement
- getOffsetClause(Integer) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getOffsetClause(Integer) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getOffsetCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getOldBoostingStrategy(Map<Object, Object>, Enumeration.Value) - Method in interface org.apache.spark.ml.tree.GBTParams
-
(private[ml]) Create a BoostingStrategy instance to use with the old API.
- getOldDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Get docConcentration used by spark.mllib LDA
- getOldImpurity() - Method in interface org.apache.spark.ml.tree.HasVarianceImpurity
-
Convert new impurity to old impurity.
- getOldImpurity() - Method in interface org.apache.spark.ml.tree.TreeClassifierParams
-
Convert new impurity to old impurity.
- getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams
-
(private[ml]) Convert new loss to old loss.
- getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTParams
-
Get old Gradient Boosting Loss type
- getOldLossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
-
(private[ml]) Convert new loss to old loss.
- getOldOptimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getOldStrategy(Map<Object, Object>, int, Enumeration.Value, Impurity) - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
-
Create a Strategy instance to use with the old API.
- getOldStrategy(Map<Object, Object>, int, Enumeration.Value, Impurity, double) - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
(private[ml]) Create a Strategy instance to use with the old API.
- getOldTopicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Get topicConcentration used by spark.mllib LDA
- getOnHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_remaining = 7;
- getOnHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 on_heap_memory_remaining = 7;
- getOnHeapMemoryRemaining() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 on_heap_memory_remaining = 7;
- getOnHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_used = 5;
- getOnHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 on_heap_memory_used = 5;
- getOnHeapMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 on_heap_memory_used = 5;
- getOperatorName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- getOperatorName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
optional string operator_name = 1;
- getOperatorName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
optional string operator_name = 1;
- getOperatorNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- getOperatorNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
optional string operator_name = 1;
- getOperatorNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
optional string operator_name = 1;
- getOptimizeDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getOptimizeDocConcentration() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Indicates whether docConcentration (the Dirichlet parameter for the document-topic distribution) will be optimized during training.
- getOptimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getOptimizer() - Method in class org.apache.spark.mllib.clustering.LDA
-
LDAOptimizer used to perform the actual calculation
- getOption() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the state value as a scala Option.
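  Example (a minimal Scala sketch of a state-update function for use with mapGroupsWithState; names are illustrative):
    import org.apache.spark.sql.streaming.GroupState
    def updateCount(key: String, events: Iterator[String], state: GroupState[Long]): Long = {
      val current = state.getOption.getOrElse(0L) // read any existing state as a Scala Option
      val updated = current + events.size
      state.update(updated)
      updated
    }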
- getOption() - Method in interface org.apache.spark.sql.streaming.ValueState
-
Get the state as an Option if it exists, or None otherwise.
- getOption() - Method in class org.apache.spark.streaming.State
-
Get the state as a scala.Option.
- getOption(String) - Method in class org.apache.spark.SparkConf
-
Get a parameter as an Option
- getOption(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of the Spark runtime configuration property for the given key.
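  Example (a minimal Scala sketch; assumes an active SparkSession named `spark`, and the key is illustrative):
    spark.conf.getOption("spark.sql.shuffle.partitions") match {
      case Some(v) => println(s"shuffle partitions: $v")
      case None    => println("not set")
    }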
- getOptional(boolean, Function0<T>) - Static method in class org.apache.spark.status.protobuf.Utils
- getOrCreate() - Static method in class org.apache.spark.SparkContext
-
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
- getOrCreate() - Method in class org.apache.spark.sql.SparkSession.Builder
-
Gets an existing SparkSession or, if there is no existing one, creates a new one based on the options set in this builder.
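  Example (a minimal Scala sketch; the app name and master are illustrative):
    import org.apache.spark.sql.SparkSession
    val spark = SparkSession.builder()
      .appName("example")
      .master("local[*]")
      .getOrCreate() // reuses the active session if one already exists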
-
Deprecated. Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<JavaStreamingContext>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<JavaStreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(SparkConf) - Static method in class org.apache.spark.SparkContext
-
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
- getOrCreate(SparkContext) - Static method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use SparkSession.builder instead. Since 2.0.0.
- getOrCreateSparkSession(JavaSparkContext, Map<Object, Object>, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- getOrDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Gets the value of a param in the embedded param map or its default value.
- getOrDiscoverAllResources(SparkConf, String, Option<String>) - Static method in class org.apache.spark.resource.ResourceUtils
-
Gets all allocated resource information for the input component from the input resources file and the application-level Spark configs.
- getOrDiscoverAllResourcesForResourceProfile(Option<String>, String, ResourceProfile, SparkConf) - Static method in class org.apache.spark.resource.ResourceUtils
-
This function is similar to getOrDiscoverAllResources, except that it uses the ResourceProfile information instead of the application-level configs.
- getOrElse(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
-
Returns the value associated with a param or a default value.
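  Example (a minimal Scala sketch; the parameter values are illustrative):
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap
    val lr = new LogisticRegression()
    val paramMap = ParamMap(lr.maxIter -> 20)
    paramMap.getOrElse(lr.maxIter, 10)   // 20: the param is present in the map
    paramMap.getOrElse(lr.regParam, 0.0) // 0.0: falls back to the supplied default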
- getOrigin(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the origin of the image
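  Example (a minimal Scala sketch; assumes an active SparkSession named `spark` and an illustrative image directory):
    import org.apache.spark.ml.image.ImageSchema
    val df = spark.read.format("image").load("data/images")
    val imageRow = df.select("image").head().getStruct(0)
    println(ImageSchema.getOrigin(imageRow))    // file path the image was loaded from
    println(ImageSchema.getNChannels(imageRow)) // number of color channels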
- getOutgoingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdges(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutgoingEdgesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- getOutputAttrGroupFromData(Dataset<?>, Seq<String>, Seq<String>, boolean) - Static method in class org.apache.spark.ml.feature.OneHotEncoderCommon
-
This method is called when we want to generate an AttributeGroup from actual data for the one-hot encoder.
-
int64 output_bytes = 7;
- getOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 output_bytes = 7;
- getOutputBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 output_bytes = 7;
- getOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_bytes = 26;
- getOutputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 output_bytes = 26;
- getOutputBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 output_bytes = 26;
- getOutputBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- getOutputBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_bytes = 8;
- getOutputBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_bytes = 8;
- getOutputBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- getOutputBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_bytes = 8;
- getOutputBytesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_bytes = 8;
- getOutputBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- getOutputBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_bytes = 8;
- getOutputBytesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_bytes = 8;
- getOutputBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_bytes_written = 28;
- getOutputBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 output_bytes_written = 28;
- getOutputBytesWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 output_bytes_written = 28;
- getOutputCol() - Method in interface org.apache.spark.ml.param.shared.HasOutputCol
- getOutputCols() - Method in interface org.apache.spark.ml.param.shared.HasOutputCols
- getOutputDeterministicLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputDeterministicLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputDeterministicLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputDeterministicLevelValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputDeterministicLevelValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputDeterministicLevelValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- getOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- getOutputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- getOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 output_records = 8;
- getOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 output_records = 8;
- getOutputRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 output_records = 8;
- getOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_records = 27;
- getOutputRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 output_records = 27;
- getOutputRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 output_records = 27;
- getOutputRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- getOutputRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_records = 9;
- getOutputRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_records = 9;
- getOutputRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- getOutputRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_records = 9;
- getOutputRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_records = 9;
- getOutputRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- getOutputRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double output_records = 9;
- getOutputRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double output_records = 9;
- getOutputRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_records_written = 29;
- getOutputRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 output_records_written = 29;
- getOutputRecordsWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 output_records_written = 29;
- getOutputSize(int) - Method in interface org.apache.spark.ml.ann.Layer
-
Returns the output size given the input size (not counting the stack size).
- getOutputStream(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
- getP() - Method in class org.apache.spark.ml.feature.Normalizer
- getParallelism() - Method in interface org.apache.spark.ml.param.shared.HasParallelism
- getParam(String) - Method in interface org.apache.spark.ml.param.Params
-
Gets a param by its name.
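A minimal sketch of Params.getParam, using Normalizer (whose "p" param appears elsewhere in this index): the string name resolves to the Param object, which can then be inspected through the same Params API.

    import org.apache.spark.ml.feature.Normalizer

    val normalizer = new Normalizer().setP(1.0)
    // Resolve the Param object from its string name, then inspect it.
    val pParam = normalizer.getParam("p")
    println(normalizer.explainParam(pParam))  // name, doc, default and current value
    println(normalizer.getOrDefault(pParam))  // 1.0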
- getParameter(String) - Method in class org.apache.spark.ui.XssSafeRequest
- getParameterMap() - Method in class org.apache.spark.ui.XssSafeRequest
- getParameterNames() - Method in class org.apache.spark.ui.XssSafeRequest
- getParameterOtherTable(HttpServletRequest, String) - Method in interface org.apache.spark.ui.PagedTable
-
Returns the parameters of the other tables on the page.
- getParameterValues(String) - Method in class org.apache.spark.ui.XssSafeRequest
- getParentLoggerNotImplementedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- getParents(int) - Method in class org.apache.spark.NarrowDependency
-
Get the parent partitions for a child partition.
- getParents(int) - Method in class org.apache.spark.OneToOneDependency
- getParents(int) - Method in class org.apache.spark.RangeDependency
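A minimal sketch of NarrowDependency.getParents, assuming an existing SparkContext named sc (an assumption, not part of the entries above): a OneToOneDependency maps each child partition to the single parent partition with the same index.

    import org.apache.spark.OneToOneDependency

    val rdd = sc.parallelize(1 to 100, numSlices = 4)
    val dep = new OneToOneDependency(rdd)
    println(dep.getParents(2))  // List(2): child partition 2 reads parent partition 2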
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- getParserForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
- getPartition(long, long, int) - Method in interface org.apache.spark.graphx.PartitionStrategy
-
Returns the partition number for a given edge.
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
- getPartition(Object) - Method in class org.apache.spark.HashPartitioner
- getPartition(Object) - Method in class org.apache.spark.Partitioner
- getPartition(Object) - Method in class org.apache.spark.RangePartitioner
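A minimal sketch covering both getPartition flavours above: Partitioner.getPartition maps a key to a partition index, while GraphX's PartitionStrategy.getPartition maps a (source, destination) vertex pair to an edge partition. The key value here is illustrative only.

    import org.apache.spark.HashPartitioner
    import org.apache.spark.graphx.PartitionStrategy

    // Key-based partitioning: result is in [0, 8).
    val partitioner = new HashPartitioner(8)
    println(partitioner.getPartition("some-key"))

    // Edge-based partitioning: place edge (1L, 2L) into one of 8 partitions.
    println(PartitionStrategy.EdgePartition2D.getPartition(1L, 2L, 8))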
- getPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 partition_id = 4;
- getPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int32 partition_id = 4;
- getPartitionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int32 partition_id = 4;
- getPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 partition_id = 4;
- getPartitionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int32 partition_id = 4;
- getPartitionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int32 partition_id = 4;
- getPartitionId() - Static method in class org.apache.spark.TaskContext
-
Returns the partition id of the currently active TaskContext.
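A minimal sketch, assuming an existing SparkContext named sc: TaskContext.getPartitionId() can be called from task code to tag each record with the partition that processed it.

    import org.apache.spark.TaskContext

    val tagged = sc.parallelize(1 to 8, numSlices = 4)
      .map(x => (TaskContext.getPartitionId(), x))
    tagged.collect().foreach(println)  // e.g. (0,1), (0,2), (1,3), ...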
- getPartitionLengths() - Method in class org.apache.spark.shuffle.api.metadata.MapOutputCommitMessage
- getPartitionMetadataByFilterError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- getPartitions() - Method in class org.apache.spark.api.r.BaseRRDD
- getPartitions() - Method in class org.apache.spark.rdd.CoGroupedRDD
- getPartitions() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- getPartitions() - Method in class org.apache.spark.rdd.HadoopRDD
- getPartitions() - Method in class org.apache.spark.rdd.JdbcRDD
- getPartitions() - Method in class org.apache.spark.rdd.NewHadoopRDD
- getPartitions() - Method in class org.apache.spark.rdd.ShuffledRDD
- getPartitions() - Method in class org.apache.spark.rdd.UnionRDD
- getPartitions() - Method in class org.apache.spark.status.LiveRDD
- getPartitions(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitions(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitions(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- getPartitionWriter(int) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
-
Creates a writer that can open an output stream to persist bytes targeted for a given reduce partition id.
- getPath() - Method in class org.apache.spark.input.PortableDataStream
- getPattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double peak_execution_memory = 15;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double peak_execution_memory = 15;
- getPeakExecutionMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double peak_execution_memory = 15;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 peak_execution_memory = 23;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 peak_execution_memory = 23;
- getPeakExecutionMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 peak_execution_memory = 23;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 peak_execution_memory = 25;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 peak_execution_memory = 25;
- getPeakExecutionMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 peak_execution_memory = 25;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 peak_execution_memory = 10;
- getPeakExecutionMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 peak_execution_memory = 10;
- getPeakExecutionMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 peak_execution_memory = 10;
- getPeakExecutionMemory(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemory(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemory(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double peak_execution_memory = 12;
- getPeakExecutionMemoryList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double peak_execution_memory = 12;
- getPeakExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakExecutorMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- getPeakMemoryMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- GetPeers(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers
- GetPeers$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers$
- getPercentile() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getPersistentRDDs() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Returns a Java map of JavaRDDs that have marked themselves as persistent via a cache() call.
- getPersistentRDDs() - Method in class org.apache.spark.SparkContext
-
Returns an immutable map of RDDs that have marked themselves as persistent via a cache() call.
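A minimal sketch, assuming an existing SparkContext named sc: once an RDD is cached and materialized, it appears in getPersistentRDDs keyed by its id.

    val data = sc.parallelize(1 to 10).setName("myRdd").cache()
    data.count()  // materialize the cache
    sc.getPersistentRDDs.foreach { case (id, rdd) =>
      println(s"RDD $id: ${rdd.name} at ${rdd.getStorageLevel}")
    }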
- getPhysicalPlanDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- getPhysicalPlanDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string physical_plan_description = 5;
- getPhysicalPlanDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string physical_plan_description = 5;
- getPhysicalPlanDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- getPhysicalPlanDescriptionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string physical_plan_description = 5;
- getPhysicalPlanDescriptionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string physical_plan_description = 5;
- getPmml() - Method in interface org.apache.spark.mllib.pmml.export.PMMLModelExport
- getPoissonSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Return the per-partition sampling function used for sampling with replacement.
- getPoolForName(String) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Return the pool associated with the given name, if one exists.
- getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
- getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest
- getPosition() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.TrimHorizon
- getPowerIterationClustering(int, String, int, String, String, String) - Static method in class org.apache.spark.ml.r.PowerIterationClusteringWrapper
- getPredictionCol() - Method in interface org.apache.spark.ml.param.shared.HasPredictionCol
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.HadoopRDD
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.NewHadoopRDD
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.UnionRDD
- getPrefixSpan(double, int, double, String) - Static method in class org.apache.spark.ml.r.PrefixSpanWrapper
- getProbabilityCol() - Method in interface org.apache.spark.ml.param.shared.HasProbabilityCol
- getProcessedRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double processed_rows_per_second = 7;
- getProcessedRowsPerSecond() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
double processed_rows_per_second = 7;
- getProcessedRowsPerSecond() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
double processed_rows_per_second = 7;
- getProcessLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
Deprecated.
- getProcessLogs() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
Deprecated.
- getProcessLogs() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
Deprecated.
- getProcessLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- getProcessLogsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getProcessLogsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
map<string, string> process_logs = 7;
- getProcessLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- getProcessLogsMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
map<string, string> process_logs = 7;
- getProcessLogsMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
map<string, string> process_logs = 7;
- getProcessLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- getProcessLogsOrDefault(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
map<string, string> process_logs = 7;
- getProcessLogsOrDefault(String, String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
map<string, string> process_logs = 7;
- getProcessLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- getProcessLogsOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
map<string, string> process_logs = 7;
- getProcessLogsOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
map<string, string> process_logs = 7;
- getProcessName() - Static method in class org.apache.spark.util.Utils
-
Returns the name of this JVM process.
- getProgress() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgress() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgress() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgressBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgressOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgressOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getProgressOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- getPropertiesFromFile(String) - Static method in class org.apache.spark.util.Utils
-
Load properties present in the given file.
- getPythonIncludes() - Method in class org.apache.spark.sql.artifact.ArtifactManager
-
Get the py-file names added to this SparkSession.
- getQuantile() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- getQuantile() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
optional string quantile = 3;
- getQuantile() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
optional string quantile = 3;
- getQuantileBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- getQuantileBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
optional string quantile = 3;
- getQuantileBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
optional string quantile = 3;
- getQuantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getQuantileProbabilities() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double quantiles = 1;
- getQuantiles(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double quantiles = 1;
- getQuantilesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double quantiles = 1;
- getQuantilesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
-
repeated double quantiles = 1;
- getQuantilesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- getQuantilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double quantiles = 1;
- getQuantilesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double quantiles = 1;
- getQuantilesValue(IndexedSeq<Object>, double[]) - Static method in class org.apache.spark.status.AppStatusUtils
- getQueryContext() - Method in exception org.apache.spark.SparkException
- getQueryContext() - Method in interface org.apache.spark.SparkThrowable
- getQueryContext() - Method in exception org.apache.spark.sql.AnalysisException
- getQueryContext(QueryContext) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- getQueryContext(QueryContext) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- getQueryContext(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- getQueryContext(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- getQueryContext(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- getQueryId() - Method in interface org.apache.spark.sql.streaming.QueryInfo
-
Returns the streaming query id associated with the stateful operator.
- getQueryInfo() - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Returns the queryInfo for the currently running task.
- getQueryName(StreamingQueryUIData) - Static method in class org.apache.spark.sql.streaming.ui.UIUtils
- getQueryStatus(StreamingQueryUIData) - Static method in class org.apache.spark.sql.streaming.ui.UIUtils
- getRandomSample(Seq<T>, int, Random) - Static method in class org.apache.spark.storage.BlockReplicationUtils
-
Get a random sample of size m from the given elements.
- getRank() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getRatingCol() - Method in interface org.apache.spark.ml.recommendation.ALSParams
- getRawPredictionCol() - Method in interface org.apache.spark.ml.param.shared.HasRawPredictionCol
- getRddBlocks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 rdd_blocks = 4;
- getRddBlocks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 rdd_blocks = 4;
- getRddBlocks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 rdd_blocks = 4;
- GetRDDBlockVisibility(RDDBlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetRDDBlockVisibility
- GetRDDBlockVisibility$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetRDDBlockVisibility$
- getRddIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- getRddIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated int64 rdd_ids = 43;
- getRddIds(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated int64 rdd_ids = 43;
- getRddIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- getRddIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated int64 rdd_ids = 43;
- getRddIdsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated int64 rdd_ids = 43;
- getRddIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- getRddIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
repeated int64 rdd_ids = 43;
- getRddIdsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
repeated int64 rdd_ids = 43;
- getRDDStorageInfo() - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Return information about which RDDs are cached, whether they are in memory or on disk, how much space they take, etc.
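A minimal sketch, assuming an existing SparkContext named sc with at least one cached RDD: each returned RDDInfo reports cached partition counts and the memory/disk footprint.

    sc.getRDDStorageInfo.foreach { info =>
      println(s"${info.name}: ${info.numCachedPartitions}/${info.numPartitions} partitions cached, " +
        s"${info.memSize} bytes in memory, ${info.diskSize} bytes on disk")
    }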
- getReadBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- getReadBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_bytes = 1;
- getReadBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_bytes = 1;
- getReadBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- getReadBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_bytes = 1;
- getReadBytesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_bytes = 1;
- getReadBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- getReadBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_bytes = 1;
- getReadBytesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_bytes = 1;
- getReadLimits() - Method in class org.apache.spark.sql.connector.read.streaming.CompositeReadLimit
- getReadRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- getReadRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_records = 2;
- getReadRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_records = 2;
- getReadRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- getReadRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_records = 2;
- getReadRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_records = 2;
- getReadRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- getReadRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double read_records = 2;
- getReadRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double read_records = 2;
- getReceiver() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
Gets the receiver object that will be sent to the worker nodes to receive data.
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_read = 19;
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double records_read = 19;
- getRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double records_read = 19;
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 records_read = 2;
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
-
int64 records_read = 2;
- getRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricsOrBuilder
-
int64 records_read = 2;
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 records_read = 7;
- getRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 records_read = 7;
- getRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 records_read = 7;
- getRecordsRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- getRecordsRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double records_read = 2;
- getRecordsRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double records_read = 2;
- getRecordsReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- getRecordsReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double records_read = 2;
- getRecordsReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double records_read = 2;
- getRecordsReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- getRecordsReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
repeated double records_read = 2;
- getRecordsReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributionsOrBuilder
-
repeated double records_read = 2;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_written = 21;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double records_written = 21;
- getRecordsWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double records_written = 21;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 records_written = 2;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
-
int64 records_written = 2;
- getRecordsWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricsOrBuilder
-
int64 records_written = 2;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 records_written = 3;
- getRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
-
int64 records_written = 3;
- getRecordsWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricsOrBuilder
-
int64 records_written = 3;
- getRecordsWritten(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- getRecordsWritten(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double records_written = 2;
- getRecordsWritten(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double records_written = 2;
- getRecordsWrittenCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- getRecordsWrittenCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double records_written = 2;
- getRecordsWrittenCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double records_written = 2;
- getRecordsWrittenList() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- getRecordsWrittenList() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
repeated double records_written = 2;
- getRecordsWrittenList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributionsOrBuilder
-
repeated double records_written = 2;
- getRegParam() - Method in interface org.apache.spark.ml.param.shared.HasRegParam
- getRelativeError() - Method in interface org.apache.spark.ml.param.shared.HasRelativeError
- getRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_blocks_fetched = 1;
- getRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 remote_blocks_fetched = 1;
- getRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 remote_blocks_fetched = 1;
- getRemoteBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_blocks_fetched = 3;
- getRemoteBlocksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_blocks_fetched = 3;
- getRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read = 4;
- getRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 remote_bytes_read = 4;
- getRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 remote_bytes_read = 4;
- getRemoteBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read = 6;
- getRemoteBytesRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read = 6;
- getRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read_to_disk = 5;
- getRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 remote_bytes_read_to_disk = 5;
- getRemoteBytesReadToDisk() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 remote_bytes_read_to_disk = 5;
- getRemoteBytesReadToDisk(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDisk(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDisk(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteBytesReadToDiskList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_bytes_read_to_disk = 7;
- getRemoteMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBlocksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_blocks_fetched = 3;
- getRemoteMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_bytes_read = 7;
- getRemoteMergedBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 remote_merged_bytes_read = 7;
- getRemoteMergedBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 remote_merged_bytes_read = 7;
- getRemoteMergedBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedBytesReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_bytes_read = 7;
- getRemoteMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedChunksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_chunks_fetched = 5;
- getRemoteMergedReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
-
int64 remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricsOrBuilder
-
int64 remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDuration(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteMergedReqsDurationList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributionsOrBuilder
-
repeated double remote_merged_reqs_duration = 9;
- getRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_reqs_duration = 8;
- getRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
int64 remote_reqs_duration = 8;
- getRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
int64 remote_reqs_duration = 8;
- getRemoteReqsDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDuration(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDuration(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double remote_reqs_duration = 9;
- getRemoteReqsDurationList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double remote_reqs_duration = 9;
- getRemoteUser() - Method in class org.apache.spark.ui.XssSafeRequest
- getRemoveReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- getRemoveReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string remove_reason = 22;
- getRemoveReason() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string remove_reason = 22;
- getRemoveReasonBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- getRemoveReasonBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string remove_reason = 22;
- getRemoveReasonBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string remove_reason = 22;
- getRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional int64 remove_time = 21;
- getRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional int64 remove_time = 21;
- getRemoveTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional int64 remove_time = 21;
- getRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional int64 remove_time = 6;
- getRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional int64 remove_time = 6;
- getRemoveTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional int64 remove_time = 6;
- getRenameColumnQuery(String, String, String, int) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getRenameColumnQuery(String, String, String, int) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getRenameColumnQuery(String, String, String, int) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getRenameColumnQuery(String, String, String, int) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- GetReplicateInfoForRDDBlocks(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks
- GetReplicateInfoForRDDBlocks$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks$
- getResource(String) - Method in class org.apache.spark.util.ChildFirstURLClassLoader
- getResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- getResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string resource_name = 1;
- getResourceName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string resource_name = 1;
- getResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- getResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
optional string resource_name = 1;
- getResourceName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequestOrBuilder
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
optional string resource_name = 1;
- getResourceNameBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequestOrBuilder
-
optional string resource_name = 1;
- getResourceProfile() - Method in class org.apache.spark.api.java.JavaRDD
-
Get the ResourceProfile specified with this RDD or None if it wasn't specified.
- getResourceProfile() - Method in class org.apache.spark.rdd.RDD
-
Get the ResourceProfile specified with this RDD or null if it wasn't specified.
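A hedged usage sketch for the two entries above, assuming an existing SparkContext `sc`; the resource amounts are arbitrary example values:

    import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder}

    // Build a profile requesting 4 cores and 8g per executor (made-up numbers).
    val execReqs = new ExecutorResourceRequests().cores(4).memory("8g")
    val profile = new ResourceProfileBuilder().require(execReqs).build()

    // Attach it to an RDD and read it back; getResourceProfile() returns
    // null when no profile was ever set on the RDD.
    val rdd = sc.parallelize(1 to 100).withResources(profile)
    val attached = rdd.getResourceProfile()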
- getResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 resource_profile_id = 29;
- getResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 resource_profile_id = 29;
- getResourceProfileId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 resource_profile_id = 29;
- getResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 resource_profile_id = 49;
- getResourceProfileId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 resource_profile_id = 49;
- getResourceProfileId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 resource_profile_id = 49;
- getResourceProfiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfiles(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResourceProfilesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- getResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
Deprecated.
- getResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
Deprecated.
- getResources() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
Deprecated.
- getResources(String) - Method in class org.apache.spark.util.ChildFirstURLClassLoader
- getResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrDefault(String, StoreTypes.ResourceInformation) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrDefault(String, StoreTypes.ResourceInformation) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrDefault(String, StoreTypes.ResourceInformation) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- getResourcesOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
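For orientation, the `map<string, ResourceInformation> resources = 28` field generates the usual protobuf map accessors listed above. A minimal sketch using only guaranteed generated methods (the "gpu"/"fpga" keys are made up):

    import org.apache.spark.status.protobuf.StoreTypes

    val info = StoreTypes.ResourceInformation.getDefaultInstance
    val summary = StoreTypes.ExecutorSummary.newBuilder()
      .putResources("gpu", info) // Builder-side writer for the map field
      .build()

    val all = summary.getResourcesMap                        // java.util.Map view
    val orElse = summary.getResourcesOrDefault("fpga", info) // fallback when key is absent
    // summary.getResourcesOrThrow("fpga") would throw IllegalArgumentException instead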
- getResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 result_fetch_start = 6;
- getResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional int64 result_fetch_start = 6;
- getResultFetchStart() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional int64 result_fetch_start = 6;
- getResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_fetch_start = 6;
- getResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 result_fetch_start = 6;
- getResultFetchStart() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 result_fetch_start = 6;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_serialization_time = 12;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double result_serialization_time = 12;
- getResultSerializationTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double result_serialization_time = 12;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_serialization_time = 20;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 result_serialization_time = 20;
- getResultSerializationTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 result_serialization_time = 20;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_serialization_time = 22;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 result_serialization_time = 22;
- getResultSerializationTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 result_serialization_time = 22;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_serialization_time = 7;
- getResultSerializationTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 result_serialization_time = 7;
- getResultSerializationTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 result_serialization_time = 7;
- getResultSerializationTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- getResultSerializationTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_serialization_time = 9;
- getResultSerializationTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_serialization_time = 9;
- getResultSerializationTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_serialization_time = 9;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_size = 10;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double result_size = 10;
- getResultSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double result_size = 10;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_size = 18;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 result_size = 18;
- getResultSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 result_size = 18;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_size = 20;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 result_size = 20;
- getResultSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 result_size = 20;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_size = 5;
- getResultSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
int64 result_size = 5;
- getResultSize() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
int64 result_size = 5;
- getResultSize(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- getResultSize(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_size = 7;
- getResultSize(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_size = 7;
- getResultSizeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- getResultSizeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_size = 7;
- getResultSizeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_size = 7;
- getResultSizeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- getResultSizeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double result_size = 7;
- getResultSizeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double result_size = 7;
- getRoaringBitMap(ShuffleBlockChunkId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
Get the RoaringBitMap for a specific ShuffleBlockChunkId
- getRollingIntervalSecs(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- getRootCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootCluster() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootClusterBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootClusterOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootClusterOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootClusterOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- getRootDirectory() - Static method in class org.apache.spark.SparkFiles
-
Get the root directory that contains files added through SparkContext.addFile().
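A short sketch of the addFile/SparkFiles round trip, assuming an existing SparkContext `sc`; the file path and name are made up:

    import org.apache.spark.SparkFiles

    sc.addFile("/tmp/config.json") // ship a file to every node

    // On an executor (or on the driver in local mode):
    val root = SparkFiles.getRootDirectory() // directory holding all added files
    val path = SparkFiles.get("config.json") // absolute path to this one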
- getRootExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 root_execution_id = 2;
- getRootExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
int64 root_execution_id = 2;
- getRootExecutionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
int64 root_execution_id = 2;
- getRow(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Returns the row in this batch at `rowId`.
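A sketch of typical use, assuming `batch` comes from a columnar scan and column 0 holds longs; note getRow returns a reused view over the batch, valid only until the next call:

    import org.apache.spark.sql.vectorized.ColumnarBatch

    // Sum column 0 of a batch row by row.
    def sumFirstColumn(batch: ColumnarBatch): Long = {
      var total = 0L
      var i = 0
      while (i < batch.numRows()) {
        val row = batch.getRow(i) // row view positioned at `i`
        if (!row.isNullAt(0)) total += row.getLong(0)
        i += 1
      }
      total
    }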
- getRpInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfoBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfoOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRpInfoOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- getRunId() - Method in interface org.apache.spark.sql.streaming.QueryInfo
-
Returns the streaming query runId associated with the stateful operator.
- getRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- getRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string run_id = 3;
- getRunId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string run_id = 3;
- getRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- getRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string run_id = 2;
- getRunId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string run_id = 2;
- getRunIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- getRunIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string run_id = 3;
- getRunIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string run_id = 3;
- getRunIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- getRunIdBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string run_id = 2;
- getRunIdBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string run_id = 2;
- getRuntime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntimeBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntimeOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntimeOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getRuntimeOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- getScalaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- getScalaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string scala_version = 3;
- getScalaVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string scala_version = 3;
- getScalaVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- getScalaVersionBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string scala_version = 3;
- getScalaVersionBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string scala_version = 3;
- getScalingVec() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
- getSchedulableByName(String) - Method in interface org.apache.spark.scheduler.Schedulable
- getSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double scheduler_delay = 14;
- getSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double scheduler_delay = 14;
- getSchedulerDelay() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double scheduler_delay = 14;
- getSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 scheduler_delay = 17;
- getSchedulerDelay() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int64 scheduler_delay = 17;
- getSchedulerDelay() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int64 scheduler_delay = 17;
- getSchedulerDelay(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- getSchedulerDelay(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double scheduler_delay = 11;
- getSchedulerDelay(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double scheduler_delay = 11;
- getSchedulerDelayCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- getSchedulerDelayCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double scheduler_delay = 11;
- getSchedulerDelayCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double scheduler_delay = 11;
- getSchedulerDelayList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- getSchedulerDelayList() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
repeated double scheduler_delay = 11;
- getSchedulerDelayList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
repeated double scheduler_delay = 11;
- getSchedulingMode() - Method in class org.apache.spark.SparkContext
-
Return the current scheduling mode.
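A tiny sketch, assuming an existing SparkContext `sc`; the returned mode reflects the spark.scheduler.mode configuration:

    import org.apache.spark.scheduler.SchedulingMode

    val mode = sc.getSchedulingMode // FIFO, FAIR or NONE
    if (mode == SchedulingMode.FAIR) println("fair scheduling enabled")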
- getSchedulingPool() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- getSchedulingPool() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string scheduling_pool = 42;
- getSchedulingPool() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string scheduling_pool = 42;
- getSchedulingPoolBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- getSchedulingPoolBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string scheduling_pool = 42;
- getSchedulingPoolBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string scheduling_pool = 42;
- getSchemaCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getSchemaCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getSchemaCommentQuery(String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getSchemaField(StructType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Get schema field.
- getSchemaFieldType(StructType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Get schema field type.
- getSchemaQuery(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- getSchemaQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
The SQL query that should be used to discover the schema of a table.
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
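A hedged sketch of overriding getSchemaQuery in a custom dialect for a hypothetical jdbc:acme driver; the LIMIT 0 form is just an example (the built-in default issues a WHERE 1=0 probe):

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

    object AcmeDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:acme")

      // Issued once to discover the table's schema; it should return no rows.
      override def getSchemaQuery(table: String): String =
        s"SELECT * FROM $table LIMIT 0"
    }

    JdbcDialects.registerDialect(AcmeDialect) // let Spark pick it up by JDBC URL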
- getSeed() - Method in interface org.apache.spark.ml.param.shared.HasSeed
- getSeed() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the random seed.
- getSeed() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the random seed
- getSeed() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The random seed for cluster initialization.
- getSeed() - Method in class org.apache.spark.mllib.clustering.LDA
-
Random seed for cluster initialization.
- getSeed() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Random seed for cluster initialization.
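These getters pair with the corresponding setters; a one-line sketch on the RDD-based KMeans (42 is an arbitrary seed value):

    import org.apache.spark.mllib.clustering.KMeans

    val kmeans = new KMeans().setK(3).setSeed(42L)
    assert(kmeans.getSeed == 42L) // reads back the seed set above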
- getSelectionMode() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
- getSelectionThreshold() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
- getSelectorType() - Method in interface org.apache.spark.ml.feature.SelectorParams
- getSeq(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of array type as a Scala Seq.
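A small sketch; the Row literal stands in for a row produced by a query whose first column is an array:

    import org.apache.spark.sql.Row

    val row = Row(Seq(1, 2, 3), "label")
    val xs: Seq[Int] = row.getSeq[Int](0) // array column at position 0 as a Scala Seq
    assert(xs.sum == 6)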
- getSeqOp(boolean, Map<K, Object>, org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator, Option<Map<K, Object>>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Returns the function used by aggregate to collect sampling statistics for each partition.
- getSequenceCol() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- getSerializationProxy(Object) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
-
Check if the given reference is an indylambda-style Scala closure.
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- getSerializedSize() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- getSessionConf(SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- getShort(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive short.
- getShort(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getShort(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getShort(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getShort(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getShort(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the short type value for rowId.
- getShorts(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Gets short type values from
[rowId, rowId + count)
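A sketch of both accessors; ColumnVector is abstract, so this uses the internal OnHeapColumnVector purely for illustration (in practice a vector usually comes from a ColumnarBatch column):

    import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector;
    import org.apache.spark.sql.types.DataTypes;

    OnHeapColumnVector vector = new OnHeapColumnVector(10, DataTypes.ShortType);
    for (int i = 0; i < 10; i++) vector.putShort(i, (short) i);
    short first = vector.getShort(0);            // value at rowId 0
    short[] firstTen = vector.getShorts(0, 10);  // values for rowIds [0, 10)
    vector.close();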
- getShuffleBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_bytes_written = 37;
- getShuffleBytesWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_bytes_written = 37;
- getShuffleBytesWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_bytes_written = 37;
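These StoreTypes accessors are generated by protobuf-java, so they follow the standard message/builder pattern; a minimal sketch (the value is illustrative):

    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.TaskDataWrapper task = StoreTypes.TaskDataWrapper.newBuilder()
        .setShuffleBytesWritten(4096L)            // int64 shuffle_bytes_written = 37
        .build();
    long written = task.getShuffleBytesWritten(); // 4096
    int wireSize = task.getSerializedSize();      // size of the serialized message, in bytes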
- getShuffleChunkCardinality(ShuffleBlockChunkId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
Get the number of map blocks in a ShuffleBlockChunk.
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_corrupt_merged_block_chunks = 33;
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_corrupt_merged_block_chunks = 33;
- getShuffleCorruptMergedBlockChunks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_corrupt_merged_block_chunks = 33;
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 53;
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_corrupt_merged_block_chunks = 53;
- getShuffleCorruptMergedBlockChunks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_corrupt_merged_block_chunks = 53;
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 42;
- getShuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_corrupt_merged_block_chunks = 42;
- getShuffleCorruptMergedBlockChunks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_corrupt_merged_block_chunks = 42;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_fetch_wait_time = 26;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_fetch_wait_time = 26;
- getShuffleFetchWaitTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_fetch_wait_time = 26;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_fetch_wait_time = 30;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_fetch_wait_time = 30;
- getShuffleFetchWaitTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_fetch_wait_time = 30;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_fetch_wait_time = 32;
- getShuffleFetchWaitTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_fetch_wait_time = 32;
- getShuffleFetchWaitTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_fetch_wait_time = 32;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_local_blocks_fetched = 25;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_local_blocks_fetched = 25;
- getShuffleLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_local_blocks_fetched = 25;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_blocks_fetched = 29;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_local_blocks_fetched = 29;
- getShuffleLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_local_blocks_fetched = 29;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_blocks_fetched = 31;
- getShuffleLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_local_blocks_fetched = 31;
- getShuffleLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_local_blocks_fetched = 31;
- getShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_bytes_read = 33;
- getShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_local_bytes_read = 33;
- getShuffleLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_local_bytes_read = 33;
- getShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_bytes_read = 35;
- getShuffleLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_local_bytes_read = 35;
- getShuffleLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_local_bytes_read = 35;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_fetch_fallback_count = 34;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_fetch_fallback_count = 34;
- getShuffleMergedFetchFallbackCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_fetch_fallback_count = 34;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_fetch_fallback_count = 54;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_fetch_fallback_count = 54;
- getShuffleMergedFetchFallbackCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_fetch_fallback_count = 54;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_fetch_fallback_count = 43;
- getShuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_fetch_fallback_count = 43;
- getShuffleMergedFetchFallbackCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_fetch_fallback_count = 43;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_blocks_fetched = 36;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_local_blocks_fetched = 36;
- getShuffleMergedLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_local_blocks_fetched = 36;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_blocks_fetched = 56;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_local_blocks_fetched = 56;
- getShuffleMergedLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_local_blocks_fetched = 56;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_blocks_fetched = 45;
- getShuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_local_blocks_fetched = 45;
- getShuffleMergedLocalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_local_blocks_fetched = 45;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_bytes_read = 40;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_local_bytes_read = 40;
- getShuffleMergedLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_local_bytes_read = 40;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_bytes_read = 60;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_local_bytes_read = 60;
- getShuffleMergedLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_local_bytes_read = 60;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_bytes_read = 49;
- getShuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_local_bytes_read = 49;
- getShuffleMergedLocalBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_local_bytes_read = 49;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_chunks_fetched = 38;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_local_chunks_fetched = 38;
- getShuffleMergedLocalChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_local_chunks_fetched = 38;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_chunks_fetched = 58;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_local_chunks_fetched = 58;
- getShuffleMergedLocalChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_local_chunks_fetched = 58;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_chunks_fetched = 47;
- getShuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_local_chunks_fetched = 47;
- getShuffleMergedLocalChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_local_chunks_fetched = 47;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_blocks_fetched = 35;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_remote_blocks_fetched = 35;
- getShuffleMergedRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_remote_blocks_fetched = 35;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 55;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_remote_blocks_fetched = 55;
- getShuffleMergedRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_remote_blocks_fetched = 55;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 44;
- getShuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_remote_blocks_fetched = 44;
- getShuffleMergedRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_remote_blocks_fetched = 44;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_bytes_read = 39;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_remote_bytes_read = 39;
- getShuffleMergedRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_remote_bytes_read = 39;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_bytes_read = 59;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_remote_bytes_read = 59;
- getShuffleMergedRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_remote_bytes_read = 59;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_bytes_read = 48;
- getShuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_remote_bytes_read = 48;
- getShuffleMergedRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_remote_bytes_read = 48;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_chunks_fetched = 37;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_remote_chunks_fetched = 37;
- getShuffleMergedRemoteChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_remote_chunks_fetched = 37;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 57;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_remote_chunks_fetched = 57;
- getShuffleMergedRemoteChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_remote_chunks_fetched = 57;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 46;
- getShuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_remote_chunks_fetched = 46;
- getShuffleMergedRemoteChunksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_remote_chunks_fetched = 46;
- getShuffleMergedRemoteReqDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_req_duration = 51;
- getShuffleMergedRemoteReqDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_merged_remote_req_duration = 51;
- getShuffleMergedRemoteReqDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_merged_remote_req_duration = 51;
- getShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_reqs_duration = 42;
- getShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_merged_remote_reqs_duration = 42;
- getShuffleMergedRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_merged_remote_reqs_duration = 42;
- getShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_reqs_duration = 62;
- getShuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_merged_remote_reqs_duration = 62;
- getShuffleMergedRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_merged_remote_reqs_duration = 62;
- getShuffleMergersCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 shuffle_mergers_count = 64;
- getShuffleMergersCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int32 shuffle_mergers_count = 64;
- getShuffleMergersCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int32 shuffle_mergers_count = 64;
- getShufflePushMergerLocations() - Method in class org.apache.spark.ShuffleStatus
- getShufflePushMergerLocations(int, int) - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the list of host locations for push-based shuffle.
- GetShufflePushMergerLocations(int, Set<String>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetShufflePushMergerLocations
- GetShufflePushMergerLocations$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetShufflePushMergerLocations$
- getShufflePushReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetricsDist() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDist() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDist() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDistBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDistOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDistOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsDistOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- getShufflePushReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShufflePushReadMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- getShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read = 9;
- getShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 shuffle_read = 9;
- getShuffleRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 shuffle_read = 9;
- getShuffleRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- getShuffleRead(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read = 10;
- getShuffleRead(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read = 10;
- getShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_read_bytes = 22;
- getShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_read_bytes = 22;
- getShuffleReadBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_read_bytes = 22;
- getShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_bytes = 34;
- getShuffleReadBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_read_bytes = 34;
- getShuffleReadBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_read_bytes = 34;
- getShuffleReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- getShuffleReadCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read = 10;
- getShuffleReadCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read = 10;
- getShuffleReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- getShuffleReadList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read = 10;
- getShuffleReadList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read = 10;
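As with all repeated protobuf fields, shuffle_read generates indexed, count, and list accessors on the message plus an add* method on the Builder; a sketch with illustrative values:

    import java.util.List;
    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.ExecutorMetricsDistributions dist =
        StoreTypes.ExecutorMetricsDistributions.newBuilder()
            .addShuffleRead(1.0)                  // repeated double shuffle_read = 10
            .addShuffleRead(2.5)
            .build();
    int n = dist.getShuffleReadCount();           // 2
    double first = dist.getShuffleRead(0);        // 1.0
    List<Double> all = dist.getShuffleReadList(); // [1.0, 2.5]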
- getShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- getShuffleReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- getShuffleReadMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
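For message-typed fields such as shuffle_read_metrics, the generated code exposes three views: get* returns the sub-message (or its default instance), get*Builder on the Builder returns a mutable sub-builder, and get*OrBuilder returns a read-only view without forcing a copy. A sketch:

    import org.apache.spark.status.protobuf.StoreTypes;

    StoreTypes.TaskMetrics.Builder builder = StoreTypes.TaskMetrics.newBuilder();
    StoreTypes.ShuffleReadMetrics.Builder sub = builder.getShuffleReadMetricsBuilder(); // mutable
    StoreTypes.TaskMetrics metrics = builder.build();
    StoreTypes.ShuffleReadMetrics read = metrics.getShuffleReadMetrics();               // built sub-message
    StoreTypes.ShuffleReadMetricsOrBuilder view = metrics.getShuffleReadMetricsOrBuilder();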
- getShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read_records = 10;
- getShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 shuffle_read_records = 10;
- getShuffleReadRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 shuffle_read_records = 10;
- getShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_records = 35;
- getShuffleReadRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_read_records = 35;
- getShuffleReadRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_read_records = 35;
- getShuffleReadRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_read_records = 11;
- getShuffleReadRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_read_records = 11;
- getShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_records_read = 23;
- getShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_records_read = 23;
- getShuffleRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_records_read = 23;
- getShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_read = 36;
- getShuffleRecordsRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_records_read = 36;
- getShuffleRecordsRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_records_read = 36;
- getShuffleRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_written = 39;
- getShuffleRecordsWritten() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_records_written = 39;
- getShuffleRecordsWritten() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_records_written = 39;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_blocks_fetched = 24;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_remote_blocks_fetched = 24;
- getShuffleRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_remote_blocks_fetched = 24;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_blocks_fetched = 28;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_remote_blocks_fetched = 28;
- getShuffleRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_remote_blocks_fetched = 28;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_blocks_fetched = 30;
- getShuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_remote_blocks_fetched = 30;
- getShuffleRemoteBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_remote_blocks_fetched = 30;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read = 27;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_remote_bytes_read = 27;
- getShuffleRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_remote_bytes_read = 27;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read = 31;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_remote_bytes_read = 31;
- getShuffleRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_remote_bytes_read = 31;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read = 33;
- getShuffleRemoteBytesRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_remote_bytes_read = 33;
- getShuffleRemoteBytesRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_remote_bytes_read = 33;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read_to_disk = 28;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_remote_bytes_read_to_disk = 28;
- getShuffleRemoteBytesReadToDisk() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_remote_bytes_read_to_disk = 28;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 32;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_remote_bytes_read_to_disk = 32;
- getShuffleRemoteBytesReadToDisk() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_remote_bytes_read_to_disk = 32;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 34;
- getShuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_remote_bytes_read_to_disk = 34;
- getShuffleRemoteBytesReadToDisk() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_remote_bytes_read_to_disk = 34;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_reqs_duration = 41;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_remote_reqs_duration = 41;
- getShuffleRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_remote_reqs_duration = 41;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_reqs_duration = 61;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_remote_reqs_duration = 61;
- getShuffleRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_remote_reqs_duration = 61;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_reqs_duration = 50;
- getShuffleRemoteReqsDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_remote_reqs_duration = 50;
- getShuffleRemoteReqsDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_remote_reqs_duration = 50;
- getShuffleTotalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_total_blocks_fetched = 29;
- getShuffleTotalBlocksFetched() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_total_blocks_fetched = 29;
- getShuffleTotalBlocksFetched() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_total_blocks_fetched = 29;
- getShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write = 11;
- getShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 shuffle_write = 11;
- getShuffleWrite() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 shuffle_write = 11;
- getShuffleWrite(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- getShuffleWrite(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write = 12;
- getShuffleWrite(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write = 12;
- getShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_bytes = 30;
- getShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_write_bytes = 30;
- getShuffleWriteBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_write_bytes = 30;
- getShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_bytes = 36;
- getShuffleWriteBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_write_bytes = 36;
- getShuffleWriteBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_write_bytes = 36;
- getShuffleWriteCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- getShuffleWriteCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write = 12;
- getShuffleWriteCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write = 12;
- getShuffleWriteList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- getShuffleWriteList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write = 12;
- getShuffleWriteList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write = 12;
- getShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- getShuffleWriteMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_records = 31;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_write_records = 31;
- getShuffleWriteRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_write_records = 31;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write_records = 12;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 shuffle_write_records = 12;
- getShuffleWriteRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 shuffle_write_records = 12;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_records = 38;
- getShuffleWriteRecords() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_write_records = 38;
- getShuffleWriteRecords() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_write_records = 38;
- getShuffleWriteRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double shuffle_write_records = 13;
- getShuffleWriteRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double shuffle_write_records = 13;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_time = 32;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
double shuffle_write_time = 32;
- getShuffleWriteTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
double shuffle_write_time = 32;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_time = 37;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 shuffle_write_time = 37;
- getShuffleWriteTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 shuffle_write_time = 37;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_write_time = 38;
- getShuffleWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 shuffle_write_time = 38;
- getShuffleWriteTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 shuffle_write_time = 38;
- getSimpleMessage() - Method in exception org.apache.spark.sql.AnalysisException
- getSimpleName(Class<?>) - Method in interface org.apache.spark.util.SparkClassUtils
-
Safer than a Class object's getSimpleName, which may throw a "Malformed class name" error in Scala.
- getSimpleName(Class<?>) - Static method in class org.apache.spark.util.Utils
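A minimal sketch; the difference matters mainly for deeply nested Scala classes, where Class.getSimpleName could throw java.lang.InternalError("Malformed class name") on older JDKs:

    import org.apache.spark.util.Utils;

    String name = Utils.getSimpleName(java.util.ArrayList.class); // "ArrayList"
    // For nested Scala classes, Utils.getSimpleName strips the enclosing
    // names itself instead of delegating to Class.getSimpleName.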
- getSink() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSink() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSink() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSinkBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSinkOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSinkOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSinkOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- getSize() - Method in class org.apache.spark.ml.feature.VectorSizeHint
-
Gets the value of the `size` param.
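A minimal sketch of the param getter (the column name and size below are illustrative):

    import org.apache.spark.ml.feature.VectorSizeHint;

    VectorSizeHint hint = new VectorSizeHint()
        .setInputCol("features")
        .setSize(3);           // declare that "features" vectors have length 3
    int size = hint.getSize(); // 3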
- getSizeAsBytes(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes; throws a NoSuchElementException if it's not set.
- getSizeAsBytes(String, long) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes, falling back to a default if not set.
- getSizeAsBytes(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes, falling back to a default if not set.
- getSizeAsGb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set.
- getSizeAsGb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Gibibytes, falling back to a default if not set.
- getSizeAsKb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set.
- getSizeAsKb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Kibibytes, falling back to a default if not set.
- getSizeAsMb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set.
- getSizeAsMb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Mebibytes, falling back to a default if not set.
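For example: size strings accept unit suffixes (b, k, m, g, t), each getter converts to its own unit, and the two-argument forms fall back to the given default. The key spark.missing.size below is illustrative:

    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf().set("spark.buffer.size", "64k");
    long bytes = conf.getSizeAsBytes("spark.buffer.size");      // 65536
    long kib   = conf.getSizeAsKb("spark.buffer.size");         // 64
    long mib   = conf.getSizeAsMb("spark.missing.size", "32m"); // 32 (key unset, default used)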
- getSizeForBlock(int) - Method in interface org.apache.spark.scheduler.MapStatus
-
Estimated size for the reduce block, in bytes.
- getSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the current size in bytes of this `Matrix`.
- getSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- getSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
repeated int32 skipped_stages = 2;
- getSkippedStages(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
repeated int32 skipped_stages = 2;
- getSkippedStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- getSkippedStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
repeated int32 skipped_stages = 2;
- getSkippedStagesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
repeated int32 skipped_stages = 2;
- getSkippedStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- getSkippedStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
repeated int32 skipped_stages = 2;
- getSkippedStagesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
repeated int32 skipped_stages = 2;
- getSlotDescs() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- getSmoothing() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams
- getSolver() - Method in interface org.apache.spark.ml.param.shared.HasSolver
- getSortedTaskSetQueue() - Method in interface org.apache.spark.scheduler.Schedulable
- getSources(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSources(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSources(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSourcesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- getSparkClassLoader() - Method in interface org.apache.spark.util.SparkClassUtils
- getSparkClassLoader() - Static method in class org.apache.spark.util.Utils
- getSparkHome() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference).
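A sketch of the lookup order (the local master and app name are illustrative):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.Optional;

    JavaSparkContext jsc = new JavaSparkContext(
        new SparkConf().setMaster("local[1]").setAppName("spark-home-demo"));
    Optional<String> home = jsc.getSparkHome(); // absent() if none of the three sources is set
    jsc.stop();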
- getSparkOrYarnConfig(SparkConf, String, String) - Static method in class org.apache.spark.util.Utils
-
Return the value of a config either through the SparkConf or the Hadoop configuration.
- getSparkProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkProperties(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkPropertiesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- getSparkUser() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- getSparkUser() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string spark_user = 6;
- getSparkUser() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string spark_user = 6;
- getSparkUserBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- getSparkUserBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string spark_user = 6;
- getSparkUserBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string spark_user = 6;
- getSparseSizeInBytes(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the size of the minimal sparse representation of this `Matrix`.
- getSpeculationSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummary() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummaryBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummaryOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummaryOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculationSummaryOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- getSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
bool speculative = 12;
- getSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
bool speculative = 12;
- getSpeculative() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
bool speculative = 12;
- getSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool speculative = 12;
- getSpeculative() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
bool speculative = 12;
- getSpeculative() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
bool speculative = 12;
- getSplit() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
- getSplits() - Method in class org.apache.spark.ml.feature.Bucketizer
- getSplitsArray() - Method in class org.apache.spark.ml.feature.Bucketizer
- getSql() - Method in class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
-
Returns the SQL string (Spark SQL dialect) of the default value expression.
- getSqlExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
optional int64 sql_execution_id = 3;
- getSqlExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
optional int64 sql_execution_id = 3;
- getSqlExecutionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
optional int64 sql_execution_id = 3;
- getSqlState() - Method in interface org.apache.spark.SparkThrowable
- getSqlState(String) - Method in class org.apache.spark.ErrorClassesJsonReader
- getSqlState(String) - Static method in class org.apache.spark.SparkThrowableHelper
- getSrcCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapperOrBuilder
-
int32 stage_attempt_id = 2;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 stage_attempt_id = 41;
- getStageAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int32 stage_attempt_id = 41;
- getStageAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int32 stage_attempt_id = 41;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
int64 stage_id = 1;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
int64 stage_id = 1;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
int64 stage_id = 1;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
-
int64 stage_id = 1;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapperOrBuilder
-
int64 stage_id = 1;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 stage_id = 2;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
int64 stage_id = 2;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
int64 stage_id = 2;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 stage_id = 40;
- getStageId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 stage_id = 40;
- getStageId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 stage_id = 40;
- getStageIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- getStageIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated int64 stage_ids = 6;
- getStageIds(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated int64 stage_ids = 6;
- getStageIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- getStageIds(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
repeated int64 stage_ids = 2;
- getStageIds(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
repeated int64 stage_ids = 2;
- getStageIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- getStageIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated int64 stage_ids = 6;
- getStageIdsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated int64 stage_ids = 6;
- getStageIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- getStageIdsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
repeated int64 stage_ids = 2;
- getStageIdsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
repeated int64 stage_ids = 2;
- getStageIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- getStageIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
repeated int64 stage_ids = 6;
- getStageIdsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
repeated int64 stage_ids = 6;
- getStageIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- getStageIdsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
repeated int64 stage_ids = 2;
- getStageIdsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
repeated int64 stage_ids = 2;
- getStageInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns stage information, or null if the stage info could not be found or was garbage collected (see the sketch below).
- getStageInfo(int) - Method in class org.apache.spark.SparkStatusTracker
-
Returns stage information, or None if the stage info could not be found or was garbage collected.
- getStagePath(String, int, int, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Get path for saving the given stage.
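A hedged sketch of the JavaSparkStatusTracker.getStageInfo lookup mentioned just above. The ids come from getActiveStageIds() rather than being hard-coded, and a null check covers the garbage-collected case:

    import org.apache.spark.SparkConf;
    import org.apache.spark.SparkStageInfo;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.JavaSparkStatusTracker;

    public class StageInfoDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setMaster("local[1]").setAppName("stage-info-demo");
            try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
                JavaSparkStatusTracker tracker = jsc.statusTracker();
                for (int stageId : tracker.getActiveStageIds()) {
                    SparkStageInfo info = tracker.getStageInfo(stageId);
                    if (info != null) {  // null when the info was GC'd or never existed
                        System.out.println(stageId + ": " + info.numTasks() + " tasks");
                    }
                }
            }
        }
    }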
- getStages() - Method in class org.apache.spark.ml.Pipeline
- getStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- getStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated int64 stages = 12;
- getStages(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated int64 stages = 12;
- getStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- getStagesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated int64 stages = 12;
- getStagesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated int64 stages = 12;
- getStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- getStagesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
repeated int64 stages = 12;
- getStagesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
repeated int64 stages = 12;
- getStandardization() - Method in interface org.apache.spark.ml.param.shared.HasStandardization
- getStart() - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- getStartOffset() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the starting offset of the block currently being read, or -1 if it is unknown.
- getStartOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- getStartOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string start_offset = 2;
- getStartOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string start_offset = 2;
- getStartOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- getStartOffsetBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string start_offset = 2;
- getStartOffsetBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string start_offset = 2;
- getStartTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 start_time = 2;
- getStartTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
int64 start_time = 2;
- getStartTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
int64 start_time = 2;
- getStartTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- getStartTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
int64 start_timestamp = 6;
- getStartTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
int64 start_timestamp = 6;
- getStartTimestamp() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
int64 start_timestamp = 6;
- getState() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Returns the current application state.
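A sketch of polling SparkAppHandle.getState() after launching an application. The jar path and main class below are placeholders, not real artifacts:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LauncherStateDemo {
        public static void main(String[] args) throws Exception {
            SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/path/to/app.jar")   // placeholder path
                .setMainClass("com.example.Main")     // placeholder class
                .setMaster("local[1]")
                .startApplication();
            // Poll until the state is terminal (FINISHED, FAILED, KILLED, ...).
            while (!handle.getState().isFinal()) {
                Thread.sleep(1000L);
            }
            System.out.println("final state: " + handle.getState());
        }
    }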
- getState() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. :: DeveloperApi ::
- getState() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. :: DeveloperApi ::
- getStatement() - Method in class org.apache.spark.ml.feature.SQLTransformer
- getStateOperators(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperators(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperators(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStateOperatorsOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string status = 10;
- getStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string status = 10;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- getStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string status = 10;
- getStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string status = 10;
- getStatusBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- getStatusBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string status = 10;
- getStatusBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string status = 10;
- getStatusBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- getStatusBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string status = 10;
- getStatusBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string status = 10;
- getStatusValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatusValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatusValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- getStatusValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStatusValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStatusValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- getStderr(Process, long) - Static method in class org.apache.spark.util.Utils
-
Return the stderr of a process after waiting for the process to terminate.
- getStep() - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- getStepSize() - Method in interface org.apache.spark.ml.param.shared.HasStepSize
- getStopWords() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- getStorageLevel() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
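A minimal sketch showing the StorageLevel.NONE default described above; the local master and sample data are placeholders:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.storage.StorageLevel;

    public class StorageLevelDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setMaster("local[1]").setAppName("storage-level-demo");
            try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
                JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
                System.out.println(rdd.getStorageLevel());  // StorageLevel.NONE before persist()
                rdd.persist(StorageLevel.MEMORY_ONLY());
                System.out.println(rdd.getStorageLevel());  // now MEMORY_ONLY
            }
        }
    }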
- getStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- getStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- getStorageLevel() - Method in class org.apache.spark.rdd.RDD
-
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string storage_level = 2;
- getStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string storage_level = 2;
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string storage_level = 5;
- getStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string storage_level = 5;
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- getStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string storage_level = 4;
- getStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string storage_level = 4;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string storage_level = 2;
- getStorageLevelBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string storage_level = 2;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string storage_level = 5;
- getStorageLevelBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string storage_level = 5;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- getStorageLevelBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string storage_level = 4;
- getStorageLevelBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string storage_level = 4;
- GetStorageStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$
- getStrategy() - Method in interface org.apache.spark.ml.feature.ImputerParams
- getString() - Method in class org.apache.spark.types.variant.Variant
- getString(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getString(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a String object.
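A small sketch of the position-based Row getters; RowFactory and the sample values are only illustrative:

    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;

    public class RowGetterDemo {
        public static void main(String[] args) {
            Row row = RowFactory.create("alice", 42);  // positions are zero-based
            String name = row.getString(0);            // "alice"
            int age = row.getInt(1);                   // 42
            System.out.println(name + " is " + age);
        }
    }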
- getString(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a String.
- getStringArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a String array.
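A sketch pairing these Metadata getters with MetadataBuilder; the keys and values are made up for illustration:

    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.MetadataBuilder;

    public class MetadataDemo {
        public static void main(String[] args) {
            Metadata meta = new MetadataBuilder()
                .putString("comment", "user id")                       // illustrative key/value
                .putStringArray("tags", new String[]{"pk", "indexed"})
                .build();
            System.out.println(meta.getString("comment"));      // user id
            System.out.println(meta.getStringArray("tags")[0]); // pk
        }
    }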
- getStringField(boolean, Function0<String>) - Static method in class org.apache.spark.status.protobuf.Utils
- getStringIndexerOrderType() - Method in interface org.apache.spark.ml.feature.RFormulaBase
- getStringOrderType() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
- getStruct(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of struct type as a Row object.
- getStruct(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the struct type value for rowId.
- getStruct(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getStruct(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getStruct(int, int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 submission_time = 4;
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional int64 submission_time = 4;
- getSubmissionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional int64 submission_time = 4;
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 submission_time = 8;
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
int64 submission_time = 8;
- getSubmissionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
int64 submission_time = 8;
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 submission_time = 10;
- getSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 submission_time = 10;
- getSubmissionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 submission_time = 10;
- getSubsamplingRate() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getSubsamplingRate() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
- getSubsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getSucceededTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 succeeded_tasks = 3;
- getSucceededTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int32 succeeded_tasks = 3;
- getSucceededTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int32 succeeded_tasks = 3;
- getSucceededTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- getSucceededTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double succeeded_tasks = 4;
- getSucceededTasks(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double succeeded_tasks = 4;
- getSucceededTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- getSucceededTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double succeeded_tasks = 4;
- getSucceededTasksCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double succeeded_tasks = 4;
- getSucceededTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- getSucceededTasksList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double succeeded_tasks = 4;
- getSucceededTasksList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double succeeded_tasks = 4;
- getSummary(QueryContext) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- getSummary(QueryContext) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- getSummary(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- getSummary(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- getSummary(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- getSupportCompressionLevel() - Method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- getSystemProperties() - Static method in class org.apache.spark.util.Utils
-
Returns the system properties map that is thread-safe to iterate over.
- getSystemProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemProperties(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilder(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilderList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getSystemPropertiesOrBuilderList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- getTable(String) - Method in class org.apache.spark.sql.api.Catalog
-
Get the table or view with the specified name.
- getTable(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Get the table or view with the specified name in the specified database under the Hive Metastore.
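A sketch of the single-argument lookup, assuming a SparkSession and a temp view named "people" created just for the example:

    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.catalog.Table;

    public class CatalogGetTableDemo {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .master("local[1]").appName("catalog-demo").getOrCreate();
            spark.range(3).createOrReplaceTempView("people");  // illustrative temp view
            Table t = spark.catalog().getTable("people");
            System.out.println(t.name() + " isTemporary=" + t.isTemporary());
            spark.stop();
        }
    }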
- getTable(CatalogPlugin, Identifier, Option<TimeTravelSpec>, Option<String>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- getTable(StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
-
Return a Table instance with the specified table schema, partitioning and properties to do read/write.
- getTableCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- getTableCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getTableCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getTableCommentQuery(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getTableCommentQuery(String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getTableExistsQuery(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- getTableExistsQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Get the SQL query that should be used to find if the given table exists.
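A hedged sketch of overriding this hook in a custom dialect; the JDBC scheme below is invented for illustration:

    import org.apache.spark.sql.jdbc.JdbcDialect;
    import org.apache.spark.sql.jdbc.JdbcDialects;

    public class MyDialect extends JdbcDialect {
        @Override
        public boolean canHandle(String url) {
            return url.startsWith("jdbc:mydb:");  // invented scheme, for illustration only
        }

        @Override
        public String getTableExistsQuery(String table) {
            // Spark treats successful execution of this probe as "table exists";
            // keep it cheap by never returning rows.
            return "SELECT 1 FROM " + table + " WHERE 1=0";
        }

        public static void register() {
            JdbcDialects.registerDialect(new MyDialect());
        }
    }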
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getTableNames(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- getTableParameters(HttpServletRequest, String, String) - Method in interface org.apache.spark.ui.PagedTable
-
Returns the parameters of this table.
- getTableProviderCatalog(SupportsCatalogOptions, CatalogManager, CaseInsensitiveStringMap) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- getTableSample(TableSampleInfo) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- getTableSample(TableSampleInfo) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getTableSample(TableSampleInfo) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getTableSample(TableSampleInfo) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- getTablesByTypeUnsupportedByHiveVersionError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- getTags() - Method in class org.apache.spark.sql.api.SparkSession
-
Get the operation tags that are currently set to be assigned to all the operations started by this thread in this session.
- getTags() - Method in class org.apache.spark.sql.SparkSession
- getTaskCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 task_count = 4;
- getTaskCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
int64 task_count = 4;
- getTaskCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
int64 task_count = 4;
- getTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 task_id = 1;
- getTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
int64 task_id = 1;
- getTaskId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
int64 task_id = 1;
- getTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 task_id = 1;
- getTaskId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
int64 task_id = 1;
- getTaskId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
int64 task_id = 1;
- getTaskInfos() - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental :: Returns BarrierTaskInfo for all tasks in this barrier stage, ordered by partition ID.
- getTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- getTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string task_locality = 11;
- getTaskLocality() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string task_locality = 11;
- getTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- getTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string task_locality = 11;
- getTaskLocality() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string task_locality = 11;
- getTaskLocalityBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string task_locality = 11;
- getTaskMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributionsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributionsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributionsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsDistributionsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- getTaskMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetricsOrBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskMetricsOrBuilder() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- getTaskResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
Deprecated.
- getTaskResources() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
Deprecated.
- getTaskResources() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
Deprecated.
- getTaskResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- getTaskResourcesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- getTaskResourcesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrDefault(String, StoreTypes.TaskResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrDefault(String, StoreTypes.TaskResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrDefault(String, StoreTypes.TaskResourceRequest) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrThrow(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTaskResourcesOrThrow(String) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfoOrBuilder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- getTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
Deprecated.
- getTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
Deprecated.
- getTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
Deprecated.
- getTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- getTasksCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- getTasksCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksMap() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksMap() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrDefault(long, StoreTypes.TaskData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrDefault(long, StoreTypes.TaskData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrDefault(long, StoreTypes.TaskData) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrThrow(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTasksOrThrow(long) - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- getTaskThreadDump(long, String) - Method in interface org.apache.spark.scheduler.SchedulerBackend
- getTaskTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 task_time = 1;
- getTaskTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
int64 task_time = 1;
- getTaskTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
int64 task_time = 1;
- getTaskTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- getTaskTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double task_time = 2;
- getTaskTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double task_time = 2;
- getTaskTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- getTaskTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double task_time = 2;
- getTaskTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double task_time = 2;
- getTaskTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- getTaskTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
repeated double task_time = 2;
- getTaskTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
repeated double task_time = 2;
- getTau0() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
A (positive) learning parameter that downweights early iterations.
- getText() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
-
Get the SQL query text corresponding to this statement.
- getThreadDump() - Static method in class org.apache.spark.util.Utils
-
Return a thread dump of all threads' stacktraces.
- getThreadDumpForThread(long) - Static method in class org.apache.spark.util.Utils
- getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegression
- getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- getThreshold() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
Get threshold for binary classification.
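A sketch of the threshold accessor on a classifier; 0.7 is an arbitrary example value:

    import org.apache.spark.ml.classification.LogisticRegression;

    public class ThresholdDemo {
        public static void main(String[] args) {
            LogisticRegression lr = new LogisticRegression().setThreshold(0.7);
            System.out.println(lr.getThreshold());     // 0.7
            // getThresholds() derives the two-class form [1 - t, t] from the threshold.
            System.out.println(lr.getThresholds()[1]); // 0.7
        }
    }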
- getThreshold() - Method in interface org.apache.spark.ml.param.shared.HasThreshold
- getThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
- getThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
-
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
- getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegression
- getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- getThresholds() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
Get thresholds for binary or multiclass classification.
- getThresholds() - Method in interface org.apache.spark.ml.param.shared.HasThresholds
- getThroughOrigin() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- getTimeAsMs(String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as milliseconds; throws a NoSuchElementException if it's not set.
- getTimeAsMs(String, String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as milliseconds, falling back to a default if not set.
- getTimeAsSeconds(String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as seconds; throws a NoSuchElementException if it's not set.
- getTimeAsSeconds(String, String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as seconds, falling back to a default if not set.
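A sketch of the time-string parsing these getters perform; the key names and the "2m" value are arbitrary:

    import org.apache.spark.SparkConf;

    public class TimeConfDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().set("spark.network.timeout", "2m");
            long ms  = conf.getTimeAsMs("spark.network.timeout");          // 120000
            long s   = conf.getTimeAsSeconds("spark.network.timeout");     // 120
            long def = conf.getTimeAsSeconds("spark.missing.key", "30s");  // default kicks in: 30
            System.out.println(ms + " " + s + " " + def);
        }
    }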
- getTimeMillis() - Method in interface org.apache.spark.util.Clock
- getTimeoutTimestampMs() - Method in interface org.apache.spark.sql.streaming.TestGroupState
-
Returns the timestamp if setTimeoutTimestamp() is called.
- getTimer(L) - Method in interface org.apache.spark.util.ListenerBus
-
Returns a CodaHale metrics Timer for measuring the listener's event processing time.
- getTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- getTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string timestamp = 4;
- getTimestamp() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string timestamp = 4;
- getTimestamp() - Method in class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
- getTimestamp(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of timestamp type as java.sql.Timestamp.
- getTimestampBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- getTimestampBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string timestamp = 4;
- getTimestampBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string timestamp = 4;
- getTimeZoneOffset() - Static method in class org.apache.spark.ui.UIUtils
- GETTING_RESULT_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.ToolTips
- GETTING_RESULT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- GETTING_RESULT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- GETTING_RESULT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- gettingResult() - Method in class org.apache.spark.scheduler.TaskInfo
- gettingResultTime() - Method in class org.apache.spark.scheduler.TaskInfo
-
The time when the task started remotely getting the result.
- gettingResultTime() - Method in class org.apache.spark.status.api.v1.TaskData
- gettingResultTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- gettingResultTime(long, long, long) - Static method in class org.apache.spark.status.AppStatusUtils
- gettingResultTime(TaskData) - Static method in class org.apache.spark.status.AppStatusUtils
- getToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 to_id = 2;
- getToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
-
int32 to_id = 2;
- getToId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdgeOrBuilder
-
int32 to_id = 2;
- getToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 to_id = 2;
- getToId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
-
int64 to_id = 2;
- getToId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdgeOrBuilder
-
int64 to_id = 2;
- getTokenJaasParams(KafkaTokenClusterConf) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- getTol() - Method in interface org.apache.spark.ml.param.shared.HasTol
- getToLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- getTopicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getTopicConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
- getTopicDistributionCol() - Method in interface org.apache.spark.ml.clustering.LDAParams
- getTopologyForHost(String) - Method in class org.apache.spark.storage.DefaultTopologyMapper
- getTopologyForHost(String) - Method in class org.apache.spark.storage.FileBasedTopologyMapper
- getTopologyForHost(String) - Method in class org.apache.spark.storage.TopologyMapper
-
Gets the topology information given the host name.
- getTotalBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetched(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetched(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
repeated double total_blocks_fetched = 8;
- getTotalBlocksFetchedList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
repeated double total_blocks_fetched = 8;
- getTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_cores = 7;
- getTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 total_cores = 7;
- getTotalCores() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 total_cores = 7;
- getTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int32 total_cores = 4;
- getTotalCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
int32 total_cores = 4;
- getTotalCores() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
int32 total_cores = 4;
- getTotalDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_duration = 13;
- getTotalDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 total_duration = 13;
- getTotalDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 total_duration = 13;
- getTotalGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_gc_time = 14;
- getTotalGcTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 total_gc_time = 14;
- getTotalGcTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 total_gc_time = 14;
- getTotalInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_input_bytes = 15;
- getTotalInputBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 total_input_bytes = 15;
- getTotalInputBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 total_input_bytes = 15;
- getTotalOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_off_heap_storage_memory = 4;
- getTotalOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
-
int64 total_off_heap_storage_memory = 4;
- getTotalOffHeapStorageMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.MemoryMetricsOrBuilder
-
int64 total_off_heap_storage_memory = 4;
- getTotalOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_on_heap_storage_memory = 3;
- getTotalOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
-
int64 total_on_heap_storage_memory = 3;
- getTotalOnHeapStorageMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.MemoryMetricsOrBuilder
-
int64 total_on_heap_storage_memory = 3;
- getTotalShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_read = 16;
- getTotalShuffleRead() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 total_shuffle_read = 16;
- getTotalShuffleRead() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 total_shuffle_read = 16;
- getTotalShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_write = 17;
- getTotalShuffleWrite() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int64 total_shuffle_write = 17;
- getTotalShuffleWrite() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int64 total_shuffle_write = 17;
- getTotalTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_tasks = 12;
- getTotalTasks() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
int32 total_tasks = 12;
- getTotalTasks() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
int32 total_tasks = 12;
- getTrainRatio() - Method in interface org.apache.spark.ml.tuning.TrainValidationSplitParams
- getTreeIterator() - Method in class org.apache.spark.sql.scripting.CaseStatementExec
- getTreeIterator() - Method in class org.apache.spark.sql.scripting.CompoundNestedStatementIteratorExec
- getTreeIterator() - Method in class org.apache.spark.sql.scripting.IfElseStatementExec
- getTreeIterator() - Method in interface org.apache.spark.sql.scripting.NonLeafStatementExec
-
Construct the iterator to traverse the tree rooted at this node in an in-order traversal.
- getTreeIterator() - Method in class org.apache.spark.sql.scripting.RepeatStatementExec
- getTreeIterator() - Method in class org.apache.spark.sql.scripting.WhileStatementExec
- getTreeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- getTruncateQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
The SQL query that should be used to truncate a table.
- getTruncateQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
-
The SQL query used to truncate a table.
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
The SQL query that should be used to truncate a table.
- getTruncateQuery(String, Option<Object>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.OracleDialect
-
The SQL query used to truncate a table.
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
-
The SQL query used to truncate a table.
- getTruncateQuery(String, Option<Object>) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
-
The SQL query used to truncate a table.
- getTruncateQuery$default$2() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
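
As a hedged sketch of the truncate hook above, a custom dialect might override it like this; the dialect and its JDBC URL prefix are hypothetical:

  import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

  // Hypothetical dialect that always truncates with RESTART IDENTITY.
  object ExampleDialect extends JdbcDialect {
    override def canHandle(url: String): Boolean = url.startsWith("jdbc:exampledb")
    override def getTruncateQuery(table: String, cascade: Option[Boolean]): String =
      s"TRUNCATE TABLE $table RESTART IDENTITY"
  }

  // Register it so JDBC reads/writes against matching URLs pick it up.
  JdbcDialects.registerDialect(ExampleDialect)
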
- getType() - Method in class org.apache.spark.types.variant.Variant
- getType(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getTypeInfo() - Method in class org.apache.spark.types.variant.Variant
- getTypeInfo(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- getUDTFor(String) - Static method in class org.apache.spark.sql.types.UDTRegistration
-
Returns the Class of UserDefinedType for the name of a given user class.
- getUidMap(Params) - Static method in class org.apache.spark.ml.util.MetaAlgorithmReadWrite
-
Examine the given estimator (which may be a compound estimator) and extract a mapping from UIDs to corresponding
Params
instances. - getUiRoot(ServletContext) - Static method in class org.apache.spark.status.api.v1.UIRootFromServletContext
- getUpdate() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- getUpdate() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string update = 3;
- getUpdate() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string update = 3;
- getUpdateBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- getUpdateBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string update = 3;
- getUpdateBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string update = 3;
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getUpdateColumnNullabilityQuery(String, String, boolean) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- getUpdateColumnTypeQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- getUpdateColumnTypeQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- getUpdateColumnTypeQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- getUpdateColumnTypeQuery(String, String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getUpdateColumnTypeQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- getUpdateColumnTypeQuery(String, String, String) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- getUpper() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
- getUpperBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
-
Returns a lambda such that Pr[X < s] is very small, where X ~ Pois(lambda).
- getUpperBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
-
Returns a threshold
p
such that if we conduct n Bernoulli trials with success rate = p
, it is very unlikely to have less than fraction * n
successes. - getUpperBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- getUpperBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- getUriBuilder(String) - Static method in class org.apache.spark.util.Utils
-
Create a UriBuilder from URI string.
- getUriBuilder(URI) - Static method in class org.apache.spark.util.Utils
-
Create a UriBuilder from URI object.
- getUsedBins() - Method in class org.apache.spark.sql.util.NumericHistogram
-
Returns the number of bins currently being used by the histogram.
- getUseDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_disk = 6;
- getUseDisk() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
bool use_disk = 6;
- getUseDisk() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
bool use_disk = 6;
- getUsedOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_off_heap_storage_memory = 2;
- getUsedOffHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
-
int64 used_off_heap_storage_memory = 2;
- getUsedOffHeapStorageMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.MemoryMetricsOrBuilder
-
int64 used_off_heap_storage_memory = 2;
- getUsedOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_on_heap_storage_memory = 1;
- getUsedOnHeapStorageMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
-
int64 used_on_heap_storage_memory = 1;
- getUsedOnHeapStorageMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.MemoryMetricsOrBuilder
-
int64 used_on_heap_storage_memory = 1;
- getUsedTimeNs(long) - Static method in class org.apache.spark.util.Utils
-
Return a string describing how much time has passed, in milliseconds.
- getUseMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_memory = 5;
- getUseMemory() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
bool use_memory = 5;
- getUseMemory() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
bool use_memory = 5;
- getUseNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- getUserCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
- getUserJars(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the jar files pointed to by the "spark.jars" property.
- getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getUTF8String(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the string type value for
rowId
. - getValidationIndicatorCol() - Method in interface org.apache.spark.ml.param.shared.HasValidationIndicatorCol
- getValidationTol() - Method in interface org.apache.spark.ml.tree.GBTParams
- getValidationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- getValue() - Method in class org.apache.spark.mllib.stat.test.BinarySample
- getValue() - Method in class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
-
Returns the default value literal.
- getValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- getValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string value = 4;
- getValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string value = 4;
- getValue() - Method in class org.apache.spark.types.variant.Variant
- getValue(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Gets a value given its index.
- getValue(K) - Method in interface org.apache.spark.sql.streaming.MapState
-
Get the state value if it exists.
- getValue1() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- getValue1() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value1 = 1;
- getValue1() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value1 = 1;
- getValue1Bytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- getValue1Bytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value1 = 1;
- getValue1Bytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value1 = 1;
- getValue2() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- getValue2() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value2 = 2;
- getValue2() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value2 = 2;
- getValue2Bytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- getValue2Bytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value2 = 2;
- getValue2Bytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value2 = 2;
- getValueBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- getValueBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string value = 4;
- getValueBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string value = 4;
- getValueDescriptor() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- getValueDescriptor() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- getValueDescriptor() - Method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- getValuesMap(Seq<String>) - Method in interface org.apache.spark.sql.Row
-
Returns a Map consisting of names and values for the requested fieldNames. For primitive types, if the value is null it returns the 'zero value' specific to that primitive, i.e.
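
A small sketch of getValuesMap; the DataFrame df and its field names are hypothetical:

  val row = df.head()
  val m: Map[String, Any] = row.getValuesMap[Any](Seq("name", "age"))
  // e.g. Map("name" -> "Ada", "age" -> 36)
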
- getValueState(String, Encoder<T>) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to create a new single value state variable of the given type, or return the existing one.
- getValueState(String, Encoder<T>, TTLConfig) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to create a new single value state variable of the given type with a TTL, or return the existing one.
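
A hedged sketch of the two overloads above; the state names, encoder, and the TTLConfig shape are assumptions for illustration:

  import java.time.Duration
  import org.apache.spark.sql.Encoders
  import org.apache.spark.sql.streaming.{StatefulProcessorHandle, TTLConfig, ValueState}

  // Inside a StatefulProcessor, `handle` is supplied by the framework.
  def openStates(handle: StatefulProcessorHandle): Unit = {
    // Plain value state holding one Long per grouping key.
    val count: ValueState[Long] = handle.getValueState("count", Encoders.scalaLong)
    // Same, but entries expire one hour after the last update (assumed TTLConfig shape).
    val recent: ValueState[Long] =
      handle.getValueState("recent", Encoders.scalaLong, TTLConfig(Duration.ofHours(1)))
  }
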
- getValueVector() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- getVarianceCol() - Method in interface org.apache.spark.ml.param.shared.HasVarianceCol
- getVariancePower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- getVarianceThreshold() - Method in interface org.apache.spark.ml.feature.VarianceThresholdSelectorParams
- getVariant(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- getVariant(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- getVariant(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- getVariant(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the Variant value for
rowId
. - getVectors() - Method in class org.apache.spark.ml.feature.Word2VecModel
- getVectors() - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Returns a map of words to their vector representations.
- getVectorSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase
- getVendor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- getVendor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string vendor = 4;
- getVendor() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string vendor = 4;
- getVendorBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- getVendorBytes() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string vendor = 4;
- getVendorBytes() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string vendor = 4;
- getVocabSize() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
- getWeightCol() - Method in interface org.apache.spark.ml.param.shared.HasWeightCol
- getWidth(Row) - Static method in class org.apache.spark.ml.image.ImageSchema
-
Gets the width of the image.
- getWindowSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase
- getWithCentering() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
- getWithMean() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
- getWithScaling() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
- getWithStd() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
- getWrapperCase() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- getWrapperCase() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- getWrapperCase() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
- getWriteBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- getWriteBytes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_bytes = 1;
- getWriteBytes(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_bytes = 1;
- getWriteBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- getWriteBytesCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_bytes = 1;
- getWriteBytesCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_bytes = 1;
- getWriteBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- getWriteBytesList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_bytes = 1;
- getWriteBytesList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_bytes = 1;
- getWritePos() - Method in class org.apache.spark.types.variant.VariantBuilder
- getWriteRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- getWriteRecords(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_records = 2;
- getWriteRecords(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_records = 2;
- getWriteRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- getWriteRecordsCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_records = 2;
- getWriteRecordsCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_records = 2;
- getWriteRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- getWriteRecordsList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_records = 2;
- getWriteRecordsList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_records = 2;
- getWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 write_time = 2;
- getWriteTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
-
int64 write_time = 2;
- getWriteTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricsOrBuilder
-
int64 write_time = 2;
- getWriteTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- getWriteTime(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_time = 3;
- getWriteTime(int) - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_time = 3;
- getWriteTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- getWriteTimeCount() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_time = 3;
- getWriteTimeCount() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_time = 3;
- getWriteTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- getWriteTimeList() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
-
repeated double write_time = 3;
- getWriteTimeList() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributionsOrBuilder
-
repeated double write_time = 3;
- getYearMonthIntervalAsMonths(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Converts a year-month interval string to an int value
months
. - getYearMonthIntervalAsMonths(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- getYearMonthIntervalFields() - Method in class org.apache.spark.types.variant.Variant
- getYearMonthIntervalFields(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- Gini - Class in org.apache.spark.mllib.tree.impurity
-
Class for calculating the Gini impurity (http://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity) during multiclass classification.
- Gini() - Constructor for class org.apache.spark.mllib.tree.impurity.Gini
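
For reference (standard definition, not quoted from this index): for class proportions p_1, ..., p_K the Gini impurity is 1 - (p_1^2 + ... + p_K^2); e.g. with p = (0.4, 0.6) it is 1 - (0.16 + 0.36) = 0.48, and it is 0 for a pure node.
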
- GLMClassificationModel - Class in org.apache.spark.mllib.classification.impl
-
Helper class for import/export of GLM classification models.
- GLMClassificationModel() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel
- GLMClassificationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.classification.impl
- GLMClassificationModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.classification.impl
-
Model data for import/export.
- GLMClassificationModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.classification.impl
- GLMRegressionModel - Class in org.apache.spark.mllib.regression.impl
-
Helper methods for import/export of GLM regression models.
- GLMRegressionModel() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel
- GLMRegressionModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.regression.impl
- GLMRegressionModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.regression.impl
-
Model data for import/export.
- GLMRegressionModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.regression.impl
- glom() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by coalescing all elements within each partition into an array.
- glom() - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by coalescing all elements within each partition into an array.
- glom() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
- glom() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
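
A quick sketch of glom() on an RDD; sc is an existing SparkContext and the values are illustrative:

  val rdd = sc.parallelize(1 to 9, numSlices = 3)
  val perPartition: Array[Array[Int]] = rdd.glom().collect()
  // perPartition == Array(Array(1, 2, 3), Array(4, 5, 6), Array(7, 8, 9))
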
- goButtonFormPath() - Method in interface org.apache.spark.ui.PagedTable
-
Returns the submission path for the "go to page #" form.
- goodnessOfFit() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
- GPU() - Static method in class org.apache.spark.resource.ResourceUtils
- grad() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- grad(DenseMatrix<Object>, DenseMatrix<Object>, DenseVector<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
-
Computes the gradient.
- gradient() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
The current weighted averaged gradient.
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.AbsoluteError
-
Method to calculate the gradients for the gradient boosting calculation, using least absolute error.
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.LogLoss
-
Method to calculate the loss gradients for the gradient boosting calculation for binary classification. The gradient with respect to F(x) is: - 4 y / (1 + exp(2 y F(x)))
- gradient(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate the gradients for the gradient boosting calculation.
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.SquaredError
-
Method to calculate the gradients for the gradient boosting calculation, using least squares error.
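
To make the gradient contract concrete for SquaredError: the loss is (y - F(x))^2, so the gradient with respect to F(x) is -2 (y - F(x)); e.g. gradient(prediction = 2.5, label = 3.0) = -2 * 0.5 = -1.0 (the worked numbers are illustrative).
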
- Gradient - Class in org.apache.spark.mllib.optimization
-
Class used to compute the gradient for a loss function, given a single data point.
- Gradient() - Constructor for class org.apache.spark.mllib.optimization.Gradient
- GradientBoostedTrees - Class in org.apache.spark.ml.tree.impl
- GradientBoostedTrees - Class in org.apache.spark.mllib.tree
-
A class that implements Stochastic Gradient Boosting for regression and binary classification.
- GradientBoostedTrees() - Constructor for class org.apache.spark.ml.tree.impl.GradientBoostedTrees
- GradientBoostedTrees(BoostingStrategy) - Constructor for class org.apache.spark.mllib.tree.GradientBoostedTrees
- GradientBoostedTreesModel - Class in org.apache.spark.mllib.tree.model
-
Represents a gradient boosted trees model.
- GradientBoostedTreesModel(Enumeration.Value, DecisionTreeModel[], double[]) - Constructor for class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- GradientDescent - Class in org.apache.spark.mllib.optimization
-
Class used to solve an optimization problem using Gradient Descent.
- gradientSumArray() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Array of gradient values that are mutated when new instances are added to the aggregator.
- Graph<VD, ED> - Class in org.apache.spark.graphx
-
The Graph abstractly represents a graph with arbitrary objects associated with vertices and edges.
- GraphGenerators - Class in org.apache.spark.graphx.util
-
A collection of graph generating functions.
- GraphGenerators() - Constructor for class org.apache.spark.graphx.util.GraphGenerators
- GraphImpl<VD, ED> - Class in org.apache.spark.graphx.impl
-
An implementation of
Graph
to support computation on graphs. - graphiteSinkInvalidProtocolError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- graphiteSinkPropertyMissingError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- GraphLoader - Class in org.apache.spark.graphx
-
Provides utilities for loading
Graph
s from files. - GraphLoader() - Constructor for class org.apache.spark.graphx.GraphLoader
- GraphOps<VD, ED> - Class in org.apache.spark.graphx
-
Contains additional functionality for
Graph
. - GraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Constructor for class org.apache.spark.graphx.GraphOps
- graphToGraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Implicitly extracts the
GraphOps
member from a graph. - GraphXUtils - Class in org.apache.spark.graphx
- GraphXUtils() - Constructor for class org.apache.spark.graphx.GraphXUtils
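
A brief GraphX sketch showing the implicit GraphOps conversion in action; the synthetic graph and sc are assumptions for illustration:

  import org.apache.spark.graphx.{Graph, VertexRDD}
  import org.apache.spark.graphx.util.GraphGenerators

  // degrees is defined on GraphOps and reached via graphToGraphOps.
  val graph: Graph[Long, Int] = GraphGenerators.logNormalGraph(sc, numVertices = 100)
  val degrees: VertexRDD[Int] = graph.degrees
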
- greater(Duration) - Method in class org.apache.spark.streaming.Duration
- greater(Time) - Method in class org.apache.spark.streaming.Time
- greaterEq(Duration) - Method in class org.apache.spark.streaming.Duration
- greaterEq(Time) - Method in class org.apache.spark.streaming.Time
- GreaterThan - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the attribute evaluates to a value greater than value
. - GreaterThan(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThan
- GreaterThanOrEqual - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the attribute evaluates to a value greater than or equal to value
. - GreaterThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThanOrEqual
- greatest(String, String...) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of column names, skipping null values.
- greatest(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of column names, skipping null values.
- greatest(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of values, skipping null values.
- greatest(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of values, skipping null values.
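
A one-line sketch of greatest; df and its columns are hypothetical:

  import org.apache.spark.sql.functions.{col, greatest}

  // Row-wise maximum of three columns, with nulls skipped.
  val out = df.withColumn("best", greatest(col("q1"), col("q2"), col("q3")))
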
- gridGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Create a
rows
by cols
grid graph with each vertex connected to its row+1 and col+1 neighbors. - groupArr() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- groupBy(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Groups the Dataset using the specified columns, so that we can run aggregation on them.
- groupBy(String, String...) - Method in class org.apache.spark.sql.Dataset
- groupBy(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Groups the Dataset using the specified columns, so that we can run aggregation on them.
- groupBy(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- groupBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD of grouped elements.
- groupBy(Function<T, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD of grouped elements.
- groupBy(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Groups the Dataset using the specified columns, so we can run aggregation on them.
- groupBy(Column...) - Method in class org.apache.spark.sql.Dataset
- groupBy(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Groups the Dataset using the specified columns, so we can run aggregation on them.
- groupBy(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- groupBy(Function1<T, K>, int, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped elements.
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped items.
- groupBy(Function1<T, K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped items.
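
A short sketch tying the Dataset and RDD groupBy flavors together; df, sc, and the column names are illustrative:

  import org.apache.spark.sql.functions.avg

  // Dataset/DataFrame: group by a column, then aggregate.
  val byDept = df.groupBy("dept").agg(avg("salary"))

  // RDD: group arbitrary elements by a key function.
  val parity = sc.parallelize(1 to 10).groupBy(n => n % 2)
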
- groupByExpressions() - Method in record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Returns the value of the
groupByExpressions
record component. - groupByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
- groupByKey() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
to each RDD. - groupByKey() - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
to each RDD. - groupByKey(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
to each RDD. - groupByKey(int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
to each RDD. - groupByKey(MapFunction<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a
KeyValueGroupedDataset
where the data is grouped by the given keyfunc
. - groupByKey(MapFunction<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
- groupByKey(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
on each RDD of this
DStream. - groupByKey(Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
on each RDD. - groupByKey(Function1<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a
KeyValueGroupedDataset
where the data is grouped by the given keyfunc
. - groupByKey(Function1<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
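
A minimal Scala-specific sketch of Dataset.groupByKey, assuming a SparkSession named spark:

  import spark.implicits._

  val words = Seq("spark", "shuffle", "graph", "glom").toDS()
  // Group by first letter and count per key; yields a Dataset[(String, Long)].
  val counts = words.groupByKey(w => w.substring(0, 1)).count()
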
- groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
over a sliding window. - groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
over a sliding window. - groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
over a sliding window. - groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
over a sliding window. - groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
over a sliding window on this
DStream. - groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
groupByKey
over a sliding window on this
DStream. - groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
groupByKey
over a sliding window on this
DStream. - groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Create a new DStream by applying
groupByKey
over a sliding window on this
DStream. - groupByPositionRangeError(int, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
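
For the groupByKeyAndWindow entries above, a hedged streaming sketch; pairs is a hypothetical DStream[(String, Int)]:

  import org.apache.spark.streaming.Seconds

  // Group values per key over the last 30 seconds, recomputed every 10 seconds.
  val windowed = pairs.groupByKeyAndWindow(Seconds(30), Seconds(10))
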
- groupByPositionRefersToAggregateFunctionError(int, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- GroupByType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$
- groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.Graph
-
Merges multiple edges between two vertices into a single edge.
- groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- groupHash() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- grouping(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated or not; returns 1 for aggregated and 0 for not aggregated in the result set.
- grouping(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated or not; returns 1 for aggregated and 0 for not aggregated in the result set.
- grouping_id(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the level of grouping, equal to
- grouping_id(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the level of grouping, equal to
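
A sketch of grouping and grouping_id with a cube; df and its columns are hypothetical:

  import org.apache.spark.sql.functions.{grouping, grouping_id, sum}

  // grouping(c) is 1 where column c is aggregated away; grouping_id()
  // packs those bits into a single grouping-level indicator.
  val cubed = df.cube("dept", "city")
    .agg(sum("salary"), grouping("dept"), grouping_id())
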
- groupingColInvalidError(Expression, Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- groupingIDMismatchError(GroupingID, Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- groupingMustWithGroupingSetsOrCubeOrRollupError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- groupingSets(Seq<Seq<Column>>, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Create multi-dimensional aggregation for the current Dataset using the specified grouping sets, so we can run aggregation on them.
- groupingSets(Seq<Seq<Column>>, Column...) - Method in class org.apache.spark.sql.Dataset
- groupingSets(Seq<Seq<Column>>, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Create multi-dimensional aggregation for the current Dataset using the specified grouping sets, so we can run aggregation on them.
- groupingSets(Seq<Seq<Column>>, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- GroupingSetsType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.GroupingSetsType$
- groupingSizeTooLargeError(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- GroupMappingServiceProvider - Interface in org.apache.spark.security
-
This Spark trait is used for mapping a given userName to the set of groups to which it belongs.
- GroupState<S> - Interface in org.apache.spark.sql.streaming
-
:: Experimental ::
- GroupStateTimeout - Class in org.apache.spark.sql.streaming
-
Represents the type of timeouts possible for the Dataset operations
mapGroupsWithState
andflatMapGroupsWithState
. - GroupStateTimeout() - Constructor for class org.apache.spark.sql.streaming.GroupStateTimeout
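
A hedged sketch of wiring a timeout into mapGroupsWithState; the grouped Dataset keyed is a hypothetical KeyValueGroupedDataset[String, Int], and spark.implicits._ is assumed in scope for the encoders:

  import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

  // Running count per key; with a processing-time timeout the function is
  // also invoked for timed-out keys (state.hasTimedOut is then true).
  def update(key: String, values: Iterator[Int], state: GroupState[Long]): (String, Long) = {
    val next = state.getOption.getOrElse(0L) + values.size
    state.update(next)
    (key, next)
  }

  val counts = keyed.mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout)(update)
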
- groupWith(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- gt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if the value is greater than lowerBound.
- gt(Object) - Method in class org.apache.spark.sql.Column
-
Greater than.
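
The two equivalent spellings of a greater-than predicate; df and the "age" column are hypothetical:

  import org.apache.spark.sql.functions.col

  val adults = df.filter(col("age").gt(21))   // method form
  val same   = df.filter(col("age") > 21)     // operator form
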
- gt(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- gt(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- gteq(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- gtEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if the value is greater than or equal to lowerBound.
- guard(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
H
- HADOOP_PROPERTIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- hadoopConfiguration() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Returns the Hadoop configuration used for the Hadoop code (e.g.
- hadoopConfiguration() - Method in class org.apache.spark.SparkContext
-
A default Hadoop Configuration for the Hadoop code (e.g.
- hadoopDelegationCreds() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- HadoopDelegationTokenProvider - Interface in org.apache.spark.security
-
::DeveloperApi:: Hadoop delegation token provider.
- hadoopFile(String, int, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
-
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
- hadoopFile(String, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop file with an arbitrary InputFormat.
- hadoopFile(String, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop file with an arbitrary InputFormat.
- hadoopFile(String, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop file with an arbitrary InputFormat.
- hadoopFile(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
-
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
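
A sketch of the class-tag flavor of hadoopFile; sc is an existing SparkContext and the path is a placeholder:

  import org.apache.hadoop.io.{LongWritable, Text}
  import org.apache.hadoop.mapred.TextInputFormat

  // Keys are byte offsets, values are lines; class tags select the InputFormat.
  val lines = sc
    .hadoopFile[LongWritable, Text, TextInputFormat]("hdfs:///path/to/input")
    .map { case (_, v) => v.toString }
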
- HadoopFSUtils - Class in org.apache.spark.util
-
Utility functions to simplify and speed up file listing.
- HadoopFSUtils() - Constructor for class org.apache.spark.util.HadoopFSUtils
- HadoopMapPartitionsWithSplitRDD$() - Constructor for class org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD$
- hadoopProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- hadoopRDD(JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf given its InputFormat and other necessary info (e.g.
- hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g.
- hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g.
- HadoopRDD<K, V> - Class in org.apache.spark.rdd
-
:: DeveloperApi :: An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the older MapReduce API (
org.apache.hadoop.mapred
). - HadoopRDD(SparkContext, JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD
- HadoopRDD(SparkContext, Broadcast<SerializableConfiguration>, Option<Function1<JobConf, BoxedUnit>>, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD
- HadoopRDD(SparkContext, Broadcast<SerializableConfiguration>, Option<Function1<JobConf, BoxedUnit>>, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int, boolean, boolean) - Constructor for class org.apache.spark.rdd.HadoopRDD
- HadoopRDD.HadoopMapPartitionsWithSplitRDD$ - Class in org.apache.spark.rdd
- hammingLoss() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- hammingLoss() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the Hamming loss.
- handle(String, Request, HttpServletRequest, HttpServletResponse) - Method in class org.apache.spark.ui.ProxyRedirectHandler
- handleArray(byte[], int, VariantUtil.ArrayHandler<T>) - Static method in class org.apache.spark.types.variant.VariantUtil
- handleInvalid() - Method in class org.apache.spark.ml.feature.Bucketizer
-
Param for how to handle invalid entries containing NaN values.
- handleInvalid() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- handleInvalid() - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
-
Param for how to handle invalid data during transform().
- handleInvalid() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- handleInvalid() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- handleInvalid() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
-
Param for how to handle invalid entries.
- handleInvalid() - Method in class org.apache.spark.ml.feature.RFormula
- handleInvalid() - Method in interface org.apache.spark.ml.feature.RFormulaBase
-
Param for how to handle invalid data (unseen or NULL values) in features and label column of string type.
- handleInvalid() - Method in class org.apache.spark.ml.feature.RFormulaModel
- handleInvalid() - Method in class org.apache.spark.ml.feature.StringIndexer
- handleInvalid() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
-
Param for how to handle invalid data (unseen labels or NULL values).
- handleInvalid() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- handleInvalid() - Method in class org.apache.spark.ml.feature.VectorAssembler
-
Param for how to handle invalid data (NULL values).
- handleInvalid() - Method in class org.apache.spark.ml.feature.VectorIndexer
- handleInvalid() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- handleInvalid() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams
-
Param for how to handle invalid data (unseen labels or NULL values).
- handleInvalid() - Method in class org.apache.spark.ml.feature.VectorSizeHint
-
Param for how to handle invalid entries.
- handleInvalid() - Method in interface org.apache.spark.ml.param.shared.HasHandleInvalid
-
Param for how to handle invalid entries.
- handleObject(byte[], int, VariantUtil.ObjectHandler<T>) - Static method in class org.apache.spark.types.variant.VariantUtil
- HAS_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- hasAccumulators(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- hasAddress() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- hasAddress() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional string address = 1;
- hasAddress() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional string address = 1;
- HasAggregationDepth - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param aggregationDepth (default: 2).
- hasAppSparkVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- hasAppSparkVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string app_spark_version = 8;
- hasAppSparkVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string app_spark_version = 8;
- hasAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- hasAttemptId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string attempt_id = 1;
- hasAttemptId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string attempt_id = 1;
- hasAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Tests whether this attribute group contains a specific attribute.
- hasBeenMatched() - Method in class org.apache.spark.sql.scripting.IterateStatementExec
-
Label specified in the ITERATE statement might not belong to the immediate compound, but to any surrounding compound.
- hasBeenMatched() - Method in class org.apache.spark.sql.scripting.LeaveStatementExec
-
Label specified in the LEAVE statement might not belong to the immediate surrounding compound, but to any surrounding compound.
- hasBlockName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- hasBlockName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string block_name = 1;
- hasBlockName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string block_name = 1;
- HasBlockSize - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param blockSize.
- hasBytesSpilled(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- hasCachedSerializedBroadcast() - Method in class org.apache.spark.ShuffleStatus
- hasCallsite() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- hasCallsite() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string callsite = 5;
- hasCallsite() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string callsite = 5;
- HasCheckpointInterval - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param checkpointInterval.
- hasCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- hasCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- hasCluster() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- HasCollectSubModels - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param collectSubModels (default: false).
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 completion_time = 5;
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional int64 completion_time = 5;
- hasCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional int64 completion_time = 5;
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional int64 completion_time = 9;
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional int64 completion_time = 9;
- hasCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional int64 completion_time = 9;
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 completion_time = 12;
- hasCompletionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 completion_time = 12;
- hasCompletionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 completion_time = 12;
- hasCoresGranted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_granted = 3;
- hasCoresGranted() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 cores_granted = 3;
- hasCoresGranted() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 cores_granted = 3;
- hasCoresPerExecutor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_per_executor = 5;
- hasCoresPerExecutor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 cores_per_executor = 5;
- hasCoresPerExecutor() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 cores_per_executor = 5;
- hasDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Tests whether the input param has a default value set.
- hasDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- hasDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string desc = 3;
- hasDesc() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string desc = 3;
- hasDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- hasDesc() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string desc = 3;
- hasDesc() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string desc = 3;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string description = 3;
- hasDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string description = 3;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
-
optional string description = 1;
- hasDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SinkProgressOrBuilder
-
optional string description = 1;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string description = 1;
- hasDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string description = 1;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string description = 3;
- hasDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string description = 3;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- hasDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string description = 40;
- hasDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string description = 40;
- hasDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- hasDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string details = 4;
- hasDetails() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string details = 4;
- hasDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- hasDetails() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string details = 41;
- hasDetails() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string details = 41;
- hasDiscoveryScript() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- hasDiscoveryScript() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string discovery_script = 3;
- hasDiscoveryScript() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string discovery_script = 3;
- HasDistanceMeasure - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param distanceMeasure (default: "euclidean").
- hasDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 duration = 7;
- hasDuration() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional int64 duration = 7;
- hasDuration() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional int64 duration = 7;
- HasElasticNetParam - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param elasticNetParam.
- hasEndOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- hasEndOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string end_offset = 3;
- hasEndOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string end_offset = 3;
- hasEndTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional int64 end_timestamp = 7;
- hasEndTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional int64 end_timestamp = 7;
- hasEndTimestamp() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional int64 end_timestamp = 7;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string error_message = 10;
- hasErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string error_message = 10;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string error_message = 14;
- hasErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string error_message = 14;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- hasErrorMessage() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string error_message = 14;
- hasErrorMessage() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string error_message = 14;
- hasException() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- hasException() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string exception = 5;
- hasException() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string exception = 5;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
optional string executor_id = 3;
- hasExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
optional string executor_id = 3;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string executor_id = 2;
- hasExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string executor_id = 2;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string executor_id = 8;
- hasExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string executor_id = 8;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- hasExecutorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string executor_id = 8;
- hasExecutorId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string executor_id = 8;
- hasExecutorMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- hasExecutorMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- hasExecutorMetricsDistributions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- hasFailureReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- hasFailureReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string failure_reason = 13;
- hasFailureReason() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string failure_reason = 13;
- HasFeaturesCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param featuresCol (default: "features").
- hasFirstTaskLaunchedTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 first_task_launched_time = 11;
- hasFirstTaskLaunchedTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 first_task_launched_time = 11;
- hasFirstTaskLaunchedTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 first_task_launched_time = 11;
- HasFitIntercept - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param fitIntercept (default: true).
- hash() - Method in class org.apache.spark.storage.CacheId
- hash(Column...) - Static method in class org.apache.spark.sql.functions
-
Calculates the hash code of given columns, and returns the result as an int column.
- hash(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Calculates the hash code of given columns, and returns the result as an int column.
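Both hash overloads compute the same Murmur3-based hash over one or more columns; the varargs form is the natural one from Java. A minimal sketch (the DataFrame df and its columns are hypothetical):

  import static org.apache.spark.sql.functions.col;
  import static org.apache.spark.sql.functions.hash;
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;

  // Add a deterministic int hash of two columns, e.g. for bucketing or sampling.
  Dataset<Row> withHash = df.withColumn("row_hash", hash(col("a"), col("b")));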
- HasHandleInvalid - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param handleInvalid.
- hashCode() - Method in class org.apache.spark.api.java.Optional
- hashCode() - Method in class org.apache.spark.graphx.EdgeDirection
- hashCode() - Method in class org.apache.spark.HashPartitioner
- hashCode() - Method in class org.apache.spark.ml.attribute.AttributeGroup
- hashCode() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- hashCode() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- hashCode() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- hashCode() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- hashCode() - Method in class org.apache.spark.ml.linalg.DenseVector
- hashCode() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- hashCode() - Method in class org.apache.spark.ml.linalg.SparseVector
- hashCode() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns a hash code value for the vector.
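The vector hash code is defined to agree with Vector equality, which is semantic: a dense and a sparse vector holding the same values compare equal and must therefore hash alike (the mllib.linalg variant below documents the same contract). A minimal sketch:

  import org.apache.spark.ml.linalg.Vector;
  import org.apache.spark.ml.linalg.Vectors;

  Vector dense = Vectors.dense(1.0, 0.0, 3.0);
  Vector sparse = Vectors.sparse(3, new int[]{0, 2}, new double[]{1.0, 3.0});
  // Equality ignores the storage format, so the hash codes match too.
  boolean consistent = dense.equals(sparse) && dense.hashCode() == sparse.hashCode();  // true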
- hashCode() - Method in class org.apache.spark.ml.param.Param
- hashCode() - Method in class org.apache.spark.ml.tree.CategoricalSplit
- hashCode() - Method in class org.apache.spark.ml.tree.ContinuousSplit
- hashCode() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- hashCode() - Method in class org.apache.spark.mllib.linalg.DenseVector
- hashCode() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- hashCode() - Method in class org.apache.spark.mllib.linalg.SparseVector
- hashCode() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns a hash code value for the vector.
- hashCode() - Method in class org.apache.spark.mllib.linalg.VectorUDT
- hashCode() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- hashCode() - Method in class org.apache.spark.mllib.tree.model.Predict
- hashCode() - Method in class org.apache.spark.partial.BoundedDouble
- hashCode() - Method in interface org.apache.spark.Partition
- hashCode() - Method in class org.apache.spark.RangePartitioner
- hashCode() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- hashCode() - Method in class org.apache.spark.resource.ResourceID
- hashCode() - Method in class org.apache.spark.resource.ResourceInformation
- hashCode() - Method in class org.apache.spark.resource.ResourceProfile
- hashCode() - Method in class org.apache.spark.resource.ResourceRequest
- hashCode() - Method in class org.apache.spark.resource.TaskResourceRequest
- hashCode() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- hashCode() - Method in class org.apache.spark.scheduler.InputFormatInfo
- hashCode() - Method in class org.apache.spark.scheduler.SplitInfo
- hashCode() - Method in class org.apache.spark.sql.Column
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.After
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.ClusterBy
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnDefaultValue
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnNullability
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnPosition
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType
- hashCode() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- hashCode() - Method in record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Returns a hash code value for this object.
- hashCode() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- hashCode() - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- hashCode() - Method in class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
- hashCode() - Method in class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.CompositeReadLimit
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.Offset
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxBytes
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxFiles
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxRows
- hashCode() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMinRows
- hashCode() - Method in interface org.apache.spark.sql.Row
- hashCode() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- hashCode() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- hashCode() - Method in class org.apache.spark.sql.sources.In
- hashCode() - Method in class org.apache.spark.sql.types.Decimal
- hashCode() - Method in class org.apache.spark.sql.types.Metadata
- hashCode() - Method in class org.apache.spark.sql.types.StringType
- hashCode() - Method in class org.apache.spark.sql.types.StructType
- hashCode() - Method in class org.apache.spark.sql.types.UserDefinedType
- hashCode() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- hashCode() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- hashCode() - Method in class org.apache.spark.storage.BlockManagerId
- hashCode() - Method in class org.apache.spark.storage.StorageLevel
- hashCode() - Method in class org.apache.spark.unsafe.types.CalendarInterval
- hashFuncVersion() - Method in class org.apache.spark.ml.feature.HashingTF
- HashingTF - Class in org.apache.spark.ml.feature
-
Maps a sequence of terms to their term frequencies using the hashing trick.
- HashingTF - Class in org.apache.spark.mllib.feature
-
Maps a sequence of terms to their term frequencies using the hashing trick.
- HashingTF() - Constructor for class org.apache.spark.ml.feature.HashingTF
- HashingTF() - Constructor for class org.apache.spark.mllib.feature.HashingTF
- HashingTF(int) - Constructor for class org.apache.spark.mllib.feature.HashingTF
- HashingTF(String) - Constructor for class org.apache.spark.ml.feature.HashingTF
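The ml.feature.HashingTF variant is a DataFrame Transformer that hashes each term of an array-of-strings column into a fixed-size term-frequency vector, while the mllib.feature variant operates on raw iterables/RDDs. A minimal sketch (wordsDF and its columns are hypothetical):

  import org.apache.spark.ml.feature.HashingTF;
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;

  HashingTF tf = new HashingTF()
      .setInputCol("words")        // array<string> column of tokenized text
      .setOutputCol("features")
      .setNumFeatures(1 << 18);    // 262144, which is also the default (see HasNumFeatures)
  Dataset<Row> featurized = tf.transform(wordsDF);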
- hasHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- hasHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string host = 9;
- hasHost() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string host = 9;
- hasHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- hasHost() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string host = 9;
- hasHost() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string host = 9;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string host_port = 2;
- hasHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string host_port = 2;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string host_port = 2;
- hasHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string host_port = 2;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- hasHostPort() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string host_port = 3;
- hasHostPort() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string host_port = 3;
- HashPartitioner - Class in org.apache.spark
-
A Partitioner that implements hash-based partitioning using Java's Object.hashCode.
- HashPartitioner(int) - Constructor for class org.apache.spark.HashPartitioner
- hashPartitionerCannotPartitionArrayKeyError() - Static method in class org.apache.spark.errors.SparkCoreErrors
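HashPartitioner routes each record by key.hashCode() modulo the partition count, which is also why array keys are rejected (see hashPartitionerCannotPartitionArrayKeyError above): Java arrays hash by identity, not by contents. A minimal sketch (pairs is a hypothetical JavaPairRDD):

  import org.apache.spark.HashPartitioner;
  import org.apache.spark.api.java.JavaPairRDD;

  // Repartition by key into 8 hash buckets; RDDs sharing this partitioner can be joined without a shuffle.
  JavaPairRDD<String, Integer> partitioned = pairs.partitionBy(new HashPartitioner(8));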
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string id = 1;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string id = 1;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional string id = 1;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string id = 1;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string id = 2;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string id = 2;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- hasId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string id = 1;
- hasId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string id = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
.org.apache.spark.status.protobuf.JobData info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- hasInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
-
.org.apache.spark.status.protobuf.StageData info = 1;
- hasInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- hasInput(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HasInputCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param inputCol.
- HasInputCols - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param inputCols.
- hasInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- hasInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- hasInputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- hasInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- hasInputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- hasInputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- hasJavaHome() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- hasJavaHome() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_home = 2;
- hasJavaHome() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_home = 2;
- hasJavaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- hasJavaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string java_version = 1;
- hasJavaVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string java_version = 1;
- hasJobGroup() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- hasJobGroup() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string job_group = 7;
- hasJobGroup() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string job_group = 7;
- hasLabelCol(StructType) - Method in interface org.apache.spark.ml.feature.RFormulaBase
- HasLabelCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param labelCol (default: "label").
- hasLatestOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- hasLatestOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string latest_offset = 4;
- hasLatestOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string latest_offset = 4;
- hasLinkPredictionCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Checks whether the link prediction column should be output.
- HasLoss - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param loss.
- HasMaxBlockSizeInMB - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param maxBlockSizeInMB (default: 0.0).
- hasMaxCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 max_cores = 4;
- hasMaxCores() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 max_cores = 4;
- hasMaxCores() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 max_cores = 4;
- HasMaxIter - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param maxIter.
- hasMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- hasMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- hasMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- hasMemoryPerExecutorMb() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 memory_per_executor_mb = 6;
- hasMemoryPerExecutorMb() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional int32 memory_per_executor_mb = 6;
- hasMemoryPerExecutorMb() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional int32 memory_per_executor_mb = 6;
- hasMetricType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- hasMetricType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string metric_type = 3;
- hasMetricType() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string metric_type = 3;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
-
optional string name = 1;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PoolDataOrBuilder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapperOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationNodeOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
optional string name = 1;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceInformationOrBuilder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapperOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
-
optional string name = 2;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeOrBuilder
-
optional string name = 2;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
optional string name = 1;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetricOrBuilder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string name = 39;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string name = 39;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string name = 1;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string name = 1;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string name = 1;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- hasName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string name = 3;
- hasName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string name = 3;
- hasNext() - Method in class org.apache.spark.ContextAwareIterator
-
Deprecated.
- hasNext() - Method in class org.apache.spark.InterruptibleIterator
- hasNextRow() - Method in interface org.apache.spark.sql.avro.AvroUtils.RowReader
- hasNode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- hasNode() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- hasNode() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapperOrBuilder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- hasNonUTF8BinaryCollation(DataType) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks whether a given data type contains a non-UTF8_BINARY (implicit) collation type.
- hasNull() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- hasNull() - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns true if this column vector contains any null values.
- HasNumFeatures - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param numFeatures (default: 262144).
- hasOffHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_remaining = 8;
- hasOffHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 off_heap_memory_remaining = 8;
- hasOffHeapMemoryRemaining() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 off_heap_memory_remaining = 8;
- hasOffHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_used = 6;
- hasOffHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 off_heap_memory_used = 6;
- hasOffHeapMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 off_heap_memory_used = 6;
- hasOffsetCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Checks whether the offset column is set and non-empty.
- hasOnHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_remaining = 7;
- hasOnHeapMemoryRemaining() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 on_heap_memory_remaining = 7;
- hasOnHeapMemoryRemaining() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 on_heap_memory_remaining = 7;
- hasOnHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_used = 5;
- hasOnHeapMemoryUsed() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
optional int64 on_heap_memory_used = 5;
- hasOnHeapMemoryUsed() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDDataDistributionOrBuilder
-
optional int64 on_heap_memory_used = 5;
- hasOperatorName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- hasOperatorName() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
optional string operator_name = 1;
- hasOperatorName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgressOrBuilder
-
optional string operator_name = 1;
- hasOutput(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HasOutputCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param outputCol (default: uid + "__output").
- HasOutputCols - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param outputCols.
- hasOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- hasOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- hasOutputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- hasOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- hasOutputMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- hasOutputMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- HasParallelism - Interface in org.apache.spark.ml.param.shared
-
Trait to define a level of parallelism for algorithms that are able to use multithreaded execution, and provide a thread-pool based execution context.
- hasParam(String) - Method in interface org.apache.spark.ml.param.Params
-
Tests whether this instance contains a param with a given name.
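Together with hasDefault(Param<T>) listed earlier in this section, hasParam forms the introspection half of the Params interface: the former asks whether a default was declared, the latter looks a param up by name. A minimal sketch using LogisticRegression, whose maxIter param defaults to 100:

  import org.apache.spark.ml.classification.LogisticRegression;

  LogisticRegression lr = new LogisticRegression();
  boolean declared  = lr.hasParam("maxIter");        // true: the param exists on this instance
  boolean defaulted = lr.hasDefault(lr.maxIter());   // true: a default (100) is set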
- hasParent() - Method in class org.apache.spark.ml.Model
-
Indicates whether this Model has a corresponding parent.
- HasPartitionKey - Interface in org.apache.spark.sql.connector.read
-
A mix-in for input partitions whose records are clustered on the same set of partition keys (provided via SupportsReportPartitioning, see below).
- HasPartitionStatistics - Interface in org.apache.spark.sql.connector.read
-
A mix-in for input partitions whose records are clustered on the same set of partition keys (provided via SupportsReportPartitioning, see below).
- hasPeakExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- hasPeakExecutorMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- hasPeakExecutorMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- hasPeakMemoryMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- hasPeakMemoryMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- hasPhysicalPlanDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- hasPhysicalPlanDescription() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
-
optional string physical_plan_description = 5;
- hasPhysicalPlanDescription() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIDataOrBuilder
-
optional string physical_plan_description = 5;
- HasPredictionCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param predictionCol (default: "prediction").
- HasProbabilityCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param probabilityCol (default: "probability").
- hasProgress() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- hasProgress() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- hasProgress() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapperOrBuilder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- hasQuantile() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- hasQuantile() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
optional string quantile = 3;
- hasQuantile() - Method in interface org.apache.spark.status.protobuf.StoreTypes.CachedQuantileOrBuilder
-
optional string quantile = 3;
- hasQuantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Checks whether the input has a quantiles column name.
- HasRawPredictionCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param rawPredictionCol (default: "rawPrediction").
- HasRegParam - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param regParam.
- HasRelativeError - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param relativeError (default: 0.001).
- hasRemoveReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- hasRemoveReason() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional string remove_reason = 22;
- hasRemoveReason() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional string remove_reason = 22;
- hasRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional int64 remove_time = 21;
- hasRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
optional int64 remove_time = 21;
- hasRemoveTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryOrBuilder
-
optional int64 remove_time = 21;
- hasRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional int64 remove_time = 6;
- hasRemoveTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
optional int64 remove_time = 6;
- hasRemoveTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryOrBuilder
-
optional int64 remove_time = 6;
- hasResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- hasResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string resource_name = 1;
- hasResourceName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string resource_name = 1;
- hasResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- hasResourceName() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
optional string resource_name = 1;
- hasResourceName() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequestOrBuilder
-
optional string resource_name = 1;
- hasResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 result_fetch_start = 6;
- hasResultFetchStart() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional int64 result_fetch_start = 6;
- hasResultFetchStart() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional int64 result_fetch_start = 6;
- hasRootAsShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
- hasRootCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- hasRootCluster() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- hasRootCluster() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapperOrBuilder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- hasRpInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- hasRpInfo() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- hasRpInfo() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapperOrBuilder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- hasRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- hasRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
-
optional string run_id = 3;
- hasRunId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryDataOrBuilder
-
optional string run_id = 3;
- hasRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- hasRunId() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string run_id = 2;
- hasRunId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string run_id = 2;
- hasRuntime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- hasRuntime() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- hasRuntime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoOrBuilder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- hasScalaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- hasScalaVersion() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
-
optional string scala_version = 3;
- hasScalaVersion() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RuntimeInfoOrBuilder
-
optional string scala_version = 3;
- hasSchedulingPool() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- hasSchedulingPool() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional string scheduling_pool = 42;
- hasSchedulingPool() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional string scheduling_pool = 42;
- HasSeed - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param seed (default: this.getClass.getName.hashCode.toLong).
- hasShufflePushReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- hasShufflePushReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- hasShufflePushReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- hasShufflePushReadMetricsDist() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- hasShufflePushReadMetricsDist() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- hasShufflePushReadMetricsDist() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- hasShuffleRead(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- hasShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- hasShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- hasShuffleReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- hasShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- hasShuffleReadMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- hasShuffleReadMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- hasShuffleWrite(StageData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- hasShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- hasShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- hasShuffleWriteMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributionsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- hasShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- hasShuffleWriteMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- hasShuffleWriteMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskMetricsOrBuilder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- hasShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
- hasSink() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- hasSink() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- hasSink() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- HasSolver - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param solver.
- hasSparkUser() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- hasSparkUser() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
optional string spark_user = 6;
- hasSparkUser() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfoOrBuilder
-
optional string spark_user = 6;
- hasSpeculationSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- hasSpeculationSummary() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- hasSpeculationSummary() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- hasSqlExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
optional int64 sql_execution_id = 3;
- hasSqlExecutionId() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
-
optional int64 sql_execution_id = 3;
- hasSqlExecutionId() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataWrapperOrBuilder
-
optional int64 sql_execution_id = 3;
- HasStandardization - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param standardization (default: true).
- hasStartOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- hasStartOffset() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
-
optional string start_offset = 2;
- hasStartOffset() - Method in interface org.apache.spark.status.protobuf.StoreTypes.SourceProgressOrBuilder
-
optional string start_offset = 2;
- hasStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- hasStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string status = 10;
- hasStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string status = 10;
- hasStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- hasStatus() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string status = 10;
- hasStatus() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string status = 10;
- HasStepSize - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param stepSize.
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
optional string storage_level = 2;
- hasStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfoOrBuilder
-
optional string storage_level = 2;
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
-
optional string storage_level = 5;
- hasStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoOrBuilder
-
optional string storage_level = 5;
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- hasStorageLevel() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
-
optional string storage_level = 4;
- hasStorageLevel() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamBlockDataOrBuilder
-
optional string storage_level = 4;
- hasSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 submission_time = 4;
- hasSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
-
optional int64 submission_time = 4;
- hasSubmissionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.JobDataOrBuilder
-
optional int64 submission_time = 4;
- hasSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 submission_time = 10;
- hasSubmissionTime() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional int64 submission_time = 10;
- hasSubmissionTime() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional int64 submission_time = 10;
- hasSubModels() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- hasSubModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- hasSummary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary
-
Indicates whether a training summary exists for this model instance.
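A minimal sketch of guarding summary access with hasSummary (the training DataFrame of (label, features) rows is hypothetical):
  import org.apache.spark.ml.classification.LogisticRegression

  // training is a hypothetical DataFrame of (label, features) rows.
  val model = new LogisticRegression().fit(training)
  if (model.hasSummary) {
    println(model.summary.objectiveHistory.mkString(", "))
  }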
- hasTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- hasTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional string task_locality = 11;
- hasTaskLocality() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional string task_locality = 11;
- hasTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- hasTaskLocality() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
optional string task_locality = 11;
- hasTaskLocality() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapperOrBuilder
-
optional string task_locality = 11;
- hasTaskMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- hasTaskMetrics() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- hasTaskMetrics() - Method in interface org.apache.spark.status.protobuf.StoreTypes.TaskDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- hasTaskMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- hasTaskMetricsDistributions() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- hasTaskMetricsDistributions() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StageDataOrBuilder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- HasThreshold - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param threshold.
- HasThresholds - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param thresholds.
- hasTimedOut() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Whether the function has been called because the key has timed out.
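A sketch of how hasTimedOut is typically checked inside a mapGroupsWithState function; the Event and RunningCount types and the timeout duration are assumptions for illustration:
  import org.apache.spark.sql.streaming.GroupState

  case class Event(id: String)
  case class RunningCount(n: Long)

  def update(key: String, events: Iterator[Event], state: GroupState[RunningCount]): RunningCount = {
    if (state.hasTimedOut) {
      // Invoked because the key timed out, not because new data arrived.
      val last = state.getOption.getOrElse(RunningCount(0L))
      state.remove()
      last
    } else {
      val updated = RunningCount(state.getOption.map(_.n).getOrElse(0L) + events.size)
      state.update(updated)
      state.setTimeoutDuration("10 minutes")
      updated
    }
  }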
- hasTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- hasTimestamp() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
optional string timestamp = 4;
- hasTimestamp() - Method in interface org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressOrBuilder
-
optional string timestamp = 4;
- HasTol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param tol.
- HasTrainingSummary<T> - Interface in org.apache.spark.ml.util
-
Trait for models that provide a training summary.
- hasUpdate() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- hasUpdate() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string update = 3;
- hasUpdate() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string update = 3;
- HasValidationIndicatorCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param validationIndicatorCol.
- hasValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- hasValue() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
-
optional string value = 4;
- hasValue() - Method in interface org.apache.spark.status.protobuf.StoreTypes.AccumulableInfoOrBuilder
-
optional string value = 4;
- hasValue(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Tests whether this attribute contains a specific value.
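A minimal sketch of hasValue on a nominal attribute (the value set is hypothetical):
  import org.apache.spark.ml.attribute.NominalAttribute

  val attr = NominalAttribute.defaultAttr.withValues("small", "medium", "large")
  attr.hasValue("medium")   // true
  attr.hasValue("huge")     // false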
- hasValue1() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- hasValue1() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value1 = 1;
- hasValue1() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value1 = 1;
- hasValue2() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- hasValue2() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
-
optional string value2 = 2;
- hasValue2() - Method in interface org.apache.spark.status.protobuf.StoreTypes.PairStringsOrBuilder
-
optional string value2 = 2;
- HasVarianceCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param varianceCol.
- HasVarianceImpurity - Interface in org.apache.spark.ml.tree
- hasVendor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- hasVendor() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
optional string vendor = 4;
- hasVendor() - Method in interface org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequestOrBuilder
-
optional string vendor = 4;
- hasWeightCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Checks whether the weight column is set and nonempty.
- hasWeightCol() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
-
Checks whether the input has a weight column.
- HasWeightCol - Interface in org.apache.spark.ml.param.shared
-
Trait for shared param weightCol.
- hasWriteObjectMethod() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- hasWriteReplaceMethod() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- HdfsUtils - Class in org.apache.spark.streaming.util
- HdfsUtils() - Constructor for class org.apache.spark.streaming.util.HdfsUtils
- head() - Method in class org.apache.spark.sql.api.Dataset
-
Returns the first row.
- head(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns the first
n
rows.
- head(int) - Method in class org.apache.spark.sql.Dataset
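For illustration, head materializes rows on the driver, so it is best kept to small n; a minimal sketch (df is a hypothetical DataFrame):
  val first = df.head()        // the first Row
  val firstTen = df.head(10)   // Array[Row] with up to ten rows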
- HEADER_ACCUMULATORS() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_ATTEMPT() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_DESER_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_DISK_SPILL() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_DURATION() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_ERROR() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_EXECUTOR() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_GC_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_HOST() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_ID() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_INPUT_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_LAUNCH_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_LOCALITY() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_MEM_SPILL() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_OUTPUT_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_PEAK_MEM() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SER_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SHUFFLE_READ_FETCH_WAIT_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SHUFFLE_REMOTE_READS() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SHUFFLE_TOTAL_READS() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SHUFFLE_WRITE_SIZE() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_SHUFFLE_WRITE_TIME() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_STATUS() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- HEADER_TASK_INDEX() - Static method in class org.apache.spark.ui.jobs.ApiHelper
- headerRow(Seq<Tuple3<String, Object, Option<String>>>, boolean, int, String, String, String, String) - Method in interface org.apache.spark.ui.PagedTable
- headers() - Method in interface org.apache.spark.ui.PagedTable
- headerSparkPage(HttpServletRequest, String, Function0<Seq<Node>>, SparkUITab, Option<String>, boolean, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns a spark page with correctly formatted headers
- height() - Method in interface org.apache.spark.sql.connector.read.colstats.Histogram
- hex(Column) - Static method in class org.apache.spark.sql.functions
-
Computes hex value of the given column.
- hi() - Method in interface org.apache.spark.sql.connector.read.colstats.HistogramBin
- high() - Method in class org.apache.spark.partial.BoundedDouble
- HingeGradient - Class in org.apache.spark.mllib.optimization
-
Compute gradient and loss for a Hinge loss function, as used in SVM binary classification.
- HingeGradient() - Constructor for class org.apache.spark.mllib.optimization.HingeGradient
- hint(String, Object...) - Method in class org.apache.spark.sql.api.Dataset
-
Specifies some hint on the current Dataset.
- hint(String, Object...) - Method in class org.apache.spark.sql.Dataset
- hint(String, Seq<Object>) - Method in class org.apache.spark.sql.api.Dataset
-
Specifies some hint on the current Dataset.
- hint(String, Seq<Object>) - Method in class org.apache.spark.sql.Dataset
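A minimal sketch of passing a join hint; ordersDf, customersDf, and the join key are hypothetical. The "broadcast" hint asks the optimizer to broadcast the hinted side of the join:
  val joined = ordersDf.join(customersDf.hint("broadcast"), "customer_id")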
- histogram() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- histogram(double[]) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute a histogram using the provided buckets.
- histogram(double[], boolean) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute a histogram using the provided buckets.
- histogram(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
- histogram(int) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
- histogram(Double[], boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
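A minimal sketch of both histogram overloads via the Scala DoubleRDDFunctions, assuming an active SparkContext named sc:
  val rdd = sc.parallelize(Seq(1.0, 2.5, 3.7, 4.2, 9.9))

  // Evenly spaced buckets between min and max: returns (bucketBoundaries, counts).
  val (buckets, counts) = rdd.histogram(2)

  // Explicit boundaries: counts for [0, 5) and [5, 10].
  val counts2 = rdd.histogram(Array(0.0, 5.0, 10.0))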
- Histogram - Interface in org.apache.spark.sql.connector.read.colstats
-
An interface to represent an equi-height histogram, which is a part of
ColumnStatistics
.
- histogram_numeric(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: computes a histogram on numeric 'expr' using nb bins.
- HistogramBin - Interface in org.apache.spark.sql.connector.read.colstats
-
An interface to represent a bin in an equi-height histogram.
- histogramOnEmptyRDDOrContainingInfinityOrNaNError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- HiveCatalogMetrics - Class in org.apache.spark.metrics.source
-
Metrics for access to the hive external catalog.
- HiveCatalogMetrics() - Constructor for class org.apache.spark.metrics.source.HiveCatalogMetrics
- hiveTableTypeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- hiveTableWithAnsiIntervalsError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- hll_sketch_agg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch configured with default lgConfigK value.
- hll_sketch_agg(String, int) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch configured with lgConfigK arg.
- hll_sketch_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch configured with default lgConfigK value.
- hll_sketch_agg(Column, int) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch configured with lgConfigK arg.
- hll_sketch_agg(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch configured with lgConfigK arg.
- hll_sketch_estimate(String) - Static method in class org.apache.spark.sql.functions
-
Returns the estimated number of unique values given the binary representation of a Datasketches HllSketch.
- hll_sketch_estimate(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the estimated number of unique values given the binary representation of a Datasketches HllSketch.
- hll_union(String, String) - Static method in class org.apache.spark.sql.functions
-
Merges two binary representations of Datasketches HllSketch objects, using a Datasketches Union object.
- hll_union(String, String, boolean) - Static method in class org.apache.spark.sql.functions
-
Merges two binary representations of Datasketches HllSketch objects, using a Datasketches Union object.
- hll_union(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Merges two binary representations of Datasketches HllSketch objects, using a Datasketches Union object.
- hll_union(Column, Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Merges two binary representations of Datasketches HllSketch objects, using a Datasketches Union object.
- hll_union_agg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch, generated by merging previously created Datasketches HllSketch instances via a Datasketches Union instance.
- hll_union_agg(String, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch, generated by merging previously created Datasketches HllSketch instances via a Datasketches Union instance.
- hll_union_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch, generated by merging previously created Datasketches HllSketch instances via a Datasketches Union instance.
- hll_union_agg(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch, generated by merging previously created Datasketches HllSketch instances via a Datasketches Union instance.
- hll_union_agg(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the updatable binary representation of the Datasketches HllSketch, generated by merging previously created Datasketches HllSketch instances via a Datasketches Union instance.
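A minimal sketch of the sketch/estimate pair from the entries above, assuming an active SparkSession named spark (the input data is hypothetical):
  import org.apache.spark.sql.functions._
  import spark.implicits._

  val df = Seq("a", "b", "a", "c").toDF("value")

  // Build a Datasketches HllSketch over the column, then estimate the distinct count from it.
  val sketched = df.agg(hll_sketch_agg(col("value")).as("sketch"))
  val approx = sketched.select(hll_sketch_estimate(col("sketch")).as("approx_distinct"))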
- hllInvalidInputSketchBuffer(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- hllInvalidLgK(String, int, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- hllUnionDifferentLgK(int, int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- holdingLocks() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
Deprecated. Use synchronizers and monitors instead. Since 4.0.0.
- horzcat(Matrix[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Horizontally concatenate a sequence of matrices.
- horzcat(Matrix[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Horizontally concatenate a sequence of matrices.
- host() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutorsOnHost
- host() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
- host() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
- host() - Method in class org.apache.spark.scheduler.TaskInfo
- host() - Method in interface org.apache.spark.scheduler.TaskLocation
- host() - Method in interface org.apache.spark.SparkExecutorInfo
- host() - Method in class org.apache.spark.SparkExecutorInfoImpl
- host() - Method in class org.apache.spark.status.api.v1.TaskData
- host() - Method in class org.apache.spark.storage.BlockManagerId
- host() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveShufflePushMergerLocation
- HOST() - Static method in class org.apache.spark.status.TaskIndexNames
- HOST_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- HOST_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- HOST_PORT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- HOST_PORT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- HOST_PORT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- hostId() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
- hostLocation() - Method in class org.apache.spark.scheduler.SplitInfo
- hostname() - Method in interface org.apache.spark.api.plugin.PluginContext
-
The host name which is being used by the Spark process for communication.
- hostname() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- hostOptionNotSetError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- hostPort() - Method in class org.apache.spark.scheduler.MiscellaneousProcessDetails
- hostPort() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- hostPort() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- hostPort() - Method in class org.apache.spark.storage.BlockManagerId
- hostsToFilter() - Method in class org.apache.spark.storage.BlockManagerMessages.GetShufflePushMergerLocations
- hostToLocalTaskCount() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
- hour(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the hours as an integer from a given date/timestamp/string.
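A minimal sketch of extracting the hour-of-day (eventsDf and the event_time column are hypothetical):
  import org.apache.spark.sql.functions.{col, hour}

  val withHour = eventsDf.withColumn("hr", hour(col("event_time")))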
- HOUR() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- hours() - Static method in class org.apache.spark.scheduler.StatsReportListener
- hours(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create an hourly transform for a timestamp column.
- hours(Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for timestamps to partition data into hours.
- hours(Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for timestamps to partition data into hours.
- hours(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- html() - Method in class org.apache.spark.status.api.v1.StackTrace
- htmlResponderToServlet(Function1<HttpServletRequest, Seq<Node>>) - Static method in class org.apache.spark.ui.JettyUtils
- httpRequest() - Method in interface org.apache.spark.status.api.v1.ApiRequestContext
- httpResponseCode(URL, String, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.TestUtils
-
Returns the response code from an HTTP(S) URL.
- httpResponseMessage(URL, String, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.TestUtils
-
Returns the response message from an HTTP(S) URL.
- HttpSecurityFilter - Class in org.apache.spark.ui
-
A servlet filter that implements HTTP security features.
- HttpSecurityFilter(SparkConf, org.apache.spark.SecurityManager) - Constructor for class org.apache.spark.ui.HttpSecurityFilter
- hypot(double, String) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(double, Column) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(String, double) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(String, String) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(String, Column) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(Column, double) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(Column, String) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
- hypot(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes
sqrt(a^2 + b^2)
without intermediate overflow or underflow.
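A minimal sketch of hypot for a Euclidean distance from the origin (pointsDf with numeric columns x and y is hypothetical):
  import org.apache.spark.sql.functions.{col, hypot}

  val withDist = pointsDf.withColumn("dist", hypot(col("x"), col("y")))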
I
- i() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- id() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
A unique ID for this RDD (within its SparkContext).
- id() - Method in class org.apache.spark.broadcast.Broadcast
- id() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- id() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
- id() - Method in class org.apache.spark.mllib.tree.model.Node
- id() - Method in class org.apache.spark.rdd.RDD
-
A unique ID for this RDD (within its SparkContext).
- id() - Method in class org.apache.spark.resource.ResourceProfile
-
A unique id of this ResourceProfile
- id() - Method in class org.apache.spark.resource.ResourceRequest
- id() - Method in class org.apache.spark.scheduler.AccumulableInfo
- id() - Method in class org.apache.spark.scheduler.TaskInfo
- id() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the unique id of this query that persists across restarts from checkpoint data.
- id() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent
- id() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
- id() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- id() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- id() - Method in class org.apache.spark.status.api.v1.AccumulableInfo
- id() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- id() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- id() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- id() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- id() - Method in class org.apache.spark.status.api.v1.ResourceProfileInfo
- id() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- id() - Method in class org.apache.spark.storage.RDDInfo
- id() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
This is a unique identifier for the input stream.
- id() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- id() - Method in class org.apache.spark.util.AccumulatorV2
-
Returns the id of this accumulator; it can only be called after registration.
- ID - Enum constant in enum class org.apache.spark.status.api.v1.TaskSorting
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- ident() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- Identifiable - Interface in org.apache.spark.ml.util
-
Trait for an object with an immutable unique ID that identifies itself and its derivatives.
- Identifier - Interface in org.apache.spark.sql.connector.catalog
-
Identifies an object in a catalog.
- IdentifierHelper(Identifier) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- identifierTooManyNamePartsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- identifierTooManyNamePartsError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- identity(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create an identity transform for a column.
- identity(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- Identity$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
- identityColumnDuplicatedSequenceGeneratorOption(SqlBaseParser.IdentityColSpecContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- identityColumnIllegalStep(SqlBaseParser.IdentityColSpecContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- identityColumnSpec() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the identity column specification of this table column.
- IdentityColumnSpec - Class in org.apache.spark.sql.connector.catalog
-
Identity column specification.
- IdentityColumnSpec(long, long, boolean) - Constructor for class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
-
Creates an identity column specification.
- identityColumnUnsupportedDataType(SqlBaseParser.IdentityColumnContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- idf() - Method in class org.apache.spark.ml.feature.IDFModel
-
Returns the IDF vector.
- idf() - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Returns the current IDF vector, computed from the document frequencies (docFreq) and the number of documents.
- idf() - Method in class org.apache.spark.mllib.feature.IDFModel
- IDF - Class in org.apache.spark.ml.feature
-
Compute the Inverse Document Frequency (IDF) given a collection of documents.
- IDF - Class in org.apache.spark.mllib.feature
-
Inverse document frequency (IDF).
- IDF() - Constructor for class org.apache.spark.ml.feature.IDF
- IDF() - Constructor for class org.apache.spark.mllib.feature.IDF
- IDF(int) - Constructor for class org.apache.spark.mllib.feature.IDF
- IDF(String) - Constructor for class org.apache.spark.ml.feature.IDF
- IDF.DocumentFrequencyAggregator - Class in org.apache.spark.mllib.feature
-
Document frequency aggregator.
- IDFBase - Interface in org.apache.spark.ml.feature
- IDFModel - Class in org.apache.spark.ml.feature
-
Model fitted by
IDF
.
- IDFModel - Class in org.apache.spark.mllib.feature
-
Represents an IDF model that can transform term frequency vectors.
- IfElseStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for IfElseStatement.
- IfElseStatementExec(Seq<SingleStatementExec>, Seq<CompoundBodyExec>, Option<CompoundBodyExec>, SparkSession) - Constructor for class org.apache.spark.sql.scripting.IfElseStatementExec
- ifExists() - Method in class org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn
- ifnull(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns
col2
if
col1
is null, or
col1
otherwise.
- Ignore - Enum constant in enum class org.apache.spark.sql.SaveMode
-
Ignore mode means that when saving a DataFrame to a data source, if data already exists, the save operation is expected to not save the contents of the DataFrame and to not change the existing data.
- ilike(String) - Method in class org.apache.spark.sql.Column
-
SQL ILIKE expression (case insensitive LIKE).
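A minimal sketch of a case-insensitive match with Column.ilike (namesDf and its name column are hypothetical):
  import org.apache.spark.sql.functions.col

  // Matches "Spark", "SPARK", "spark", and so on.
  val matches = namesDf.filter(col("name").ilike("%spark%"))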
- ilike(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if str matches
pattern
with
escapeChar
('\') case-insensitively, null if any arguments are null, false otherwise.
- ilike(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if str matches
pattern
with
escapeChar
case-insensitively, null if any arguments are null, false otherwise.
- illegalParquetTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- illegalUrlError(UTF8String, IllegalArgumentException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ImageDataSource - Class in org.apache.spark.ml.source.image
-
image
package implements Spark SQL data source API for loading image data as
DataFrame
.
- ImageDataSource() - Constructor for class org.apache.spark.ml.source.image.ImageDataSource
- imageFields() - Static method in class org.apache.spark.ml.image.ImageSchema
- imageSchema() - Static method in class org.apache.spark.ml.image.ImageSchema
-
DataFrame with a single column of images named "image" (nullable)
- ImageSchema - Class in org.apache.spark.ml.image
-
Defines the image schema and methods to read and manipulate images.
- ImageSchema() - Constructor for class org.apache.spark.ml.image.ImageSchema
- implicitCollationMismatchError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- implicitPrefs() - Method in class org.apache.spark.ml.recommendation.ALS
- implicitPrefs() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param to decide whether to use implicit preference.
- implicits() - Method in class org.apache.spark.sql.SparkSession
-
Accessor for nested Scala object
- implicits() - Method in class org.apache.spark.sql.SQLContext
-
Accessor for nested Scala object
- implicits$() - Constructor for class org.apache.spark.sql.SparkSession.implicits$
- implicits$() - Constructor for class org.apache.spark.sql.SQLContext.implicits$
- improveException(Object, NotSerializableException) - Static method in class org.apache.spark.serializer.SerializationDebugger
-
Improve the given NotSerializableException with the serialization path leading from the given object to the problematic object.
- Impurities - Class in org.apache.spark.mllib.tree.impurity
-
Factory for Impurity instances.
- Impurities() - Constructor for class org.apache.spark.mllib.tree.impurity.Impurities
- impurity() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- impurity() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- impurity() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- impurity() - Method in class org.apache.spark.ml.classification.GBTClassifier
- impurity() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- impurity() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- impurity() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- impurity() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- impurity() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- impurity() - Method in class org.apache.spark.ml.regression.GBTRegressor
- impurity() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- impurity() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- impurity() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- impurity() - Method in interface org.apache.spark.ml.tree.HasVarianceImpurity
-
Criterion used for information gain calculation (case-insensitive).
- impurity() - Method in class org.apache.spark.ml.tree.InternalNode
- impurity() - Method in class org.apache.spark.ml.tree.LeafNode
- impurity() - Method in class org.apache.spark.ml.tree.Node
-
Impurity measure at this node (for training data)
- impurity() - Method in interface org.apache.spark.ml.tree.TreeClassifierParams
-
Criterion used for information gain calculation (case-insensitive).
- impurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- impurity() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- impurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- impurity() - Method in class org.apache.spark.mllib.tree.model.Node
- Impurity - Interface in org.apache.spark.mllib.tree.impurity
-
Trait for calculating information gain.
- impurityStats() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- Imputer - Class in org.apache.spark.ml.feature
-
Imputation estimator for completing missing values, using the mean, median or mode of the columns in which the missing values are located.
- Imputer() - Constructor for class org.apache.spark.ml.feature.Imputer
- Imputer(String) - Constructor for class org.apache.spark.ml.feature.Imputer
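A minimal sketch of fitting an Imputer; df and its column names are hypothetical, and "median" is one of the supported strategies named in the description above:
  import org.apache.spark.ml.feature.Imputer

  // df is a hypothetical DataFrame with numeric columns "a" and "b" containing missing values.
  val imputer = new Imputer()
    .setInputCols(Array("a", "b"))
    .setOutputCols(Array("a_imputed", "b_imputed"))
    .setStrategy("median")

  val imputed = imputer.fit(df).transform(df)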
- ImputerModel - Class in org.apache.spark.ml.feature
-
Model fitted by
Imputer
.
- ImputerParams - Interface in org.apache.spark.ml.feature
-
Params for
Imputer
and
ImputerModel
.
- in(String, DataType) - Static method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Creates a builder for an IN procedure parameter.
- In - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the attribute evaluates to one of the values in the array.
- In(String, Object[]) - Constructor for class org.apache.spark.sql.sources.In
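A minimal sketch of constructing the In source filter described above (the attribute name and values are hypothetical):
  import org.apache.spark.sql.sources.In

  // Matches rows whose "status" attribute is one of the listed values.
  val filter = In("status", Array[Any]("ACTIVE", "PENDING"))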
- In() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges arriving at a vertex.
- IN - Enum constant in enum class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Mode
- INACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- inArray(Object) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check for value in an allowed set of values.
- inArray(List<T>) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check for value in an allowed set of values.
- InBlock$() - Constructor for class org.apache.spark.ml.recommendation.ALS.InBlock$
- INCOMING_EDGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- incompatibleDataSourceRegisterError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- incompatibleDataToTableAmbiguousColumnNameError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableCannotFindDataError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableCannotSafelyCastError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableExtraColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableExtraStructFieldsError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableNullableArrayElementsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableNullableColumnError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableNullableMapValuesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableStructMissingFieldsError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleDataToTableUnexpectedColumnNameError(String, String, int, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incompatibleJoinTypesError(String, String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- IncompatibleMergeException - Exception in org.apache.spark.util.sketch
- IncompatibleMergeException(String) - Constructor for exception org.apache.spark.util.sketch.IncompatibleMergeException
- incompatibleViewSchemaChangeError(String, String, int, Seq<Attribute>, Option<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- incorrectEndOffset(long, long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- incorrectRampUpRate(long, long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- INCREASING_RUNTIME - Enum constant in enum class org.apache.spark.status.api.v1.TaskSorting
- incrementFetchedPartitions(int) - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- incrementFileCacheHits(int) - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- incrementFilesDiscovered(int) - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- incrementHiveClientCalls(int) - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- incrementParallelListingJobCount(int) - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- inDegrees() - Method in class org.apache.spark.graphx.GraphOps
- independence() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
- INDETERMINATE() - Static method in class org.apache.spark.rdd.DeterministicLevel
- indeterminateCollationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- index() - Method in class org.apache.spark.ml.attribute.Attribute
-
Index of the attribute.
- index() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- index() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- index() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- index() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- index() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
- index() - Method in interface org.apache.spark.Partition
-
Get the partition's index within its parent RDD
- index() - Method in class org.apache.spark.scheduler.TaskInfo
-
The index of this task within its task set.
- index() - Method in class org.apache.spark.status.api.v1.TaskData
- index(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Return the index for the (i, j)-th element in the backing array.
- index(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Return the index for the (i, j)-th element in the backing array.
- INDEX() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- INDEX_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- INDEX_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- IndexedRow - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a row of IndexedRowMatrix.
- IndexedRow(long, Vector) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRow
- IndexedRowMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a row-oriented DistributedMatrix with indexed rows.
- IndexedRowMatrix(RDD<IndexedRow>) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Alternative constructor leaving matrix dimensions to be determined automatically.
- IndexedRowMatrix(RDD<IndexedRow>, long, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
- indexExists(String) - Method in interface org.apache.spark.sql.connector.catalog.index.SupportsIndex
-
Checks whether an index exists in this table.
- indexExists(Connection, String, Identifier, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Checks whether an index exists
- indexExists(Connection, String, Identifier, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- indexExists(Connection, String, Identifier, JDBCOptions) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- indexExists(Connection, String, Identifier, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- indexName() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
- indexName(String) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- indexOf(Object) - Method in class org.apache.spark.ml.feature.HashingTF
-
Returns the index of the input term.
- indexOf(Object) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Returns the index of the input term.
- indexOf(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Index of an attribute specified by name.
- indexOf(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Index of a specific value.
- indexToLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the level of the tree that the given node is in.
- IndexToString - Class in org.apache.spark.ml.feature
-
A Transformer that maps a column of indices back to a new column of corresponding string values.
- IndexToString() - Constructor for class org.apache.spark.ml.feature.IndexToString
- IndexToString(String) - Constructor for class org.apache.spark.ml.feature.IndexToString
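A minimal sketch pairing IndexToString with StringIndexer to round-trip labels, assuming an active SparkSession named spark:

    import org.apache.spark.ml.feature.{IndexToString, StringIndexer}

    val df = spark.createDataFrame(
      Seq((0, "a"), (1, "b"), (2, "a"))).toDF("id", "category")

    val indexed = new StringIndexer()
      .setInputCol("category").setOutputCol("categoryIndex")
      .fit(df).transform(df)

    // Map the numeric indices back to the original string labels
    // (the labels are read from the indexed column's metadata).
    val restored = new IndexToString()
      .setInputCol("categoryIndex").setOutputCol("originalCategory")
      .transform(indexed)

    restored.select("categoryIndex", "originalCategory").show()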
- indexType() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
- indexUpperTriangular(int, int, int) - Static method in class org.apache.spark.ml.impl.Utils
-
Indexing in an array representing the upper triangular part of a matrix into an n * n array representing the full symmetric matrix (column major).
- indices() - Method in class org.apache.spark.ml.feature.VectorSlicer
-
An array of indices to select features from a vector column.
- indices() - Method in class org.apache.spark.ml.linalg.SparseVector
- indices() - Method in class org.apache.spark.mllib.linalg.SparseVector
- IndylambdaScalaClosures - Class in org.apache.spark.util
- IndylambdaScalaClosures() - Constructor for class org.apache.spark.util.IndylambdaScalaClosures
- inferNumPartitions(long) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
When saving a tree model, infer the number of partitions based on the number of nodes.
- inferPartitioning(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
-
Infer the partitioning of the table identified by the given options.
- inferSchema(SparkSession, Map<String, String>, Seq<FileStatus>) - Static method in class org.apache.spark.sql.avro.AvroUtils
- inferSchema(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
-
Infer the schema of the table identified by the given options.
- inferSchemaUnsupportedForHiveError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- INFINITE_TIMEOUT() - Static method in class org.apache.spark.util.RpcUtils
-
Infinite timeout is used internally, so there's no timeout configuration property that controls it.
- info() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded
- info() - Method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- info() - Method in class org.apache.spark.status.LiveRDD
- info() - Method in class org.apache.spark.status.LiveStage
- info() - Method in class org.apache.spark.status.LiveTask
- info() - Method in class org.apache.spark.storage.BlockInfoWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- infoChanged(SparkAppHandle) - Method in interface org.apache.spark.launcher.SparkAppHandle.Listener
-
Callback for changes in any information that is not the handle's state.
- infoGain() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- InformationGainStats - Class in org.apache.spark.mllib.tree.model
-
Information gain statistics for each split. Params: gain (information gain value), impurity (current node impurity), leftImpurity (left node impurity), rightImpurity (right node impurity), leftPredict (left node predict), rightPredict (right node predict).
- InformationGainStats(double, double, double, double, Predict, Predict) - Constructor for class org.apache.spark.mllib.tree.model.InformationGainStats
- init(FilterConfig) - Method in class org.apache.spark.ui.HttpSecurityFilter
- init(FilterConfig) - Method in class org.apache.spark.ui.JWSFilter
-
Load and validate the configurations: an IllegalArgumentException is thrown if the user didn't provide the required argument, and a WeakKeyException if the user-provided value is insufficient.
- init(PluginContext, Map<String, String>) - Method in interface org.apache.spark.api.plugin.ExecutorPlugin
-
Initialize the executor plugin.
- init(SparkContext, PluginContext) - Method in interface org.apache.spark.api.plugin.DriverPlugin
-
Initialize the plugin.
- initcap(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a new string column by converting the first letter of each word to uppercase.
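For example (an illustrative snippet, assuming an active SparkSession named spark):

    import org.apache.spark.sql.functions.{col, initcap}

    val df = spark.createDataFrame(Seq(Tuple1("spark sql rocks"))).toDF("s")
    df.select(initcap(col("s"))).show()   // "Spark Sql Rocks"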
- initCoefficients(int) - Method in interface org.apache.spark.ml.regression.FactorizationMachines
- initDaemon(Logger) - Static method in class org.apache.spark.util.Utils
-
Utility function that should be called early in main() for daemons to set up some common diagnostic state.
- initialHash() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- initialize() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Initializes thread local by explicitly getting the value.
- initialize(boolean, SparkConf) - Method in interface org.apache.spark.broadcast.BroadcastFactory
- initialize(double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- initialize(double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- initialize(double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- initialize(double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- initialize(String, CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.CatalogPlugin
-
Called to initialize configuration.
- initialize(String, CaseInsensitiveStringMap) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- initialize(RDD<Tuple2<Object, Vector>>, LDA) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
-
Initializer for the optimizer.
- initialize(TaskScheduler, SchedulerBackend) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
-
Initialize task scheduler and backend scheduler.
- initialize(MutableAggregationBuffer) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Initializes the given aggregation buffer, i.e.
- initializeApplication() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Called once in the driver to bootstrap this module that is specific to this application.
- Initialized() - Static method in class org.apache.spark.rdd.CheckpointState
- INITIALIZED - Enum constant in enum class org.apache.spark.streaming.StreamingContextState
-
The context has been created, but not been started yet.
- initializeExecutor(String, String, Map<String, String>) - Method in interface org.apache.spark.shuffle.api.ShuffleExecutorComponents
-
Called once per executor to bootstrap this module with state that is specific to that executor, specifically the application ID and executor ID.
- initialOffset() - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
-
Returns the initial offset for a streaming query to start reading from.
- initialState(JavaPairRDD<KeyType, StateType>) - Method in class org.apache.spark.streaming.StateSpec
-
Set the RDD containing the initial states that will be used by mapWithState
- initialState(RDD<Tuple2<KeyType, StateType>>) - Method in class org.apache.spark.streaming.StateSpec
-
Set the RDD containing the initial states that will be used by mapWithState
- initialTypeNotTargetDataTypeError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- initialTypeNotTargetDataTypesError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- initialValue - Static variable in interface org.apache.spark.sql.connector.metric.CustomMetric
-
The initial value of this metric.
- initialValue() - Method in class org.apache.spark.partial.PartialResult
- initialWeights() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- initialWeights() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- initialWeights() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
-
The initial weights of the model.
- initiateFallbackFetchForPushMergedBlock(BlockId, BlockManagerId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the iterator.next() is invoked and the iterator processes a response of type: 1) ShuffleBlockFetcherIterator.SuccessFetchResult, 2) ShuffleBlockFetcherIterator.FallbackOnPushMergedFailureResult, 3) ShuffleBlockFetcherIterator.PushMergedRemoteMetaFailedFetchResult.
- initMode() - Method in class org.apache.spark.ml.clustering.KMeans
- initMode() - Method in class org.apache.spark.ml.clustering.KMeansModel
- initMode() - Method in interface org.apache.spark.ml.clustering.KMeansParams
-
Param for the initialization algorithm.
- initMode() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- initMode() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
-
Param for the initialization algorithm.
- initModel(DenseVector<Object>, Random) - Method in interface org.apache.spark.ml.ann.Layer
-
Returns the instance of the layer with random generated weights.
- initStd() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- initStd() - Method in class org.apache.spark.ml.classification.FMClassifier
- initStd() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
-
Param for standard deviation of initial coefficients
- initStd() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- initStd() - Method in class org.apache.spark.ml.regression.FMRegressor
- initSteps() - Method in class org.apache.spark.ml.clustering.KMeans
- initSteps() - Method in class org.apache.spark.ml.clustering.KMeansModel
- initSteps() - Method in interface org.apache.spark.ml.clustering.KMeansParams
-
Param for the number of steps for the k-means|| initialization mode.
- injectCheckRule(Function1<SparkSession, Function1<LogicalPlan, BoxedUnit>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a check analysis Rule builder into the SparkSession.
- injectColumnar(Function1<SparkSession, ColumnarRule>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a rule that can override the columnar execution of an executor.
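The inject* hooks on SparkSessionExtensions listed here all follow the same builder-function pattern, registered through SparkSession.Builder.withExtensions. A minimal sketch using injectOptimizerRule, where MyRule is a hypothetical no-op rule:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule

    // Hypothetical rule that leaves every plan unchanged.
    case class MyRule(session: SparkSession) extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan
    }

    val spark = SparkSession.builder()
      .master("local[*]")
      .withExtensions(ext => ext.injectOptimizerRule(session => MyRule(session)))
      .getOrCreate()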
- injectFunction(Tuple3<FunctionIdentifier, ExpressionInfo, Function1<Seq<Expression>, Expression>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Injects a custom function into the FunctionRegistry at runtime for all sessions.
- injectOptimizerRule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject an optimizer Rule builder into the SparkSession.
- injectParser(Function2<SparkSession, ParserInterface, ParserInterface>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a custom parser into the SparkSession.
- injectPlannerStrategy(Function1<SparkSession, SparkStrategy>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a planner Strategy builder into the SparkSession.
- injectPlanNormalizationRule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a plan normalization Rule builder into the SparkSession.
- injectPostHocResolutionRule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject an analyzer Rule builder into the SparkSession.
- injectPreCBORule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject an optimizer Rule builder that rewrites logical plans into the SparkSession.
- injectQueryPostPlannerStrategyRule(Function1<SparkSession, Rule<SparkPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a rule that is applied between plannerStrategy and queryStagePrepRules, so it can get the whole plan before injecting exchanges.
- injectQueryStageOptimizerRule(Function1<SparkSession, Rule<SparkPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a rule that can override the query stage optimizer phase of adaptive query execution.
- injectQueryStagePrepRule(Function1<SparkSession, Rule<SparkPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a rule that can override the query stage preparation phase of adaptive query execution.
- injectResolutionRule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject an analyzer resolution Rule builder into the SparkSession.
- injectRuntimeOptimizerRule(Function1<SparkSession, Rule<LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Inject a runtime Rule builder into the SparkSession.
- injectTableFunction(Tuple3<FunctionIdentifier, ExpressionInfo, Function1<Seq<Expression>, LogicalPlan>>) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
Injects a custom function into the TableFunctionRegistry at runtime for all sessions.
- inline(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element in the given array of structs.
- inline_outer(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element in the given array of structs.
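The two variants differ in how they treat NULL or empty arrays, analogous to explode vs. explode_outer; a sketch assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.{col, inline, inline_outer}

    val df = spark.sql("""
      SELECT * FROM VALUES
        (array(named_struct('a', 1, 'b', 2))),
        (CAST(NULL AS ARRAY<STRUCT<a: INT, b: INT>>))
      AS t(arr)""")

    df.select(inline(col("arr"))).show()        // the NULL-array row is dropped
    df.select(inline_outer(col("arr"))).show()  // the NULL-array row yields NULL columns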
- inlineTableContainsScalarSubquery(LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- inNative() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- InnerClosureFinder - Class in org.apache.spark.util
- InnerClosureFinder(Set<Class<?>>) - Constructor for class org.apache.spark.util.InnerClosureFinder
- innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - Method in class org.apache.spark.graphx.EdgeRDD
-
Inner joins this EdgeRDD with another EdgeRDD, assuming both are partitioned using the same PartitionStrategy.
- innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Inner joins this VertexRDD with an RDD containing vertex attribute pairs.
- innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Efficiently inner joins this VertexRDD with another VertexRDD sharing the same index.
- INOUT - Enum constant in enum class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Mode
- inPlace() - Method in interface org.apache.spark.ml.ann.Layer
-
If true, the memory is not allocated for the output of this layer.
- InProcessLauncher - Class in org.apache.spark.launcher
-
In-process launcher for Spark applications.
- InProcessLauncher() - Constructor for class org.apache.spark.launcher.InProcessLauncher
- input() - Method in class org.apache.spark.ml.TransformStart
- INPUT() - Static method in class org.apache.spark.ui.ToolTips
- INPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- INPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- INPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- INPUT_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- input_file_block_length() - Static method in class org.apache.spark.sql.functions
-
Returns the length of the block being read, or -1 if not available.
- input_file_block_start() - Static method in class org.apache.spark.sql.functions
-
Returns the start offset of the block being read, or -1 if not available.
- input_file_name() - Static method in class org.apache.spark.sql.functions
-
Creates a string column for the file name of the current Spark task.
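These file-metadata functions are handy for tracing rows back to their source files; a sketch assuming an active SparkSession named spark and an illustrative path:

    import org.apache.spark.sql.functions.{input_file_block_length, input_file_name}

    spark.read.textFile("/tmp/logs")            // path is illustrative
      .withColumn("source_file", input_file_name())
      .withColumn("block_length", input_file_block_length())
      .show(truncate = false)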
- INPUT_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- INPUT_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- INPUT_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- INPUT_RECORDS() - Static method in class org.apache.spark.status.TaskIndexNames
- INPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- INPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- INPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- INPUT_RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- INPUT_ROWS_PER_SECOND_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- INPUT_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames
- input$() - Constructor for class org.apache.spark.InternalAccumulator.input$
- inputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- inputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- inputBytes() - Method in class org.apache.spark.status.api.v1.StageData
- inputCol() - Method in class org.apache.spark.ml.feature.Binarizer
- inputCol() - Method in class org.apache.spark.ml.feature.Bucketizer
- inputCol() - Method in class org.apache.spark.ml.feature.CountVectorizer
- inputCol() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- inputCol() - Method in class org.apache.spark.ml.feature.HashingTF
- inputCol() - Method in class org.apache.spark.ml.feature.IDF
- inputCol() - Method in class org.apache.spark.ml.feature.IDFModel
- inputCol() - Method in class org.apache.spark.ml.feature.Imputer
- inputCol() - Method in class org.apache.spark.ml.feature.ImputerModel
- inputCol() - Method in class org.apache.spark.ml.feature.IndexToString
- inputCol() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- inputCol() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- inputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- inputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- inputCol() - Method in class org.apache.spark.ml.feature.MinMaxScaler
- inputCol() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- inputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- inputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- inputCol() - Method in class org.apache.spark.ml.feature.PCA
- inputCol() - Method in class org.apache.spark.ml.feature.PCAModel
- inputCol() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- inputCol() - Method in class org.apache.spark.ml.feature.RobustScaler
- inputCol() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- inputCol() - Method in class org.apache.spark.ml.feature.StandardScaler
- inputCol() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- inputCol() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- inputCol() - Method in class org.apache.spark.ml.feature.StringIndexer
- inputCol() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- inputCol() - Method in class org.apache.spark.ml.feature.VectorIndexer
- inputCol() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- inputCol() - Method in class org.apache.spark.ml.feature.VectorSizeHint
- inputCol() - Method in class org.apache.spark.ml.feature.VectorSlicer
- inputCol() - Method in class org.apache.spark.ml.feature.Word2Vec
- inputCol() - Method in class org.apache.spark.ml.feature.Word2VecModel
- inputCol() - Method in interface org.apache.spark.ml.param.shared.HasInputCol
-
Param for input column name.
- inputCol() - Method in class org.apache.spark.ml.UnaryTransformer
- inputCols() - Method in class org.apache.spark.ml.feature.Binarizer
- inputCols() - Method in class org.apache.spark.ml.feature.Bucketizer
- inputCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
- inputCols() - Method in class org.apache.spark.ml.feature.Imputer
- inputCols() - Method in class org.apache.spark.ml.feature.ImputerModel
- inputCols() - Method in class org.apache.spark.ml.feature.Interaction
- inputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- inputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- inputCols() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- inputCols() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- inputCols() - Method in class org.apache.spark.ml.feature.StringIndexer
- inputCols() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- inputCols() - Method in class org.apache.spark.ml.feature.VectorAssembler
- inputCols() - Method in interface org.apache.spark.ml.param.shared.HasInputCols
-
Param for input column names.
- inputDStream() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
- inputDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
- InputDStream<T> - Class in org.apache.spark.streaming.dstream
-
This is the abstract base class for all input streams.
- InputDStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.InputDStream
- inputExternalRowCannotBeNullError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- InputFileBlockHolder - Class in org.apache.spark.rdd
-
This holds file names of the current Spark task.
- InputFileBlockHolder() - Constructor for class org.apache.spark.rdd.InputFileBlockHolder
- inputFiles() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a best-effort snapshot of the files that compose this Dataset.
- inputFiles() - Method in class org.apache.spark.sql.Dataset
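For example (illustrative path; per the description above, the list is best-effort rather than guaranteed complete):

    val df = spark.read.parquet("/tmp/events")   // path is illustrative
    val files: Array[String] = df.inputFiles
    files.foreach(println)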
- inputFilterNotFullyConvertibleError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- inputFormatClazz() - Method in class org.apache.spark.scheduler.InputFormatInfo
- inputFormatClazz() - Method in class org.apache.spark.scheduler.SplitInfo
- InputFormatInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Parses and holds information about inputFormat (and files) specified as a parameter.
- InputFormatInfo(Configuration, Class<?>, String) - Constructor for class org.apache.spark.scheduler.InputFormatInfo
- InputMetricDistributions - Class in org.apache.spark.status.api.v1
- inputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- inputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- InputMetrics - Class in org.apache.spark.status.api.v1
- InputPartition - Interface in org.apache.spark.sql.connector.read
-
A serializable representation of an input partition returned by Batch.planInputPartitions() and the corresponding ones in streaming.
- inputRecords() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- inputRecords() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- inputRecords() - Method in class org.apache.spark.status.api.v1.StageData
- inputRowsPerSecond() - Method in class org.apache.spark.sql.streaming.SourceProgress
- inputRowsPerSecond() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The aggregate (across all sources) rate of data arriving.
- inputSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. A StructType represents data types of input arguments of this aggregate function.
- inputSize() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- inputSourceDiffersFromDataSourceProviderError(String, String, CatalogTable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- inputStreamId() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
- inputTypes() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
-
Returns the required data types of the input values to this function.
- inRange(double, double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Version of `inRange()` which is inclusive by default: [lowerBound, upperBound]
- inRange(double, double, boolean, boolean) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check for value in range lowerBound to upperBound.
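A sketch of both overloads used as Param validators; the FractionParams trait and its parameters are hypothetical:

    import org.apache.spark.ml.param.{DoubleParam, Params, ParamValidators}

    trait FractionParams extends Params {
      // Inclusive on both ends by default: [0, 1].
      val fraction: DoubleParam = new DoubleParam(
        this, "fraction", "sampling fraction in [0, 1]",
        ParamValidators.inRange(0.0, 1.0))

      // Explicit flags: exclusive lower bound, inclusive upper bound, i.e. (0, 1].
      val positiveFraction: DoubleParam = new DoubleParam(
        this, "positiveFraction", "fraction in (0, 1]",
        ParamValidators.inRange(0.0, 1.0, false, true))
    }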
- insert(Dataset<Row>, boolean) - Method in interface org.apache.spark.sql.sources.InsertableRelation
- insert(Map<String, Column>) - Method in class org.apache.spark.sql.WhenNotMatched
-
Specifies an action to insert non-matched rows into the DataFrame with the provided column assignments.
- insert(T) - Method in interface org.apache.spark.sql.connector.write.DeltaWriter
-
Inserts a new row.
- INSERT - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableWritePrivilege
-
The privilege for adding rows to the table.
- InsertableRelation - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can be used to insert data into it through the insert method.
- insertAll() - Method in class org.apache.spark.sql.WhenNotMatched
-
Specifies an action to insert all non-matched rows into the DataFrame.
- insertedValueNumberNotMatchFieldNumberError(SqlBaseParser.NotMatchedClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- insertInto(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Inserts the content of the DataFrame to the specified table.
- insertIntoTable(String, StructField[]) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns an INSERT SQL statement template for inserting a row into the target table via a JDBC connection.
- insertIntoTable(String, StructField[]) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- insertIntoViewNotAllowedError(TableIdentifier, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- insertMismatchedColumnNumberError(Seq<Attribute>, Seq<Attribute>, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- insertMismatchedPartitionNumberError(StructType, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- insertOverwriteDirectoryUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- inShutdown() - Static method in class org.apache.spark.util.ShutdownHookManager
-
Detect whether this thread might be executing a shutdown hook.
- inspect(Object) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- instance() - Method in class org.apache.spark.ml.LoadInstanceEnd
- instance() - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
Get this impurity instance.
- instance() - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
Get this impurity instance.
- instance() - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
Get this impurity instance.
- INSTANCE - Static variable in class org.apache.spark.serializer.DummySerializerInstance
- INSTANT() - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes instances of the java.time.Instant class to the internal representation of nullable Catalyst's TimestampType.
- instantiateSerializerFromConf(ConfigEntry<String>, SparkConf, boolean) - Static method in class org.apache.spark.util.Utils
- instantiateSerializerOrShuffleManager(String, SparkConf, boolean) - Static method in class org.apache.spark.util.Utils
- instr(Column, String) - Static method in class org.apache.spark.sql.functions
-
Locate the position of the first occurrence of substr column in the given string.
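Note that, as in SQL, the returned position is 1-based and 0 means the substring was not found; a small sketch assuming an active SparkSession named spark:

    import org.apache.spark.sql.functions.{col, instr}

    val df = spark.createDataFrame(Seq(Tuple1("hello world"))).toDF("s")
    df.select(
      instr(col("s"), "world"),   // 7  (1-based position)
      instr(col("s"), "xyz")      // 0  (not found)
    ).show()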
- insufficientTablePropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- insufficientTablePropertyPartError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- INT() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable int type.
- INT1 - Static variable in class org.apache.spark.types.variant.VariantUtil
- INT2 - Static variable in class org.apache.spark.types.variant.VariantUtil
- INT4 - Static variable in class org.apache.spark.types.variant.VariantUtil
- INT8 - Static variable in class org.apache.spark.types.variant.VariantUtil
- IntArrayParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Array[Int]] for Java.
- IntArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.IntArrayParam
- IntArrayParam(Params, String, String, Function1<int[], Object>) - Constructor for class org.apache.spark.ml.param.IntArrayParam
- intColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- IntegerExactNumeric - Class in org.apache.spark.sql.types
- IntegerExactNumeric() - Constructor for class org.apache.spark.sql.types.IntegerExactNumeric
- IntegerType - Class in org.apache.spark.sql.types
-
The data type representing Int values.
- IntegerType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the IntegerType object.
- IntegerType() - Constructor for class org.apache.spark.sql.types.IntegerType
- IntegerTypeExpression - Class in org.apache.spark.sql.types
- IntegerTypeExpression() - Constructor for class org.apache.spark.sql.types.IntegerTypeExpression
- IntegralTypeExpression - Class in org.apache.spark.sql.types
- IntegralTypeExpression() - Constructor for class org.apache.spark.sql.types.IntegralTypeExpression
- INTER_JOB_WAIT_MS() - Static method in class org.apache.spark.ui.UIWorkloadGenerator
- interact(Term) - Static method in class org.apache.spark.ml.feature.Dot
- interact(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- interact(Term) - Method in interface org.apache.spark.ml.feature.InteractableTerm
-
Interactions of interactable terms.
- interact(Term) - Method in interface org.apache.spark.ml.feature.Term
-
Default interactions of a Term
- InteractableTerm - Interface in org.apache.spark.ml.feature
-
A term that may be part of an interaction, e.g.
- Interaction - Class in org.apache.spark.ml.feature
-
Implements the feature interaction transform.
- Interaction() - Constructor for class org.apache.spark.ml.feature.Interaction
- Interaction(String) - Constructor for class org.apache.spark.ml.feature.Interaction
- intercept() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- intercept() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- intercept() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
The model intercept for "binomial" logistic regression.
- intercept() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- intercept() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- intercept() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- intercept() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- intercept() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- intercept() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- intercept() - Method in class org.apache.spark.mllib.classification.SVMModel
- intercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
- intercept() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
- intercept() - Method in class org.apache.spark.mllib.regression.LassoModel
- intercept() - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
- intercept() - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
- interceptVector() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- intermediateStorageLevel() - Method in class org.apache.spark.ml.recommendation.ALS
- intermediateStorageLevel() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for StorageLevel for intermediate datasets.
- InternalAccumulator - Class in org.apache.spark
-
A collection of fields and methods concerned with internal accumulators that represent task level metrics.
- InternalAccumulator() - Constructor for class org.apache.spark.InternalAccumulator
- InternalAccumulator.input$ - Class in org.apache.spark
- InternalAccumulator.output$ - Class in org.apache.spark
- InternalAccumulator.shuffleRead$ - Class in org.apache.spark
- InternalAccumulator.shuffleWrite$ - Class in org.apache.spark
- internalCompilerError(InternalCompilerException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- internalError(String) - Static method in exception org.apache.spark.SparkException
- internalError(String, String) - Static method in exception org.apache.spark.SparkException
- internalError(String, Throwable) - Static method in exception org.apache.spark.SparkException
- internalError(String, QueryContext[], String) - Static method in exception org.apache.spark.SparkException
- internalError(String, QueryContext[], String, Option<String>) - Static method in exception org.apache.spark.SparkException
- InternalFunctionRegistration - Class in org.apache.spark.sql.ml
- InternalFunctionRegistration() - Constructor for class org.apache.spark.sql.ml.InternalFunctionRegistration
- internalGetValueMap() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- internalGetValueMap() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- internalGetValueMap() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- InternalKMeansModelWriter - Class in org.apache.spark.ml.clustering
-
A writer for KMeans that handles the "internal" (or default) format
- InternalKMeansModelWriter() - Constructor for class org.apache.spark.ml.clustering.InternalKMeansModelWriter
- InternalLinearRegressionModelWriter - Class in org.apache.spark.ml.regression
-
A writer for LinearRegression that handles the "internal" (or default) format
- InternalLinearRegressionModelWriter() - Constructor for class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
- InternalNode - Class in org.apache.spark.ml.tree
-
Internal Decision Tree node.
- internOption(Option<Object>) - Static method in class org.apache.spark.util.AccumulatorContext
-
Naive way to reduce the duplicate Some objects for the values 0 and -1. TODO: if this spreads out to more values, then using Guava's weak interner would be a better solution.
- interruptAll() - Method in class org.apache.spark.sql.api.SparkSession
-
Request to interrupt all currently running operations of this session.
- interruptAll() - Method in class org.apache.spark.sql.SparkSession
-
Request to interrupt all currently running SQL operations of this session.
- interruptedError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- InterruptibleIterator<T> - Class in org.apache.spark
-
:: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality.
- InterruptibleIterator(TaskContext, Iterator<T>) - Constructor for class org.apache.spark.InterruptibleIterator
- interruptOperation(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Request to interrupt an operation of this session, given its operation ID.
- interruptOperation(String) - Method in class org.apache.spark.sql.SparkSession
-
Request to interrupt a SQL operation of this session, given its SQL execution ID.
- interruptTag(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Request to interrupt all currently running operations of this session with the given job tag.
- interruptTag(String) - Method in class org.apache.spark.sql.SparkSession
-
Request to interrupt all currently running SQL operations of this session with the given job tag.
- interruptThread() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
- interruptThread() - Method in class org.apache.spark.scheduler.local.KillTask
- intersect(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing rows only in both this Dataset and another Dataset.
- intersect(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- intersectAll(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing rows only in both this Dataset and another Dataset while preserving the duplicates.
- intersectAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- intersectInPlace(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Combines this bloom filter with another bloom filter by performing a bitwise AND of the underlying data.
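A sketch of intersecting two compatible filters; filters built with different parameters raise IncompatibleMergeException (indexed above):

    import org.apache.spark.util.sketch.BloomFilter

    // Same expected-item count, so the two filters are bit-compatible.
    val a = BloomFilter.create(10000)
    val b = BloomFilter.create(10000)
    a.putString("spark"); a.putString("only-in-a")
    b.putString("spark"); b.putString("only-in-b")

    a.intersectInPlace(b)
    a.mightContainString("spark")      // true
    a.mightContainString("only-in-a")  // almost certainly false after the AND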
- intersection(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return the intersection of this RDD and another one.
- intersection(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the intersection of this RDD and another one.
- intersection(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return the intersection of this RDD and another one.
- intersection(RDD<T>) - Method in class org.apache.spark.rdd.RDD
-
Return the intersection of this RDD and another one.
- intersection(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
-
Return the intersection of this RDD and another one.
- intersection(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return the intersection of this RDD and another one.
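A small sketch assuming a SparkContext named sc; note the documented behaviors that the output is deduplicated and that the operation performs a shuffle:

    val a = sc.parallelize(Seq(1, 2, 2, 3, 4))
    val b = sc.parallelize(Seq(2, 3, 3, 5))

    a.intersection(b).collect()        // Array(2, 3) -- duplicates removed
    a.intersection(b, 4).collect()     // same result, with 4 result partitions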
- INTERVAL_DS() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- INTERVAL_YM() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- intervalArithmeticOverflowError(String, String, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- intervalDividedByZeroError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- IntervalFields(byte, byte) - Constructor for class org.apache.spark.types.variant.VariantUtil.IntervalFields
- intervalValueOutOfRangeError(SqlBaseParser.IntervalContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- IntParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Int] for Java.
- IntParam - Class in org.apache.spark.util
-
An extractor object for parsing strings into integers.
- IntParam() - Constructor for class org.apache.spark.util.IntParam
- IntParam(String, String, String) - Constructor for class org.apache.spark.ml.param.IntParam
- IntParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam
- IntParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.IntParam
- IntParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam
- invalidAesIvLengthError(String, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidAesKeyLengthError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidAgnosticEncoderError(Object) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- invalidAgnosticEncoderError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidAlphaParameter(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidArrayIndexError(int, int, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidateSerializedMapOutputStatusCache() - Method in class org.apache.spark.ShuffleStatus
-
Clears the cached serialized map output statuses.
- invalidateSerializedMergeOutputStatusCache() - Method in class org.apache.spark.ShuffleStatus
-
Clears the cached serialized merge result statuses.
- invalidateTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- invalidateTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Invalidate cached table metadata for an identifier.
- invalidateView(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Invalidate cached view metadata for an identifier.
- invalidBitmapPositionError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidBooleanStatement(Origin, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- invalidBoundaryEndError(long) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- invalidBoundaryEndError(long) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidBoundaryStartError(long) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- invalidBoundaryStartError(long) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidBucketColumnDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidBucketFile(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidBucketNumberError(int, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidBucketsNumberError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidBucketsNumberError(String, SqlBaseParser.ApplyTransformContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidByteLengthLiteralError(String, SqlBaseParser.SampleByBytesContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidByteStringFormatError(Object) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidCatalogNameError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidChangeLogReaderVersion(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidChangeLogWriterVersion(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidCharsetError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidCheckpointFileError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
- invalidCoalesceHintParameterError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidColumnNameAsPathError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidColumnOrFieldDataTypeError(Seq<String>, DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidDataSourceError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidDatetimeUnitError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidDatetimeUnitError(ParserRuleContext, String, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidDayTimeField(byte, Seq<String>) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- invalidDayTimeIntervalType(String, String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- invalidDdofParameter(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidElementAtIndexError(int, int, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidEscapeChar(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidEscapeStringError(String, SqlBaseParser.PredicateContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidExecuteImmediateVariableType(DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidExpressionEncoderError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidExternalTypeError(String, DataType, Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidFieldName(Seq<String>, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- invalidFieldName(Seq<String>, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidFieldTypeForCorruptRecordError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidFileFormatForStoredAsError(SerdeInfo) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidFractionOfSecondError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidFromToUnitValueError(SqlBaseParser.IntervalValueContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidGroupingSetError(String, SqlBaseParser.GroupingAnalyticsContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidHintParameterError(String, Seq<Object>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidIdentifierError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidIgnoreNAParameter(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidIgnoreNullsParameter(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidIncludeTimestampValueError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidIndexOfZeroError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInputInCastToDatetimeError(double, DataType, QueryContext) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- invalidInputInCastToDatetimeError(double, DataType, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInputInCastToDatetimeError(UTF8String, DataType, QueryContext) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- invalidInputInCastToDatetimeError(UTF8String, DataType, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInputInCastToDatetimeErrorInternal(String, DataType, DataType, QueryContext) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- invalidInputInCastToNumberError(DataType, UTF8String, QueryContext) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- invalidInputInCastToNumberError(DataType, UTF8String, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInputInConversionError(DataType, UTF8String, UTF8String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInputSyntaxForBooleanError(UTF8String, QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidInsertIntoError(SqlBaseParser.InsertIntoContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidIntervalFormError(String, SqlBaseParser.MultiUnitsIntervalContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidIterateLabelUsageForCompound(Origin, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- invalidJdbcNumPartitionsError(int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidJdbcTxnIsolationLevelError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidJoinTypeInJoinWithError(JoinType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidJsonSchema(DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidKerberosConfigForHiveServer2Error() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidLateralJoinRelationError(SqlBaseParser.RelationPrimaryContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidLiteralForWindowDurationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidLocationError(String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidNameForDropTempFunc(Seq<String>, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidNameForTableOrDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidNamespaceNameError(String[]) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidNumberFormatError(DataType, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidNumericLiteralRangeError(String, BigDecimal, BigDecimal, String, SqlBaseParser.NumberContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidNumParameter(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidOrderingForConstantValuePartitionColumnError(StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPandasUDFPlacementError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidParameter(String, String, String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionColumnDataTypeError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionColumnError(String, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionColumnKeyInTableError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionColumnTypeError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionFilterError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidPartitionSpecError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionSpecError(String, Seq<String>, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPartitionTransformationError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidPatternError(String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidPropertyKeyForSetQuotedConfigurationError(String, String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidPropertyValueForSetQuotedConfigurationError(String, String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidQueryAllParametersMustBeNamed(Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidQueryMixedQueryParameters() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidRandomSeedParameter(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidRegexGroupIndexError(String, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidRepartitionExpressionsError(Seq<Object>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidReverseParameter(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidRowLevelOperationAssignments(Seq<Assignment>, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidSaveModeError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- invalidSaveModeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidSingleVariantColumn() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidSortOrderInUDTFOrderingColumnFromAnalyzeMethodHasAlias(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidStartIndexError(int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidStarUsageError(String, Seq<Star>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidStatementError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidStatementForExecuteInto(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidStringParameter(String, String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidTableFunctionIdentifierArgumentMissingParentheses(ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidTableValuedFunctionNameError(Seq<String>, SqlBaseParser.TableValuedFunctionContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidTimestampExprForTimeTravel(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidTimestampProvidedForStrategyError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidTimeTravelSpec(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidTimeTravelSpecError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidTimeZoneDisplacementValueError(SqlBaseParser.SetTimeZoneContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidUDFClassError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidUDTFSelectExpressionFromAnalyzeMethodNeedsAlias(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidUrlError(UTF8String, URISyntaxException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidUTF8StringError(UTF8String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidVariantCast(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidVariantGetPath(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidVariantMissingFieldError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidVariantNullableOrNotBinaryFieldError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidVariantWrongNumFieldsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidViewNameError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidWindowReferenceError(String, SqlBaseParser.WindowClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- invalidWriterCommitMessageError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- invalidXmlSchema(DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invalidYearMonthField(byte, Seq<String>) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- inverse() - Method in class org.apache.spark.ml.feature.DCT
-
Indicates whether to perform the inverse DCT (true) or forward DCT (false).
- inverse(double[], int) - Static method in class org.apache.spark.mllib.linalg.CholeskyDecomposition
-
Computes the inverse of a real symmetric positive definite matrix A using the Cholesky factorization A = U**T*U.
- Inverse$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
- inverseDistributionFunctionMissingWithinGroupError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- invoke(Object, Method, Object[]) - Static method in class org.apache.spark.serializer.DummyInvocationHandler
- invokeWriteReplace(Object) - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- ioEncryptionKey() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- IS_ACTIVE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- IS_ACTIVE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- IS_ACTIVE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- IS_BLACKLISTED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- IS_BLACKLISTED_FOR_STAGE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- IS_EXCLUDED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- IS_EXCLUDED_FOR_STAGE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- IS_SHUFFLE_PUSH_ENABLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- is_variant_null(Column) - Static method in class org.apache.spark.sql.functions
-
Check if a variant value is a variant null.
- is32BitDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
-
Returns whether dt is a DecimalType that fits inside an int.
- is64BitDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
-
Returns whether dt is a DecimalType that fits inside a long.
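These two width checks drive which physical layout Spark picks for decimal columns. A minimal Java sketch, assuming the static forwarders listed above are callable from Java as this index indicates (the precision values are illustrative):
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.DecimalType;

    public class DecimalWidthCheck {
      public static void main(String[] args) {
        // precision <= 9 fits in an int; precision <= 18 fits in a long
        System.out.println(DecimalType.is32BitDecimalType(DataTypes.createDecimalType(9, 2)));  // true
        System.out.println(DecimalType.is32BitDecimalType(DataTypes.createDecimalType(18, 2))); // false
        System.out.println(DecimalType.is64BitDecimalType(DataTypes.createDecimalType(18, 2))); // true
      }
    }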
- isActive() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns true if this query is actively running.
- isActive() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- isActive() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- isActive() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
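For the StreamingQuery flavor of isActive() above, a minimal Java sketch (the local session, rate source, and console sink are illustrative choices, not prescribed by the API):
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class IsActiveDemo {
      public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isActive-demo").getOrCreate();
        Dataset<Row> rates = spark.readStream().format("rate").load();
        StreamingQuery query = rates.writeStream().format("console").start();
        System.out.println(query.isActive()); // true while the query runs
        query.stop();
        System.out.println(query.isActive()); // false once stopped
        spark.stop();
      }
    }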
- isAddIntercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Get if the algorithm uses addIntercept
- isAllowed(Enumeration.Value, Enumeration.Value) - Static method in class org.apache.spark.scheduler.TaskLocality
- isAllowExplicitInsert() - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- isBarrier() - Method in class org.apache.spark.storage.RDDInfo
- isBatchingEnabled(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- isBindCollision(Throwable) - Static method in class org.apache.spark.util.Utils
-
Return whether the exception is caused by an address-port collision when binding.
- isBlacklisted() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
Deprecated. Use isExcluded instead. Since 3.1.0.
- isBlacklistedForStage() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
Deprecated. Use isExcludedForStage instead. Since 3.1.0.
- isBroadcast() - Method in class org.apache.spark.storage.BlockId
- isBucket() - Method in class org.apache.spark.sql.catalog.Column
- isByteArrayDecimalType(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
-
Returns whether dt is a DecimalType that doesn't fit inside a long.
- isCached() - Method in class org.apache.spark.storage.BlockStatus
- isCached() - Method in class org.apache.spark.storage.RDDInfo
- isCached(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns true if the table is currently cached in-memory.
- isCached(String) - Method in class org.apache.spark.sql.SQLContext
-
Returns true if the table is currently cached in-memory.
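A minimal Java sketch of the Catalog.isCached workflow above (the view name people is illustrative):
    import org.apache.spark.sql.SparkSession;

    public class IsCachedDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isCached-demo").getOrCreate();
        spark.range(10).createOrReplaceTempView("people"); // illustrative view name
        spark.catalog().cacheTable("people");
        System.out.println(spark.catalog().isCached("people")); // true
        spark.catalog().uncacheTable("people");
        System.out.println(spark.catalog().isCached("people")); // false
        spark.stop();
      }
    }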
- isCancelled() - Method in class org.apache.spark.ComplexFutureAction
- isCancelled() - Method in interface org.apache.spark.FutureAction
-
Returns whether the action has been cancelled.
- isCancelled() - Method in class org.apache.spark.SimpleFutureAction
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Return Some[true] iff TRUNCATE TABLE causes cascading default.
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- isCascadingTruncateTable() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.OracleDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- isCascadingTruncateTable() - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- isCheckpointed() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return whether this RDD has been checkpointed or not
- isCheckpointed() - Method in class org.apache.spark.graphx.Graph
-
Return whether this Graph has been checkpointed or not.
- isCheckpointed() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- isCheckpointed() - Method in class org.apache.spark.graphx.impl.GraphImpl
- isCheckpointed() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- isCheckpointed() - Method in class org.apache.spark.rdd.RDD
-
Return whether this RDD is checkpointed and materialized, either reliably or locally.
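A minimal Java sketch of the checkpointing life cycle that isCheckpointed() reports on (the checkpoint directory is illustrative); note that checkpoint() only marks the RDD, and an action must run before the flag flips:
    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class IsCheckpointedDemo {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("ckpt-demo");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
          sc.setCheckpointDir("/tmp/spark-checkpoints"); // illustrative path
          JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3));
          rdd.checkpoint();                         // only marks the RDD for checkpointing
          System.out.println(rdd.isCheckpointed()); // false: nothing materialized yet
          rdd.count();                              // an action triggers the actual checkpoint
          System.out.println(rdd.isCheckpointed()); // true
        }
      }
    }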
- isClientMode(SparkConf) - Static method in class org.apache.spark.util.Utils
- isCluster() - Method in class org.apache.spark.sql.catalog.Column
- isColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Indicates whether the values backing this matrix are arranged in column major order.
- isCompatible(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Determines whether a given bloom filter is compatible with this bloom filter.
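A minimal Java sketch, assuming compatibility comes down to two filters having matching size parameters (the item counts are illustrative):
    import org.apache.spark.util.sketch.BloomFilter;

    public class BloomCompatibilityDemo {
      public static void main(String[] args) {
        // Filters built with the same expected size get the same bit size
        // and hash count, so they can be merged into one another.
        BloomFilter a = BloomFilter.create(1000);
        BloomFilter b = BloomFilter.create(1000);
        System.out.println(a.isCompatible(b)); // true
        // A filter sized differently is not mergeable with the first one.
        BloomFilter c = BloomFilter.create(1_000_000);
        System.out.println(a.isCompatible(c)); // false
      }
    }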
- isCompleted() - Method in class org.apache.spark.BarrierTaskContext
- isCompleted() - Method in class org.apache.spark.ComplexFutureAction
- isCompleted() - Method in interface org.apache.spark.FutureAction
-
Returns whether the action has already been completed with a value or an exception.
- isCompleted() - Method in class org.apache.spark.SimpleFutureAction
- isCompleted() - Method in class org.apache.spark.TaskContext
-
Returns true if the task has completed.
- isDaemon() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- isDataAvailable() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
- isDefined(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Checks whether a param is explicitly set or has a default value.
- isDeterministic() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
-
Returns whether this function result is deterministic.
- isDeterministic() - Method in interface org.apache.spark.sql.connector.catalog.procedures.BoundProcedure
-
Indicates whether this procedure is deterministic.
- isDistinct() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Avg
- isDistinct() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Count
- isDistinct() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- isDistinct() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Sum
- isDistinct() - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- isDistributed() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- isDistributed() - Method in class org.apache.spark.ml.clustering.LDAModel
-
Indicates whether this instance is of type DistributedLDAModel.
- isDistributed() - Method in class org.apache.spark.ml.clustering.LocalLDAModel
- isDriver() - Method in class org.apache.spark.storage.BlockManagerId
- isDynamicAllocationEnabled(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return whether dynamic allocation is enabled in the given conf.
- isEmpty() - Method in interface org.apache.spark.api.java.JavaRDDLike
- isEmpty() - Method in class org.apache.spark.rdd.RDD
- isEmpty() - Method in class org.apache.spark.sql.api.Dataset
-
Returns true if the Dataset is empty.
- isEmpty() - Method in class org.apache.spark.sql.Dataset
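A minimal Java sketch of Dataset.isEmpty on the Dataset listed above (the range and predicate are illustrative):
    import org.apache.spark.sql.SparkSession;

    public class IsEmptyDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isEmpty-demo").getOrCreate();
        System.out.println(spark.range(5).isEmpty());                    // false
        System.out.println(spark.range(5).filter("id > 100").isEmpty()); // true
        spark.stop();
      }
    }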
- isEmpty() - Method in class org.apache.spark.sql.types.Metadata
-
Tests whether this Metadata is empty.
- isEmpty() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- isEncryptionEnabled(JavaSparkContext) - Static method in class org.apache.spark.api.r.RUtils
- isExcluded() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- isExcluded() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- isExcludedForStage() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- isExecuted() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
-
Whether this statement has been executed during the interpretation phase.
- IsExecutorAlive(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.IsExecutorAlive
- IsExecutorAlive(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive
- IsExecutorAlive$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.IsExecutorAlive$
- IsExecutorAlive$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive$
- isExecutorStartupConf(String) - Static method in class org.apache.spark.SparkConf
-
Return whether the given config should be passed to an executor on start-up.
- isExperiment() - Method in class org.apache.spark.mllib.stat.test.BinarySample
- isFailed() - Method in class org.apache.spark.BarrierTaskContext
- isFailed() - Method in class org.apache.spark.TaskContext
-
Returns true if the task has failed.
- isFailed(Enumeration.Value) - Static method in class org.apache.spark.TaskState
- isFatalError(Throwable) - Static method in class org.apache.spark.util.Utils
-
Returns true if the given exception was fatal.
- isFile(Path) - Static method in class org.apache.spark.ml.image.SamplePathFilter
- isFileSplittable(Path, CompressionCodecFactory) - Static method in class org.apache.spark.util.Utils
-
Check whether the file of the path is splittable.
- isFinal() - Method in enum class org.apache.spark.launcher.SparkAppHandle.State
-
Whether this state is a final state, meaning the application is not running anymore once it's reached.
- isFinished(Enumeration.Value) - Static method in class org.apache.spark.TaskState
- isFunctionCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
- isG1GC() - Static method in class org.apache.spark.util.Utils
- isGlobalJaasConfigurationProvided() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- isGlobalKrbDebugEnabled() - Static method in class org.apache.spark.util.SecurityUtils
- isIgnorableException(Throwable) - Method in interface org.apache.spark.util.ListenerBus
-
Allows bus implementations to prevent error logging for certain exceptions.
- isin(Object...) - Method in class org.apache.spark.sql.Column
-
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
- isin(Seq<Object>) - Method in class org.apache.spark.sql.Column
-
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
- isInCollection(Iterable<?>) - Method in class org.apache.spark.sql.Column
-
A boolean expression that is evaluated to true if the value of this expression is contained by the provided collection.
- isInCollection(Iterable<?>) - Method in class org.apache.spark.sql.Column
-
A boolean expression that is evaluated to true if the value of this expression is contained by the provided collection.
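A minimal Java sketch contrasting the varargs isin form with isInCollection (the values are illustrative):
    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class IsinDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isin-demo").getOrCreate();
        // varargs form
        spark.range(10).filter(col("id").isin(1, 3, 5)).show();
        // collection form: same predicate, driven by a java.util.List
        List<Long> wanted = Arrays.asList(1L, 3L, 5L);
        spark.range(10).filter(col("id").isInCollection(wanted)).show();
        spark.stop();
      }
    }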
- isInDirectory(File, File) - Static method in class org.apache.spark.util.Utils
-
Return whether the specified file is a parent directory of the child file.
- isIndylambdaScalaClosure(SerializedLambda) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- isInitialized() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- isInitialValueFinal() - Method in class org.apache.spark.partial.PartialResult
- isInnerClassCtorCapturingOuter(int, String, String, String, String) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
-
Check if the callee of a call site is an inner class constructor.
- isInRunningSparkTask() - Static method in class org.apache.spark.util.Utils
-
Returns whether the current code is running in a Spark task, e.g., in executors.
- isInternal() - Method in class org.apache.spark.sql.scripting.CaseStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.CompoundNestedStatementIteratorExec
- isInternal() - Method in interface org.apache.spark.sql.scripting.CompoundStatementExec
-
Whether the statement originates from the SQL script or is created during the interpretation.
- isInternal() - Method in class org.apache.spark.sql.scripting.IfElseStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.IterateStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.LeaveStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.RepeatStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
- isInternal() - Method in class org.apache.spark.sql.scripting.WhileStatementExec
- isInternalError() - Method in interface org.apache.spark.SparkThrowable
- isInternalError(String) - Static method in class org.apache.spark.SparkThrowableHelper
- isInterrupted() - Method in class org.apache.spark.BarrierTaskContext
- isInterrupted() - Method in class org.apache.spark.TaskContext
-
Returns true if the task has been killed.
- isJavaVersionAtLeast21() - Static method in class org.apache.spark.util.Utils
-
Whether the underlying Java version is at least 21.
- isLambdaBodyCapturingOuter(Handle, String) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
-
Check if the handle represents a target method that is: - a STATIC method that implements a Scala lambda body in the indylambda style - captures the enclosing this, i.e.
- isLambdaMetafactory(Handle) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
-
Check if the handle represents the LambdaMetafactory that indylambda Scala closures use for creating the lambda class and getting a closure instance.
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.Evaluator
-
Indicates whether the metric returned by evaluate should be maximized (true, default) or minimized (false).
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- isLargerBetter() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
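A minimal Java sketch of how isLargerBetter differs per metric; model-selection tools such as CrossValidator consult this flag to decide whether to maximize or minimize (the metric names are illustrative):
    import org.apache.spark.ml.evaluation.RegressionEvaluator;

    public class IsLargerBetterDemo {
      public static void main(String[] args) {
        RegressionEvaluator rmse = new RegressionEvaluator().setMetricName("rmse");
        RegressionEvaluator r2 = new RegressionEvaluator().setMetricName("r2");
        System.out.println(rmse.isLargerBetter()); // false: lower RMSE is better
        System.out.println(r2.isLargerBetter());   // true: higher R^2 is better
      }
    }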
- isLeaf() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- isLeaf() - Method in class org.apache.spark.mllib.tree.model.Node
- isLeftChild(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Returns true if this is a left child.
- isLocal() - Method in class org.apache.spark.api.java.JavaSparkContext
- isLocal() - Method in class org.apache.spark.SparkContext
- isLocal() - Method in class org.apache.spark.sql.api.Dataset
-
Returns true if the collect and take methods can be run locally (without any Spark executors).
- isLocal() - Method in class org.apache.spark.sql.Dataset
- isLocalMaster(SparkConf) - Static method in class org.apache.spark.util.Utils
- isLocalPushMergedBlockAddress(BlockManagerId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
Returns true if the address is of a push-merged-local block.
- isLocalUri(String) - Static method in class org.apache.spark.util.Utils
-
Returns whether the URI is a "local:" URI.
- isMac() - Static method in class org.apache.spark.util.Utils
-
Whether the underlying operating system is Mac OS X.
- isMacOnAppleSilicon() - Static method in class org.apache.spark.util.Utils
-
Whether the underlying operating system is Mac OS X and processor is Apple Silicon.
- isModifiable(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Indicates whether the configuration property with the given key is modifiable in the current session.
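A minimal Java sketch of isModifiable; the two config keys are illustrative examples of a session-scoped SQL conf versus a conf fixed at application start:
    import org.apache.spark.sql.SparkSession;

    public class IsModifiableDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isModifiable-demo").getOrCreate();
        System.out.println(spark.conf().isModifiable("spark.sql.shuffle.partitions")); // true
        System.out.println(spark.conf().isModifiable("spark.executor.memory"));        // false
        spark.stop();
      }
    }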
- isMulticlassClassification() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- isMulticlassWithCategoricalFeatures() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Duration
- isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Time
- isnan(Column) - Static method in class org.apache.spark.sql.functions
-
Return true iff the column is NaN.
- isNaN() - Method in class org.apache.spark.sql.Column
-
True if the current expression is NaN.
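A minimal Java sketch contrasting the isnan function with the Column.isNaN form above (the literal rows are illustrative):
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.isnan;

    public class IsNanDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isnan-demo").getOrCreate();
        Dataset<Row> df = spark.sql(
            "SELECT * FROM VALUES (1.0D), (CAST('NaN' AS DOUBLE)) AS t(v)");
        df.filter(isnan(col("v"))).show();  // keeps only the NaN row
        df.filter(col("v").isNaN()).show(); // equivalent Column-side form
        spark.stop();
      }
    }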
- isNominal() - Method in class org.apache.spark.ml.attribute.Attribute
-
Tests whether this attribute is nominal, true for NominalAttribute and BinaryAttribute.
- isNominal() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- isNominal() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- isNominal() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- isNominal() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- isnotnull(Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if col is not null, or false otherwise.
- isNotNull() - Method in class org.apache.spark.sql.Column
-
True if the current expression is NOT null.
- IsNotNull - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a non-null value.
- IsNotNull(String) - Constructor for class org.apache.spark.sql.sources.IsNotNull
- isnull(Column) - Static method in class org.apache.spark.sql.functions
-
Return true iff the column is null.
- isNull() - Method in class org.apache.spark.sql.Column
-
True if the current expression is null.
- IsNull - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to null.
- IsNull(String) - Constructor for class org.apache.spark.sql.sources.IsNull
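A minimal Java sketch of the Column-side null tests above; the IsNull/IsNotNull source filters are the corresponding representations handed to data sources during pushdown (the rows are illustrative):
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class IsNullDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isnull-demo").getOrCreate();
        Dataset<Row> df = spark.sql(
            "SELECT * FROM VALUES ('alice', 1), (CAST(NULL AS STRING), 2) AS t(name, id)");
        df.filter(col("name").isNull()).show();    // the row whose name is NULL
        df.filter(col("name").isNotNull()).show(); // the remaining row
        spark.stop();
      }
    }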
- isNullable() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
- isNullable() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- isNullAt(int) - Method in interface org.apache.spark.sql.Row
-
Checks whether the value at position i is null.
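A minimal Java sketch of Row.isNullAt (the row contents are illustrative):
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;

    public class IsNullAtDemo {
      public static void main(String[] args) {
        Row row = RowFactory.create("alice", null);
        System.out.println(row.isNullAt(0)); // false: position 0 holds "alice"
        System.out.println(row.isNullAt(1)); // true: position 1 is null
      }
    }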
- isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- isNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns whether the value at rowId is NULL.
- isNumeric() - Method in class org.apache.spark.ml.attribute.Attribute
-
Tests whether this attribute is numeric, true for NumericAttribute and BinaryAttribute.
- isNumeric() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- isNumeric() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- isNumeric() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- isNumeric() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- isOneEntireResource(long) - Static method in class org.apache.spark.resource.ResourceAmountUtils
- isOpen() - Method in class org.apache.spark.storage.CountingWritableChannel
- isOrdinal() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- isotonic() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- isotonic() - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
-
Param for whether the output sequence should be isotonic/increasing (true) or antitonic/decreasing (false).
- isotonic() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- isotonic() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
- IsotonicRegression - Class in org.apache.spark.ml.regression
-
Isotonic regression.
- IsotonicRegression - Class in org.apache.spark.mllib.regression
-
Isotonic regression.
- IsotonicRegression() - Constructor for class org.apache.spark.ml.regression.IsotonicRegression
- IsotonicRegression() - Constructor for class org.apache.spark.mllib.regression.IsotonicRegression
-
Constructs IsotonicRegression instance with default parameter isotonic = true.
- IsotonicRegression(String) - Constructor for class org.apache.spark.ml.regression.IsotonicRegression
- IsotonicRegressionBase - Interface in org.apache.spark.ml.regression
-
Params for isotonic regression.
- IsotonicRegressionModel - Class in org.apache.spark.ml.regression
-
Model fitted by IsotonicRegression.
- IsotonicRegressionModel - Class in org.apache.spark.mllib.regression
-
Regression model for isotonic regression.
- IsotonicRegressionModel(double[], double[], boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel
- IsotonicRegressionModel(Iterable<Object>, Iterable<Object>, Boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
A Java-friendly constructor that takes two Iterable parameters and one Boolean parameter.
- isPartiallyPushed() - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownLimit
-
Whether the LIMIT is partially pushed or not.
- isPartiallyPushed() - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownTopN
-
Whether the top N is partially pushed or not.
- isPartition() - Method in class org.apache.spark.sql.catalog.Column
- isPresent() - Method in class org.apache.spark.api.java.Optional
- isPushBasedShuffleEnabled(SparkConf, boolean, boolean) - Static method in class org.apache.spark.util.Utils
-
Push-based shuffle can only be enabled when the following conditions are met: the application is submitted to run in YARN mode, the external shuffle service is enabled, IO encryption is disabled, and the serializer (such as KryoSerializer) supports relocation of serialized objects.
- isPushMergedShuffleBlockAddress(BlockManagerId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
Returns true if the address is for a push-merged block.
- isPythonVersionAvailable() - Static method in class org.apache.spark.TestUtils
- isRDD() - Method in class org.apache.spark.storage.BlockId
- isReady() - Method in interface org.apache.spark.scheduler.SchedulerBackend
- isReady() - Method in class org.apache.spark.sql.util.NumericHistogram
-
Returns true if this histogram object has been initialized by calling merge() or allocate().
- isRegistered() - Method in class org.apache.spark.util.AccumulatorV2
-
Returns true if this accumulator has been registered.
- isRemotePushMergedBlockAddress(BlockManagerId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
Returns true if the address is of a remote push-merged block.
- isRemoved() - Method in interface org.apache.spark.sql.streaming.TestGroupState
-
Whether the state has been marked for removal.
- isReRegister() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- isResultNullable() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
-
Returns whether the values produced by this function may be null.
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- isReverseOf(Ordering<?>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- isRInstalled() - Static method in class org.apache.spark.api.r.RUtils
-
Check if R is installed before running tests that use R commands.
- isRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Indicates whether the values backing this matrix are arranged in row major order.
- isSchedulable() - Method in interface org.apache.spark.scheduler.Schedulable
- isSessionCatalog(CatalogPlugin) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- isSet(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Checks whether a param is explicitly set.
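A minimal Java sketch of isSet versus isDefined from the Params entries above (LogisticRegression and maxIter are just one convenient example):
    import org.apache.spark.ml.classification.LogisticRegression;

    public class ParamStateDemo {
      public static void main(String[] args) {
        LogisticRegression lr = new LogisticRegression();
        // maxIter carries a default value, so it is defined without being set
        System.out.println(lr.isDefined(lr.maxIter())); // true
        System.out.println(lr.isSet(lr.maxIter()));     // false
        lr.setMaxIter(25);
        System.out.println(lr.isSet(lr.maxIter()));     // true
      }
    }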
- isShuffle() - Method in class org.apache.spark.storage.BlockId
- isShuffleChunk() - Method in class org.apache.spark.storage.BlockId
- isShufflePushEnabled() - Method in class org.apache.spark.status.api.v1.StageData
- isSortColumnValid(Seq<Tuple3<String, Object, Option<String>>>, String) - Method in interface org.apache.spark.ui.PagedTable
-
Check if given sort column is valid or not.
- isSparkPortConf(String) - Static method in class org.apache.spark.SparkConf
-
Return true if the given config matches either spark.*.port or spark.port.*.
- isSparkRInstalled() - Static method in class org.apache.spark.api.r.RUtils
-
Check if SparkR is installed before running tests that use SparkR.
- isStarted() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Check if the receiver has started or not.
- isStopped() - Method in class org.apache.spark.SparkContext
- isStopped() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Check if receiver has been marked for stopping.
- isStreaming() - Method in class org.apache.spark.sql.api.Dataset
-
Returns true if this Dataset contains one or more sources that continuously return data as it arrives.
- isStreaming() - Method in class org.apache.spark.sql.Dataset
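A minimal Java sketch of Dataset.isStreaming (the rate source is an illustrative streaming input):
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class IsStreamingDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isStreaming-demo").getOrCreate();
        Dataset<Row> rates = spark.readStream().format("rate").load();
        System.out.println(rates.isStreaming());          // true
        System.out.println(spark.range(1).isStreaming()); // false for a batch Dataset
        spark.stop();
      }
    }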
- isStreamingDynamicAllocationEnabled(SparkConf) - Static method in class org.apache.spark.util.Utils
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns whether the database supports function.
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- isSupportedFunction(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.OracleDialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- isSupportedFunction(String) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- isTemporary() - Method in class org.apache.spark.sql.catalog.Function
- isTemporary() - Method in class org.apache.spark.sql.catalog.Table
- isTesting() - Method in interface org.apache.spark.util.SparkEnvUtils
-
Indicates whether Spark is currently running unit tests.
- isTesting() - Static method in class org.apache.spark.util.Utils
- isTimingOut() - Method in class org.apache.spark.streaming.State
-
Whether the state is timing out and going to be removed by the system after the current batch.
- isTransposed() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- isTransposed() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Flag that keeps track whether the matrix is transposed or not.
- isTransposed() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- isTransposed() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- isTransposed() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Flag that keeps track whether the matrix is transposed or not.
- isTransposed() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
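A minimal Java sketch of the transposed flag; on these matrix types transpose() is a view over the same backing array rather than a copy (the values are illustrative):
    import org.apache.spark.ml.linalg.DenseMatrix;
    import org.apache.spark.ml.linalg.Matrix;

    public class TransposeFlagDemo {
      public static void main(String[] args) {
        // values are laid out in column-major order by default
        Matrix m = new DenseMatrix(2, 3, new double[]{1, 2, 3, 4, 5, 6});
        System.out.println(m.isTransposed()); // false: backing array is column major
        Matrix t = m.transpose();             // a view over the same array, not a copy
        System.out.println(t.isTransposed()); // true: the view is read in row-major order
      }
    }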
- isTriggerActive() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
- isUnTyped() - Method in class org.apache.spark.sql.Dataset
- isUpdated() - Method in interface org.apache.spark.sql.streaming.TestGroupState
-
Whether the state has been updated but not removed
- isValid() - Method in class org.apache.spark.ml.param.Param
- isValid() - Method in interface org.apache.spark.sql.streaming.ExpiredTimerInfo
-
Check if provided ExpiredTimerInfo is valid.
- isValid() - Method in class org.apache.spark.storage.StorageLevel
- isValidErrorClass(String) - Method in class org.apache.spark.ErrorClassesJsonReader
- isValidErrorClass(String) - Static method in class org.apache.spark.SparkThrowableHelper
- isWindows() - Static method in class org.apache.spark.util.Utils
-
Whether the underlying operating system is Windows.
- isZero() - Method in class org.apache.spark.sql.types.Decimal
- isZero() - Method in class org.apache.spark.sql.util.MapperRowCounter
-
Returns false if this accumulator has had any values added to it or the sum is non-zero.
- isZero() - Method in class org.apache.spark.streaming.Duration
- isZero() - Method in class org.apache.spark.util.AccumulatorV2
-
Returns whether this accumulator holds a zero value.
- isZero() - Method in class org.apache.spark.util.CollectionAccumulator
-
Returns false if this accumulator instance has any values in it.
- isZero() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns false if this accumulator has had any values added to it or the sum is non-zero.
- isZero() - Method in class org.apache.spark.util.LongAccumulator
-
Returns false if this accumulator has had any values added to it or the sum is non-zero.
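A minimal Java sketch of isZero on a LongAccumulator (the accumulator name and job are illustrative):
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.util.LongAccumulator;

    public class IsZeroDemo {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("isZero-demo").getOrCreate();
        LongAccumulator acc = spark.sparkContext().longAccumulator("rows-seen");
        System.out.println(acc.isZero()); // true: nothing added yet
        spark.range(100).javaRDD().foreach(x -> acc.add(1));
        System.out.println(acc.isZero()); // false after the action ran
        spark.stop();
      }
    }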
- item() - Method in class org.apache.spark.ml.recommendation.ALS.Rating
- itemCol() - Method in class org.apache.spark.ml.recommendation.ALS
- itemCol() - Method in class org.apache.spark.ml.recommendation.ALSModel
- itemCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
-
Param for the column name for item ids.
- itemFactors() - Method in class org.apache.spark.ml.recommendation.ALSModel
- items() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
- itemsCol() - Method in class org.apache.spark.ml.fpm.FPGrowth
- itemsCol() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- itemsCol() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
-
Items column name.
- itemSupport() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
- IterateStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for ITERATE statement.
- IterateStatementExec(String) - Constructor for class org.apache.spark.sql.scripting.IterateStatementExec
- iterator() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns an iterator over all the elements of this vector.
- iterator() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns an iterator over all the elements of this vector.
- iterator() - Method in interface org.apache.spark.sql.streaming.MapState
-
Get the map associated with grouping key
- iterator() - Method in class org.apache.spark.sql.types.StructType
- iterator() - Method in class org.apache.spark.status.RDDPartitionSeq
- iterator(Partition, TaskContext) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
- iterator(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
-
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
- IV_LENGTH_IN_BYTES() - Static method in class org.apache.spark.security.CryptoStreamUtils
- IVY_DEFAULT_EXCLUDES() - Static method in class org.apache.spark.util.MavenUtils
J
- j() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- JAR_IVY_SETTING_PATH_KEY() - Static method in class org.apache.spark.util.MavenUtils
- jarOfClass(Class<?>) - Static method in class org.apache.spark.api.java.JavaSparkContext
-
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
- jarOfClass(Class<?>) - Static method in class org.apache.spark.SparkContext
-
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
- jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
- jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
- jarOfObject(Object) - Static method in class org.apache.spark.api.java.JavaSparkContext
-
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
- jarOfObject(Object) - Static method in class org.apache.spark.SparkContext
-
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
- jars() - Method in class org.apache.spark.api.java.JavaSparkContext
- jars() - Method in class org.apache.spark.SparkContext
- JAVA_HOME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- java_method(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Calls a method with reflection.
- JAVA_VERSION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- javaAntecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
Returns antecedent in a Java List.
- javaCategoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
Java-friendly version of VectorIndexerModel.categoryMaps().
- javaConsequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
Returns consequent in a Java List.
- JavaDoubleRDD - Class in org.apache.spark.api.java
- JavaDoubleRDD(RDD<Object>) - Constructor for class org.apache.spark.api.java.JavaDoubleRDD
- JavaDStream<T> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to DStream, the basic abstraction in Spark Streaming that represents a continuous stream of data.
- JavaDStream(DStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaDStream
- JavaDStreamLike<T, This extends JavaDStreamLike<T, This, R>, R extends JavaRDDLike<T, R>> - Interface in org.apache.spark.streaming.api.java
- JavaFutureAction<T> - Interface in org.apache.spark.api.java
- JavaHadoopRDD<K, V> - Class in org.apache.spark.api.java
- JavaHadoopRDD(HadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaHadoopRDD
- javaHome() - Method in class org.apache.spark.status.api.v1.RuntimeInfo
- JavaInputDStream<T> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to InputDStream.
- JavaInputDStream(InputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaInputDStream
- javaItems() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
-
Returns items in a Java List.
- JavaIterableWrapperSerializer - Class in org.apache.spark.serializer
-
A Kryo serializer for serializing results returned by asJavaIterable.
- JavaIterableWrapperSerializer() - Constructor for class org.apache.spark.serializer.JavaIterableWrapperSerializer
- JavaMapWithStateDStream<KeyType, ValueType, StateType, MappedType> - Class in org.apache.spark.streaming.api.java
-
DStream representing the stream of data generated by the mapWithState operation on a JavaPairDStream.
- JavaModuleOptions - Class in org.apache.spark.launcher
-
This helper class is used to place the JVM runtime options (e.g., `--add-opens`) required by Spark when using Java 17.
- JavaModuleOptions() - Constructor for class org.apache.spark.launcher.JavaModuleOptions
- JavaNewHadoopRDD<K, V> - Class in org.apache.spark.api.java
- JavaNewHadoopRDD(NewHadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaNewHadoopRDD
- javaOcvTypes() - Static method in class org.apache.spark.ml.image.ImageSchema
-
(Java-specific) OpenCV type mapping supported
- JavaPackage - Class in org.apache.spark.mllib
-
A dummy class as a workaround to show the package doc of spark.mllib in generated Java API docs.
- JavaPairDStream<K, V> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to a DStream of key-value pairs, which provides extra methods like reduceByKey and join.
- JavaPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairDStream
- JavaPairInputDStream<K, V> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to InputDStream of key-value pairs.
- JavaPairInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairInputDStream
- JavaPairRDD<K, V> - Class in org.apache.spark.api.java
- JavaPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaPairRDD
- JavaPairReceiverInputDStream<K, V> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
- JavaPairReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
- JavaParams - Class in org.apache.spark.ml.param
-
Java-friendly wrapper for Params.
- JavaParams() - Constructor for class org.apache.spark.ml.param.JavaParams
- javaRDD() - Method in class org.apache.spark.sql.Dataset
-
Returns the content of the Dataset as a JavaRDD of Ts.
- JavaRDD<T> - Class in org.apache.spark.api.java
- JavaRDD(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.api.java.JavaRDD
- JavaRDDLike<T, This extends JavaRDDLike<T, This>> - Interface in org.apache.spark.api.java
-
Defines operations common to several Java RDD implementations.
- JavaReceiverInputDStream<T> - Class in org.apache.spark.streaming.api.java
-
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
- JavaReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
- javaSequence() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
-
Returns sequence as a Java List of lists for Java users.
- javaSerialization(Class<T>) - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes objects of type T using generic Java serialization.
- javaSerialization(ClassTag<T>) - Static method in class org.apache.spark.sql.Encoders
-
(Scala-specific) Creates an encoder that serializes objects of type T using generic Java serialization.
- JavaSerializer - Class in org.apache.spark.serializer
-
:: DeveloperApi :: A Spark serializer that uses Java's built-in serialization.
- JavaSerializer(SparkConf) - Constructor for class org.apache.spark.serializer.JavaSerializer
- JavaSourceFromString(String, String) - Constructor for class org.apache.spark.util.SparkTestUtils.JavaSourceFromString
- JavaSparkContext - Class in org.apache.spark.api.java
-
A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.
- JavaSparkContext() - Constructor for class org.apache.spark.api.java.JavaSparkContext
-
Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
- JavaSparkContext(String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(String, String, String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(String, String, String, String[]) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(String, String, String, String[], Map<String, String>) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext
- JavaSparkContext(SparkContext) - Constructor for class org.apache.spark.api.java.JavaSparkContext
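A minimal sketch of the usual construction path through SparkConf (the app name and master are placeholders):

    SparkConf conf = new SparkConf().setAppName("demo").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<Integer> nums = sc.parallelize(java.util.Arrays.asList(1, 2, 3));
    long count = nums.count();  // runs a Spark job locally
    sc.stop();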
- JavaSparkStatusTracker - Class in org.apache.spark.api.java
-
Low-level status reporting APIs for monitoring job and stage progress.
- JavaStreamingContext - Class in org.apache.spark.streaming.api.java
-
Deprecated. This is deprecated as of Spark 3.4.0. DStream is no longer updated and is a legacy project; Spark's newer and easier-to-use streaming engine, Structured Streaming, should be used for streaming applications.
- JavaStreamingContext(String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Recreate a JavaStreamingContext from a checkpoint file.
- JavaStreamingContext(String, String, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a StreamingContext.
- JavaStreamingContext(String, String, Duration, String, String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a StreamingContext.
- JavaStreamingContext(String, String, Duration, String, String[]) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a StreamingContext.
- JavaStreamingContext(String, String, Duration, String, String[], Map<String, String>) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a StreamingContext.
- JavaStreamingContext(String, Configuration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Re-creates a JavaStreamingContext from a checkpoint file.
- JavaStreamingContext(JavaSparkContext, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a JavaStreamingContext using an existing JavaSparkContext.
- JavaStreamingContext(SparkConf, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a JavaStreamingContext using a SparkConf configuration.
- JavaStreamingContext(StreamingContext) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
- JavaStreamingListenerEvent - Interface in org.apache.spark.streaming.api.java
-
Base trait for events related to JavaStreamingListener
- javaTopicAssignments() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- javaTopicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Java-friendly version of DistributedLDAModel.topicDistributions()
- javaTopTopicsPerDocument(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Java-friendly version of DistributedLDAModel.topTopicsPerDocument(int)
- javaTreeWeights() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Weights used by the Python wrappers.
- JavaUtils - Class in org.apache.spark.api.java
- JavaUtils() - Constructor for class org.apache.spark.api.java.JavaUtils
- JavaUtils.SerializableMapWrapper<A, B> - Class in org.apache.spark.api.java
- javaVersion() - Method in class org.apache.spark.status.api.v1.RuntimeInfo
- jdbc(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().jdbc().
- jdbc(String, String, String[]) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().jdbc().
- jdbc(String, String, String[], Properties) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
- jdbc(String, String, String[], Properties) - Method in class org.apache.spark.sql.DataFrameReader
- jdbc(String, String, String, long, long, int) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().jdbc().
- jdbc(String, String, String, long, long, int, Properties) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
- jdbc(String, String, String, long, long, int, Properties) - Method in class org.apache.spark.sql.DataFrameReader
- jdbc(String, String, Properties) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
- jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameReader
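A hedged sketch of the reader-side overloads above; the URL, table name, and credentials are placeholders, and `spark` is an existing SparkSession:

    java.util.Properties props = new java.util.Properties();
    props.setProperty("user", "dbuser");      // placeholder credentials
    props.setProperty("password", "secret");
    // Read the table `public.orders` (placeholder) over JDBC into a DataFrame.
    Dataset<Row> orders = spark.read()
        .jdbc("jdbc:postgresql://host:5432/db", "public.orders", props);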
- jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame to an external database table via JDBC.
- JdbcConnectionProvider - Class in org.apache.spark.sql.jdbc
-
::DeveloperApi:: Connection provider which opens connections toward various databases (a database-specific instance is needed).
- JdbcConnectionProvider() - Constructor for class org.apache.spark.sql.jdbc.JdbcConnectionProvider
- JdbcDialect - Class in org.apache.spark.sql.jdbc
-
:: DeveloperApi :: Encapsulates everything (extensions, workarounds, quirks) to handle the SQL dialect of a certain database or JDBC driver.
- JdbcDialect() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialect
- JdbcDialects - Class in org.apache.spark.sql.jdbc
-
:: DeveloperApi :: Registry of dialects that apply to every new JDBC org.apache.spark.sql.DataFrame.
- JdbcDialects() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialects
- jdbcNullType() - Method in class org.apache.spark.sql.jdbc.JdbcType
- JdbcRDD<T> - Class in org.apache.spark.rdd
-
An RDD that executes a SQL query on a JDBC connection and reads results.
- JdbcRDD(SparkContext, Function0<Connection>, String, long, long, int, Function1<ResultSet, T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.JdbcRDD
- JdbcRDD.ConnectionFactory - Interface in org.apache.spark.rdd
- JdbcSQLQueryBuilder - Class in org.apache.spark.sql.jdbc
-
The builder to build a single SELECT query.
- JdbcSQLQueryBuilder(JdbcDialect, JDBCOptions) - Constructor for class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
- JdbcType - Class in org.apache.spark.sql.jdbc
-
:: DeveloperApi :: A database type definition coupled with the JDBC type needed to send null values to the database.
- JdbcType(String, int) - Constructor for class org.apache.spark.sql.jdbc.JdbcType
- JettyUtils - Class in org.apache.spark.ui
-
Utilities for launching a web server using Jetty's HTTP Server class
- JettyUtils() - Constructor for class org.apache.spark.ui.JettyUtils
- JettyUtils.ServletParams<T> - Class in org.apache.spark.ui
- JettyUtils.ServletParams$ - Class in org.apache.spark.ui
- JOB_DAG() - Static method in class org.apache.spark.ui.ToolTips
- JOB_EXECUTION_STATUS_FAILED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_FAILED = 3;
- JOB_EXECUTION_STATUS_FAILED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_FAILED = 3;
- JOB_EXECUTION_STATUS_RUNNING - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_RUNNING = 1;
- JOB_EXECUTION_STATUS_RUNNING_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_RUNNING = 1;
- JOB_EXECUTION_STATUS_SUCCEEDED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_SUCCEEDED = 2;
- JOB_EXECUTION_STATUS_SUCCEEDED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_SUCCEEDED = 2;
- JOB_EXECUTION_STATUS_UNKNOWN - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_UNKNOWN = 4;
- JOB_EXECUTION_STATUS_UNKNOWN_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_UNKNOWN = 4;
- JOB_EXECUTION_STATUS_UNSPECIFIED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_UNSPECIFIED = 0;
- JOB_EXECUTION_STATUS_UNSPECIFIED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
JOB_EXECUTION_STATUS_UNSPECIFIED = 0;
- JOB_GROUP_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- JOB_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- JOB_IDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- JOB_TAGS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- JOB_TIMELINE() - Static method in class org.apache.spark.ui.ToolTips
- JobData - Class in org.apache.spark.status.api.v1
- JobDataUtil - Class in org.apache.spark.ui.jobs
- JobDataUtil() - Constructor for class org.apache.spark.ui.jobs.JobDataUtil
- jobEndFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- jobEndToJson(SparkListenerJobEnd, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- JobExecutionStatus - Enum Class in org.apache.spark
- JobExecutionStatusSerializer - Class in org.apache.spark.status.protobuf
- JobExecutionStatusSerializer() - Constructor for class org.apache.spark.status.protobuf.JobExecutionStatusSerializer
- jobFailed(Exception) - Method in interface org.apache.spark.scheduler.JobListener
- JobGeneratorEvent - Interface in org.apache.spark.streaming.scheduler
-
Event classes for JobGenerator
- jobGroup() - Method in class org.apache.spark.status.api.v1.JobData
- jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
- jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
- jobId() - Method in interface org.apache.spark.SparkJobInfo
- jobId() - Method in class org.apache.spark.SparkJobInfoImpl
- jobId() - Method in class org.apache.spark.status.api.v1.JobData
- jobId() - Method in class org.apache.spark.status.LiveJob
- jobID() - Method in class org.apache.spark.TaskCommitDenied
- jobIds() - Method in interface org.apache.spark.api.java.JavaFutureAction
-
Returns the job IDs run by the underlying async operation.
- jobIds() - Method in class org.apache.spark.ComplexFutureAction
- jobIds() - Method in interface org.apache.spark.FutureAction
-
Returns the job IDs run by the underlying async operation.
- jobIds() - Method in class org.apache.spark.SimpleFutureAction
- jobIds() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- jobIds() - Method in class org.apache.spark.status.LiveStage
- JobListener - Interface in org.apache.spark.scheduler
-
Interface used to listen for job completion or failure events after submitting a job to the DAGScheduler.
- jobResult() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
- JobResult - Interface in org.apache.spark.scheduler
-
:: DeveloperApi :: A result of a job in the DAGScheduler.
- jobResultFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- jobResultToJson(JobResult, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- jobs() - Method in class org.apache.spark.status.LiveStage
- JOBS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- JobSchedulerEvent - Interface in org.apache.spark.streaming.scheduler
- jobStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- jobStartToJson(SparkListenerJobStart, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- JobSubmitter - Interface in org.apache.spark
-
Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution.
- JobSucceeded - Class in org.apache.spark.scheduler
- JobSucceeded() - Constructor for class org.apache.spark.scheduler.JobSucceeded
- jobTags() - Method in class org.apache.spark.status.api.v1.JobData
- join(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD containing all pairs of elements with matching keys in this and other.
- join(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Join with another DataFrame.
- join(Dataset, String) - Method in class org.apache.spark.sql.api.Dataset
-
Inner equi-join with another DataFrame using the given column.
- join(Dataset, String[]) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Inner equi-join with another DataFrame using the given columns.
- join(Dataset, String[], String) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Equi-join with another DataFrame using the given columns.
- join(Dataset, String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Equi-join with another DataFrame using the given column.
- join(Dataset, Column) - Method in class org.apache.spark.sql.api.Dataset
-
Inner join with another DataFrame, using the given join expression.
- join(Dataset, Column, String) - Method in class org.apache.spark.sql.api.Dataset
-
Join with another DataFrame, using the given join expression.
- join(Dataset, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Inner equi-join with another DataFrame using the given columns.
- join(Dataset, Seq<String>, String) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Equi-join with another DataFrame using the given columns.
- join(Dataset<?>) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, String) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, String[]) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, String[], String) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, String, String) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, Column) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, Column, String) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- join(Dataset<?>, Seq<String>, String) - Method in class org.apache.spark.sql.Dataset
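For the Dataset overloads above, a minimal sketch; `orders` and `customers` are hypothetical DataFrames:

    // join(Dataset, Column, String): join expression plus an explicit join type.
    Dataset<Row> joined = orders.join(
        customers,
        orders.col("customer_id").equalTo(customers.col("id")),
        "left_outer");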
- join(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- join(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- join(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- join(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- join(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- join(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
- joinConditionMissingOrTrivialError(Join, LogicalPlan, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- joinStrategyHintParameterNotSupportedError(Object) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- joinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD>, ClassTag<U>) - Method in class org.apache.spark.graphx.GraphOps
-
Join the vertices with an RDD and then apply a function from the vertex and RDD entry to a new vertex value.
- joinWith(Dataset, Column) - Method in class org.apache.spark.sql.api.Dataset
-
Using inner equi-join to join this Dataset returning a Tuple2 for each pair where condition evaluates to true.
- joinWith(Dataset, Column, String) - Method in class org.apache.spark.sql.api.Dataset
-
Joins this Dataset returning a Tuple2 for each pair where condition evaluates to true.
- joinWith(Dataset<U>, Column) - Method in class org.apache.spark.sql.Dataset
- joinWith(Dataset<U>, Column, String) - Method in class org.apache.spark.sql.Dataset
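Unlike join, joinWith keeps both sides as typed values; a sketch with the same hypothetical inputs:

    // Each result row is a scala.Tuple2 of the matching left and right rows.
    Dataset<Tuple2<Row, Row>> pairs = orders.joinWith(
        customers,
        orders.col("customer_id").equalTo(customers.col("id")),
        "inner");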
- json() - Method in class org.apache.spark.sql.connector.read.streaming.Offset
-
A JSON-serialized representation of an Offset that is used for saving offsets to the offset log.
- json() - Method in interface org.apache.spark.sql.Row
-
The compact JSON representation of this row.
- json() - Method in class org.apache.spark.sql.streaming.SinkProgress
-
The compact JSON representation of this progress.
- json() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
The compact JSON representation of this progress.
- json() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
The compact JSON representation of this progress.
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The compact JSON representation of this progress.
- json() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
-
The compact JSON representation of this status.
- json() - Static method in class org.apache.spark.sql.types.BinaryType
- json() - Static method in class org.apache.spark.sql.types.BooleanType
- json() - Static method in class org.apache.spark.sql.types.ByteType
- json() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- json() - Method in class org.apache.spark.sql.types.DataType
-
The compact JSON representation of this data type.
- json() - Static method in class org.apache.spark.sql.types.DateType
- json() - Static method in class org.apache.spark.sql.types.DoubleType
- json() - Static method in class org.apache.spark.sql.types.FloatType
- json() - Static method in class org.apache.spark.sql.types.IntegerType
- json() - Static method in class org.apache.spark.sql.types.LongType
- json() - Method in class org.apache.spark.sql.types.Metadata
-
Converts to its JSON representation.
- json() - Static method in class org.apache.spark.sql.types.NullType
- json() - Static method in class org.apache.spark.sql.types.ShortType
- json() - Static method in class org.apache.spark.sql.types.StringType
- json() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- json() - Static method in class org.apache.spark.sql.types.TimestampType
- json() - Static method in class org.apache.spark.sql.types.VariantType
- json(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a JSON file and returns the results as a DataFrame.
- json(String) - Method in class org.apache.spark.sql.DataFrameReader
- json(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in JSON format (JSON Lines text format or newline-delimited JSON) at the specified path.
- json(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads a JSON file stream and returns the results as a DataFrame.
- json(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads JSON files and returns the results as a DataFrame.
- json(String...) - Method in class org.apache.spark.sql.DataFrameReader
- json(JavaRDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
- json(RDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
- json(Dataset) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
- json(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
- json(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads JSON files and returns the results as a DataFrame.
- json(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
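A minimal reader sketch for the json overloads above; the path is a placeholder and `spark` an existing SparkSession:

    // Each input line is expected to hold one JSON object (JSON Lines).
    Dataset<Row> people = spark.read().json("data/people.jsonl");
    people.printSchema();  // schema is inferred from the data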
- json_array_length(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of elements in the outermost JSON array.
- json_object_keys(Column) - Static method in class org.apache.spark.sql.functions
-
Returns all the keys of the outermost JSON object as an array.
- json_tuple(Column, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for a json column according to the given field names.
- json_tuple(Column, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for a json column according to the given field names.
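A small sketch of json_tuple on a string column holding JSON; `df` and the field names are hypothetical:

    import static org.apache.spark.sql.functions.json_tuple;

    // Extracts the "name" and "age" fields of the JSON in column "payload".
    Dataset<Row> fields = df.select(json_tuple(df.col("payload"), "name", "age"));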
- jsonDecode(String) - Method in class org.apache.spark.ml.param.BooleanParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleArrayParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.DoubleParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.FloatParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.IntArrayParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.IntParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.LongParam
- jsonDecode(String) - Method in class org.apache.spark.ml.param.Param
-
Decodes a param value from JSON.
- jsonDecode(String) - Method in class org.apache.spark.ml.param.StringArrayParam
- jsonEncode(boolean) - Method in class org.apache.spark.ml.param.BooleanParam
- jsonEncode(double) - Method in class org.apache.spark.ml.param.DoubleParam
- jsonEncode(double[]) - Method in class org.apache.spark.ml.param.DoubleArrayParam
- jsonEncode(double[][]) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam
- jsonEncode(float) - Method in class org.apache.spark.ml.param.FloatParam
- jsonEncode(int) - Method in class org.apache.spark.ml.param.IntParam
- jsonEncode(int[]) - Method in class org.apache.spark.ml.param.IntArrayParam
- jsonEncode(long) - Method in class org.apache.spark.ml.param.LongParam
- jsonEncode(String[]) - Method in class org.apache.spark.ml.param.StringArrayParam
- jsonEncode(T) - Method in class org.apache.spark.ml.param.Param
-
Encodes a param value into JSON, which can be decoded by `jsonDecode()`.
- jsonFile(String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonFile(String, double) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonFile(String, StructType) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- JsonMatrixConverter - Class in org.apache.spark.ml.linalg
- JsonMatrixConverter() - Constructor for class org.apache.spark.ml.linalg.JsonMatrixConverter
- JsonProtocol - Class in org.apache.spark.util
-
Serializes SparkListener events to/from JSON.
- JsonProtocol() - Constructor for class org.apache.spark.util.JsonProtocol
- jsonRDD(JavaRDD<String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonRDD(JavaRDD<String>, double) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonRDD(JavaRDD<String>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonRDD(RDD<String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonRDD(RDD<String>, double) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonRDD(RDD<String>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().json().
- jsonResponderToServlet(Function1<HttpServletRequest, JValue>) - Static method in class org.apache.spark.ui.JettyUtils
- JsonUtils - Interface in org.apache.spark.util
- jsonValue() - Method in interface org.apache.spark.sql.Row
-
JSON representation of the row.
- jsonValue() - Method in class org.apache.spark.sql.types.StringType
- JsonVectorConverter - Class in org.apache.spark.ml.linalg
- JsonVectorConverter() - Constructor for class org.apache.spark.ml.linalg.JsonVectorConverter
- jValueDecode(JValue) - Static method in class org.apache.spark.ml.param.DoubleParam
-
Decodes a param value from JValue.
- jValueDecode(JValue) - Static method in class org.apache.spark.ml.param.FloatParam
-
Decodes a param value from JValue.
- jValueEncode(double) - Static method in class org.apache.spark.ml.param.DoubleParam
-
Encodes a param value into JValue.
- jValueEncode(float) - Static method in class org.apache.spark.ml.param.FloatParam
-
Encodes a param value into JValue.
- JVM_GC_TIME() - Static method in class org.apache.spark.InternalAccumulator
- JVM_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- JVM_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- JVM_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- JVM_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- JVM_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- jvmGcTime() - Method in class org.apache.spark.status.api.v1.StageData
- jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- JVMHeapMemory - Class in org.apache.spark.metrics
- JVMHeapMemory() - Constructor for class org.apache.spark.metrics.JVMHeapMemory
- JVMOffHeapMemory - Class in org.apache.spark.metrics
- JVMOffHeapMemory() - Constructor for class org.apache.spark.metrics.JVMOffHeapMemory
- JWSFilter - Class in org.apache.spark.ui
-
A servlet filter that requires JWS, a cryptographically signed JSON Web Token, in the header.
- JWSFilter() - Constructor for class org.apache.spark.ui.JWSFilter
K
- k() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- k() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- k() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
-
The desired number of leaf clusters.
- k() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- k() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- k() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- k() - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams
-
Number of independent Gaussians in the mixture model.
- k() - Method in class org.apache.spark.ml.clustering.KMeans
- k() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- k() - Method in class org.apache.spark.ml.clustering.KMeansModel
- k() - Method in interface org.apache.spark.ml.clustering.KMeansParams
-
The number of clusters to create (k).
- k() - Method in class org.apache.spark.ml.clustering.LDA
- k() - Method in class org.apache.spark.ml.clustering.LDAModel
- k() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Param for the number of topics (clusters) to infer.
- k() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- k() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
-
The number of clusters to create (k).
- k() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
-
param for ranking position value used in "meanAveragePrecisionAtK", "precisionAtK", "ndcgAtK", "recallAtK".
- k() - Method in class org.apache.spark.ml.feature.PCA
- k() - Method in class org.apache.spark.ml.feature.PCAModel
- k() - Method in interface org.apache.spark.ml.feature.PCAParams
-
The number of principal components.
- k() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
- k() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- k() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
- k() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Number of Gaussians in the mixture
- k() - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Total number of clusters.
- k() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Number of topics
- k() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- k() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
- k() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
- k() - Method in class org.apache.spark.mllib.feature.PCA
- k() - Method in class org.apache.spark.mllib.feature.PCAModel
- K_MEANS_PARALLEL() - Static method in class org.apache.spark.mllib.clustering.KMeans
- KafkaRedactionUtil - Class in org.apache.spark.kafka010
- KafkaRedactionUtil() - Constructor for class org.apache.spark.kafka010.KafkaRedactionUtil
- KafkaTokenSparkConf - Class in org.apache.spark.kafka010
- KafkaTokenSparkConf() - Constructor for class org.apache.spark.kafka010.KafkaTokenSparkConf
- KafkaTokenUtil - Class in org.apache.spark.kafka010
- KafkaTokenUtil() - Constructor for class org.apache.spark.kafka010.KafkaTokenUtil
- kClassTag() - Method in class org.apache.spark.api.java.JavaHadoopRDD
- kClassTag() - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
- kClassTag() - Method in class org.apache.spark.api.java.JavaPairRDD
- kClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
- kClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
- keepLastCheckpoint() - Method in class org.apache.spark.ml.clustering.LDA
- keepLastCheckpoint() - Method in class org.apache.spark.ml.clustering.LDAModel
- keepLastCheckpoint() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
For EM optimizer only: LDAParams.optimizer() = "em".
- KernelDensity - Class in org.apache.spark.mllib.stat
-
Kernel density estimation.
- KernelDensity() - Constructor for class org.apache.spark.mllib.stat.KernelDensity
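A minimal sketch of KernelDensity, assuming `samples` is a JavaRDD<Double> of observations:

    KernelDensity kd = new KernelDensity()
        .setSample(samples)     // the data to estimate the density from
        .setBandwidth(3.0);     // standard deviation of the Gaussian kernel
    double[] densities = kd.estimate(new double[] {-1.0, 2.0, 5.0});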
- key - Variable in class org.apache.spark.types.variant.Variant.ObjectField
- keyArray() - Method in class org.apache.spark.sql.vectorized.ColumnarMap
- keyAs(Encoder<L>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Returns a new KeyValueGroupedDataset where the type of the key has been mapped to the specified type.
- keyAs(Encoder<L>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- keyBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Creates tuples of the elements in this RDD by applying f.
- keyBy(Function1<T, K>) - Method in class org.apache.spark.rdd.RDD
-
Creates tuples of the elements in this RDD by applying f.
- KeyGroupedPartitioning - Class in org.apache.spark.sql.connector.read.partitioning
-
Represents a partitioning where rows are split across partitions based on the partition transform expressions returned by KeyGroupedPartitioning.keys.
- KeyGroupedPartitioning(Expression[], int) - Constructor for class org.apache.spark.sql.connector.read.partitioning.KeyGroupedPartitioning
- keyOrdering() - Method in class org.apache.spark.ShuffleDependency
- keyPrefix() - Method in interface org.apache.spark.sql.connector.catalog.SessionConfigSupport
-
Key prefix of the session configs to propagate, which is usually the data source name.
- keys() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the keys of each tuple.
- keys() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the keys of each tuple.
- keys() - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Returns a Dataset that contains each unique key.
- keys() - Method in class org.apache.spark.sql.connector.read.partitioning.KeyGroupedPartitioning
-
Returns the partition transform expressions for this partitioning.
- keys() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- keys() - Method in interface org.apache.spark.sql.streaming.MapState
-
Get the list of keys present in the map associated with the grouping key
- keySet() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- keyType() - Method in class org.apache.spark.sql.types.MapType
- KeyValueGroupedDataset<K, V> - Class in org.apache.spark.sql.api
-
A Dataset that has been logically grouped by a user-specified grouping key.
- KeyValueGroupedDataset<K, V> - Class in org.apache.spark.sql
-
A Dataset that has been logically grouped by a user-specified grouping key.
- KeyValueGroupedDataset() - Constructor for class org.apache.spark.sql.api.KeyValueGroupedDataset
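A KeyValueGroupedDataset is usually obtained through Dataset.groupByKey; a sketch where `df` is a hypothetical Dataset<Row> keyed by its first string column:

    KeyValueGroupedDataset<String, Row> grouped = df.groupByKey(
        (MapFunction<Row, String>) row -> row.getString(0),
        Encoders.STRING());
    Dataset<String> keys = grouped.keys();  // one row per distinct key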
- keyValueInMapNotStringError(CreateMap) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- kFold(RDD<T>, int, int, ClassTag<T>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Return a k-element array of pairs of RDDs, where the first element of each pair contains the training data (the complement of the validation data) and the second element contains the validation data (a unique 1/kth of the data).
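From Java, a hedged sketch of the RDD overload with a long seed; `data` is a hypothetical RDD<LabeledPoint>:

    scala.reflect.ClassTag<LabeledPoint> tag =
        scala.reflect.ClassTag$.MODULE$.apply(LabeledPoint.class);
    // Five (training, validation) pairs; each validation set holds ~1/5 of the data.
    Tuple2<RDD<LabeledPoint>, RDD<LabeledPoint>>[] folds =
        MLUtils.kFold(data, 5, 42L, tag);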
- kFold(RDD<T>, int, long, ClassTag<T>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Version of kFold() taking a Long seed.
- kFold(Dataset<Row>, int, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Version of kFold() taking a fold column name.
- kill() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Tries to kill the underlying application.
- KILL_TASKS_SUMMARY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- killAllTaskAttempts(int, boolean, String) - Method in interface org.apache.spark.scheduler.TaskScheduler
- killed() - Method in class org.apache.spark.scheduler.TaskInfo
- KILLED - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application was killed.
- KILLED - Enum constant in enum class org.apache.spark.status.api.v1.TaskStatus
- KILLED() - Static method in class org.apache.spark.TaskState
- KILLED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- KILLED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- KILLED_TASKS_SUMMARY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- killedSummary() - Method in class org.apache.spark.status.LiveJob
- killedSummary() - Method in class org.apache.spark.status.LiveStage
- killedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- killedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- killedTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- killedTasks() - Method in class org.apache.spark.status.LiveJob
- killedTasks() - Method in class org.apache.spark.status.LiveStage
- killedTasksSummary() - Method in class org.apache.spark.status.api.v1.JobData
- killedTasksSummary() - Method in class org.apache.spark.status.api.v1.StageData
- killExecutor(String) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Request that the cluster manager kill the specified executor.
- killExecutors(Seq<String>) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Request that the cluster manager kill the specified executors.
- KillExecutors(Seq<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
- KillExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors$
- KillExecutorsOnHost(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
- KillExecutorsOnHost$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost$
- killTask(long, String, boolean, String) - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Requests that an executor kills a running task.
- KillTask - Class in org.apache.spark.scheduler.local
- KillTask(long, boolean, String) - Constructor for class org.apache.spark.scheduler.local.KillTask
- KillTask(long, String, boolean, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
- KillTask$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask$
- killTaskAttempt(long, boolean, String) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Kills a task attempt.
- killTaskAttempt(long, boolean, String) - Method in class org.apache.spark.SparkContext
-
Kill and reschedule the given task attempt.
- KinesisInitialPositions - Class in org.apache.spark.streaming.kinesis
- KinesisInitialPositions() - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions
- KinesisInitialPositions.AtTimestamp - Class in org.apache.spark.streaming.kinesis
- KinesisInitialPositions.Latest - Class in org.apache.spark.streaming.kinesis
- KinesisInitialPositions.TrimHorizon - Class in org.apache.spark.streaming.kinesis
- KinesisUtilsPythonHelper - Class in org.apache.spark.streaming.kinesis
-
This is a helper class that wraps the methods in KinesisUtils in a more Python-friendly class and function so that they can be easily instantiated and called from Python's KinesisUtils.
- KinesisUtilsPythonHelper() - Constructor for class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper
- kManifest() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
- KMeans - Class in org.apache.spark.ml.clustering
-
K-means clustering with support for k-means|| initialization proposed by Bahmani et al.
- KMeans - Class in org.apache.spark.mllib.clustering
-
K-means clustering with a k-means++ like initialization mode (the k-means|| algorithm by Bahmani et al).
- KMeans() - Constructor for class org.apache.spark.ml.clustering.KMeans
- KMeans() - Constructor for class org.apache.spark.mllib.clustering.KMeans
-
Constructs a KMeans instance with default parameters: {k: 2, maxIterations: 20, initializationMode: "k-means||", initializationSteps: 2, epsilon: 1e-4, seed: random, distanceMeasure: "euclidean"}.
- KMeans(String) - Constructor for class org.apache.spark.ml.clustering.KMeans
- KMeansAggregator - Class in org.apache.spark.ml.clustering
-
KMeansAggregator computes the distances and updates the centers for blocks in sparse or dense matrix in an online fashion.
- KMeansAggregator(DenseMatrix, int, int, String) - Constructor for class org.apache.spark.ml.clustering.KMeansAggregator
- KMeansDataGenerator - Class in org.apache.spark.mllib.util
-
Generate test data for KMeans.
- KMeansDataGenerator() - Constructor for class org.apache.spark.mllib.util.KMeansDataGenerator
- KMeansModel - Class in org.apache.spark.ml.clustering
-
Model fitted by KMeans.
- KMeansModel - Class in org.apache.spark.mllib.clustering
-
A clustering model for K-means.
- KMeansModel(Iterable<Vector>) - Constructor for class org.apache.spark.mllib.clustering.KMeansModel
-
A Java-friendly constructor that takes an Iterable of Vectors.
- KMeansModel(Vector[]) - Constructor for class org.apache.spark.mllib.clustering.KMeansModel
- KMeansModel(Vector[], String, double, int) - Constructor for class org.apache.spark.mllib.clustering.KMeansModel
- KMeansModel.Cluster$ - Class in org.apache.spark.mllib.clustering
- KMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
- KMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering
- KMeansParams - Interface in org.apache.spark.ml.clustering
-
Common params for KMeans and KMeansModel
- kMeansPlusPlus(int, VectorWithNorm[], double[], int, int) - Static method in class org.apache.spark.mllib.clustering.LocalKMeans
-
Run K-means++ on the weighted point set points.
- KMeansSummary - Class in org.apache.spark.ml.clustering
-
Summary of KMeans.
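A minimal fit/predict sketch for the spark.ml estimator; `dataset` is a hypothetical Dataset<Row> with a vector column named "features":

    KMeans kmeans = new KMeans().setK(3).setSeed(1L);
    KMeansModel model = kmeans.fit(dataset);
    Vector[] centers = model.clusterCenters();  // one center per cluster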
- KnownSizeEstimation - Interface in org.apache.spark.util
-
A trait that allows a class to give SizeEstimator more accurate size estimation.
- kolmogorovSmirnovTest(JavaDoubleRDD, String, double...) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of kolmogorovSmirnovTest()
- kolmogorovSmirnovTest(JavaDoubleRDD, String, Seq<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of kolmogorovSmirnovTest()
- kolmogorovSmirnovTest(RDD<Object>, String, double...) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
- kolmogorovSmirnovTest(RDD<Object>, String, Seq<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
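A sketch of the distribution-name variant, assuming `samples` is a JavaDoubleRDD of observations:

    // Test the sample against a standard normal distribution N(0, 1).
    KolmogorovSmirnovTestResult result =
        Statistics.kolmogorovSmirnovTest(samples, "norm", 0.0, 1.0);
    System.out.println(result.pValue());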
- kolmogorovSmirnovTest(RDD<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution.
- KolmogorovSmirnovTest - Class in org.apache.spark.ml.stat
-
Conduct the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution.
- KolmogorovSmirnovTest - Class in org.apache.spark.mllib.stat.test
-
Conduct the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution.
- KolmogorovSmirnovTest() - Constructor for class org.apache.spark.ml.stat.KolmogorovSmirnovTest
- KolmogorovSmirnovTest() - Constructor for class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- KolmogorovSmirnovTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
- KolmogorovSmirnovTestResult - Class in org.apache.spark.mllib.stat.test
-
Object containing the test results for the Kolmogorov-Smirnov test.
- kryo(Class<T>) - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes objects of type T using Kryo.
- kryo(ClassTag<T>) - Static method in class org.apache.spark.sql.Encoders
-
(Scala-specific) Creates an encoder that serializes objects of type T using Kryo.
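A sketch of falling back to Kryo when no built-in encoder fits; Event is a hypothetical user class:

    Encoder<Event> enc = Encoders.kryo(Event.class);
    // Rows are stored as opaque Kryo-serialized blobs rather than columnar fields.
    Dataset<Event> events = spark.createDataset(
        java.util.Arrays.asList(new Event()), enc);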
- KryoRegistrator - Interface in org.apache.spark.serializer
-
Interface implemented by clients to register their classes with Kryo when using Kryo serialization.
- KryoSerializer - Class in org.apache.spark.serializer
-
A Spark serializer that uses the Kryo serialization library.
- KryoSerializer(SparkConf) - Constructor for class org.apache.spark.serializer.KryoSerializer
- KUBERNETES_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
- kurtosis(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the kurtosis of the values in a group.
- kurtosis(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the kurtosis of the values in a group.
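A one-line aggregation sketch; `df` and its "value" column are hypothetical:

    import static org.apache.spark.sql.functions.kurtosis;

    Dataset<Row> stats = df.agg(kurtosis(df.col("value")));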
- KVUtils - Class in org.apache.spark.status
- KVUtils() - Constructor for class org.apache.spark.status.KVUtils
L
- L1Updater - Class in org.apache.spark.mllib.optimization
-
Updater for L1 regularized problems.
- L1Updater() - Constructor for class org.apache.spark.mllib.optimization.L1Updater
- label() - Method in class org.apache.spark.ml.feature.LabeledPoint
- label() - Method in class org.apache.spark.mllib.regression.LabeledPoint
- label() - Method in class org.apache.spark.sql.scripting.IterateStatementExec
- label() - Method in class org.apache.spark.sql.scripting.LeaveStatementExec
- labelCol() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Field in "predictions" which gives the true label of each instance (if available).
- labelCol() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- labelCol() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- labelCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- labelCol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationSummaryImpl
- labelCol() - Method in class org.apache.spark.ml.classification.OneVsRest
- labelCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- labelCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationSummaryImpl
- labelCol() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- labelCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- labelCol() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- labelCol() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- labelCol() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- labelCol() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- labelCol() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- labelCol() - Method in class org.apache.spark.ml.feature.RFormula
- labelCol() - Method in class org.apache.spark.ml.feature.RFormulaModel
- labelCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- labelCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- labelCol() - Method in interface org.apache.spark.ml.param.shared.HasLabelCol
-
Param for label column name.
- labelCol() - Method in class org.apache.spark.ml.PredictionModel
- labelCol() - Method in class org.apache.spark.ml.Predictor
- labelCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- labelCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- labelCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- labelDoesNotExist(Origin, String, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- LabeledPoint - Class in org.apache.spark.ml.feature
-
Class that represents the features and label of a data point.
- LabeledPoint - Class in org.apache.spark.mllib.regression
-
Class that represents the features and labels of a data point.
- LabeledPoint(double, Vector) - Constructor for class org.apache.spark.ml.feature.LabeledPoint
- LabeledPoint(double, Vector) - Constructor for class org.apache.spark.mllib.regression.LabeledPoint
- LabelPropagation - Class in org.apache.spark.graphx.lib
-
Label Propagation algorithm.
- LabelPropagation() - Constructor for class org.apache.spark.graphx.lib.LabelPropagation
- labels() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns the sequence of labels in ascending order.
- labels() - Method in class org.apache.spark.ml.feature.IndexToString
-
Optional param for array of labels specifying index-string mapping.
- labels() - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
Deprecated. `labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead. Since 3.0.0.
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- labels() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- labels() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
- labelsArray() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- labelsMismatch(Origin, String, String) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- labelType() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- labelType() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- labelType() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
-
The label type.
- lag(String, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
- lag(String, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
- lag(Column, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
- lag(Column, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
- lag(Column, int, Object, boolean) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
- LambdaMetafactoryClassName() - Static method in class org.apache.spark.util.IndylambdaScalaClosures
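For the lag overloads above, a sketch over a hypothetical DataFrame with "id", "ts", and "value" columns:

    import static org.apache.spark.sql.functions.lag;

    WindowSpec w = Window.partitionBy("id").orderBy("ts");
    // Previous value within each id, or 0 when there is no prior row.
    Dataset<Row> withPrev = df.withColumn("prev_value",
        lag(df.col("value"), 1, 0).over(w));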
- LambdaMetafactoryMethodDesc() - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- LambdaMetafactoryMethodName() - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- LAPACK - Class in org.apache.spark.mllib.linalg
-
LAPACK routines for MLlib's vectors and matrices.
- LAPACK() - Constructor for class org.apache.spark.mllib.linalg.LAPACK
- LassoModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using Lasso.
- LassoModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.LassoModel
- LassoWithSGD - Class in org.apache.spark.mllib.regression
-
Train a regression model with L1-regularization using Stochastic Gradient Descent.
- last(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value of the column in a group.
- last(String, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value of the column in a group.
- last(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
- last(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
- last_day(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the last day of the month which the given date belongs to.
- LAST_UPDATED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- last_value(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
- last_value(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
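A short spark-shell sketch of last with ignoreNulls (the k/v column names are illustrative); note that without a stable row order within each group, the value picked is not deterministic:

    import org.apache.spark.sql.functions.last
    import spark.implicits._

    val df = Seq(("k1", Some(1)), ("k1", None), ("k2", Some(3))).toDF("k", "v")
    // ignoreNulls = true skips null values when picking the final value per group.
    df.groupBy("k").agg(last($"v", ignoreNulls = true).as("last_v")).show()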
- lastDir() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- lastError() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- lastError() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- lastErrorMessage() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- lastErrorMessage() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- lastErrorTime() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- lastErrorTime() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- lastProgress() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the most recent
StreamingQueryProgress
update of this streaming query. - lastStageNameAndDescription(org.apache.spark.status.AppStatusStore, JobData) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- lastUpdate() - Method in class org.apache.spark.status.LiveRDDDistribution
- lastUpdated() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- lastWriteTime() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- lateralColumnAliasInAggFuncUnsupportedError(Seq<String>, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- lateralColumnAliasInAggWithWindowAndHavingUnsupportedError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- lateralColumnAliasInWindowUnsupportedError(Seq<String>, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- lateralJoinWithUsingJoinUnsupportedError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- lateralWithPivotInFromClauseNotAllowedError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- lateralWithUnpivotInFromClauseNotAllowedError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- Latest() - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest
- LATEST_OFFSET_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- latestModel() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Return the latest model.
- latestModel() - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Return the latest model.
- latestOffset() - Method in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
-
Returns the most recent offset available.
- latestOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress
- latestOffset(Offset, ReadLimit) - Method in interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
-
Returns the most recent offset available given a read limit.
- latestOffsetNotCalledError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- launch() - Method in class org.apache.spark.launcher.SparkLauncher
-
Launches a sub-process that will start the configured Spark application.
- LAUNCH_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- LAUNCH_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- LAUNCH_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- LaunchedExecutor(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor
- LaunchedExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor$
- launching() - Method in class org.apache.spark.scheduler.TaskInfo
- LAUNCHING() - Static method in class org.apache.spark.TaskState
- LaunchTask(org.apache.spark.util.SerializableBuffer) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
- LaunchTask$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
- launchTime() - Method in class org.apache.spark.scheduler.TaskInfo
- launchTime() - Method in class org.apache.spark.status.api.v1.TaskData
- Layer - Interface in org.apache.spark.ml.ann
-
Trait that holds Layer properties, that are needed to instantiate it.
- LayerModel - Interface in org.apache.spark.ml.ann
-
Trait that holds Layer weights (or parameters).
- layerModels() - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Array of layer models
- layers() - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Array of layers
- layers() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- layers() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- layers() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
-
Layer sizes including input size and output size.
- LBFGS - Class in org.apache.spark.mllib.optimization
-
Class used to solve an optimization problem using Limited-memory BFGS.
- LBFGS(Gradient, Updater) - Constructor for class org.apache.spark.mllib.optimization.LBFGS
- lcase(Column) - Static method in class org.apache.spark.sql.functions
-
Returns
str
with all characters changed to lowercase. - LDA - Class in org.apache.spark.ml.clustering
-
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
- LDA - Class in org.apache.spark.mllib.clustering
-
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
- LDA() - Constructor for class org.apache.spark.ml.clustering.LDA
- LDA() - Constructor for class org.apache.spark.mllib.clustering.LDA
-
Constructs a LDA instance with default parameters.
- LDA(String) - Constructor for class org.apache.spark.ml.clustering.LDA
- LDAModel - Class in org.apache.spark.ml.clustering
-
Model fitted by
LDA
. - LDAModel - Class in org.apache.spark.mllib.clustering
-
Latent Dirichlet Allocation (LDA) model.
- LDAOptimizer - Interface in org.apache.spark.mllib.clustering
-
An LDAOptimizer specifies which optimization/learning/inference algorithm to use, and it can hold optimizer-specific parameters for users to set.
- LDAParams - Interface in org.apache.spark.ml.clustering
- LDAUtils - Class in org.apache.spark.mllib.clustering
-
Utility methods for LDA.
- LDAUtils() - Constructor for class org.apache.spark.mllib.clustering.LDAUtils
- lead(String, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is
offset
rows after the current row, and null
if there are fewer than offset
rows after the current row. - lead(String, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is
offset
rows after the current row, and defaultValue
if there are fewer than offset
rows after the current row. - lead(Column, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is
offset
rows after the current row, and null
if there are fewer than offset
rows after the current row. - lead(Column, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is
offset
rows after the current row, and defaultValue
if there are fewer than offset
rows after the current row. - lead(Column, int, Object, boolean) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is
offset
rows after the current row, and defaultValue
if there are fewer than offset
rows after the current row. - leafAttr() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
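lead mirrors lag but looks offset rows ahead; a minimal spark-shell sketch (sensor/t/value are illustrative names):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.lead
    import spark.implicits._

    val readings = Seq(("s1", 1, 10.0), ("s1", 2, 12.5)).toDF("sensor", "t", "value")
    val w = Window.partitionBy("sensor").orderBy("t")
    // The last row of each partition has no following row, so next_value is null there.
    readings.withColumn("next_value", lead($"value", 1).over(w)).show()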
- leafCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- leafCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- leafCol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- leafCol() - Method in class org.apache.spark.ml.classification.GBTClassifier
- leafCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- leafCol() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- leafCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- leafCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- leafCol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- leafCol() - Method in class org.apache.spark.ml.regression.GBTRegressor
- leafCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- leafCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- leafCol() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Leaf indices column name.
- leafIterator(Node) - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
- LeafNode - Class in org.apache.spark.ml.tree
-
Decision tree leaf node.
- LeafStatementExec - Interface in org.apache.spark.sql.scripting
-
Leaf node in the execution tree.
- learningDecay() - Method in class org.apache.spark.ml.clustering.LDA
- learningDecay() - Method in class org.apache.spark.ml.clustering.LDAModel
- learningDecay() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
For Online optimizer only:
LDAParams.optimizer()
= "online". - learningOffset() - Method in class org.apache.spark.ml.clustering.LDA
- learningOffset() - Method in class org.apache.spark.ml.clustering.LDAModel
- learningOffset() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
For Online optimizer only:
LDAParams.optimizer()
= "online". - learningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- least(String, String...) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of column names, skipping null values.
- least(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of column names, skipping null values.
- least(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of values, skipping null values.
- least(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of values, skipping null values.
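A one-line spark-shell sketch of least (the a/b/c columns are illustrative); because nulls are skipped, the result is null only when every input is null:

    import org.apache.spark.sql.functions.{col, least}
    import spark.implicits._

    val df = Seq((1, 5, 3), (9, 2, 7)).toDF("a", "b", "c")
    df.select(least(col("a"), col("b"), col("c")).as("min_abc")).show()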
- LeastSquaresGradient - Class in org.apache.spark.mllib.optimization
-
Compute gradient and loss for a Least-squared loss function, as used in linear regression.
- LeastSquaresGradient() - Constructor for class org.apache.spark.mllib.optimization.LeastSquaresGradient
- LeaveStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for LeaveStatement.
- LeaveStatementExec(String) - Constructor for class org.apache.spark.sql.scripting.LeaveStatementExec
- left() - Method in class org.apache.spark.sql.connector.expressions.filter.And
- left() - Method in class org.apache.spark.sql.connector.expressions.filter.Or
- left() - Method in class org.apache.spark.sql.sources.And
- left() - Method in class org.apache.spark.sql.sources.Or
- left(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the leftmost
len
(len can be string type) characters from the string str
, if len is less than or equal to 0 the result is an empty string. - leftCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
-
Get sorted categories which split to the left
- leftCategoriesOrThreshold() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
- leftChild() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- leftChild() - Method in class org.apache.spark.ml.tree.InternalNode
- leftChildIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the left child of this node.
- leftImpurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.VertexRDD
-
Left joins this VertexRDD with an RDD containing vertex attribute pairs.
- leftNode() - Method in class org.apache.spark.mllib.tree.model.Node
- leftNodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- leftOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of
this
and other
. - leftOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of
this
and other
. - leftOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of
this
and other
. - leftOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of
this
and other
. - leftOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of
this
and other
. - leftOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of
this
and other
. - leftOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of
this
DStream and other
DStream. - leftPredict() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
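A minimal spark-shell sketch of the RDD form of leftOuterJoin (keys and values are illustrative); unmatched keys on the left side pair with None:

    val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
    val right = sc.parallelize(Seq((1, "x")))
    // Result type is RDD[(Int, (String, Option[String]))]:
    // (1,(a,Some(x))) and (2,(b,None)), in some partition order.
    left.leftOuterJoin(right).collect().foreach(println)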
- leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.VertexRDD
-
Left joins this RDD with another VertexRDD with the same index.
- legacyCheckpointDirectoryExistsError(Path, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- legacyMetadataPathExistsError(Path, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- legacyStoreAssignmentPolicyError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- len(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the character length of a given string or number of bytes of a binary string.
- length() - Method in class org.apache.spark.scheduler.SplitInfo
- length() - Method in interface org.apache.spark.sql.Row
-
Number of elements in the Row.
- length() - Method in class org.apache.spark.sql.types.CharType
- length() - Method in class org.apache.spark.sql.types.StructType
- length() - Method in class org.apache.spark.sql.types.VarcharType
- length() - Method in class org.apache.spark.status.RDDPartitionSeq
- length(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the character length of a given string or number of bytes of a binary string.
- leq(Object) - Method in class org.apache.spark.sql.Column
-
Less than or equal to.
- less(Duration) - Method in class org.apache.spark.streaming.Duration
- less(Time) - Method in class org.apache.spark.streaming.Time
- lessEq(Duration) - Method in class org.apache.spark.streaming.Duration
- lessEq(Time) - Method in class org.apache.spark.streaming.Time
- LessThan - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the attribute evaluates to a value less than value
. - LessThan(String, Object) - Constructor for class org.apache.spark.sql.sources.LessThan
- LessThanOrEqual - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iff the attribute evaluates to a value less than or equal to value
. - LessThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.LessThanOrEqual
- levenshtein(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the Levenshtein distance of the two given string columns.
- levenshtein(Column, Column, int) - Static method in class org.apache.spark.sql.functions
-
Computes the Levenshtein distance of the two given string columns if it's less than or equal to a given threshold.
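A quick sketch of both levenshtein overloads in spark-shell (s1/s2 are illustrative columns); with the threshold form, a distance above the threshold is reported as -1 instead of being computed exactly:

    import org.apache.spark.sql.functions.levenshtein
    import spark.implicits._

    val df = Seq(("kitten", "sitting")).toDF("s1", "s2")
    // The exact distance is 3, so the bounded form with threshold 2 returns -1.
    df.select(levenshtein($"s1", $"s2"), levenshtein($"s1", $"s2", 2)).show()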
- libraryPathEnvName() - Static method in class org.apache.spark.util.Utils
-
Return the current system LD_LIBRARY_PATH name
- libraryPathEnvPrefix(Seq<String>) - Static method in class org.apache.spark.util.Utils
-
Return the prefix of a command that appends the given library paths to the system-specific library path environment variable.
- LibSVMDataSource - Class in org.apache.spark.ml.source.libsvm
-
libsvm
package implements Spark SQL data source API for loading LIBSVM data as DataFrame
. - LibSVMDataSource() - Constructor for class org.apache.spark.ml.source.libsvm.LibSVMDataSource
- lift() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
Returns the lift of the rule.
- like(String) - Method in class org.apache.spark.sql.Column
-
SQL like expression.
- like(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if str matches
pattern
with escapeChar
('\'), null if any arguments are null, false otherwise. - like(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if str matches
pattern
with escapeChar
, null if any arguments are null, false otherwise. - limit(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by taking the first
n
rows. - limit(int) - Method in class org.apache.spark.sql.Dataset
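A small spark-shell sketch combining the Column.like and Dataset.limit entries above (names are illustrative); like uses SQL wildcards, _ for one character and % for any run:

    import spark.implicits._

    val people = Seq("Joan", "John", "Mary").toDF("name")
    // Keep names starting with "Jo", then cap the result at 10 rows.
    people.filter($"name".like("Jo%")).limit(10).show()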
- line() - Method in exception org.apache.spark.sql.AnalysisException
- linear() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- linear() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- LinearDataGenerator - Class in org.apache.spark.mllib.util
-
Generate sample data used for Linear Data.
- LinearDataGenerator() - Constructor for class org.apache.spark.mllib.util.LinearDataGenerator
- LinearRegression - Class in org.apache.spark.ml.regression
-
Linear regression.
- LinearRegression() - Constructor for class org.apache.spark.ml.regression.LinearRegression
- LinearRegression(String) - Constructor for class org.apache.spark.ml.regression.LinearRegression
- LinearRegressionModel - Class in org.apache.spark.ml.regression
-
Model produced by
LinearRegression
. - LinearRegressionModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using LinearRegression.
- LinearRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.LinearRegressionModel
- LinearRegressionParams - Interface in org.apache.spark.ml.regression
-
Params for linear regression.
- LinearRegressionSummary - Class in org.apache.spark.ml.regression
-
Linear regression results evaluated on a dataset.
- LinearRegressionTrainingSummary - Class in org.apache.spark.ml.regression
-
Linear regression training results.
- LinearRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train a linear regression model with no regularization using Stochastic Gradient Descent.
- LinearSVC - Class in org.apache.spark.ml.classification
- LinearSVC() - Constructor for class org.apache.spark.ml.classification.LinearSVC
- LinearSVC(String) - Constructor for class org.apache.spark.ml.classification.LinearSVC
- LinearSVCModel - Class in org.apache.spark.ml.classification
-
Linear SVM Model trained by
LinearSVC
- LinearSVCParams - Interface in org.apache.spark.ml.classification
-
Params for linear SVM Classifier.
- LinearSVCSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for LinearSVC results for a given model.
- LinearSVCSummaryImpl - Class in org.apache.spark.ml.classification
-
LinearSVC results for a given model.
- LinearSVCSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- LinearSVCTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for LinearSVC training results.
- LinearSVCTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
LinearSVC training results.
- LinearSVCTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.LinearSVCTrainingSummaryImpl
- link() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- link() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for the name of link function which provides the relationship between the linear predictor and the mean of the distribution function.
- link() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- Link$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
- linkPower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- linkPower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- linkPower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for the index in the power link function.
- linkPower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- linkPredictionCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- linkPredictionCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for link prediction (linear predictor) column name.
- linkPredictionCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- listArchives() - Method in class org.apache.spark.SparkContext
-
:: Experimental :: Returns a list of archive paths that are added to resources.
- listCatalogs() - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of catalogs available in this session.
- listCatalogs() - Method in class org.apache.spark.sql.catalog.Catalog
- listCatalogs(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of catalogs whose names match the specified pattern and are available in this session.
- listCatalogs(String) - Method in class org.apache.spark.sql.catalog.Catalog
- listColumns(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of columns for the given table/view or temporary view.
- listColumns(String) - Method in class org.apache.spark.sql.catalog.Catalog
- listColumns(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of columns for the given table/view in the specified database under the Hive Metastore.
- listColumns(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- listDatabases() - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of databases (namespaces) available within the current catalog.
- listDatabases() - Method in class org.apache.spark.sql.catalog.Catalog
- listDatabases(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of databases (namespaces) whose names match the specified pattern and are available within the current catalog.
- listDatabases(String) - Method in class org.apache.spark.sql.catalog.Catalog
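The session-level listing methods above all hang off spark.catalog; a spark-shell sketch (the database and table names are illustrative and assume such objects exist):

    spark.catalog.listDatabases().show(truncate = false)
    spark.catalog.listTables("default").show(truncate = false)
    spark.catalog.listColumns("default", "some_table").show()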
- listDirectory(File) - Static method in class org.apache.spark.TestUtils
-
Returns the list of files at 'path' recursively.
- listener() - Method in class org.apache.spark.CleanSparkListener
- listenerBus() - Method in interface org.apache.spark.ml.MLEvents
- ListenerBus<L, E> - Interface in org.apache.spark.util
-
An event bus which posts events to its listeners.
- listenerManager() - Method in class org.apache.spark.sql.SparkSession
-
An interface to register custom
QueryExecutionListener
s that listen for execution metrics. - listenerManager() - Method in class org.apache.spark.sql.SQLContext
-
An interface to register custom
QueryExecutionListener
s that listen for execution metrics. - listeners() - Method in interface org.apache.spark.util.ListenerBus
- listFiles() - Method in class org.apache.spark.SparkContext
-
Returns a list of file paths that are added to resources.
- listFiles(Path, Configuration, PathFilter) - Static method in class org.apache.spark.util.HadoopFSUtils
-
Lists a collection of paths recursively with a single API invocation.
- listFunctions() - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of functions registered in the current database (namespace).
- listFunctions() - Method in class org.apache.spark.sql.catalog.Catalog
- listFunctions(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of functions registered in the specified database (namespace) (the name can be qualified with catalog).
- listFunctions(String) - Method in class org.apache.spark.sql.catalog.Catalog
- listFunctions(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- listFunctions(String[]) - Method in interface org.apache.spark.sql.connector.catalog.FunctionCatalog
-
List the functions in a namespace from the catalog.
- listFunctions(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of functions registered in the specified database (namespace) whose names match the specified pattern (the name can be qualified with catalog).
- listFunctions(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- listIndexes() - Method in interface org.apache.spark.sql.connector.catalog.index.SupportsIndex
-
Lists all the indexes in this table.
- listIndexes(Connection, Identifier, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Lists all the indexes in this table.
- listIndexes(Connection, Identifier, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- listIndexes(Connection, Identifier, JDBCOptions) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- listingTable(Seq<String>, Function1<T, Seq<Node>>, Iterable<T>, boolean, Option<String>, Seq<String>, boolean, boolean, Seq<Option<String>>) - Static method in class org.apache.spark.ui.UIUtils
-
Returns an HTML table constructed by generating a row for each object in a sequence.
- listJars() - Method in class org.apache.spark.SparkContext
-
Returns a list of jar files that are added to resources.
- listListeners() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
List all
StreamingQueryListener
s attached to this StreamingQueryManager
. - listNamespaces() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- listNamespaces() - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
List top-level namespaces from the catalog.
- listNamespaces(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- listNamespaces(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
List namespaces in a namespace.
- listPartitionIdentifiers(String[], InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
List the identifiers of all partitions that match the given ident by names.
- listResourceIds(SparkConf, String) - Static method in class org.apache.spark.resource.ResourceUtils
- listSchemas(Connection, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- listSchemas(Connection, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Lists all the schemas in this database.
- listSchemas(Connection, JDBCOptions) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- listSchemas(Connection, JDBCOptions) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- ListState<S> - Interface in org.apache.spark.sql.streaming
-
Interface used for arbitrary stateful operations with the v2 API to capture list value state.
- listTables() - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of tables/views in the current database (namespace).
- listTables() - Method in class org.apache.spark.sql.catalog.Catalog
- listTables(String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of tables/views in the specified database (namespace) (the name can be qualified with catalog).
- listTables(String) - Method in class org.apache.spark.sql.catalog.Catalog
- listTables(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- listTables(String[]) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
List the tables in a namespace from the catalog.
- listTables(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Returns a list of tables/views in the specified database (namespace) whose names match the specified pattern (the name can be qualified with catalog).
- listTables(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
- listTimers() - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to list all the timers registered for the given implicit grouping key. Note: calling listTimers() within the
handleInputRows
method of the StatefulProcessor will return all the unprocessed registered timers, including the one being fired within the invocation of handleInputRows
. - listViews(String...) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
List the views in a namespace from the catalog.
- lit(Object) - Static method in class org.apache.spark.sql.functions
-
Creates a
Column
of literal value. - Lit - Class in org.apache.spark.sql.connector.expressions
-
Convenience extractor for any Literal.
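A tiny sketch of the lit function above (the price column is illustrative): wrapping a plain Scala value as a Column lets it mix with column expressions:

    import org.apache.spark.sql.functions.lit
    import spark.implicits._

    val orders = Seq(10.0, 25.0).toDF("price")
    // lit(1.08) is a constant Column, so the multiplication is a column expression.
    orders.select(($"price" * lit(1.08)).as("with_tax")).show()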
- Lit() - Constructor for class org.apache.spark.sql.connector.expressions.Lit
- literal(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- literal(T) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a literal from a value.
- literal(T) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- literal(T, DataType) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- Literal<T> - Interface in org.apache.spark.sql.connector.expressions
-
Represents a constant literal value in the public expression API.
- literalTypeUnsupportedError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- literalTypeUnsupportedForSourceTypeError(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- literalValueTypeUnsupportedError(String, Seq<String>, SqlBaseParser.TypeConstructorContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- LiveEntityHelpers - Class in org.apache.spark.status
- LiveEntityHelpers() - Constructor for class org.apache.spark.status.LiveEntityHelpers
- LiveExecutorStageSummary - Class in org.apache.spark.status
- LiveExecutorStageSummary(int, int, String) - Constructor for class org.apache.spark.status.LiveExecutorStageSummary
- LiveJob - Class in org.apache.spark.status
- LiveJob(int, String, Option<String>, Option<Date>, Seq<Object>, Option<String>, Seq<String>, int, Option<Object>) - Constructor for class org.apache.spark.status.LiveJob
- LiveRDD - Class in org.apache.spark.status
-
Tracker for data related to a persisted RDD.
- LiveRDD(RDDInfo, StorageLevel) - Constructor for class org.apache.spark.status.LiveRDD
- LiveRDDDistribution - Class in org.apache.spark.status
- LiveRDDDistribution(LiveExecutor) - Constructor for class org.apache.spark.status.LiveRDDDistribution
- LiveRDDPartition - Class in org.apache.spark.status
-
Data about a single partition of a cached RDD.
- LiveRDDPartition(String, StorageLevel) - Constructor for class org.apache.spark.status.LiveRDDPartition
- LiveResourceProfile - Class in org.apache.spark.status
- LiveResourceProfile(int, Map<String, ExecutorResourceRequest>, Map<String, TaskResourceRequest>, Option<Object>) - Constructor for class org.apache.spark.status.LiveResourceProfile
- LiveSpeculationStageSummary - Class in org.apache.spark.status
- LiveSpeculationStageSummary(int, int) - Constructor for class org.apache.spark.status.LiveSpeculationStageSummary
- LiveStage - Class in org.apache.spark.status
- LiveStage(StageInfo) - Constructor for class org.apache.spark.status.LiveStage
- LiveTask - Class in org.apache.spark.status
- LiveTask(TaskInfo, int, int, Option<Object>) - Constructor for class org.apache.spark.status.LiveTask
- ln(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given value.
- lo() - Method in interface org.apache.spark.sql.connector.read.colstats.HistogramBin
- load() - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads input in as a
DataFrame
, for data sources that don't require a path (e.g. - load() - Method in class org.apache.spark.sql.DataFrameReader
- load() - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads input data stream in as a
DataFrame
, for data streams that don't require a path (e.g. - load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- load(String) - Static method in class org.apache.spark.ml.classification.FMClassificationModel
- load(String) - Static method in class org.apache.spark.ml.classification.FMClassifier
- load(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
- load(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
- load(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
- load(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
- load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
- load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
- load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
- load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
- load(String) - Static method in class org.apache.spark.ml.classification.OneVsRest
- load(String) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
- load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
- load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
- load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- load(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
- load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
- load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- load(String) - Static method in class org.apache.spark.ml.clustering.KMeans
- load(String) - Static method in class org.apache.spark.ml.clustering.KMeansModel
- load(String) - Static method in class org.apache.spark.ml.clustering.LDA
- load(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
- load(String) - Static method in class org.apache.spark.ml.clustering.PowerIterationClustering
- load(String) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- load(String) - Static method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- load(String) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- load(String) - Static method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- load(String) - Static method in class org.apache.spark.ml.evaluation.RankingEvaluator
- load(String) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- load(String) - Static method in class org.apache.spark.ml.feature.Binarizer
- load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- load(String) - Static method in class org.apache.spark.ml.feature.Bucketizer
- load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- load(String) - Static method in class org.apache.spark.ml.feature.ColumnPruner
- load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizer
- load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
- load(String) - Static method in class org.apache.spark.ml.feature.DCT
- load(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
- load(String) - Static method in class org.apache.spark.ml.feature.FeatureHasher
- load(String) - Static method in class org.apache.spark.ml.feature.HashingTF
- load(String) - Static method in class org.apache.spark.ml.feature.IDF
- load(String) - Static method in class org.apache.spark.ml.feature.IDFModel
- load(String) - Static method in class org.apache.spark.ml.feature.Imputer
- load(String) - Static method in class org.apache.spark.ml.feature.ImputerModel
- load(String) - Static method in class org.apache.spark.ml.feature.IndexToString
- load(String) - Static method in class org.apache.spark.ml.feature.Interaction
- load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
- load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSH
- load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
- load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
- load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
- load(String) - Static method in class org.apache.spark.ml.feature.NGram
- load(String) - Static method in class org.apache.spark.ml.feature.Normalizer
- load(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
- load(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoderModel
- load(String) - Static method in class org.apache.spark.ml.feature.PCA
- load(String) - Static method in class org.apache.spark.ml.feature.PCAModel
- load(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
- load(String) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
- load(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
- load(String) - Static method in class org.apache.spark.ml.feature.RFormula
- load(String) - Static method in class org.apache.spark.ml.feature.RFormulaModel
- load(String) - Static method in class org.apache.spark.ml.feature.RobustScaler
- load(String) - Static method in class org.apache.spark.ml.feature.RobustScalerModel
- load(String) - Static method in class org.apache.spark.ml.feature.SQLTransformer
- load(String) - Static method in class org.apache.spark.ml.feature.StandardScaler
- load(String) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
- load(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
- load(String) - Static method in class org.apache.spark.ml.feature.StringIndexer
- load(String) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
- load(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
- load(String) - Static method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- load(String) - Static method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- load(String) - Static method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- load(String) - Static method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- load(String) - Static method in class org.apache.spark.ml.feature.VectorAssembler
- load(String) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexer
- load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
- load(String) - Static method in class org.apache.spark.ml.feature.VectorSizeHint
- load(String) - Static method in class org.apache.spark.ml.feature.VectorSlicer
- load(String) - Static method in class org.apache.spark.ml.feature.Word2Vec
- load(String) - Static method in class org.apache.spark.ml.feature.Word2VecModel
- load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowth
- load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
- load(String) - Static method in class org.apache.spark.ml.Pipeline
- load(String) - Static method in class org.apache.spark.ml.PipelineModel
- load(String) - Static method in class org.apache.spark.ml.r.RWrappers
- load(String) - Static method in class org.apache.spark.ml.recommendation.ALS
- load(String) - Static method in class org.apache.spark.ml.recommendation.ALSModel
- load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- load(String) - Static method in class org.apache.spark.ml.regression.FMRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.FMRegressor
- load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
- load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
- load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
- load(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
- load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidator
- load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
- load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
- load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- load(String) - Method in interface org.apache.spark.ml.util.MLReadable
-
Reads an ML instance from the input path, a shortcut of
read.load(path)
. - load(String) - Method in class org.apache.spark.ml.util.MLReader
-
Loads the ML component from the input path.
- load(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads input in as a
DataFrame
, for data sources that require a path (e.g. - load(String) - Method in class org.apache.spark.sql.DataFrameReader
- load(String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().load(path)
. - load(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads input in as a
DataFrame
, for data streams that read from some path. - load(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads input in as a
DataFrame
, for data sources that support multiple paths. - load(String...) - Method in class org.apache.spark.sql.DataFrameReader
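The DataFrameReader variants above compose as format, then options, then load; a hedged spark-shell sketch (paths and options are illustrative):

    // Single path:
    val df1 = spark.read.format("csv").option("header", "true").load("/tmp/people.csv")
    // Multiple paths via the varargs overload:
    val df2 = spark.read.format("parquet").load("/tmp/day1.parquet", "/tmp/day2.parquet")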
- load(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().format(source).load(path)
. - load(String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().format(source).options(options).load()
. - load(String, SparkContext, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Deprecated. Use load with SparkSession. Since 4.0.0.
- load(String, SQLConf) - Static method in class org.apache.spark.sql.connector.catalog.Catalogs
-
Load and configure a catalog by name.
- load(String, SparkSession, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
- load(String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().format(source).schema(schema).options(options).load()
. - load(String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().format(source).schema(schema).options(options).load()
. - load(String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by
read().format(source).options(options).load()
. - load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
- load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.SVMModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.LocalLDAModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.Word2VecModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.FPGrowthModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.PrefixSpanModel
- load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Load a model from the given path.
- load(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LassoModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
- load(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Loader
-
Load a model from the given path.
- load(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- load(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads input in as a
DataFrame
, for data sources that support multiple paths. - load(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
- loadClass(String, boolean) - Method in class org.apache.spark.util.ChildFirstURLClassLoader
- loadClass(String, boolean) - Method in class org.apache.spark.util.ParentClassLoader
- loadData(SparkContext, String, String) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Helper method for loading GLM classification model data.
- loadData(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Helper method for loading GLM regression model data.
- loadDataInputPathNotExistError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDataNotSupportedForDatasourceTablesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDataNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDataPartitionSizeNotMatchNumPartitionColumnsError(String, int, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDataTargetTableNotPartitionedButPartitionSpecWasProvidedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDataWithoutPartitionSpecProvidedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- loadDefaultSparkProperties(SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
Load default Spark properties from the given file.
- loadDefaultStopWords(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
Loads the default stop words for the given language.
- Loader<M extends Saveable> - Interface in org.apache.spark.mllib.util
-
Trait for classes which can load models and transformers from files.
- loadExtensions(Class<T>, Seq<String>, SparkConf) - Static method in class org.apache.spark.util.Utils
-
Create instances of extension classes.
- loadFunction(CatalogPlugin, Identifier) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- loadFunction(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- loadFunction(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.FunctionCatalog
-
Load a function by
identifier
from the catalog. - loadHiveClientCausesNoClassDefFoundError(NoClassDefFoundError, Seq<URL>, String, InvocationTargetException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- loadImpl(String, SparkSession, String, String) - Static method in class org.apache.spark.ml.tree.EnsembleModelReadWrite
-
Helper method for loading a tree ensemble from disk.
- loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
- loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
- LoadInstanceEnd<T> - Class in org.apache.spark.ml
-
Event fired after
MLReader.load
. - LoadInstanceEnd() - Constructor for class org.apache.spark.ml.LoadInstanceEnd
- LoadInstanceStart<T> - Class in org.apache.spark.ml
-
Event fired before
MLReader.load
. - LoadInstanceStart(String) - Constructor for class org.apache.spark.ml.LoadInstanceStart
- loadIvySettings(String, Option<String>, Option<String>, PrintStream) - Static method in class org.apache.spark.util.MavenUtils
-
Load Ivy settings from a given filename, using supplied resolvers
- loadLabeledPoints(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled points saved using
RDD[LabeledPoint].saveAsTextFile
with the default number of partitions. - loadLabeledPoints(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled points saved using
RDD[LabeledPoint].saveAsTextFile
. - loadLibSVMFile(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with number of features determined automatically and the default number of partitions.
- loadLibSVMFile(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of partitions.
- loadLibSVMFile(SparkContext, String, int, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
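A minimal sketch of loadLibSVMFile, assuming an active SparkContext sc; the path and feature count are illustrative:

    import org.apache.spark.mllib.util.MLUtils

    // Number of features is inferred automatically with an extra pass over the data.
    val points = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

    // Passing numFeatures explicitly skips that inference pass:
    val points2 = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt", 692)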
- loadNamespaceMetadata(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- loadNamespaceMetadata(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Load metadata properties for a namespace.
- loadPartitionMetadata(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Retrieve the partition metadata of the existing partition.
- loadProcedure(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ProcedureCatalog
-
Load a procedure by
identifier
from the catalog. - loadRelation(CatalogPlugin, Identifier) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- loadTable(CatalogPlugin, Identifier, Option<TimeTravelSpec>, Option<String>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- loadTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- loadTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Load table metadata by
identifier
from the catalog. - loadTable(Identifier, long) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- loadTable(Identifier, long) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Load table metadata at a specific time by
identifier
from the catalog. - loadTable(Identifier, String) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- loadTable(Identifier, String) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Load table metadata of a specific version by
identifier
from the catalog. - loadTable(Identifier, Set<TableWritePrivilege>) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Load table metadata by
identifier
from the catalog. - loadTreeNodes(String, org.apache.spark.ml.util.DefaultParamsReader.Metadata, SparkSession) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
Load a decision tree from a file.
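The versioned loadTable overloads above are what SQL time travel resolves through; a sketch assuming a configured v2 catalog named my_catalog with a table db.events (names illustrative):

    // VERSION AS OF resolves through loadTable(ident, version);
    // TIMESTAMP AS OF resolves through loadTable(ident, timestamp).
    spark.sql("SELECT * FROM my_catalog.db.events VERSION AS OF 42")
    spark.sql("SELECT * FROM my_catalog.db.events TIMESTAMP AS OF '2024-01-01 00:00:00'")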
- loadVectors(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads vectors saved using
RDD[Vector].saveAsTextFile
with the default number of partitions. - loadVectors(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads vectors saved using
RDD[Vector].saveAsTextFile
. - loadView(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Load view metadata by
ident
from the catalog. - LOCAL_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- LOCAL_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- LOCAL_CLUSTER_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
- LOCAL_MERGED_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- LOCAL_MERGED_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- LOCAL_MERGED_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- LOCAL_MERGED_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- LOCAL_MERGED_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- LOCAL_MERGED_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- LOCAL_MERGED_CHUNKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- LOCAL_MERGED_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- LOCAL_MERGED_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- LOCAL_N_FAILURES_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
- LOCAL_N_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
- LOCAL_SCHEME() - Static method in class org.apache.spark.util.Utils
-
Scheme used for files that are locally available on worker nodes in the cluster.
- localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- localBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- localCanonicalHostName() - Static method in class org.apache.spark.util.Utils
-
Get the local machine's FQDN.
- localCheckpoint() - Method in class org.apache.spark.rdd.RDD
-
Mark this RDD for local checkpointing using Spark's existing caching layer.
- localCheckpoint() - Method in class org.apache.spark.sql.api.Dataset
-
Eagerly checkpoints a Dataset locally and returns the new Dataset.
- localCheckpoint() - Method in class org.apache.spark.sql.Dataset
- localCheckpoint(boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Locally checkpoints a Dataset and returns the new Dataset.
- localCheckpoint(boolean) - Method in class org.apache.spark.sql.Dataset
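A minimal sketch of local checkpointing, assuming an existing Dataset df and RDD rdd; local checkpoints truncate lineage using executor storage, trading fault tolerance for speed compared to reliable checkpointing:

    val df2 = df.localCheckpoint()       // eager: materialized immediately
    val df3 = df.localCheckpoint(false)  // lazy: materialized by the first action

    rdd.localCheckpoint()                // marks the RDD for local checkpointing;
    rdd.count()                          // an action then triggers the checkpoint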
- LOCALDATE() - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes instances of the
java.time.LocalDate
class to the internal representation of nullable Catalyst's DateType. - LOCALDATETIME() - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes instances of the
java.time.LocalDateTime
class to the internal representation of nullable Catalyst's TimestampNTZType. - localDirs() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
- localDirs() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- locale() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
Locale of the input for case insensitive matching.
- localHostName() - Static method in class org.apache.spark.util.Utils
-
Get the local machine's hostname.
- localHostNameForURI() - Static method in class org.apache.spark.util.Utils
-
Get the local machine's URI.
- LOCALITY() - Static method in class org.apache.spark.status.TaskIndexNames
- LOCALITY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- localitySummary() - Method in class org.apache.spark.status.LiveStage
- LocalKMeans - Class in org.apache.spark.mllib.clustering
-
A utility object for running K-means locally.
- LocalKMeans() - Constructor for class org.apache.spark.mllib.clustering.LocalKMeans
- LocalLDAModel - Class in org.apache.spark.ml.clustering
-
Local (non-distributed) model fitted by
LDA
. - LocalLDAModel - Class in org.apache.spark.mllib.clustering
-
Local LDA model.
- localMergedBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- localMergedBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- localMergedBytesRead() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- localMergedBytesRead() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- localMergedChunksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- localMergedChunksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- LocalScan - Interface in org.apache.spark.sql.connector.read
-
A special Scan which will happen on Driver locally instead of Executors.
- localSeqToDatasetHolder(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
-
Creates a
Dataset
from a local Seq. - localSparkRPackagePath() - Static method in class org.apache.spark.api.r.RUtils
-
Get the SparkR package path in the local spark distribution.
- localtimestamp() - Static method in class org.apache.spark.sql.functions
-
Returns the current timestamp without time zone at the start of query evaluation as a timestamp without time zone column.
- locate(String, Column) - Static method in class org.apache.spark.sql.functions
-
Locate the position of the first occurrence of substr.
- locate(String, Column, int) - Static method in class org.apache.spark.sql.functions
-
Locate the position of the first occurrence of substr in a string column, after position pos.
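A minimal sketch of locate, assuming a DataFrame df with a string column s; positions are 1-based and 0 means not found:

    import org.apache.spark.sql.functions._

    df.select(
      locate("ab", col("s")),     // position of the first "ab" in s
      locate("ab", col("s"), 3))  // first "ab" at or after position 3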
- location() - Method in interface org.apache.spark.scheduler.MapStatus
-
Location where this task output is.
- location() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- location() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
- locationAlreadyExists(TableIdentifier, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- locations() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
- locationUri() - Method in class org.apache.spark.sql.catalog.Database
- lockName() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- lockOwnerName() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- log(double, String) - Static method in class org.apache.spark.sql.functions
-
Returns the first argument-base logarithm of the second argument.
- log(double, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the first argument-base logarithm of the second argument.
- log(String) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given column.
- log(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given value.
- log(Function0<Parsers.Parser<T>>, String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- LOG_SCHEMA() - Static method in class org.apache.spark.util.LogUtils
-
Schema for structured Spark logs.
- Log$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
- log10(String) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 10.
- log10(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 10.
- log1p(String) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given column plus one.
- log1p(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given value plus one.
- log1pExp(double) - Static method in class org.apache.spark.ml.impl.Utils
-
When
x
is positive and large, computing math.log(1 + math.exp(x))
will lead to arithmetic overflow. - log2(String) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 2.
- log2(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given column in base 2.
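A minimal sketch of the logarithm functions above, assuming a DataFrame df with a numeric column x; the final definition shows the standard rewrite behind log1pExp:

    import org.apache.spark.sql.functions._

    df.select(
      log(col("x")),       // natural logarithm
      log(2.0, col("x")),  // base-2 logarithm via the (base, column) overload
      log10(col("x")),
      log1p(col("x")),     // ln(1 + x), accurate for small x
      log2(col("x")))

    // log1pExp avoids overflow via log(1 + e^x) = x + log1p(e^(-x)) for x > 0:
    def log1pExp(x: Double): Double =
      if (x > 0) x + math.log1p(math.exp(-x)) else math.log1p(math.exp(x))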
- logDeprecationWarning(String) - Static method in class org.apache.spark.SparkConf
-
Logs a warning message if the given config key is deprecated.
- logEvent() - Method in interface org.apache.spark.ml.MLEvent
- logEvent() - Method in interface org.apache.spark.scheduler.SparkListenerEvent
- logEvent(MLEvent) - Method in interface org.apache.spark.ml.MLEvents
-
Log
MLEvent
to send. - LOGGING_INTERVAL() - Static method in class org.apache.spark.scheduler.AsyncEventQueue
- LogicalDistributions - Class in org.apache.spark.sql.connector.distributions
- LogicalDistributions() - Constructor for class org.apache.spark.sql.connector.distributions.LogicalDistributions
- LogicalExpressions - Class in org.apache.spark.sql.connector.expressions
-
Helper methods for working with the logical expressions API.
- LogicalExpressions() - Constructor for class org.apache.spark.sql.connector.expressions.LogicalExpressions
- logicalPlanForViewNotAnalyzedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- LogicalWriteInfo - Interface in org.apache.spark.sql.connector.write
-
This interface contains logical write information that data sources can use when generating a
WriteBuilder
. - LogisticGradient - Class in org.apache.spark.mllib.optimization
-
Compute gradient and loss for a multinomial logistic loss function, as used in multi-class classification (it is also used in binary logistic regression).
- LogisticGradient() - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
- LogisticGradient(int) - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
- LogisticRegression - Class in org.apache.spark.ml.classification
-
Logistic regression.
- LogisticRegression() - Constructor for class org.apache.spark.ml.classification.LogisticRegression
- LogisticRegression(String) - Constructor for class org.apache.spark.ml.classification.LogisticRegression
- LogisticRegressionDataGenerator - Class in org.apache.spark.mllib.util
-
Generate test data for LogisticRegression.
- LogisticRegressionDataGenerator() - Constructor for class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
- LogisticRegressionModel - Class in org.apache.spark.ml.classification
-
Model produced by
LogisticRegression
. - LogisticRegressionModel - Class in org.apache.spark.mllib.classification
-
Classification model trained using Multinomial/Binary Logistic Regression.
- LogisticRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Constructs a
LogisticRegressionModel
with weights and intercept for binary classification. - LogisticRegressionModel(Vector, double, int, int) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
- LogisticRegressionParams - Interface in org.apache.spark.ml.classification
-
Params for logistic regression.
- LogisticRegressionSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for logistic regression results for a given model.
- LogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
-
Multiclass logistic regression results for a given model.
- LogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- LogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multiclass logistic regression training results.
- LogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Multiclass logistic regression training results.
- LogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.LogisticRegressionTrainingSummaryImpl
- LogisticRegressionWithLBFGS - Class in org.apache.spark.mllib.classification
-
Train a classification model for Multinomial/Binary Logistic Regression using Limited-memory BFGS.
- LogisticRegressionWithLBFGS() - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
- LogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
-
Train a classification model for Binary Logistic Regression using Stochastic Gradient Descent.
- Logit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
- logLevel() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- logLevel() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorLogLevel
- logLevel() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorsLogLevel
- logLikelihood() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
- logLikelihood() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
- logLikelihood() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- logLikelihood() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
- logLikelihood(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of
logLikelihood
- logLikelihood(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Calculates a lower bound on the log likelihood of the entire corpus.
- logLikelihood(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Calculates a lower bound on the log likelihood of the entire corpus.
- logLoss(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the log-loss, aka logistic loss or cross-entropy loss.
- LogLoss - Class in org.apache.spark.mllib.tree.loss
-
Class for log loss calculation (for classification).
- LogLoss() - Constructor for class org.apache.spark.mllib.tree.loss.LogLoss
- LogNormalGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d. samples from a log normal distribution with the given mean and standard deviation.
- LogNormalGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.LogNormalGenerator
- logNormalGraph(SparkContext, int, int, double, double, long) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Generate a graph whose vertex out degree distribution is log normal.
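logNormalGraph above and the RandomRDDs.logNormal* generators below all sample from log-normal distributions; a minimal sketch of the RDD generators, assuming an active SparkContext sc:

    import org.apache.spark.mllib.random.RandomRDDs

    // 10000 i.i.d. samples with mean 0.0 and std 1.0, in 4 partitions, seed 11.
    val samples = RandomRDDs.logNormalRDD(sc, 0.0, 1.0, 10000L, 4, 11L)

    // 1000 vectors of 10 i.i.d. samples each.
    val vectors = RandomRDDs.logNormalVectorRDD(sc, 0.0, 1.0, 1000L, 10, 4, 11L)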
- logNormalJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaRDD
with the default number of partitions and the default seed. - logNormalJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaRDD
with the default seed. - logNormalJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of
RandomRDDs.logNormalRDD
. - logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaVectorRDD
with the default number of partitions and the default seed. - logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaVectorRDD
with the default seed. - logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of
RandomRDDs.logNormalVectorRDD
. - logNormalRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of
i.i.d.
samples from the log normal distribution with the input mean and standard deviation. - logNormalVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing
i.i.d.
samples drawn from a log normal distribution. - logpdf(Vector) - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
Returns the log-density of this multivariate Gaussian at the given point x.
- logpdf(Vector) - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
Returns the log-density of this multivariate Gaussian at the given point x.
- logPerplexity(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of
logPerplexity
- logPerplexity(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Calculate an upper bound on perplexity.
- logPerplexity(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Calculate an upper bound on perplexity.
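A minimal sketch of the LDAModel bounds above, assuming a dataset with a "features" vector column:

    import org.apache.spark.ml.clustering.LDA

    val model = new LDA().setK(10).setMaxIter(20).fit(dataset)
    val ll = model.logLikelihood(dataset)  // lower bound on the corpus log likelihood
    val lp = model.logPerplexity(dataset)  // upper bound on perplexity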
- logPrior() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- logPrior() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- logResourceInfo(String, Map<String, ResourceInformation>) - Static method in class org.apache.spark.resource.ResourceUtils
- logStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- logStartToJson(SparkListenerLogStart, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- LogStringContext(StringContext) - Static method in class org.apache.spark.graphx.GraphLoader
- LogStringContext(StringContext) - Static method in class org.apache.spark.graphx.lib.PageRank
- LogStringContext(StringContext) - Static method in class org.apache.spark.graphx.Pregel
- LogStringContext(StringContext) - Static method in class org.apache.spark.graphx.util.GraphGenerators
- LogStringContext(StringContext) - Static method in class org.apache.spark.kafka010.KafkaRedactionUtil
- LogStringContext(StringContext) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- LogStringContext(StringContext) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- LogStringContext(StringContext) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
- LogStringContext(StringContext) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.r.RWrapperUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.recommendation.ALS
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.stat.Summarizer
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
- LogStringContext(StringContext) - Static method in class org.apache.spark.ml.util.DatasetUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.clustering.LocalKMeans
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClustering
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.fpm.PrefixSpan
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.linalg.BLAS
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.optimization.LBFGS
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.tree.DecisionTree
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.tree.RandomForest
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.util.DataValidators
- LogStringContext(StringContext) - Static method in class org.apache.spark.mllib.util.MLUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.rdd.HadoopRDD
- LogStringContext(StringContext) - Static method in class org.apache.spark.resource.ResourceProfile
- LogStringContext(StringContext) - Static method in class org.apache.spark.resource.ResourceUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.scheduler.StatsReportListener
- LogStringContext(StringContext) - Static method in class org.apache.spark.security.CryptoStreamUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- LogStringContext(StringContext) - Static method in class org.apache.spark.serializer.SerializationDebugger
- LogStringContext(StringContext) - Static method in class org.apache.spark.serializer.SerializerHelper
- LogStringContext(StringContext) - Static method in class org.apache.spark.SparkConf
- LogStringContext(StringContext) - Static method in class org.apache.spark.SparkContext
- LogStringContext(StringContext) - Static method in class org.apache.spark.SparkEnv
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.artifact.ArtifactManager
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.avro.AvroUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.avro.SchemaConverters
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.SparkSession
- LogStringContext(StringContext) - Static method in class org.apache.spark.sql.types.UDTRegistration
- LogStringContext(StringContext) - Static method in class org.apache.spark.status.KVUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.storage.StorageUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.streaming.CheckpointReader
- LogStringContext(StringContext) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- LogStringContext(StringContext) - Static method in class org.apache.spark.streaming.util.RawTextSender
- LogStringContext(StringContext) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.ui.JettyUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.ui.UIUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.AccumulatorContext
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.ClosureCleaner
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.DependencyUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.HadoopFSUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.MavenUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.ShutdownHookManager
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.SignalUtils
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.SizeEstimator
- LogStringContext(StringContext) - Static method in class org.apache.spark.util.Utils
- logTuningParams(org.apache.spark.ml.util.Instrumentation) - Method in interface org.apache.spark.ml.tuning.ValidatorParams
-
Instrumentation logging for tuning params including the inner estimator and evaluator info.
- logUncaughtExceptions(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Execute the given block, logging and re-throwing any uncaught exception.
- logUrlInfo() - Method in class org.apache.spark.scheduler.MiscellaneousProcessDetails
- logUrlMap() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- logUrls() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- LogUtils - Class in org.apache.spark.util
-
:: DeveloperApi :: Utils for querying Spark logs with Spark SQL.
- LogUtils() - Constructor for class org.apache.spark.util.LogUtils
- LONG - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- LONG() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable long type.
- LONG_STR - Static variable in class org.apache.spark.types.variant.VariantUtil
- longAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a long accumulator, which starts with 0 and accumulates inputs by
add
. - longAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a long accumulator, which starts with 0 and accumulates inputs by
add
. - LongAccumulator - Class in org.apache.spark.util
-
An
accumulator
for computing sum, count, and average of 64-bit integers. - LongAccumulator() - Constructor for class org.apache.spark.util.LongAccumulator
- LongAccumulatorSource - Class in org.apache.spark.metrics.source
- LongAccumulatorSource() - Constructor for class org.apache.spark.metrics.source.LongAccumulatorSource
- longColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
- LongExactNumeric - Class in org.apache.spark.sql.types
- LongExactNumeric() - Constructor for class org.apache.spark.sql.types.LongExactNumeric
- LongParam - Class in org.apache.spark.ml.param
-
Specialized version of
Param[Long]
for Java. - LongParam(String, String, String) - Constructor for class org.apache.spark.ml.param.LongParam
- LongParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.LongParam
- LongParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.LongParam
- LongParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.LongParam
- LongType - Class in org.apache.spark.sql.types
-
The data type representing
Long
values. - LongType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the LongType object.
- LongType() - Constructor for class org.apache.spark.sql.types.LongType
- LongTypeExpression - Class in org.apache.spark.sql.types
- LongTypeExpression() - Constructor for class org.apache.spark.sql.types.LongTypeExpression
- lookup(K) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the list of values in the RDD for key
key
. - lookup(K) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return the list of values in the RDD for key
key
. - LookupCatalog - Interface in org.apache.spark.sql.connector.catalog
-
A trait to encapsulate catalog lookup function and helpful extractors.
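A minimal sketch of the lookup(K) methods above, assuming an active SparkContext sc; if the RDD has a known Partitioner, only the partition owning the key is scanned:

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    pairs.lookup("a")  // Seq(1, 2)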
- LookupCatalog.AsTableIdentifier - Class in org.apache.spark.sql.connector.catalog
-
Extract legacy table identifier from a multi-part identifier.
- LookupCatalog.AsTableIdentifier$ - Class in org.apache.spark.sql.connector.catalog
-
Extract legacy table identifier from a multi-part identifier.
- LookupCatalog.CatalogAndIdentifier - Class in org.apache.spark.sql.connector.catalog
-
Extract catalog and identifier from a multi-part name with the current catalog if needed.
- LookupCatalog.CatalogAndIdentifier$ - Class in org.apache.spark.sql.connector.catalog
-
Extract catalog and identifier from a multi-part name with the current catalog if needed.
- LookupCatalog.CatalogAndNamespace - Class in org.apache.spark.sql.connector.catalog
-
Extract catalog and namespace from a multi-part name with the current catalog if needed.
- LookupCatalog.CatalogAndNamespace$ - Class in org.apache.spark.sql.connector.catalog
-
Extract catalog and namespace from a multi-part name with the current catalog if needed.
- LookupCatalog.NonSessionCatalogAndIdentifier - Class in org.apache.spark.sql.connector.catalog
-
Extract non-session catalog and identifier from a multi-part identifier.
- LookupCatalog.NonSessionCatalogAndIdentifier$ - Class in org.apache.spark.sql.connector.catalog
-
Extract non-session catalog and identifier from a multi-part identifier.
- LookupCatalog.SessionCatalogAndIdentifier - Class in org.apache.spark.sql.connector.catalog
-
Extract session catalog and identifier from a multi-part identifier.
- LookupCatalog.SessionCatalogAndIdentifier$ - Class in org.apache.spark.sql.connector.catalog
-
Extract session catalog and identifier from a multi-part identifier.
- lookupRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC remote endpoint lookup.
- loss() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
The current loss value of this aggregator.
- loss() - Method in interface org.apache.spark.ml.param.shared.HasLoss
-
Param for the loss function to be optimized.
- loss() - Method in class org.apache.spark.ml.regression.LinearRegression
- loss() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- loss() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
-
The loss function to be optimized.
- loss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- loss(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LossFunction
-
Returns the value of loss function.
- Loss - Interface in org.apache.spark.mllib.tree.loss
-
Trait for adding "pluggable" loss functions for the gradient boosting algorithm.
- Losses - Class in org.apache.spark.mllib.tree.loss
- Losses() - Constructor for class org.apache.spark.mllib.tree.loss.Losses
- LossFunction - Interface in org.apache.spark.ml.ann
-
Trait for loss function
- LossReasonPending - Class in org.apache.spark.scheduler
-
A loss reason that means we don't yet know why the executor exited.
- LossReasonPending() - Constructor for class org.apache.spark.scheduler.LossReasonPending
- lossSum() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
- lossType() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- lossType() - Method in class org.apache.spark.ml.classification.GBTClassifier
- lossType() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- lossType() - Method in class org.apache.spark.ml.regression.GBTRegressor
- lossType() - Method in interface org.apache.spark.ml.tree.GBTClassifierParams
-
Loss function which GBT tries to minimize.
- lossType() - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
-
Loss function which GBT tries to minimize.
- LOST - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The Spark Submit JVM exited with an unknown status.
- LOST() - Static method in class org.apache.spark.TaskState
- low() - Method in class org.apache.spark.partial.BoundedDouble
- lower() - Method in class org.apache.spark.ml.feature.RobustScaler
- lower() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- lower() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
-
Lower quantile used to compute the quantile range, shared by all features. Default: 0.25
- lower(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a string column to lower case.
- lowerBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegression
- lowerBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- lowerBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
The lower bounds on coefficients if fitting under bound constrained optimization.
- lowerBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegression
- lowerBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- lowerBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
The lower bounds on intercepts if fitting under bound constrained optimization.
- lowerCaseName() - Method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- LowPrioritySQLImplicits - Interface in org.apache.spark.sql
-
Lower priority implicit methods for converting Scala objects into
Dataset
s. - lpad(Column, int, byte[]) - Static method in class org.apache.spark.sql.functions
-
Left-pad the binary column with pad to a byte length of len.
- lpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
-
Left-pad the string column with pad to a length of len.
- LSHParams - Interface in org.apache.spark.ml.feature
-
Params for
LSH
. - lt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if value is less than upperBound.
- lt(Object) - Method in class org.apache.spark.sql.Column
-
Less than.
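Column.lt is the method form of the < operator; a minimal sketch assuming a DataFrame df with an age column:

    import org.apache.spark.sql.functions.col

    df.filter(col("age").lt(21))
    df.filter(col("age") < 21)  // equivalent operator syntax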
- lt(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- lt(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- lteq(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- ltEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if value is less than or equal to upperBound.
- ltrim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from left end for the specified string value.
- ltrim(Column, String) - Static method in class org.apache.spark.sql.functions
-
Trim the specified character string from left end for the specified string column.
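A minimal sketch of the string helpers above (lower, lpad, ltrim), assuming a DataFrame df with string columns name, id, and raw:

    import org.apache.spark.sql.functions._

    df.select(
      lower(col("name")),       // "Ada" -> "ada"
      lpad(col("id"), 8, "0"),  // left-pad to length 8 with '0'
      ltrim(col("raw")),        // strip leading spaces
      ltrim(col("raw"), "#"))   // strip leading '#' characters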
- LZ4CompressionCodec - Class in org.apache.spark.io
-
:: DeveloperApi :: LZ4 implementation of
CompressionCodec
. - LZ4CompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZ4CompressionCodec
- LZFCompressionCodec - Class in org.apache.spark.io
-
:: DeveloperApi :: LZF implementation of
CompressionCodec
. - LZFCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZFCompressionCodec
M
- MAGIC_METHOD_NAME - Static variable in interface org.apache.spark.sql.connector.catalog.functions.ScalarFunction
- main(String[]) - Static method in class org.apache.spark.ml.param.shared.SharedParamsCodeGen
- main(String[]) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
- main(String[]) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
- main(String[]) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
- main(String[]) - Static method in class org.apache.spark.mllib.util.MFDataGenerator
- main(String[]) - Static method in class org.apache.spark.mllib.util.SVMDataGenerator
- main(String[]) - Static method in class org.apache.spark.streaming.util.RawTextSender
- main(String[]) - Static method in class org.apache.spark.ui.UIWorkloadGenerator
- main(String[]) - Method in interface org.apache.spark.util.CommandLineUtils
- majorMinorPatchVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Extracts the major, minor and patch parts from the input
version
. - majorMinorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the (major version number, minor version number).
- majorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the major version number.
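A minimal sketch of the VersionUtils helpers above:

    import org.apache.spark.util.VersionUtils

    VersionUtils.majorVersion("3.5.1")            // 3
    VersionUtils.majorMinorVersion("3.5.1")       // (3, 5)
    VersionUtils.majorMinorPatchVersion("3.5.1")  // Some((3, 5, 1))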
- make_date(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
- make_dt_interval() - Static method in class org.apache.spark.sql.functions
-
Make DayTimeIntervalType duration.
- make_dt_interval(Column) - Static method in class org.apache.spark.sql.functions
-
Make DayTimeIntervalType duration from days.
- make_dt_interval(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make DayTimeIntervalType duration from days and hours.
- make_dt_interval(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make DayTimeIntervalType duration from days, hours and mins.
- make_dt_interval(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make DayTimeIntervalType duration from days, hours, mins and secs.
- make_interval() - Static method in class org.apache.spark.sql.functions
-
Make interval.
- make_interval(Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years.
- make_interval(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years and months.
- make_interval(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years, months and weeks.
- make_interval(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years, months, weeks and days.
- make_interval(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years, months, weeks, days and hours.
- make_interval(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years, months, weeks, days, hours and mins.
- make_interval(Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make interval from years, months, weeks, days, hours, mins and secs.
- make_timestamp(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Create timestamp from years, months, days, hours, mins and secs fields.
- make_timestamp(Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Create timestamp from years, months, days, hours, mins, secs and timezone fields.
- make_timestamp_ltz(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Create the current timestamp with local time zone from years, months, days, hours, mins and secs fields.
- make_timestamp_ltz(Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Create the current timestamp with local time zone from years, months, days, hours, mins, secs and timezone fields.
- make_timestamp_ntz(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Create local date-time from years, months, days, hours, mins, secs fields.
- make_ym_interval() - Static method in class org.apache.spark.sql.functions
-
Make year-month interval.
- make_ym_interval(Column) - Static method in class org.apache.spark.sql.functions
-
Make year-month interval from years.
- make_ym_interval(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Make year-month interval from years, months.
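A minimal sketch of the interval and timestamp constructors above, assuming a DataFrame df and using literal columns:

    import org.apache.spark.sql.functions._

    df.select(
      make_interval(lit(1), lit(2)),     // 1 year 2 months
      make_dt_interval(lit(3), lit(4)),  // 3 days 4 hours
      make_timestamp(lit(2024), lit(1), lit(15),
        lit(10), lit(30), lit(0)))       // 2024-01-15 10:30:00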
- makeBinarySearch(Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.util.CollectionsUtils
- makeDescription(String, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns HTML rendering of a job or stage description.
- makeDriverRef(String, SparkConf, RpcEnv) - Static method in class org.apache.spark.util.RpcUtils
-
Retrieve a
RpcEndpointRef
which is located in the driver via its name. - makeHref(boolean, String, String) - Static method in class org.apache.spark.ui.UIUtils
-
Return the correct Href after checking if master is running in the reverse proxy mode or not.
- makeNegative(TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
-
Convert all the metric values to negative as well as handle zero values.
- makeProgressBar(int, int, int, int, Map<String, Object>, int) - Static method in class org.apache.spark.ui.UIUtils
- makeRDD(Seq<Tuple2<T, Seq<String>>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD, with one or more location preferences (hostnames of Spark nodes) for each object.
- makeRDD(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD.
- malformedCharacterCoding(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedCSVRecordError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedJSONError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedProtobufMessageDetectedInMessageParsingError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedRecordsDetectedInRecordParsingError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedRecordsDetectedInSchemaInferenceError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedRecordsDetectedInSchemaInferenceError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- malformedVariant() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- map(Function<T, R>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- map(Function<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream.
- map(MapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset that contains the result of applying
func
to each element. - map(MapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
- map(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new map column.
- map(DataType, DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new
StructField
of type map. - map(MapType) - Method in class org.apache.spark.sql.ColumnName
- map(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new map column.
- map(Function1<Object, Object>) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Map the values of this matrix using a function.
- map(Function1<Object, Object>) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Map the values of this matrix using a function.
- map(Function1<R, T>) - Method in class org.apache.spark.partial.PartialResult
-
Transform this PartialResult into a PartialResult of type T.
- map(Function1<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset that contains the result of applying
func
to each element. - map(Function1<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
- map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to all elements of this RDD.
- map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream by applying a function to all elements of this DStream.
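A minimal sketch of element-wise map on an RDD and a Dataset, assuming an active SparkSession spark; Dataset.map needs an Encoder for the result type, supplied here by spark.implicits._:

    import spark.implicits._

    val lengths = spark.sparkContext.parallelize(Seq("a", "bb", "ccc")).map(_.length)

    val ds = Seq("a", "bb", "ccc").toDS()
    val dsLengths = ds.map(_.length)  // Dataset[Int]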
- map_concat(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the union of all the given maps.
- map_concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the union of all the given maps.
- map_contains_key(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns true if the map contains the key.
- map_entries(Column) - Static method in class org.apache.spark.sql.functions
-
Returns an unordered array of all entries in the given map.
- map_filter(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns a map whose key-value pairs satisfy a predicate.
- map_from_arrays(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new map column.
- map_from_entries(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a map created from the given array of entries.
- map_keys(Column) - Static method in class org.apache.spark.sql.functions
-
Returns an unordered array containing the keys of the map.
- map_values(Column) - Static method in class org.apache.spark.sql.functions
-
Returns an unordered array containing the values of the map.
- map_zip_with(Column, Column, Function3<Column, Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Merge two given maps, key-wise into a single map using a function.
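A minimal sketch of the map functions above, assuming a DataFrame df with map columns m, m1, m2 and array columns ks, vs:

    import org.apache.spark.sql.functions._

    df.select(
      map_from_arrays(col("ks"), col("vs")),  // build a map from key/value arrays
      map_keys(col("m")),                     // unordered array of keys
      map_values(col("m")),                   // unordered array of values
      map_contains_key(col("m"), "a"),
      map_filter(col("m"), (k, v) => v > 0),  // keep entries whose value is positive
      map_zip_with(col("m1"), col("m2"), (k, v1, v2) => v1 + v2))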
- mapAsSerializableJavaMap(Map<A, B>) - Static method in class org.apache.spark.api.java.JavaUtils
- mapDataKeyArrayLengthDiffersFromValueArrayLengthError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- mapEdgePartitions(Function2<Object, EdgePartition<ED, VD>, EdgePartition<ED2, VD2>>, ClassTag<ED2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- mapEdges(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute in the graph using the map function.
- mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it a whole partition at a time.
- mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- mapFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
Util JSON deserialization methods.
- MapFunction<T, U> - Interface in org.apache.spark.api.java.function
-
Base interface for a map function used in Dataset's map function.
- mapGroups(MapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data.
- mapGroups(MapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroups(Function2<K, Iterator<V>, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data.
- mapGroups(Function2<K, Iterator<V>, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
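A minimal sketch of the Scala-specific mapGroups, counting words per first letter; mapGroups receives each key together with an iterator over that group's values:

    import spark.implicits._

    val words = Seq("apple", "avocado", "banana").toDS()
    val counts = words
      .groupByKey(_.substring(0, 1))
      .mapGroups((letter, group) => (letter, group.size))  // e.g. ("a", 2)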
- MapGroupsFunction<K, V, R> - Interface in org.apache.spark.api.java.function
-
Base interface for a map function used in GroupedDataset's mapGroup function.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout, KeyValueGroupedDataset) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout, KeyValueGroupedDataset<K, S>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroupsWithState(GroupStateTimeout, KeyValueGroupedDataset, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(GroupStateTimeout, KeyValueGroupedDataset<K, S>, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroupsWithState(GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapGroupsWithState(Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state.
- mapGroupsWithState(Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
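A minimal sketch of the Scala-specific mapGroupsWithState, keeping a running count per key across streaming micro-batches; events is an assumed Dataset[String], and a streaming query over the result must use update output mode:

    import org.apache.spark.sql.streaming.GroupState
    import spark.implicits._

    def updateCount(key: String, values: Iterator[String],
        state: GroupState[Long]): (String, Long) = {
      val count = state.getOption.getOrElse(0L) + values.size
      state.update(count)  // persist the new state for the next batch
      (key, count)
    }

    val counts = events.groupByKey(identity).mapGroupsWithState(updateCount _)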
- MapGroupsWithStateFunction<K, V, S, R> - Interface in org.apache.spark.api.java.function
-
:: Experimental :: Base interface for a map function used in
KeyValueGroupedDataset.mapGroupsWithState(MapGroupsWithStateFunction, org.apache.spark.sql.Encoder, org.apache.spark.sql.Encoder)
- mapId() - Method in class org.apache.spark.FetchFailed
- mapId() - Method in interface org.apache.spark.scheduler.MapStatus
-
The unique ID of this shuffle map task; if spark.shuffle.useOldFetchProtocol is enabled, the task's partitionId is used, otherwise taskContext.taskAttemptId.
- mapId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
- mapId() - Method in class org.apache.spark.storage.ShuffleBlockId
- mapId() - Method in class org.apache.spark.storage.ShuffleChecksumBlockId
- mapId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
- mapId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
- mapIndex() - Method in class org.apache.spark.FetchFailed
- mapIndex() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion
- mapIndex() - Method in class org.apache.spark.storage.ShufflePushBlockId
- MapOutputCommitMessage - Class in org.apache.spark.shuffle.api.metadata
-
:: Private :: Represents the result of writing map outputs for a shuffle map task.
- MapOutputMetadata - Interface in org.apache.spark.shuffle.api.metadata
-
:: Private :: An opaque metadata tag for registering the result of committing the output of a shuffle map task.
- mapOutputTracker() - Method in class org.apache.spark.SparkEnv
- MapOutputTrackerMasterMessage - Interface in org.apache.spark
- MapOutputTrackerMessage - Interface in org.apache.spark
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
- mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitions(MapPartitionsFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset that contains the result of applying f to each partition.
- mapPartitions(MapPartitionsFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
- mapPartitions(Function1<Iterator<T>, Iterator<S>>, boolean, ClassTag<S>) - Method in class org.apache.spark.rdd.RDDBarrier
-
:: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage.
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, Encoder<U>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset that contains the result of applying func to each partition.
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
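As a minimal Scala sketch of the overload above, assuming a Dataset[String] named ds and spark.implicits._ in scope; mapPartitions is typically used so that expensive setup runs once per partition rather than once per row:

    val lengths = ds.mapPartitions { rows =>
      // any expensive, partition-scoped setup would go here
      rows.map(_.length)
    }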
- MapPartitionsFunction<T, U> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's mapPartitions.
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying an evaluator to each partition of this RDD.
- mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDDBarrier
-
Return a new RDD by applying an evaluator to each partition of the wrapped RDD.
- mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<S>>, boolean, ClassTag<S>) - Method in class org.apache.spark.rdd.RDDBarrier
-
:: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, while tracking the index of the original partition.
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
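A minimal Scala sketch of the RDD overload above, tagging each element with the index of the partition it came from; the RDD[String] named rdd is an assumption for the example:

    val tagged = rdd.mapPartitionsWithIndex { (idx, it) =>
      it.map(s => s"partition $idx: $s")
    }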
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.HadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.NewHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- MappedPoolMemory - Class in org.apache.spark.metrics
- MappedPoolMemory() - Constructor for class org.apache.spark.metrics.MappedPoolMemory
- mapper() - Method in interface org.apache.spark.util.JsonUtils
- MapperRowCounter - Class in org.apache.spark.sql.util
-
An AccumulatorV2 counter for collecting a list of (mapper index, row count).
- MapperRowCounter() - Constructor for class org.apache.spark.sql.util.MapperRowCounter
- mapredInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
- mapreduceInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
- mapSideCombine() - Method in class org.apache.spark.ShuffleDependency
- mapSizeExceedArraySizeWhenZipMapError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- MapState<K, V> - Interface in org.apache.spark.sql.streaming
-
Interface used for arbitrary stateful operations with the v2 API to capture map value state.
- MapStatus - Interface in org.apache.spark.scheduler
-
Result returned by a ShuffleMapTask to a scheduler.
- mapStatuses() - Method in class org.apache.spark.ShuffleStatus
-
MapStatus for each partition.
- mapStatusesDeleted() - Method in class org.apache.spark.ShuffleStatus
-
Keep the previously deleted MapStatus entries for recovery.
- mapToDouble(DoubleFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream.
- mapToSeq(KVStoreView<T>, Function1<T, B>) - Static method in class org.apache.spark.status.KVUtils
-
Maps all values of KVStoreView to new values using a transformation function.
- mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it the adjacent vertex attributes as well.
- mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it the adjacent vertex attributes as well.
- mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute a partition at a time using the map function, passing it the adjacent vertex attributes as well.
- mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
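For illustration, a minimal Scala sketch of the triplet-wise overload, recomputing each edge attribute from the attributes of its endpoints; the Graph[Double, Double] named graph is an assumption for the example:

    val weighted = graph.mapTriplets(t => t.srcAttr * t.dstAttr)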
- MapType - Class in org.apache.spark.sql.types
-
The data type for Maps.
- MapType() - Constructor for class org.apache.spark.sql.types.MapType
-
No-arg constructor for kryo.
- MapType(DataType, DataType, boolean) - Constructor for class org.apache.spark.sql.types.MapType
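A minimal Scala sketch using the three-argument constructor above inside a schema; valueContainsNull = true allows null map values (map keys are never null):

    import org.apache.spark.sql.types._

    val schema = StructType(Seq(
      StructField("id", LongType, nullable = false),
      StructField("attrs", MapType(StringType, StringType, valueContainsNull = true))
    ))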
- mapValues(Function<V, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
- mapValues(Function<V, U>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
- mapValues(MapFunction<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data.
- mapValues(MapFunction<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.EdgeRDD
-
Map the values in an edge partition, preserving the structure but changing the values.
- mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- mapValues(Function1<V, U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
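As a minimal Scala sketch of the pair-RDD form above; the RDD[(String, Int)] named pairs is an assumption for the example:

    val bumped = pairs.mapValues(_ + 1)   // no shuffle: keys and partitioning are retained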
- mapValues(Function1<V, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
- mapValues(Function1<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data.
- mapValues(Function1<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
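A minimal Scala sketch of the grouped-dataset form above, projecting values after grouping and then reducing; spark.implicits._ in scope is an assumption for the example:

    val ds = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()
    val sums = ds.groupByKey(_._1)
      .mapValues(_._2)
      .reduceGroups(_ + _)   // Dataset[(String, Int)]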
- mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Maps each vertex attribute, preserving the index.
- mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Maps each vertex attribute, additionally supplying the vertex ID.
- mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, $eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each vertex attribute in the graph using the map function.
- mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, $eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- mapWithState(StateSpec<K, V, StateType, MappedType>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a JavaMapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.
- mapWithState(StateSpec<K, V, StateType, MappedType>, ClassTag<StateType>, ClassTag<MappedType>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a MapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.
- MapWithStateDStream<KeyType, ValueType, StateType, MappedType> - Class in org.apache.spark.streaming.dstream
-
DStream representing the stream of data generated by mapWithState operation on a pair DStream.
- MapWithStateDStream(StreamingContext, ClassTag<MappedType>) - Constructor for class org.apache.spark.streaming.dstream.MapWithStateDStream
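For illustration, a minimal Scala sketch of mapWithState keeping a running count per key; the DStream[(String, Int)] named pairs and a checkpointed StreamingContext are assumptions for the example:

    import org.apache.spark.streaming.{State, StateSpec}

    val spec = StateSpec.function {
      (key: String, value: Option[Int], state: State[Long]) =>
        val sum = state.getOption.getOrElse(0L) + value.getOrElse(0)
        state.update(sum)
        (key, sum)
    }
    val runningCounts = pairs.mapWithState(spec)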
- mark(int) - Method in class org.apache.spark.storage.BufferReleasingInputStream
- MarkRDDBlockAsVisible(RDDBlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.MarkRDDBlockAsVisible
- MarkRDDBlockAsVisible$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.MarkRDDBlockAsVisible$
- markSupported() - Method in class org.apache.spark.storage.BufferReleasingInputStream
- mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Restricts the graph to only the vertices and edges that are also in other, but keeps the attributes from this graph.
- mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- mask(Column) - Static method in class org.apache.spark.sql.functions
-
Masks the given string value.
- mask(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Masks the given string value.
- mask(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Masks the given string value.
- mask(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Masks the given string value.
- mask(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Masks the given string value.
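A minimal Scala sketch of the mask overloads above; the optional columns override the default replacement characters (upper-case letters become X, lower-case become x, digits become n). The DataFrame df with a string column "ssn" is an assumption for the example:

    import org.apache.spark.sql.functions.{col, lit, mask}

    df.select(mask(col("ssn"))).show()                      // default replacements
    df.select(mask(col("ssn"), lit("*"), lit("*"))).show()  // custom upper/lower-case replacements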
- master() - Method in class org.apache.spark.api.java.JavaSparkContext
- master() - Method in class org.apache.spark.SparkContext
- master(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets the Spark master URL to connect to, such as "local" to run locally, "local[4]" to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
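For illustration, a minimal sketch of building a local session with the method above:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[4]")      // run locally with 4 cores
      .appName("example")
      .getOrCreate()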
- MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- matchedFields() - Method in class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
The fields which have matching equivalents in both Avro and Catalyst schemas.
- Matrices - Class in org.apache.spark.ml.linalg
-
Factory methods for Matrix.
- Matrices - Class in org.apache.spark.mllib.linalg
-
Factory methods for Matrix.
- Matrices() - Constructor for class org.apache.spark.ml.linalg.Matrices
- Matrices() - Constructor for class org.apache.spark.mllib.linalg.Matrices
- Matrix - Interface in org.apache.spark.ml.linalg
-
Trait for a local matrix.
- Matrix - Interface in org.apache.spark.mllib.linalg
-
Trait for a local matrix.
- MatrixEntry - Class in org.apache.spark.mllib.linalg.distributed
-
Represents an entry in a distributed matrix.
- MatrixEntry(long, long, double) - Constructor for class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- MatrixFactorizationModel - Class in org.apache.spark.mllib.recommendation
-
Model representing the result of matrix factorization.
- MatrixFactorizationModel(int, RDD<Tuple2<Object, double[]>>, RDD<Tuple2<Object, double[]>>) - Constructor for class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
- MatrixFactorizationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.recommendation
- MatrixImplicits - Class in org.apache.spark.mllib.linalg
- MatrixImplicits() - Constructor for class org.apache.spark.mllib.linalg.MatrixImplicits
- MatrixType() - Static method in class org.apache.spark.ml.linalg.SQLDataTypes
-
Data type for Matrix.
- MavenCoordinate$() - Constructor for class org.apache.spark.util.MavenUtils.MavenCoordinate$
- MavenUtils - Class in org.apache.spark.util
-
Provides utility functions to be used inside SparkSubmit.
- MavenUtils() - Constructor for class org.apache.spark.util.MavenUtils
- MavenUtils.MavenCoordinate$ - Class in org.apache.spark.util
- max() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Returns the maximum element from this RDD as defined by the default comparator (natural order).
- max() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- max() - Method in class org.apache.spark.ml.feature.MinMaxScaler
- max() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- max() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
-
Upper bound after transformation, shared by all features. Default: 1.0.
- max() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Maximum value of each dimension.
- max() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Maximum value of each column.
- max() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- max() - Method in class org.apache.spark.util.StatCounter
- max(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
- max(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the maximum value of the column in a group.
- max(String...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the max value for each numeric column for each group.
- max(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- max(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the maximum element from this RDD as defined by the specified Comparator[T].
- max(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- max(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the maximum value of the expression in a group.
- max(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- max(Duration) - Method in class org.apache.spark.streaming.Duration
- max(Time) - Method in class org.apache.spark.streaming.Time
- max(Seq<String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the max value for each numeric column for each group.
- max(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- max(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the max of this RDD as defined by the implicit Ordering[T].
- max(U, U) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- max(U, U) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- Max - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the maximum value in a group.
- Max(Expression) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Max
- MAX() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- max_by(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the value associated with the maximum value of ord.
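A minimal Scala sketch combining max and max_by in one aggregation; the DataFrame sales with columns "shop", "item", and "amount" is an assumption for the example:

    import org.apache.spark.sql.functions.{col, max, max_by}

    sales.groupBy(col("shop"))
      .agg(max(col("amount")).as("top_amount"),
           max_by(col("item"), col("amount")).as("top_item"))
      .show()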
- MAX_CORES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- MAX_DECIMAL16_PRECISION - Static variable in class org.apache.spark.types.variant.VariantUtil
- MAX_DECIMAL4_PRECISION - Static variable in class org.apache.spark.types.variant.VariantUtil
- MAX_DECIMAL8_PRECISION - Static variable in class org.apache.spark.types.variant.VariantUtil
- MAX_DIR_CREATION_ATTEMPTS() - Static method in class org.apache.spark.util.Utils
- MAX_FEATURES_FOR_NORMAL_SOLVER() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
When using LinearRegression.solver == "normal", the solver must limit the number of features to at most this number.
- MAX_INT_DIGITS() - Static method in class org.apache.spark.sql.types.Decimal
-
Maximum number of decimal digits an Int can represent
- MAX_LONG_DIGITS() - Static method in class org.apache.spark.sql.types.Decimal
-
Maximum number of decimal digits a Long can represent
- MAX_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- MAX_PRECISION() - Static method in class org.apache.spark.sql.types.DecimalType
- MAX_SCALE() - Static method in class org.apache.spark.sql.types.DecimalType
- MAX_SHORT_STR_SIZE - Static variable in class org.apache.spark.types.variant.VariantUtil
- MAX_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- maxAbs() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- MaxAbsScaler - Class in org.apache.spark.ml.feature
-
Rescale each feature individually to range [-1, 1] by dividing through the largest maximum absolute value in each feature.
- MaxAbsScaler() - Constructor for class org.apache.spark.ml.feature.MaxAbsScaler
- MaxAbsScaler(String) - Constructor for class org.apache.spark.ml.feature.MaxAbsScaler
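For illustration, a minimal Scala sketch of fitting and applying the scaler; the DataFrame df with a Vector column "features" is an assumption for the example:

    import org.apache.spark.ml.feature.MaxAbsScaler

    val scaler = new MaxAbsScaler()
      .setInputCol("features")
      .setOutputCol("scaled")
    val scaled = scaler.fit(df).transform(df)   // each feature rescaled to [-1, 1]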
- MaxAbsScalerModel - Class in org.apache.spark.ml.feature
-
Model fitted by MaxAbsScaler.
- MaxAbsScalerParams - Interface in org.apache.spark.ml.feature
-
Params for MaxAbsScaler and MaxAbsScalerModel.
- maxBins() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- maxBins() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- maxBins() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- maxBins() - Method in class org.apache.spark.ml.classification.GBTClassifier
- maxBins() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- maxBins() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- maxBins() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- maxBins() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- maxBins() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- maxBins() - Method in class org.apache.spark.ml.regression.GBTRegressor
- maxBins() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- maxBins() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- maxBins() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node.
- maxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.classification.LinearSVC
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.classification.LogisticRegression
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.clustering.KMeans
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.clustering.KMeansModel
- maxBlockSizeInMB() - Method in interface org.apache.spark.ml.param.shared.HasMaxBlockSizeInMB
-
Param for maximum memory in MB for stacking input data into blocks.
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.regression.LinearRegression
- maxBlockSizeInMB() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- maxBufferSizeMb() - Method in class org.apache.spark.serializer.KryoSerializer
- maxBytes() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxBytes
-
Maximum total size of files to scan.
- maxBytes(long) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- maxCategories() - Method in class org.apache.spark.ml.feature.VectorIndexer
- maxCategories() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- maxCategories() - Method in interface org.apache.spark.ml.feature.VectorIndexerParams
-
Threshold for the number of values a categorical feature can take.
- maxCores() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- maxDepth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- maxDepth() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- maxDepth() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- maxDepth() - Method in class org.apache.spark.ml.classification.GBTClassifier
- maxDepth() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- maxDepth() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- maxDepth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- maxDepth() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- maxDepth() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- maxDepth() - Method in class org.apache.spark.ml.regression.GBTRegressor
- maxDepth() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- maxDepth() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- maxDepth() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Maximum depth of the tree (nonnegative).
- maxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- maxDF() - Method in class org.apache.spark.ml.feature.CountVectorizer
- maxDF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- maxDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Specifies the maximum number of different documents a term could appear in to be included in the vocabulary.
- maxFiles() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxFiles
-
Approximate maximum number of files to scan.
- maxFiles(int) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- maxId() - Static method in class org.apache.spark.ErrorMessageFormat
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- maxId() - Static method in class org.apache.spark.rdd.CheckpointState
- maxId() - Static method in class org.apache.spark.rdd.DeterministicLevel
- maxId() - Static method in class org.apache.spark.RequestMethod
- maxId() - Static method in class org.apache.spark.scheduler.SchedulingMode
- maxId() - Static method in class org.apache.spark.scheduler.TaskLocality
- maxId() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- maxId() - Static method in class org.apache.spark.TaskState
- maxIter() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- maxIter() - Method in class org.apache.spark.ml.classification.FMClassifier
- maxIter() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- maxIter() - Method in class org.apache.spark.ml.classification.GBTClassifier
- maxIter() - Method in class org.apache.spark.ml.classification.LinearSVC
- maxIter() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- maxIter() - Method in class org.apache.spark.ml.classification.LogisticRegression
- maxIter() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- maxIter() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- maxIter() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- maxIter() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- maxIter() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- maxIter() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- maxIter() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- maxIter() - Method in class org.apache.spark.ml.clustering.KMeans
- maxIter() - Method in class org.apache.spark.ml.clustering.KMeansModel
- maxIter() - Method in class org.apache.spark.ml.clustering.LDA
- maxIter() - Method in class org.apache.spark.ml.clustering.LDAModel
- maxIter() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- maxIter() - Method in class org.apache.spark.ml.feature.Word2Vec
- maxIter() - Method in class org.apache.spark.ml.feature.Word2VecModel
- maxIter() - Method in interface org.apache.spark.ml.param.shared.HasMaxIter
-
Param for maximum number of iterations (>= 0).
- maxIter() - Method in class org.apache.spark.ml.recommendation.ALS
- maxIter() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- maxIter() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- maxIter() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- maxIter() - Method in class org.apache.spark.ml.regression.FMRegressor
- maxIter() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- maxIter() - Method in class org.apache.spark.ml.regression.GBTRegressor
- maxIter() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- maxIter() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- maxIter() - Method in class org.apache.spark.ml.regression.LinearRegression
- maxIter() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- maxIters() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- maxLen() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- maxLocalProjDBSize() - Method in class org.apache.spark.ml.fpm.PrefixSpan
-
Param for the maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing (default: 32000000).
- maxMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- maxMemory() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.GBTClassifier
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.GBTRegressor
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- maxMemoryInMB() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- maxMemoryInMB() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Maximum memory in MB allocated to histogram aggregation.
- maxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- maxMessageSizeBytes(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the configured max message size for messages in bytes.
- maxNodesInLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the maximum number of nodes which can be in the given level of the tree.
- maxNumConcurrentTasks(ResourceProfile) - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the max number of tasks that can be launched concurrently based on the resources the given ResourceProfile could use, even if some of them are being used at the moment.
- maxOffHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- maxOffHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- maxOnHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- maxOnHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- maxPatternLength() - Method in class org.apache.spark.ml.fpm.PrefixSpan
-
Param for the maximal pattern length (default: 10).
- maxPrecisionForBytes(int) - Static method in class org.apache.spark.sql.types.Decimal
- maxReplicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
- maxRows() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxRows
-
Approximate maximum rows to scan.
- maxRows(long) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- maxSentenceLength() - Method in class org.apache.spark.ml.feature.Word2Vec
- maxSentenceLength() - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
Sets the maximum length (in words) of each sentence in the input data.
- maxSentenceLength() - Method in class org.apache.spark.ml.feature.Word2VecModel
- maxSplitFeatureIndex() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Trace down the tree, and return the largest feature index used in any split.
- maxTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- maxTasksPerExecutor() - Method in class org.apache.spark.status.LiveResourceProfile
- maxTriggerDelayMs() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMinRows
-
Approximate maximum trigger delay.
- maxVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- md5(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the MD5 digest of a binary column and returns the value as a 32 character hex string.
- mean() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the mean of this RDD's elements.
- mean() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- mean() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
- mean() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- mean() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
- mean() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
- mean() - Method in class org.apache.spark.mllib.random.PoissonGenerator
- mean() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Sample mean of each dimension.
- mean() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sample mean vector.
- mean() - Method in class org.apache.spark.partial.BoundedDouble
- mean() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the mean of this RDD's elements.
- mean() - Method in class org.apache.spark.util.StatCounter
- mean(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- mean(String...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the average value for each numeric column for each group.
- mean(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- mean(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- mean(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- mean(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- mean(Seq<String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the average value for each numeric column for each group.
- mean(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- meanAbsoluteError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the mean absolute error, which is a risk function corresponding to the expected value of the absolute error loss or l1-norm loss.
- meanAbsoluteError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the mean absolute error, which is a risk function corresponding to the expected value of the absolute error loss or l1-norm loss.
- meanApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the mean within a timeout.
- meanApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Approximate operation to return the mean within a timeout.
- meanApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return the approximate mean of the elements in this RDD.
- meanAveragePrecision() - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
- meanAveragePrecisionAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Returns the mean average precision (MAP) at ranking position k of all the queries.
- means() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
- means() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
- meanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the mean squared error, which is a risk function corresponding to the expected value of the squared error loss or quadratic loss.
- meanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the mean squared error, which is a risk function corresponding to the expected value of the squared error loss or quadratic loss.
- median() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- median(long[], boolean) - Static method in class org.apache.spark.util.Utils
-
Return the median of a long array.
- median(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the median of the values in a group.
- megabytesToString(long) - Static method in class org.apache.spark.util.Utils
-
Convert a quantity in megabytes to a human-readable string such as "4.0 MiB".
- melt(Column[], String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Unpivot a DataFrame from wide format to long format, optionally leaving identifier columns set.
- melt(Column[], String, String) - Method in class org.apache.spark.sql.Dataset
- melt(Column[], Column[], String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Unpivot a DataFrame from wide format to long format, optionally leaving identifier columns set.
- melt(Column[], Column[], String, String) - Method in class org.apache.spark.sql.Dataset
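A minimal Scala sketch of the four-argument overload above; the DataFrame df with columns "id", "q1", and "q2" is an assumption for the example:

    import org.apache.spark.sql.functions.col

    val long = df.melt(
      Array(col("id")),               // identifier columns to keep
      Array(col("q1"), col("q2")),    // columns to unpivot
      "quarter",                      // name of the variable column
      "revenue")                      // name of the value column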
- MEM_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- MEM_SPILL() - Static method in class org.apache.spark.status.TaskIndexNames
- memory(String) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Specify heap memory.
- MEMORY() - Static method in class org.apache.spark.resource.ResourceProfile
-
built-in executor resource: memory
- MEMORY_AND_DISK - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_AND_DISK - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_AND_DISK() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_AND_DISK_2 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_AND_DISK_2 - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_AND_DISK_2() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_AND_DISK_SER - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_AND_DISK_SER - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_AND_DISK_SER() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_AND_DISK_SER_2 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_AND_DISK_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_AND_DISK_SER_2() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- MEMORY_BYTES_SPILLED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- MEMORY_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- MEMORY_ONLY - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_ONLY_2 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_ONLY_SER - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_ONLY_SER - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_ONLY_SER() - Static method in class org.apache.spark.storage.StorageLevel
- MEMORY_ONLY_SER_2 - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- MEMORY_ONLY_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels
- MEMORY_ONLY_SER_2() - Static method in class org.apache.spark.storage.StorageLevel
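For illustration, a minimal Scala sketch of selecting one of the storage levels above when caching; rdd stands in for any RDD:

    import org.apache.spark.storage.StorageLevel

    // cache in memory, spilling partitions that do not fit to disk
    rdd.persist(StorageLevel.MEMORY_AND_DISK)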
- MEMORY_PER_EXECUTOR_MB_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- MEMORY_REMAINING_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- MEMORY_USED_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- memoryCost(int, int) - Static method in class org.apache.spark.mllib.feature.PCAUtil
- MemoryEntry<T> - Interface in org.apache.spark.storage.memory
- MemoryEntryBuilder<T> - Interface in org.apache.spark.storage.memory
- memoryManager() - Method in class org.apache.spark.SparkEnv
- memoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- MemoryMetrics - Class in org.apache.spark.status.api.v1
- memoryMode() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
- memoryMode() - Method in interface org.apache.spark.storage.memory.MemoryEntry
- memoryMode() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
- memoryOverhead(String) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Specify overhead memory.
- MemoryParam - Class in org.apache.spark.util
-
An extractor object for parsing JVM memory strings, such as "10g", into an Int representing the number of megabytes.
- MemoryParam() - Constructor for class org.apache.spark.util.MemoryParam
- memoryPerExecutorMB() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- memoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- memoryStringToMb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a Java memory parameter passed to -Xmx (such as 300m or 1g) to a number of mebibytes.
- memoryUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- memoryUsed() - Method in class org.apache.spark.status.LiveRDD
- memoryUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
- memoryUsed() - Method in class org.apache.spark.status.LiveRDDPartition
- memoryUsedBytes() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- memSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- memSize() - Method in class org.apache.spark.storage.BlockStatus
- memSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo
- memSize() - Method in class org.apache.spark.storage.RDDInfo
- merge() - Method in class org.apache.spark.sql.MergeIntoWriter
-
Executes the merge operation.
- merge(double) - Method in class org.apache.spark.util.StatCounter
-
Add a value into this StatCounter, updating the internal statistics.
- merge(int, U) - Method in interface org.apache.spark.partial.ApproximateEvaluator
- merge(Agg) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Merge two aggregators.
- merge(BUF, BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Merge two intermediate values.
- merge(IDF.DocumentFrequencyAggregator) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Merges another document frequency aggregator into this one.
- merge(MultivariateOnlineSummarizer) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Merge another MultivariateOnlineSummarizer, and update the statistical summary.
- merge(MutableAggregationBuffer, Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Merges two aggregation buffers and stores the updated buffer values back to buffer1.
- merge(NumericHistogram) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Takes a histogram and merges it with the current histogram object.
- merge(AccumulatorV2<IN, OUT>) - Method in class org.apache.spark.util.AccumulatorV2
-
Merges another same-type accumulator into this one and updates its state; i.e., this should be a merge-in-place.
- merge(AccumulatorV2<Double, Double>) - Method in class org.apache.spark.util.DoubleAccumulator
- merge(AccumulatorV2<Long, Long>) - Method in class org.apache.spark.util.LongAccumulator
- merge(AccumulatorV2<Long, List<Tuple2<Integer, Long>>>) - Method in class org.apache.spark.sql.util.MapperRowCounter
- merge(AccumulatorV2<T, List<T>>) - Method in class org.apache.spark.util.CollectionAccumulator
- merge(StatCounter) - Method in class org.apache.spark.util.StatCounter
-
Merge another StatCounter into this one, adding up the internal statistics.
- merge(IterableOnce<Object>) - Method in class org.apache.spark.util.StatCounter
-
Add multiple values into this StatCounter, updating the internal statistics.
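A minimal Scala sketch of the merge methods above, combining two per-partition summaries into one:

    import org.apache.spark.util.StatCounter

    val a = StatCounter(1.0, 2.0)
    val b = StatCounter(10.0)
    a.merge(b)                        // in-place: a now summarizes all three values
    println(s"mean=${a.mean} max=${a.max}")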
- merge(S, S) - Method in interface org.apache.spark.sql.connector.catalog.functions.AggregateFunction
-
Merge two partial aggregation states.
- MERGE - Enum constant in enum class org.apache.spark.sql.connector.write.RowLevelOperation.Command
- mergeCardinalityViolationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- mergeCluster(StoreTypes.SparkPlanGraphClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- mergeCombiners() - Method in class org.apache.spark.Aggregator
- MERGED_FETCH_FALLBACK_COUNT() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- MERGED_FETCH_FALLBACK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- MERGED_FETCH_FALLBACK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- mergedFetchFallbackCount() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- mergedFetchFallbackCount() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- mergeExecutorMetricsDistributions(StoreTypes.ExecutorMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- mergeFileLists(Seq<String>) - Static method in class org.apache.spark.util.DependencyUtils
-
Merge a sequence of comma-separated file lists, some of which may be null to indicate no files, into a single comma-separated string.
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- mergeFrom(CodedInputStream, ExtensionRegistryLite) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- mergeFrom(Message) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- mergeFrom(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- mergeFrom(StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- mergeFrom(StoreTypes.ApplicationEnvironmentInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- mergeFrom(StoreTypes.ApplicationEnvironmentInfoWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- mergeFrom(StoreTypes.ApplicationInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- mergeFrom(StoreTypes.ApplicationInfoWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- mergeFrom(StoreTypes.AppSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- mergeFrom(StoreTypes.CachedQuantile) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- mergeFrom(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- mergeFrom(StoreTypes.ExecutorMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- mergeFrom(StoreTypes.ExecutorPeakMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- mergeFrom(StoreTypes.ExecutorResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- mergeFrom(StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- mergeFrom(StoreTypes.ExecutorStageSummaryWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- mergeFrom(StoreTypes.ExecutorSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- mergeFrom(StoreTypes.ExecutorSummaryWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- mergeFrom(StoreTypes.InputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- mergeFrom(StoreTypes.InputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- mergeFrom(StoreTypes.JobData) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- mergeFrom(StoreTypes.JobDataWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- mergeFrom(StoreTypes.MemoryMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- mergeFrom(StoreTypes.OutputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- mergeFrom(StoreTypes.OutputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- mergeFrom(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- mergeFrom(StoreTypes.PoolData) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- mergeFrom(StoreTypes.ProcessSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- mergeFrom(StoreTypes.ProcessSummaryWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- mergeFrom(StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- mergeFrom(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- mergeFrom(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- mergeFrom(StoreTypes.RDDOperationGraphWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- mergeFrom(StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- mergeFrom(StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- mergeFrom(StoreTypes.RDDStorageInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- mergeFrom(StoreTypes.RDDStorageInfoWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- mergeFrom(StoreTypes.ResourceInformation) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- mergeFrom(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- mergeFrom(StoreTypes.ResourceProfileWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- mergeFrom(StoreTypes.RuntimeInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- mergeFrom(StoreTypes.ShufflePushReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- mergeFrom(StoreTypes.ShufflePushReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- mergeFrom(StoreTypes.ShuffleReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- mergeFrom(StoreTypes.ShuffleReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- mergeFrom(StoreTypes.ShuffleWriteMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- mergeFrom(StoreTypes.ShuffleWriteMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- mergeFrom(StoreTypes.SinkProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- mergeFrom(StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- mergeFrom(StoreTypes.SparkPlanGraphClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- mergeFrom(StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- mergeFrom(StoreTypes.SparkPlanGraphNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- mergeFrom(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- mergeFrom(StoreTypes.SparkPlanGraphWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- mergeFrom(StoreTypes.SpeculationStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- mergeFrom(StoreTypes.SpeculationStageSummaryWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- mergeFrom(StoreTypes.SQLExecutionUIData) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- mergeFrom(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- mergeFrom(StoreTypes.StageData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- mergeFrom(StoreTypes.StageDataWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- mergeFrom(StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- mergeFrom(StoreTypes.StreamBlockData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- mergeFrom(StoreTypes.StreamingQueryData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- mergeFrom(StoreTypes.StreamingQueryProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- mergeFrom(StoreTypes.StreamingQueryProgressWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- mergeFrom(StoreTypes.TaskData) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- mergeFrom(StoreTypes.TaskDataWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- mergeFrom(StoreTypes.TaskMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- mergeFrom(StoreTypes.TaskMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- mergeFrom(StoreTypes.TaskResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- mergeInfo(StoreTypes.ApplicationEnvironmentInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- mergeInfo(StoreTypes.ApplicationInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- mergeInfo(StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- mergeInfo(StoreTypes.ExecutorSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- mergeInfo(StoreTypes.JobData) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- mergeInfo(StoreTypes.ProcessSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- mergeInfo(StoreTypes.RDDStorageInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- mergeInfo(StoreTypes.SpeculationStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- mergeInfo(StoreTypes.StageData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- mergeInPlace(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Combines this bloom filter with another bloom filter by performing a bitwise OR of the underlying data.
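For illustration, a minimal sketch of this merge; the item strings and sizing parameters are arbitrary, and both filters must be created with identical parameters or mergeInPlace throws IncompatibleMergeException:

```java
import org.apache.spark.util.sketch.BloomFilter;
import org.apache.spark.util.sketch.IncompatibleMergeException;

public class BloomFilterMergeExample {
    public static void main(String[] args) throws IncompatibleMergeException {
        // Identical expectedNumItems/fpp make the underlying bit arrays compatible.
        BloomFilter a = BloomFilter.create(1_000_000, 0.03);
        BloomFilter b = BloomFilter.create(1_000_000, 0.03);
        a.putString("alpha");
        b.putString("beta");
        a.mergeInPlace(b); // bitwise OR of the two bit arrays
        // true: Bloom filters have no false negatives, though false positives are possible
        System.out.println(a.mightContainString("beta"));
    }
}
```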
- mergeInPlace(CountMinSketch) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Merges another CountMinSketch with this one in place.
- mergeInputMetrics(StoreTypes.InputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
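Similarly, a minimal sketch of the CountMinSketch.mergeInPlace entry above; the eps/confidence/seed values are arbitrary, but both sketches must share them for the merge to succeed:

```java
import org.apache.spark.util.sketch.CountMinSketch;
import org.apache.spark.util.sketch.IncompatibleMergeException;

public class CountMinSketchMergeExample {
    public static void main(String[] args) throws IncompatibleMergeException {
        CountMinSketch s1 = CountMinSketch.create(0.001, 0.99, 42);
        CountMinSketch s2 = CountMinSketch.create(0.001, 0.99, 42);
        s1.addString("x");
        s2.addString("x");
        s1.mergeInPlace(s2); // counter tables are summed element-wise
        System.out.println(s1.estimateCount("x")); // at least 2; may over-estimate
    }
}
```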
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- mergeInputMetrics(StoreTypes.InputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- mergeInto(String, Column) - Method in class org.apache.spark.sql.api.Dataset
-
Merges a set of updates, insertions, and deletions based on a source table into a target table.
- mergeInto(String, Column) - Method in class org.apache.spark.sql.Dataset
- mergeIntoWriter() - Method in class org.apache.spark.sql.WhenMatched
- mergeIntoWriter() - Method in class org.apache.spark.sql.WhenNotMatched
- mergeIntoWriter() - Method in class org.apache.spark.sql.WhenNotMatchedBySource
- MergeIntoWriter<T> - Class in org.apache.spark.sql
-
MergeIntoWriter provides methods to define and execute merge actions based on specified conditions.
- MergeIntoWriter() - Constructor for class org.apache.spark.sql.MergeIntoWriter
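A hedged sketch of the fluent API formed by mergeInto and MergeIntoWriter above. The table names (source, target) and the join column (id) are placeholders, and the target must belong to a catalog whose tables support row-level MERGE (e.g. Delta or Iceberg):

```java
import static org.apache.spark.sql.functions.expr;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MergeIntoExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("merge-into").getOrCreate();
        Dataset<Row> source = spark.table("source"); // placeholder source table
        source.mergeInto("target", expr("source.id = target.id"))
              .whenMatched().updateAll()     // overwrite matching target rows
              .whenNotMatched().insertAll()  // insert unmatched source rows
              .merge();                      // execute the MERGE statement
        spark.stop();
    }
}
```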
- mergeMemoryMetrics(StoreTypes.MemoryMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- mergeNode(StoreTypes.SparkPlanGraphNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- mergeOffsets(PartitionOffset[]) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
-
Merge partitioned offsets coming from ContinuousPartitionReader instances for each partition to a single global offset.
- mergeOutputMetrics(StoreTypes.OutputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- mergeOutputMetrics(StoreTypes.OutputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- mergePeakExecutorMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- mergePeakMemoryMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- mergePeakMemoryMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- mergePeakMemoryMetrics(StoreTypes.ExecutorPeakMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- mergeProgress(StoreTypes.StreamingQueryProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- mergeRootCluster(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- mergeRpInfo(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- mergeRuntime(StoreTypes.RuntimeInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- mergeShufflePushReadMetrics(StoreTypes.ShufflePushReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- mergeShufflePushReadMetricsDist(StoreTypes.ShufflePushReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- mergeShuffleReadMetrics(StoreTypes.ShuffleReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- mergeShuffleReadMetrics(StoreTypes.ShuffleReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- mergeShuffleWriteMetrics(StoreTypes.ShuffleWriteMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- mergeShuffleWriteMetrics(StoreTypes.ShuffleWriteMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- mergeSink(StoreTypes.SinkProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- mergeSpeculationSummary(StoreTypes.SpeculationStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- mergeStatementWithoutWhenClauseError(SqlBaseParser.MergeIntoTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- mergeStatuses() - Method in class org.apache.spark.ShuffleStatus
-
MergeStatus for each shuffle partition when push-based shuffle is enabled.
- mergeTaskMetrics(StoreTypes.TaskMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- mergeTaskMetricsDistributions(StoreTypes.TaskMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- mergeUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- mergeUnsupportedByWindowFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- mergeValue() - Method in class org.apache.spark.Aggregator
- message() - Method in class org.apache.spark.ErrorInfo
- message() - Method in class org.apache.spark.ErrorSubInfo
- message() - Method in class org.apache.spark.FetchFailed
- message() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
- message() - Static method in class org.apache.spark.scheduler.ExecutorKilled
- message() - Static method in class org.apache.spark.scheduler.LossReasonPending
- message() - Method in exception org.apache.spark.sql.AnalysisException
- message() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- message() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
- messageParameters() - Method in exception org.apache.spark.sql.AnalysisException
- messageTemplate() - Method in class org.apache.spark.ErrorInfo
- messageTemplate() - Method in class org.apache.spark.ErrorSubInfo
- MetaAlgorithmReadWrite - Class in org.apache.spark.ml.util
-
Default Meta-Algorithm read and write implementation.
- MetaAlgorithmReadWrite() - Constructor for class org.apache.spark.ml.util.MetaAlgorithmReadWrite
- metadata() - Method in class org.apache.spark.sql.types.StructField
- metadata() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
- Metadata - Class in org.apache.spark.sql.types
-
Metadata is a wrapper over Map[String, Any] that limits the value type to simple ones: Boolean, Long, Double, String, Metadata, Array[Boolean], Array[Long], Array[Double], Array[String], and Array[Metadata].
- METADATA_KEY_DESCRIPTION() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
The key for description in StreamInputInfo.metadata.
- MetadataBuilder - Class in org.apache.spark.sql.types
-
Builder for Metadata.
- MetadataBuilder() - Constructor for class org.apache.spark.sql.types.MetadataBuilder
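For instance, attaching metadata to a StructField through the builder (the key names here are arbitrary examples):

```java
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.MetadataBuilder;
import org.apache.spark.sql.types.StructField;

public class MetadataBuilderExample {
    public static void main(String[] args) {
        Metadata meta = new MetadataBuilder()
            .putString("comment", "customer surname") // example keys
            .putLong("maxLength", 64)
            .build();
        StructField field = new StructField("surname", DataTypes.StringType, true, meta);
        System.out.println(field.metadata().json()); // prints the metadata as JSON
    }
}
```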
- metadataColumn(String) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a metadata column based on its logical column name, and returns it as a Column.
- metadataColumn(String) - Method in class org.apache.spark.sql.Dataset
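A minimal sketch, assuming a file-based source (file sources expose a hidden _metadata column) and a placeholder input path:

```java
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MetadataColumnExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("metadata-column").getOrCreate();
        Dataset<Row> df = spark.read().parquet("/tmp/example.parquet"); // placeholder path
        Column meta = df.metadataColumn("_metadata");
        df.select(meta.getField("file_path")).show(false); // source file of each row
        spark.stop();
    }
}
```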
- MetadataColumn - Interface in org.apache.spark.sql.connector.catalog
-
Interface for a metadata column.
- metadataColumns() - Method in interface org.apache.spark.sql.connector.catalog.SupportsMetadataColumns
-
Metadata columns that are supported by this Table.
- metadataDescription() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
- metadataInJSON() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the column metadata in JSON format.
- metadataSchema() - Method in interface org.apache.spark.sql.connector.write.LogicalWriteInfo
-
The schema of the input metadata passed from Spark to the data source.
- MetadataUtils - Class in org.apache.spark.ml.util
-
Helper utilities for algorithms using ML metadata.
- MetadataUtils() - Constructor for class org.apache.spark.ml.util.MetadataUtils
- method() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- Method(String, Function2<Object, Object, Object>) - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method
- Method$() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method$
- methodCalledInAnalyzerNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- MethodIdentifier<T> - Class in org.apache.spark.util
-
Helper class to identify a method.
- MethodIdentifier(Class<T>, String, String) - Constructor for class org.apache.spark.util.MethodIdentifier
- methodName() - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod
- methodName() - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- methodName() - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- methodNotDeclaredError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- methodNotFoundError(Class<?>, String, Seq<Class<?>>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- methodNotImplementedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Metric - Class in org.apache.spark.status.api.v1.sql
- METRIC_COMPILATION_TIME() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the time it took to compile source code text (in milliseconds).
- METRIC_FILE_CACHE_HITS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of files served from the file status cache instead of discovered.
- METRIC_FILES_DISCOVERED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of files discovered off of the filesystem by InMemoryFileIndex.
- METRIC_GENERATED_CLASS_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the bytecode size of each class generated by CodeGenerator.
- METRIC_GENERATED_METHOD_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the bytecode size of each method in classes generated by CodeGenerator.
- METRIC_HIVE_CLIENT_CALLS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of Hive client calls (e.g. to look up a table).
- METRIC_PARALLEL_LISTING_JOB_COUNT() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of Spark jobs launched for parallel file listing.
- METRIC_PARTITIONS_FETCHED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of partition metadata entries fetched via the client api.
- METRIC_SOURCE_CODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the length of source code text compiled by CodeGenerator (in characters).
- METRIC_TYPE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- METRIC_VALUES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- METRIC_VALUES_IS_NULL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- metricLabel() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
The class whose metric will be computed in "truePositiveRateByLabel", "falsePositiveRateByLabel", "precisionByLabel", "recallByLabel", "fMeasureByLabel".
- metricLabel() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
-
param for the class whose metric will be computed in "precisionByLabel", "recallByLabel", "f1MeasureByLabel".
- metricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
param for metric name in evaluation (supports "areaUnderROC" (default), "areaUnderPR")
- metricName() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
-
param for metric name in evaluation (supports "silhouette" (default))
- metricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
param for metric name in evaluation (supports "f1" (default), "accuracy", "weightedPrecision", "weightedRecall", "weightedTruePositiveRate", "weightedFalsePositiveRate", "weightedFMeasure", "truePositiveRateByLabel", "falsePositiveRateByLabel", "precisionByLabel", "recallByLabel", "fMeasureByLabel", "logLoss", "hammingLoss")
- metricName() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
-
param for metric name in evaluation (supports "f1Measure" (default), "subsetAccuracy", "accuracy", "hammingLoss", "precision", "recall", "precisionByLabel", "recallByLabel", "f1MeasureByLabel", "microPrecision", "microRecall", "microF1Measure")
- metricName() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
-
param for metric name in evaluation (supports "meanAveragePrecision" (default), "meanAveragePrecisionAtK", "precisionAtK", "ndcgAtK", "recallAtK")
- metricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
Param for metric name in evaluation.
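To make the metricName/metricLabel params above concrete, a minimal sketch that scores a tiny hand-built predictions DataFrame (the column names follow the evaluator defaults):

```java
import java.util.Arrays;

import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class EvaluatorMetricNameExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("evaluator").master("local[1]").getOrCreate();
        StructType schema = new StructType()
            .add("label", DataTypes.DoubleType)
            .add("prediction", DataTypes.DoubleType);
        Dataset<Row> predictions = spark.createDataFrame(Arrays.asList(
            RowFactory.create(0.0, 0.0),
            RowFactory.create(1.0, 1.0),
            RowFactory.create(1.0, 0.0)), schema);
        double p = new MulticlassClassificationEvaluator()
            .setMetricName("precisionByLabel") // one of the supported names listed above
            .setMetricLabel(1.0)               // the class whose metric is computed
            .evaluate(predictions);
        System.out.println(p); // 1.0: the single prediction of class 1.0 is correct
        spark.stop();
    }
}
```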
- metricPeaks() - Method in class org.apache.spark.TaskKilled
- metricRegistry() - Method in interface org.apache.spark.api.plugin.PluginContext
-
Registry in which to register metrics published by the plugin associated with this context.
- metricRegistry() - Method in class org.apache.spark.metrics.source.DoubleAccumulatorSource
- metricRegistry() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
- metricRegistry() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- metricRegistry() - Method in interface org.apache.spark.metrics.source.Source
- metrics() - Method in interface org.apache.spark.sql.connector.read.streaming.ReportsSinkMetrics
-
Returns the metrics reported by the sink for this micro-batch.
- metrics() - Method in class org.apache.spark.sql.streaming.SinkProgress
- metrics() - Method in class org.apache.spark.sql.streaming.SourceProgress
- metrics() - Method in class org.apache.spark.status.api.v1.sql.Node
- metrics() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- metrics() - Method in class org.apache.spark.status.LiveStage
- metrics(String...) - Static method in class org.apache.spark.ml.stat.Summarizer
-
Given a list of metrics, provides a builder that in turn computes metrics from a column.
- metrics(Optional<Offset>) - Method in interface org.apache.spark.sql.connector.read.streaming.ReportsSourceMetrics
-
Returns the metrics reported by the streaming source with respect to the latest consumed offset.
- metrics(Seq<String>) - Static method in class org.apache.spark.ml.stat.Summarizer
-
Given a list of metrics, provides a builder that in turn computes metrics from a column.
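A minimal sketch of the builder pattern these overloads describe; the metric names passed in ("mean", "min", "max") are among the documented options, and the data is a toy two-row vector column:

```java
import static org.apache.spark.sql.functions.col;

import java.util.Arrays;

import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.ml.stat.Summarizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SummarizerExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("summarizer").master("local[1]").getOrCreate();
        StructType schema = new StructType(new StructField[]{
            new StructField("features", new VectorUDT(), false, Metadata.empty())});
        Dataset<Row> df = spark.createDataFrame(Arrays.asList(
            RowFactory.create(Vectors.dense(1.0, 2.0)),
            RowFactory.create(Vectors.dense(3.0, 4.0))), schema);
        // All requested metrics are computed in a single pass over the column.
        df.select(Summarizer.metrics("mean", "min", "max")
                .summary(col("features")).as("s"))
          .select("s.mean", "s.min", "s.max")
          .show(false);
        spark.stop();
    }
}
```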
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- METRICS_PROPERTIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- metricsProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- metricsSystem() - Method in class org.apache.spark.SparkEnv
- MetricsSystemInstances - Class in org.apache.spark.metrics
- MetricsSystemInstances() - Constructor for class org.apache.spark.metrics.MetricsSystemInstances
- MFDataGenerator - Class in org.apache.spark.mllib.util
-
Generate RDD(s) containing data for Matrix Factorization.
- MFDataGenerator() - Constructor for class org.apache.spark.mllib.util.MFDataGenerator
- MICRO_BATCH_READ - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports reads in micro-batch streaming execution mode.
- MicroBatchStream - Interface in org.apache.spark.sql.connector.read.streaming
-
A SparkDataStream for streaming queries with micro-batch mode.
- microBatchUnsupportedByDataSourceError(String, String, Table) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- microF1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
- microPrecision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
- microRecall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
- microseconds - Variable in class org.apache.spark.unsafe.types.CalendarInterval
- mightContain(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns true if the element might have been put in this Bloom filter, false if this is definitely not the case.
- mightContainBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of BloomFilter.mightContain(Object) that only tests byte array items.
- mightContainLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of BloomFilter.mightContain(Object) that only tests long items.
- mightContainString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of BloomFilter.mightContain(Object) that only tests String items.
- milliseconds() - Method in class org.apache.spark.streaming.Duration
- milliseconds() - Method in class org.apache.spark.streaming.Time
- milliseconds(long) - Static method in class org.apache.spark.streaming.Durations
- Milliseconds - Class in org.apache.spark.streaming
-
Helper object that creates an instance of Duration representing a given number of milliseconds.
- Milliseconds() - Constructor for class org.apache.spark.streaming.Milliseconds
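For example, a minimal sketch of the legacy DStream API, where these helpers set the batch interval; the socket host and port are placeholders:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class BatchIntervalExample {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("batch-interval").setMaster("local[2]");
        // Durations.milliseconds/seconds/minutes each build a Duration.
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.milliseconds(500));
        ssc.socketTextStream("localhost", 9999).print(); // placeholder source
        ssc.start();
        ssc.awaitTermination();
    }
}
```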
- millisToString(long) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
Reformat a time interval in milliseconds into a prettier format for output.
- min() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Returns the minimum element from this RDD as defined by the default comparator (natural order).
- min() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- min() - Method in class org.apache.spark.ml.feature.MinMaxScaler
- min() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- min() - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
-
lower bound after transformation, shared by all features. Default: 0.0.
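A minimal sketch showing how these bounds are set on MinMaxScaler; with the toy data, 10.0 rescales to 0.0 and 20.0 to 1.0:

```java
import java.util.Arrays;

import org.apache.spark.ml.feature.MinMaxScaler;
import org.apache.spark.ml.feature.MinMaxScalerModel;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class MinMaxScalerExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("minmax").master("local[1]").getOrCreate();
        StructType schema = new StructType(new StructField[]{
            new StructField("features", new VectorUDT(), false, Metadata.empty())});
        Dataset<Row> df = spark.createDataFrame(Arrays.asList(
            RowFactory.create(Vectors.dense(10.0)),
            RowFactory.create(Vectors.dense(20.0))), schema);
        MinMaxScalerModel model = new MinMaxScaler()
            .setMin(0.0) // lower bound after transformation (the default)
            .setMax(1.0) // upper bound after transformation (the default)
            .setInputCol("features")
            .setOutputCol("scaled")
            .fit(df);
        model.transform(df).show(false);
        spark.stop();
    }
}
```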
- min() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Minimum value of each dimension.
- min() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Minimum value of each column.
- min() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- min() - Method in class org.apache.spark.util.StatCounter
- min(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the minimum value of the column in a group.
- min(String...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the min value for each numeric column for each group.
- min(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- min(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the minimum element from this RDD as defined by the specified Comparator[T].
- min(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- min(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the minimum value of the expression in a group.
- min(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- min(Duration) - Method in class org.apache.spark.streaming.Duration
- min(Time) - Method in class org.apache.spark.streaming.Time
- min(Seq<String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the min value for each numeric column for each group.
- min(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- min(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the min of this RDD as defined by the implicit Ordering[T].
- min(U, U) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- min(U, U) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- Min - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the minimum value in a group.
- Min(Expression) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Min
- MIN() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- min_by(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the value associated with the minimum value of ord.
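Putting the grouped and aggregate variants above together, a minimal sketch over a toy DataFrame: min returns each group's smallest price, and min_by the item that carries it.

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.min;
import static org.apache.spark.sql.functions.min_by;

import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class MinExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("min").master("local[1]").getOrCreate();
        StructType schema = new StructType()
            .add("store", DataTypes.StringType)
            .add("item", DataTypes.StringType)
            .add("price", DataTypes.DoubleType);
        Dataset<Row> df = spark.createDataFrame(Arrays.asList(
            RowFactory.create("a", "pen", 1.5),
            RowFactory.create("a", "ink", 0.5),
            RowFactory.create("b", "pen", 2.0)), schema);
        df.groupBy(col("store"))
          .agg(min(col("price")),                 // smallest price per store
               min_by(col("item"), col("price"))) // item carrying that price
          .show(false);
        spark.stop();
    }
}
```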
- minBytesForPrecision() - Static method in class org.apache.spark.sql.types.Decimal
- minConfidence() - Method in class org.apache.spark.ml.fpm.FPGrowth
- minConfidence() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- minConfidence() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
-
Minimal confidence for generating Association Rule.
- minCount() - Method in class org.apache.spark.ml.feature.Word2Vec
- minCount() - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
The minimum number of times a token must appear to be included in the word2vec model's vocabulary.
- minCount() - Method in class org.apache.spark.ml.feature.Word2VecModel
- minDF() - Method in class org.apache.spark.ml.feature.CountVectorizer
- minDF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- minDF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Specifies the minimum number of different documents a term must appear in to be included in the vocabulary.
- minDivisibleClusterSize() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- minDivisibleClusterSize() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- minDivisibleClusterSize() - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
-
The minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1.0).
- minDocFreq() - Method in class org.apache.spark.ml.feature.IDF
- minDocFreq() - Method in interface org.apache.spark.ml.feature.IDFBase
-
The minimum number of documents in which a term should appear.
- minDocFreq() - Method in class org.apache.spark.ml.feature.IDFModel
- minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
- minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF
- MinHashLSH - Class in org.apache.spark.ml.feature
-
LSH class for Jaccard distance.
- MinHashLSH() - Constructor for class org.apache.spark.ml.feature.MinHashLSH
- MinHashLSH(String) - Constructor for class org.apache.spark.ml.feature.MinHashLSH
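A small usage sketch for MinHashLSH (hypothetical binary feature data; assumes an active SparkSession named spark):

```scala
import org.apache.spark.ml.feature.MinHashLSH
import org.apache.spark.ml.linalg.Vectors

// Hypothetical binary feature vectors; MinHashLSH approximates Jaccard distance.
val df = spark.createDataFrame(Seq(
  (0, Vectors.sparse(6, Seq((0, 1.0), (1, 1.0), (2, 1.0)))),
  (1, Vectors.sparse(6, Seq((2, 1.0), (3, 1.0), (4, 1.0))))
)).toDF("id", "features")

val mh = new MinHashLSH()
  .setNumHashTables(5)
  .setInputCol("features")
  .setOutputCol("hashes")

val model = mh.fit(df)
// Self-join to find pairs within Jaccard distance 0.6.
val pairs = model.approxSimilarityJoin(df, df, 0.6, "JaccardDistance")
```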
- MinHashLSHModel - Class in org.apache.spark.ml.feature
-
Model produced by MinHashLSH, where multiple hash functions are stored.
- miniBatchFraction() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- miniBatchFraction() - Method in class org.apache.spark.ml.classification.FMClassifier
- miniBatchFraction() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
-
Param for mini-batch fraction, must be in range (0, 1].
- miniBatchFraction() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- miniBatchFraction() - Method in class org.apache.spark.ml.regression.FMRegressor
- MINIMAL() - Static method in class org.apache.spark.ErrorMessageFormat
- MINIMUM_ADJUSTED_SCALE() - Static method in class org.apache.spark.sql.types.DecimalType
- minimumPythonSupportedVersion() - Static method in class org.apache.spark.TestUtils
- minInfoGain() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- minInfoGain() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- minInfoGain() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- minInfoGain() - Method in class org.apache.spark.ml.classification.GBTClassifier
- minInfoGain() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- minInfoGain() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- minInfoGain() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- minInfoGain() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- minInfoGain() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- minInfoGain() - Method in class org.apache.spark.ml.regression.GBTRegressor
- minInfoGain() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- minInfoGain() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- minInfoGain() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Minimum information gain for a split to be considered at a tree node.
- minInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.GBTClassifier
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressor
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- minInstancesPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- minInstancesPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Minimum number of instances each child must have after split.
- minInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- MinMax() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- MinMaxScaler - Class in org.apache.spark.ml.feature
-
Rescale each feature individually to a common range [min, max] linearly using column summary statistics, which is also known as min-max normalization or Rescaling.
- MinMaxScaler() - Constructor for class org.apache.spark.ml.feature.MinMaxScaler
- MinMaxScaler(String) - Constructor for class org.apache.spark.ml.feature.MinMaxScaler
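A minimal sketch of min-max scaling with hypothetical data (assumes a SparkSession named spark):

```scala
import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.linalg.Vectors

// Hypothetical data: a DataFrame with a vector column "features".
val df = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.1)),
  (1, Vectors.dense(2.0, 1.1)),
  (2, Vectors.dense(3.0, 10.1))
)).toDF("id", "features")

val scaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setMin(0.0)   // lower bound of the target range (default 0.0)
  .setMax(1.0)   // upper bound of the target range (default 1.0)

// fit() computes per-feature min/max; transform() rescales each feature linearly.
val scaled = scaler.fit(df).transform(df)
```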
- MinMaxScalerModel - Class in org.apache.spark.ml.feature
-
Model fitted by MinMaxScaler.
- MinMaxScalerParams - Interface in org.apache.spark.ml.feature
-
Params for MinMaxScaler and MinMaxScalerModel.
- minorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the minor version number.
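For example (majorVersion is a sibling helper in the same class):

```scala
import org.apache.spark.util.VersionUtils

VersionUtils.minorVersion("3.5.1")  // returns 5
VersionUtils.majorVersion("3.5.1")  // returns 3
```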
- minRows() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMinRows
-
Approximate minimum rows to scan.
- minRows(long, long) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
- minSamplingRate() - Static method in class org.apache.spark.util.random.BinomialBounds
- minShare() - Method in interface org.apache.spark.scheduler.Schedulable
- minSupport() - Method in class org.apache.spark.ml.fpm.FPGrowth
- minSupport() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- minSupport() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
-
Minimal support level of the frequent pattern.
- minSupport() - Method in class org.apache.spark.ml.fpm.PrefixSpan
-
Param for the minimal support level (default: 0.1).
- minTF() - Method in class org.apache.spark.ml.feature.CountVectorizer
- minTF() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- minTF() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Filter to ignore rare words in a document.
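A short sketch showing how minDF and minTF are set together (the column names words and counts are hypothetical):

```scala
import org.apache.spark.ml.feature.CountVectorizer

// minDF: a term must appear in at least 2 documents to enter the vocabulary.
// minTF: within a document, terms below this count/fraction are ignored.
val cv = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("counts")
  .setMinDF(2.0)
  .setMinTF(1.0)
```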
- minTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Minimum token length, greater than or equal to 0.
- minus(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- minus(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- minus(double, double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- minus(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- minus(float, float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- minus(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- minus(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- minus(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- minus(Object) - Method in class org.apache.spark.sql.Column
-
Subtraction.
- minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each VertexId present in both this and other, minus will act as a set difference operation returning only those unique VertexIds present in this.
- minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each VertexId present in both this and other, minus will act as a set difference operation returning only those unique VertexIds present in this.
- minus(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- minus(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- minus(Duration) - Method in class org.apache.spark.streaming.Duration
- minus(Duration) - Method in class org.apache.spark.streaming.Time
- minus(Time) - Method in class org.apache.spark.streaming.Time
- minute(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the minutes as an integer from a given date/timestamp/string.
- MINUTE() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- minutes() - Static method in class org.apache.spark.scheduler.StatsReportListener
- minutes(long) - Static method in class org.apache.spark.streaming.Durations
- Minutes - Class in org.apache.spark.streaming
-
Helper object that creates instances of Duration representing a given number of minutes.
- Minutes() - Constructor for class org.apache.spark.streaming.Minutes
- minVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.GBTClassifier
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.GBTRegressor
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- minWeightFractionPerNode() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- minWeightFractionPerNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
Minimum fraction of the weighted sample count that each child must have after split.
- minWeightFractionPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- MiscellaneousProcessAdded(long, String, MiscellaneousProcessDetails) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded
- MiscellaneousProcessAdded$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded$
- MiscellaneousProcessDetails - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Stores information about a Miscellaneous Process to pass from the scheduler to SparkListeners.
- MiscellaneousProcessDetails(String, int, Map<String, String>) - Constructor for class org.apache.spark.scheduler.MiscellaneousProcessDetails
- mismatchedTableBucketingError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mismatchedTableClusteringError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mismatchedTableColumnNumberError(String, CatalogTable, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mismatchedTableFormatError(String, Class<?>, Class<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mismatchedTableLocationError(TableIdentifier, CatalogTable, CatalogTable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mismatchedTablePartitionColumnError(String, Seq<String>, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- missingCatalogAbilityError(CatalogPlugin, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- missingDatabaseLocationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- missingJdbcTableNameAndQueryError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- missingStaticPartitionColumn(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- missingValue() - Method in class org.apache.spark.ml.feature.Imputer
- missingValue() - Method in class org.apache.spark.ml.feature.ImputerModel
- missingValue() - Method in interface org.apache.spark.ml.feature.ImputerParams
-
The placeholder for the missing values.
- mixedIntervalUnitsError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- mixedRefsInAggFunc(String, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mkList() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- mkNumericOps(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- mkOrderingOps(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- mkString() - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this sequence in a string (without a separator).
- mkString(String) - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this sequence in a string using a separator string.
- mkString(String, String, String) - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this traversable or iterator in a string using start, end, and separator strings.
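For illustration, the three Row.mkString overloads on a small hand-built Row:

```scala
import org.apache.spark.sql.Row

val row = Row(1, "alice", 3.5)
row.mkString                   // "1alice3.5"       (no separator)
row.mkString(", ")             // "1, alice, 3.5"
row.mkString("[", ", ", "]")   // "[1, alice, 3.5]" (start, separator, end)
```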
- mkString(String, String, String) - Method in class org.apache.spark.status.api.v1.StackTrace
- ML_ATTR() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- mlDenseMatrixToMLlibDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- mlDenseVectorToMLlibDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- MLEvent - Interface in org.apache.spark.ml
-
Event emitted by ML operations.
- MLEvents - Interface in org.apache.spark.ml
-
A small trait that defines some methods to send MLEvent.
- MLFormatRegister - Interface in org.apache.spark.ml.util
-
ML export formats should implement this trait so that users can specify a shortname rather than the fully qualified class name of the exporter.
- mllibDenseMatrixToMLDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- mllibDenseVectorToMLDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- mllibMatrixToMLMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- mllibSparseMatrixToMLSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- mllibSparseVectorToMLSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- mllibVectorToMLVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- mlMatrixToMLlibMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- MLPairRDDFunctions<K,V> - Class in org.apache.spark.mllib.rdd
-
Machine learning specific Pair RDD functions.
- MLPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.mllib.rdd.MLPairRDDFunctions
- MLReadable<T> - Interface in org.apache.spark.ml.util
-
Trait for objects that provide MLReader.
- MLReader<T> - Class in org.apache.spark.ml.util
-
Abstract class for utility classes that can load ML instances.
- MLReader() - Constructor for class org.apache.spark.ml.util.MLReader
- mlSparseMatrixToMLlibSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
- mlSparseVectorToMLlibSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- MLUtils - Class in org.apache.spark.mllib.util
-
Helper methods to load, save, and pre-process data used in MLlib.
- MLUtils() - Constructor for class org.apache.spark.mllib.util.MLUtils
- mlVectorToMLlibVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
- MLWritable - Interface in org.apache.spark.ml.util
-
Trait for classes that provide MLWriter.
- MLWriter - Class in org.apache.spark.ml.util
-
Abstract class for utility classes that can save ML instances in Spark's internal format.
- MLWriter() - Constructor for class org.apache.spark.ml.util.MLWriter
- MLWriterFormat - Interface in org.apache.spark.ml.util
-
Abstract class to be implemented by objects that provide ML exportability.
- mod(Object) - Method in class org.apache.spark.sql.Column
-
Modulo (a.k.a. remainder) expression.
- mode() - Method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Returns the mode of this parameter.
- mode(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the behavior when data or table already exists.
- mode(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the most frequent value in a group.
- mode(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the most frequent value in a group.
- mode(SaveMode) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the behavior when data or table already exists.
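A minimal sketch of the two equivalent mode forms on a hypothetical DataFrame df (the output path is hypothetical):

```scala
import org.apache.spark.sql.SaveMode

// Overwrite any existing data at the target path.
// Equivalent string form: .mode("overwrite")
df.write
  .mode(SaveMode.Overwrite)
  .parquet("/tmp/people")
```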
- model() - Method in class org.apache.spark.ml.FitEnd
- model(long) - Method in interface org.apache.spark.ml.ann.Topology
- model(Vector) - Method in interface org.apache.spark.ml.ann.Topology
- Model<M extends Model<M>> - Class in org.apache.spark.ml
-
A fitted model, i.e., a Transformer produced by an Estimator.
- Model() - Constructor for class org.apache.spark.ml.Model
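As a sketch of the Estimator-to-Model contract (LogisticRegression chosen arbitrarily; training is a hypothetical DataFrame with "label" and "features" columns):

```scala
import org.apache.spark.ml.classification.LogisticRegression

// Estimator.fit() produces a Model, which is itself a Transformer.
val lr = new LogisticRegression().setMaxIter(10)
val model = lr.fit(training)               // a LogisticRegressionModel
val predictions = model.transform(training)
```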
- models() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- modelType() - Method in class org.apache.spark.ml.classification.NaiveBayes
- modelType() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- modelType() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams
-
The model type, which is a string (case-sensitive).
- modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- MODIFIED_CONFIGS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- modifiesSecurityContext(Driver, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
-
Checks if this connection provider instance needs to modify global security configuration to handle authentication and thus should synchronize access to the security configuration while the given driver is initiating a connection with the given options.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.input$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.output$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleRead$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleWrite$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.feature.Word2VecModel.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.InBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.Rating$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.RatingBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.Method$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.resource.ResourceProfile.DefaultProfileExecutorResources$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.resource.ResourceProfile.ExecutorResourcesOrDefaults$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.DecommissionExecutorsOnHost$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissioning$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ExecutorDecommissionSigReceived$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.IsExecutorAlive$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveDelegationTokens$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.TaskThreadDump$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorLogLevel$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorsLogLevel$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.avro.AvroUtils.AvroMatchedField$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.avro.SchemaConverters.SchemaType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.functions.partitioning$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.CubeType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.GroupingSetsType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.PivotType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.RollupType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.DecimalType.Fixed$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.FloatType.FloatAsIfIntegral$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManager$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.DecommissionBlockManagers$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocations$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetPeers$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetRDDBlockVisibility$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetShufflePushMergerLocations$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.MarkRDDBlockAsVisible$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveShufflePushMergerLocation$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.TriggerHeapHistogram$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockVisibility$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.JettyUtils.ServletParams$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.util.MavenUtils.MavenCoordinate$
-
Static reference to the singleton instance of this Scala object.
- monitors() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- monotonically_increasing_id() - Static method in class org.apache.spark.sql.functions
-
A column expression that generates monotonically increasing 64-bit integers.
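A one-line usage sketch on a hypothetical DataFrame df:

```scala
import org.apache.spark.sql.functions.monotonically_increasing_id

// Adds a unique, monotonically increasing (but not consecutive) id per row.
// The partition ID is encoded in the upper 31 bits, so values can jump
// between partitions.
val withId = df.withColumn("row_id", monotonically_increasing_id())
```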
- monotonicallyIncreasingId() - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use monotonically_increasing_id(). Since 2.0.0.
- month(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the month as an integer from a given date/timestamp/string.
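A short sketch combining the minute and month extractors above (events and its timestamp column ts are hypothetical):

```scala
import org.apache.spark.sql.functions.{col, minute, month}

val parts = events.select(
  month(col("ts")).as("month"),    // 1 through 12
  minute(col("ts")).as("minute"))  // 0 through 59
```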
- MONTH() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- monthname(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the three-letter abbreviated month name from a given date/timestamp/string.
- months - Variable in class org.apache.spark.unsafe.types.CalendarInterval
- months(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a monthly transform for a timestamp or date column.
- months(Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for timestamps and dates to partition data into months.
- months(Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for timestamps and dates to partition data into months.
- months(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- months_between(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns number of months between dates start and end.
- months_between(Column, Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Returns number of months between dates end and start.
- moreThanOneFromToUnitInIntervalLiteralError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- moreThanOneGeneratorError(Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- msDurationToString(long) - Static method in class org.apache.spark.util.Utils
-
Returns a human-readable string representing a duration such as "35ms".
- MsSqlServerDialect - Class in org.apache.spark.sql.jdbc
- MsSqlServerDialect() - Constructor for class org.apache.spark.sql.jdbc.MsSqlServerDialect
- MsSqlServerDialect.MsSqlServerSQLBuilder - Class in org.apache.spark.sql.jdbc
- MsSqlServerDialect.MsSqlServerSQLQueryBuilder - Class in org.apache.spark.sql.jdbc
- MsSqlServerSQLBuilder() - Constructor for class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
- MsSqlServerSQLQueryBuilder(JdbcDialect, JDBCOptions) - Constructor for class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLQueryBuilder
- mu() - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
- multiActionAlterError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- MulticlassClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for multiclass classification, which expects input columns: prediction, label, weight (optional) and probability (only for logLoss).
- MulticlassClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- MulticlassClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
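A minimal evaluation sketch (predictions is a hypothetical DataFrame produced by a fitted classifier's transform()):

```scala
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("label")
  .setPredictionCol("prediction")
  .setMetricName("f1")   // other options include "accuracy", "weightedPrecision"

val f1 = evaluator.evaluate(predictions)
```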
- MulticlassMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for multiclass classification.
- MulticlassMetrics(RDD<? extends Product>) - Constructor for class org.apache.spark.mllib.evaluation.MulticlassMetrics
- multiFailuresInStageMaterializationError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- MultilabelClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
:: Experimental :: Evaluator for multi-label classification, which expects two input columns: prediction and label.
- MultilabelClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- MultilabelClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- MultilabelMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for multilabel classification.
- MultilabelMetrics(RDD<Tuple2<double[], double[]>>) - Constructor for class org.apache.spark.mllib.evaluation.MultilabelMetrics
- multiLabelValidator(int) - Static method in class org.apache.spark.mllib.util.DataValidators
-
Function to check if labels used for k-class multi-label classification are in the range {0, 1, ..., k - 1}.
- MultilayerPerceptronClassificationModel - Class in org.apache.spark.ml.classification
-
Classification model based on the Multilayer Perceptron.
- MultilayerPerceptronClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for MultilayerPerceptronClassification results for a given model.
- MultilayerPerceptronClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
MultilayerPerceptronClassification results for a given model.
- MultilayerPerceptronClassificationSummaryImpl(Dataset<Row>, String, String, String) - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassificationSummaryImpl
- MultilayerPerceptronClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for MultilayerPerceptronClassification training results.
- MultilayerPerceptronClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
MultilayerPerceptronClassification training results.
- MultilayerPerceptronClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassificationTrainingSummaryImpl
- MultilayerPerceptronClassifier - Class in org.apache.spark.ml.classification
-
Classifier trainer based on the Multilayer Perceptron.
- MultilayerPerceptronClassifier() - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- MultilayerPerceptronClassifier(String) - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- MultilayerPerceptronParams - Interface in org.apache.spark.ml.classification
-
Params for Multilayer Perceptron.
- MultipartIdentifierHelper(Seq<String>) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- multiplePartitionColumnValuesSpecifiedError(StructField, Map<String, String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- multiplePathsSpecifiedError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- multipleRowScalarSubqueryError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- multiply(Object) - Method in class org.apache.spark.sql.Column
-
Multiplication of this expression and another expression.
- multiply(DenseMatrix) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-DenseMatrix multiplication.
- multiply(DenseVector) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-DenseVector multiplication.
- multiply(Vector) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-Vector multiplication.
- multiply(DenseMatrix) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-DenseMatrix multiplication.
- multiply(DenseVector) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-DenseVector multiplication.
- multiply(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- multiply(BlockMatrix, int) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Multiply this matrix by a local matrix on the right.
- multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Multiply this matrix by a local matrix on the right.
- multiply(Vector) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-Vector multiplication.
- multiStreamingQueriesUsingPathConcurrentlyError(String, FileAlreadyExistsException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- multiTimeWindowExpressionsNotSupportedError(TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- MultivariateGaussian - Class in org.apache.spark.ml.stat.distribution
-
This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
- MultivariateGaussian - Class in org.apache.spark.mllib.stat.distribution
-
This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
- MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.ml.stat.distribution.MultivariateGaussian
- MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
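A small density-evaluation sketch for the ml version of MultivariateGaussian:

```scala
import org.apache.spark.ml.linalg.{Matrices, Vectors}
import org.apache.spark.ml.stat.distribution.MultivariateGaussian

// 2-D Gaussian with mean (0, 0) and identity covariance.
val g = new MultivariateGaussian(
  Vectors.dense(0.0, 0.0),
  Matrices.dense(2, 2, Array(1.0, 0.0, 0.0, 1.0)))

val density = g.pdf(Vectors.dense(0.5, -0.5))  // evaluate the density at a point
```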
- MultivariateOnlineSummarizer - Class in org.apache.spark.mllib.stat
-
MultivariateOnlineSummarizer implements MultivariateStatisticalSummary to compute the mean, variance, minimum, maximum, counts, and nonzero counts for instances in sparse or dense vector format in an online fashion.
- MultivariateOnlineSummarizer() - Constructor for class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
- MultivariateStatisticalSummary - Interface in org.apache.spark.mllib.stat
-
Trait for multivariate statistical summary of a data matrix.
- mustOverrideOneMethodError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- mustSpecifyCheckpointDirError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- MutableAggregationBuffer - Class in org.apache.spark.sql.expressions
-
A Row representing a mutable aggregation buffer.
- MutableAggregationBuffer() - Constructor for class org.apache.spark.sql.expressions.MutableAggregationBuffer
- MutablePair<T1, T2> - Class in org.apache.spark.util
-
:: DeveloperApi :: A tuple of 2 elements.
- MutablePair() - Constructor for class org.apache.spark.util.MutablePair
-
No-arg constructor for serialization
- MutablePair(T1, T2) - Constructor for class org.apache.spark.util.MutablePair
- MutableURLClassLoader - Class in org.apache.spark.util
-
URL class loader that exposes the `addURL` method in URLClassLoader.
- MutableURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.MutableURLClassLoader
- myName() - Method in class org.apache.spark.util.InnerClosureFinder
- MySQLDialect - Class in org.apache.spark.sql.jdbc
- MySQLDialect() - Constructor for class org.apache.spark.sql.jdbc.MySQLDialect
- MySQLDialect.MySQLSQLBuilder - Class in org.apache.spark.sql.jdbc
- MySQLDialect.MySQLSQLQueryBuilder - Class in org.apache.spark.sql.jdbc
- MySQLSQLBuilder() - Constructor for class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- MySQLSQLQueryBuilder(JdbcDialect, JDBCOptions) - Constructor for class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLQueryBuilder
N
- n() - Method in class org.apache.spark.ml.feature.NGram
-
Minimum n-gram length, greater than or equal to 1.
- n() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- na() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a DataFrameNaFunctions for working with missing data.
- na() - Method in class org.apache.spark.sql.Dataset
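For illustration (df is a hypothetical DataFrame), typical DataFrameNaFunctions usage looks like:

    val cleaned = df.na.drop()     // drop rows containing any null or NaN values
    val filled  = df.na.fill(0.0)  // replace nulls/NaNs in numeric columns with 0.0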
- NaiveBayes - Class in org.apache.spark.ml.classification
-
Naive Bayes Classifiers.
- NaiveBayes - Class in org.apache.spark.mllib.classification
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- NaiveBayes() - Constructor for class org.apache.spark.ml.classification.NaiveBayes
- NaiveBayes() - Constructor for class org.apache.spark.mllib.classification.NaiveBayes
- NaiveBayes(double) - Constructor for class org.apache.spark.mllib.classification.NaiveBayes
- NaiveBayes(String) - Constructor for class org.apache.spark.ml.classification.NaiveBayes
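A minimal training sketch for the ml.classification estimator (assumes a hypothetical DataFrame training with "label" and "features" columns):

    import org.apache.spark.ml.classification.NaiveBayes

    val model = new NaiveBayes()
      .setModelType("multinomial")  // the default model type
      .fit(training)
    val predictions = model.transform(training)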
- NaiveBayesModel - Class in org.apache.spark.ml.classification
-
Model produced by NaiveBayes
- NaiveBayesModel - Class in org.apache.spark.mllib.classification
-
Model for Naive Bayes Classifiers.
- NaiveBayesModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.classification
- NaiveBayesModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.classification
-
Model data for model import/export
- NaiveBayesModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.classification
- NaiveBayesModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.classification
- NaiveBayesModel.SaveLoadV2_0$.Data - Class in org.apache.spark.mllib.classification
-
Model data for model import/export
- NaiveBayesModel.SaveLoadV2_0$.Data$ - Class in org.apache.spark.mllib.classification
- NaiveBayesParams - Interface in org.apache.spark.ml.classification
-
Params for Naive Bayes Classifiers.
- name() - Method in interface org.apache.spark.api.java.JavaRDDLike
- name() - Method in class org.apache.spark.ml.attribute.Attribute
-
Name of the attribute.
- name() - Method in class org.apache.spark.ml.attribute.AttributeGroup
- name() - Method in class org.apache.spark.ml.attribute.AttributeType
- name() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- name() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- name() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- name() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- name() - Method in class org.apache.spark.ml.param.Param
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- name() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- name() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
- name() - Method in class org.apache.spark.rdd.RDD
-
A friendly name for this RDD
- name() - Method in class org.apache.spark.resource.ResourceInformation
- name() - Method in class org.apache.spark.resource.ResourceInformationJson
- name() - Method in class org.apache.spark.scheduler.AccumulableInfo
- name() - Method in class org.apache.spark.scheduler.AsyncEventQueue
- name() - Method in interface org.apache.spark.scheduler.Schedulable
- name() - Method in class org.apache.spark.scheduler.StageInfo
- name() - Method in interface org.apache.spark.SparkStageInfo
- name() - Method in class org.apache.spark.SparkStageInfoImpl
- name() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the user-specified name of the query, or null if not specified.
- name() - Method in class org.apache.spark.sql.catalog.CatalogMetadata
- name() - Method in class org.apache.spark.sql.catalog.Column
- name() - Method in class org.apache.spark.sql.catalog.Database
- name() - Method in class org.apache.spark.sql.catalog.Function
- name() - Method in class org.apache.spark.sql.catalog.Table
- name() - Method in interface org.apache.spark.sql.connector.catalog.CatalogPlugin
-
Called to get this catalog's name.
- name() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns the name of this table column.
- name() - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- name() - Method in interface org.apache.spark.sql.connector.catalog.functions.Function
-
A name to identify this function.
- name() - Method in interface org.apache.spark.sql.connector.catalog.Identifier
- name() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
-
The name of this metadata column.
- name() - Method in interface org.apache.spark.sql.connector.catalog.procedures.Procedure
-
Returns the name of this procedure.
- name() - Method in interface org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter
-
Returns the name of this parameter.
- name() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
A name to identify this table.
- name() - Method in interface org.apache.spark.sql.connector.catalog.View
-
A name to identify this view.
- name() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- name() - Method in class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- name() - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- name() - Method in class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
- name() - Method in interface org.apache.spark.sql.connector.expressions.Transform
-
Returns the transform function name.
- name() - Method in class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- name() - Method in interface org.apache.spark.sql.connector.metric.CustomMetric
-
Returns the name of custom metric.
- name() - Method in interface org.apache.spark.sql.connector.metric.CustomTaskMetric
-
Returns the name of custom task metric.
- name() - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
-
Name of the service to provide JDBC connections.
- name() - Method in class org.apache.spark.sql.Observation
- name() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
- name() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- name() - Method in class org.apache.spark.sql.types.StructField
- name() - Method in class org.apache.spark.status.api.v1.AccumulableInfo
- name() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
- name() - Method in class org.apache.spark.status.api.v1.JobData
- name() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- name() - Method in class org.apache.spark.status.api.v1.sql.Metric
- name() - Method in class org.apache.spark.status.api.v1.StageData
- name() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- name() - Method in class org.apache.spark.storage.BlockId
-
A globally unique identifier for this Block.
- name() - Method in class org.apache.spark.storage.BroadcastBlockId
- name() - Method in class org.apache.spark.storage.CacheId
- name() - Method in class org.apache.spark.storage.PythonStreamBlockId
- name() - Method in class org.apache.spark.storage.RDDBlockId
- name() - Method in class org.apache.spark.storage.RDDInfo
- name() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
- name() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
- name() - Method in class org.apache.spark.storage.ShuffleBlockId
- name() - Method in class org.apache.spark.storage.ShuffleChecksumBlockId
- name() - Method in class org.apache.spark.storage.ShuffleDataBlockId
- name() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
- name() - Method in class org.apache.spark.storage.ShuffleMergedBlockId
- name() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- name() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- name() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- name() - Method in class org.apache.spark.storage.ShufflePushBlockId
- name() - Method in class org.apache.spark.storage.StreamBlockId
- name() - Method in class org.apache.spark.storage.TaskResultBlockId
- name() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- name() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- name() - Method in class org.apache.spark.ui.flamegraph.FlamegraphNode
- name() - Method in class org.apache.spark.util.AccumulatorV2
-
Returns the name of this accumulator; it can only be called after registration.
- name() - Method in class org.apache.spark.util.MethodIdentifier
- name() - Method in class org.apache.spark.util.SparkTestUtils.JavaSourceFromString
- name(String) - Method in class org.apache.spark.sql.Column
-
Gives the column a name (alias).
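For illustration (df is hypothetical), name behaves like alias/as:

    import org.apache.spark.sql.functions.col

    df.select(col("value").name("renamed"))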
- name(String) - Method in class org.apache.spark.sql.TypedColumn
-
Gives the TypedColumn a name (alias).
- NAME() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- named_struct(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a struct with the given field names and values.
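A minimal sketch (df and its columns are hypothetical); field names are passed as literal columns alternating with the value columns:

    import org.apache.spark.sql.functions.{col, lit, named_struct}

    df.select(named_struct(lit("id"), col("id"), lit("score"), col("score")).name("record"))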
- namedArgumentsNotEnabledError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- namedArgumentsNotSupported(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- NamedReference - Interface in org.apache.spark.sql.connector.expressions
-
Represents a field or column reference in the public logical expression API.
- namedThreadFactory(String) - Static method in class org.apache.spark.util.ThreadUtils
-
Create a thread factory that names threads with a prefix and also sets the threads to daemon.
- NamedTransform - Class in org.apache.spark.sql.connector.expressions
-
Convenience extractor for any Transform.
- NamedTransform() - Constructor for class org.apache.spark.sql.connector.expressions.NamedTransform
- names() - Method in interface org.apache.spark.metrics.ExecutorMetricType
- names() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- names() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- names() - Method in interface org.apache.spark.metrics.SingleValueExecutorMetricType
- names() - Method in class org.apache.spark.ml.feature.VectorSlicer
-
An array of feature names to select features from a vector column.
- names() - Method in class org.apache.spark.sql.types.StructType
-
Returns all field names in an array.
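For illustration:

    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType)))
    schema.names  // Array("id", "name")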
- namespace() - Method in class org.apache.spark.sql.catalog.Function
- namespace() - Method in class org.apache.spark.sql.catalog.Table
- namespace() - Method in interface org.apache.spark.sql.connector.catalog.Identifier
- NAMESPACE_RESERVED_PROPERTIES() - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
The list of reserved namespace properties, which cannot be removed or changed directly by the syntax: ALTER NAMESPACE ...
- namespaceAlreadyExistsError(String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- NamespaceChange - Interface in org.apache.spark.sql.connector.catalog
-
NamespaceChange subclasses represent requested changes to a namespace.
- NamespaceChange.RemoveProperty - Class in org.apache.spark.sql.connector.catalog
-
A NamespaceChange to remove a namespace property.
- NamespaceChange.SetProperty - Class in org.apache.spark.sql.connector.catalog
-
A NamespaceChange to set a namespace property.
- namespaceExists(String[]) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- namespaceExists(String[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Test whether a namespace exists.
- NamespaceHelper(String[]) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.NamespaceHelper
- nameToObjectMap() - Static method in class org.apache.spark.mllib.stat.correlation.CorrelationNames
- nanoTime() - Method in interface org.apache.spark.util.Clock
-
Current value of high resolution time source, in ns.
- nanvl(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns col1 if it is not NaN, or col2 if col1 is NaN.
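For illustration (df and the column names are hypothetical):

    import org.apache.spark.sql.functions.{col, lit, nanvl}

    // Replace NaN readings with a fallback value of 0.0
    df.select(nanvl(col("reading"), lit(0.0)).name("reading_or_zero"))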
- NarrowDependency<T> - Class in org.apache.spark
-
:: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD.
- NarrowDependency(RDD<T>) - Constructor for class org.apache.spark.NarrowDependency
- ndcgAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Compute the average NDCG value of all the queries, truncated at ranking position k.
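A minimal sketch (sc is an existing SparkContext; the rankings are illustrative):

    import org.apache.spark.mllib.evaluation.RankingMetrics

    // Each element pairs a predicted ranking with the ground-truth relevant items.
    val predictionAndLabels = sc.parallelize(Seq(
      (Array(1, 2, 3), Array(1, 3)),
      (Array(4, 5), Array(5))))
    val ndcg = new RankingMetrics(predictionAndLabels).ndcgAt(3)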
- ndv() - Method in interface org.apache.spark.sql.connector.read.colstats.HistogramBin
- needConversion() - Method in class org.apache.spark.sql.sources.BaseRelation
-
Whether the objects in Row need to be converted to the internal representation, for example: java.lang.String to UTF8String, java.lang.Decimal to Decimal.
- needsReconfiguration() - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
-
The execution engine will call this method in every epoch to determine if new input partitions need to be generated, which may be required if, for example, the underlying source system has had partitions added or removed.
- needTokenUpdate(Map<String, Object>, Option<KafkaTokenClusterConf>) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- negate(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- negate(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- negate(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- negate(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- negate(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- negate(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- negate(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- negate(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- negate(Column) - Static method in class org.apache.spark.sql.functions
-
Unary minus, i.e. negate the expression.
- negate(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- negate(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- negative(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the negated value.
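For illustration (df is hypothetical); negate(col("delta")) is equivalent to -col("delta") in Scala:

    import org.apache.spark.sql.functions.{col, negate}

    df.select(negate(col("delta")))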
- negativeScaleNotAllowedError(int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- negativeValueUnexpectedError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Neither - Enum constant in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
Neither the source vertex nor the destination vertex need be active.
- nestedArraysUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- nestedDatabaseUnsupportedByV1SessionCatalogError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nestedExecuteImmediate(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nestedFieldUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- nestedGeneratorError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nestedTypeMissingElementTypeError(String, SqlBaseParser.PrimitiveDataTypeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- newAccumulatorInfos(Iterable<AccumulableInfo>) - Static method in class org.apache.spark.status.LiveEntityHelpers
- newAggregationState() - Method in interface org.apache.spark.sql.connector.catalog.functions.AggregateFunction
-
Initialize state for an aggregation.
- newAPIHadoopFile(String, Class<F>, Class<K>, Class<V>, Configuration) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- newAPIHadoopFile(String, Class<F>, Class<K>, Class<V>, Configuration) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- newAPIHadoopFile(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
-
Smarter version of newApiHadoopFile that uses class tags to figure out the classes of keys, values and the org.apache.hadoop.mapreduce.InputFormat (new MapReduce API) so that users don't need to pass them directly.
- newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
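A minimal sketch (sc is an existing SparkContext; the path is hypothetical):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

    val rdd = sc.newAPIHadoopFile(
      "hdfs:///data/logs",
      classOf[TextInputFormat],
      classOf[LongWritable],
      classOf[Text],
      sc.hadoopConfiguration)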
- newBooleanArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBooleanEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBooleanSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedBooleanEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedByteEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedDoubleEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedFloatEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedIntEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedLongEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBoxedShortEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newBroadcast(T, boolean, long, boolean, ClassTag<T>) - Method in interface org.apache.spark.broadcast.BroadcastFactory
-
Creates a new broadcast variable.
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- newBuilder() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- newBuilder(StoreTypes.AccumulableInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- newBuilder(StoreTypes.ApplicationAttemptInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- newBuilder(StoreTypes.ApplicationEnvironmentInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- newBuilder(StoreTypes.ApplicationEnvironmentInfoWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- newBuilder(StoreTypes.ApplicationInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- newBuilder(StoreTypes.ApplicationInfoWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- newBuilder(StoreTypes.AppSummary) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- newBuilder(StoreTypes.CachedQuantile) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- newBuilder(StoreTypes.ExecutorMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- newBuilder(StoreTypes.ExecutorMetricsDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- newBuilder(StoreTypes.ExecutorPeakMetricsDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- newBuilder(StoreTypes.ExecutorResourceRequest) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- newBuilder(StoreTypes.ExecutorStageSummary) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- newBuilder(StoreTypes.ExecutorStageSummaryWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- newBuilder(StoreTypes.ExecutorSummary) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- newBuilder(StoreTypes.ExecutorSummaryWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- newBuilder(StoreTypes.InputMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- newBuilder(StoreTypes.InputMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- newBuilder(StoreTypes.JobData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- newBuilder(StoreTypes.JobDataWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- newBuilder(StoreTypes.MemoryMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- newBuilder(StoreTypes.OutputMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- newBuilder(StoreTypes.OutputMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- newBuilder(StoreTypes.PairStrings) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- newBuilder(StoreTypes.PoolData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- newBuilder(StoreTypes.ProcessSummary) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- newBuilder(StoreTypes.ProcessSummaryWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- newBuilder(StoreTypes.RDDDataDistribution) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- newBuilder(StoreTypes.RDDOperationClusterWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- newBuilder(StoreTypes.RDDOperationEdge) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- newBuilder(StoreTypes.RDDOperationGraphWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- newBuilder(StoreTypes.RDDOperationNode) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- newBuilder(StoreTypes.RDDPartitionInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- newBuilder(StoreTypes.RDDStorageInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- newBuilder(StoreTypes.RDDStorageInfoWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- newBuilder(StoreTypes.ResourceInformation) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- newBuilder(StoreTypes.ResourceProfileInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- newBuilder(StoreTypes.ResourceProfileWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- newBuilder(StoreTypes.RuntimeInfo) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- newBuilder(StoreTypes.ShufflePushReadMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- newBuilder(StoreTypes.ShufflePushReadMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- newBuilder(StoreTypes.ShuffleReadMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- newBuilder(StoreTypes.ShuffleReadMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- newBuilder(StoreTypes.ShuffleWriteMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- newBuilder(StoreTypes.ShuffleWriteMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- newBuilder(StoreTypes.SinkProgress) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- newBuilder(StoreTypes.SourceProgress) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- newBuilder(StoreTypes.SparkPlanGraphClusterWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- newBuilder(StoreTypes.SparkPlanGraphEdge) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- newBuilder(StoreTypes.SparkPlanGraphNode) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- newBuilder(StoreTypes.SparkPlanGraphNodeWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- newBuilder(StoreTypes.SparkPlanGraphWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- newBuilder(StoreTypes.SpeculationStageSummary) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- newBuilder(StoreTypes.SpeculationStageSummaryWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- newBuilder(StoreTypes.SQLExecutionUIData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- newBuilder(StoreTypes.SQLPlanMetric) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- newBuilder(StoreTypes.StageData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- newBuilder(StoreTypes.StageDataWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- newBuilder(StoreTypes.StateOperatorProgress) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- newBuilder(StoreTypes.StreamBlockData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- newBuilder(StoreTypes.StreamingQueryData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- newBuilder(StoreTypes.StreamingQueryProgress) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- newBuilder(StoreTypes.StreamingQueryProgressWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- newBuilder(StoreTypes.TaskData) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- newBuilder(StoreTypes.TaskDataWrapper) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- newBuilder(StoreTypes.TaskMetricDistributions) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- newBuilder(StoreTypes.TaskMetrics) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- newBuilder(StoreTypes.TaskResourceRequest) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- newBuilderForType() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- newByteArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newByteEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newByteSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newComment() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment
- newDaemonCachedThreadPool(String) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over newCachedThreadPool.
- newDaemonCachedThreadPool(String, int, int) - Static method in class org.apache.spark.util.ThreadUtils
-
Create a cached thread pool whose max number of threads is maxThreadNumber.
- newDaemonFixedThreadPool(int, String) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over newFixedThreadPool.
- newDaemonSingleThreadExecutor(String) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over newFixedThreadPool with single daemon thread.
- newDaemonSingleThreadExecutorWithRejectedExecutionHandler(String, int, RejectedExecutionHandler) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over newSingleThreadExecutor that allows the specification of a RejectedExecutionHandler
- newDaemonSingleThreadScheduledExecutor(String) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over ScheduledThreadPoolExecutor; the pool uses daemon threads.
- newDaemonThreadPoolScheduledExecutor(String, int) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over ScheduledThreadPoolExecutor.
- newDataType() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType
- newDateEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newDefaultValue() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnDefaultValue
-
Returns the column default value SQL string (Spark SQL dialect).
- newDoubleArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newDoubleEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newDoubleSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newDurationEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newFloatArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newFloatEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newFloatSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newForkJoinPool(String, int) - Static method in class org.apache.spark.util.ThreadUtils
-
Construct a new ForkJoinPool with a specified max parallelism and name prefix.
- NewHadoopMapPartitionsWithSplitRDD$() - Constructor for class org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$
- NewHadoopRDD<K, V> - Class in org.apache.spark.rdd
-
:: DeveloperApi :: An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the new MapReduce API (org.apache.hadoop.mapreduce).
- NewHadoopRDD(SparkContext, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, Configuration) - Constructor for class org.apache.spark.rdd.NewHadoopRDD
- NewHadoopRDD(SparkContext, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, Configuration, boolean, boolean) - Constructor for class org.apache.spark.rdd.NewHadoopRDD
- NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$ - Class in org.apache.spark.rdd
- newId() - Static method in class org.apache.spark.util.AccumulatorContext
-
Returns a globally unique ID for a new AccumulatorV2.
- newInstance() - Method in class org.apache.spark.serializer.JavaSerializer
- newInstance() - Method in class org.apache.spark.serializer.KryoSerializer
- newInstance() - Method in class org.apache.spark.serializer.Serializer
-
Creates a new SerializerInstance.
- newInstantEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newIntArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newIntEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newIntSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newJavaDecimalEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newJavaEnumEncoder(TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLImplicits
- newKryo() - Method in class org.apache.spark.serializer.KryoSerializer
- newKryoOutput() - Method in class org.apache.spark.serializer.KryoSerializer
- newLocalDateEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newLocalDateTimeEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newLongArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newLongEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newLongSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newMapEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits
- newName() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RenameColumn
- newPeriodEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newProductArrayEncoder(TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLImplicits
- newProductEncoder(TypeTags.TypeTag<T>) - Method in interface org.apache.spark.sql.LowPrioritySQLImplicits
- newProductSeqEncoder(TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLImplicits
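These encoders are normally brought into scope via the session implicits rather than called directly; a minimal sketch (spark is an existing SparkSession):

    import spark.implicits._

    val ds   = Seq((1, "a"), (2, "b")).toDS()  // uses newProductEncoder
    val ints = Seq(1, 2, 3).toDS()             // uses newIntEncoder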
- newRowLevelOperationBuilder(RowLevelOperationInfo) - Method in interface org.apache.spark.sql.connector.catalog.SupportsRowLevelOperations
-
Returns a RowLevelOperationBuilder to build a RowLevelOperation.
- newScalaDecimalEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newScanBuilder(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.catalog.SupportsRead
-
Returns a ScanBuilder which can be used to build a Scan.
- newScanBuilder(CaseInsensitiveStringMap) - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
-
Returns a ScanBuilder to configure a Scan for this row-level operation.
- newSequenceEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits
- newSession() - Method in class org.apache.spark.sql.api.SparkSession
-
Start a new session with isolated SQL configurations, temporary tables, and registered functions, but sharing the underlying SparkContext and cached data.
- newSession() - Method in class org.apache.spark.sql.SparkSession
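For illustration (spark is an existing SparkSession):

    // Shares the SparkContext and cached data, but isolates SQL conf,
    // temporary views and registered functions.
    val session2 = spark.newSession()
    session2.conf.set("spark.sql.shuffle.partitions", "64")  // does not affect spark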
- newSession() - Method in class org.apache.spark.sql.SQLContext
-
Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, and registered functions, but sharing the same SparkContext, cached data and other things.
- newSetEncoder(TypeTags.TypeTag<T>) - Method in class org.apache.spark.sql.SQLImplicits
-
Notice that we serialize Set to Catalyst array.
- newShortArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newShortEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newShortSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newShuffleMergeState() - Method in class org.apache.spark.ShuffleDependency
- newSingleThreadScheduledExecutor(String) - Static method in class org.apache.spark.util.ThreadUtils
-
Wrapper over ScheduledThreadPoolExecutor; the pool uses non-daemon threads.
- newStringArrayEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newStringEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newStringSeqEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newTimeStampEncoder() - Method in class org.apache.spark.sql.SQLImplicits
- newWriteBuilder(LogicalWriteInfo) - Method in interface org.apache.spark.sql.connector.catalog.SupportsWrite
-
Returns a WriteBuilder which can be used to create BatchWrite.
- newWriteBuilder(LogicalWriteInfo) - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
-
Returns a WriteBuilder to configure a Write for this row-level operation.
- newWriteBuilder(LogicalWriteInfo) - Method in interface org.apache.spark.sql.connector.write.SupportsDelta
- next() - Method in class org.apache.spark.ContextAwareIterator
-
Deprecated.
- next() - Method in class org.apache.spark.InterruptibleIterator
- next() - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
- next() - Method in interface org.apache.spark.sql.connector.read.PartitionReader
-
Proceed to the next record; returns false if there are no more records.
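A typical consumption loop, sketched (reader would come from a PartitionReaderFactory):

    while (reader.next()) {
      val row = reader.get()  // returns the same record until next() is called again
      // process row ...
    }
    reader.close()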
- next() - Method in class org.apache.spark.status.LiveRDDPartition
- next_day(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns the first date which is later than the value of the date column that is on the specified day of the week.
- next_day(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the first date which is later than the value of the date column that is on the specified day of the week.
- nextRow() - Method in interface org.apache.spark.sql.avro.AvroUtils.RowReader
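For illustration of next_day above (df and its date column "d" are hypothetical):

    import org.apache.spark.sql.functions.{col, next_day}

    // The first Monday strictly after each value of column "d"
    df.select(next_day(col("d"), "Mon"))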
- nextValue() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
- nextValue() - Method in class org.apache.spark.mllib.random.GammaGenerator
- nextValue() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
- nextValue() - Method in class org.apache.spark.mllib.random.PoissonGenerator
- nextValue() - Method in interface org.apache.spark.mllib.random.RandomDataGenerator
-
Returns an i.i.d. sample from the underlying distribution.
- nextValue() - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
- nextValue() - Method in class org.apache.spark.mllib.random.UniformGenerator
- nextValue() - Method in class org.apache.spark.mllib.random.WeibullGenerator
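A minimal sketch using one of the concrete generators listed above:

    import org.apache.spark.mllib.random.PoissonGenerator

    val gen = new PoissonGenerator(4.0)  // mean 4.0
    gen.setSeed(42L)
    val sample = gen.nextValue()         // one i.i.d. Poisson draw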
- NGram - Class in org.apache.spark.ml.feature
-
A feature transformer that converts the input array of strings into an array of n-grams.
- NGram() - Constructor for class org.apache.spark.ml.feature.NGram
- NGram(String) - Constructor for class org.apache.spark.ml.feature.NGram
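A minimal sketch (df is a hypothetical DataFrame with an array-of-strings column "words"):

    import org.apache.spark.ml.feature.NGram

    val ngram = new NGram().setN(2).setInputCol("words").setOutputCol("bigrams")
    val bigrams = ngram.transform(df)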
- NioBufferedFileInputStream - Class in org.apache.spark.io
-
InputStream implementation which uses a direct buffer to read a file, avoiding the extra copy of data between Java and native memory that happens when using BufferedInputStream.
- NioBufferedFileInputStream(File) - Constructor for class org.apache.spark.io.NioBufferedFileInputStream
- NioBufferedFileInputStream(File, int) - Constructor for class org.apache.spark.io.NioBufferedFileInputStream
- NNLS - Class in org.apache.spark.mllib.optimization
-
Object used to solve nonnegative least squares problems using a modified projected gradient method.
- NNLS() - Constructor for class org.apache.spark.mllib.optimization.NNLS
- NNLS.Workspace - Class in org.apache.spark.mllib.optimization
- NO_PREF() - Static method in class org.apache.spark.scheduler.TaskLocality
- NO_RESOURCE - Static variable in class org.apache.spark.launcher.SparkLauncher
-
A special value for the resource that tells Spark to not try to process the app resource as a file.
- node() - Method in class org.apache.spark.scheduler.ExcludedExecutor
- node() - Method in class org.apache.spark.sql.Column
- Node - Class in org.apache.spark.ml.tree
-
Decision tree node interface.
- Node - Class in org.apache.spark.mllib.tree.model
-
Node in a decision tree.
- Node - Class in org.apache.spark.status.api.v1.sql
- Node() - Constructor for class org.apache.spark.ml.tree.Node
- Node(int, Predict, double, boolean, Option<Split>, Option<Node>, Option<Node>, Option<InformationGainStats>) - Constructor for class org.apache.spark.mllib.tree.model.Node
- NODE - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
- NODE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- NODE_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
- nodeData() - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
- NodeData(int, double, double, double[], long, double, int, int, DecisionTreeModelReadWrite.SplitData) - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- NodeData(int, int, org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData, double, boolean, Option<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData>, Option<Object>, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- NodeData$() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
- NodeData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
- noDefaultForDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- nodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- nodeId() - Method in class org.apache.spark.status.api.v1.sql.Node
- nodeName() - Method in class org.apache.spark.status.api.v1.sql.Node
- nodes() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- NODES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- NODES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- noExecutorIdleError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- noHandlerForUDAFError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- NoLegacyJDBCError - Interface in org.apache.spark.sql.jdbc
-
Make the
classifyException
method throw out the original exception - noLocality() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- Nominal() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Nominal type.
- NominalAttribute - Class in org.apache.spark.ml.attribute
-
A nominal attribute.
- nonBooleanFilterInAggregateError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonDeterministicFilterInAggregateError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonDeterministicMergeCondition(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- None - Static variable in class org.apache.spark.graphx.TripletFields
-
None of the triplet fields are exposed.
- None() - Static method in class org.apache.spark.sql.streaming.TimeMode
-
Neither timers nor TTL is supported in this mode.
- NONE - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- NONE - Static variable in class org.apache.spark.api.java.StorageLevels
- NONE() - Static method in class org.apache.spark.scheduler.SchedulingMode
- NONE() - Static method in class org.apache.spark.storage.StorageLevel
-
Various
StorageLevel
constants and utility functions for creating new storage levels. - nonEmptyEventQueueAfterTimeoutError(long) - Static method in class org.apache.spark.errors.SparkCoreErrors
- nonFoldableArgumentError(String, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonLastMatchedClauseOmitConditionError(SqlBaseParser.MergeIntoTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- nonLastNotMatchedBySourceClauseOmitConditionError(SqlBaseParser.MergeIntoTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- nonLastNotMatchedClauseOmitConditionError(SqlBaseParser.MergeIntoTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- NonLeafStatementExec - Interface in org.apache.spark.sql.scripting
-
Non-leaf node in the execution tree.
- nonLiteralPivotValError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonLocalPaths(String, boolean) - Static method in class org.apache.spark.util.Utils
-
Return all non-local paths from a comma-separated list of paths.
- nonMapFunctionNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonnegative() - Method in class org.apache.spark.ml.recommendation.ALS
- nonnegative() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for whether to apply nonnegativity constraints.
- nonNegativeHash(Object) - Static method in class org.apache.spark.util.Utils
- nonNegativeMod(int, int) - Static method in class org.apache.spark.util.Utils
- nonPartitionColError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nonPartitionPruningPredicatesNotExpectedError(Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- NonSessionCatalogAndIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier
- NonSessionCatalogAndIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- NonSessionCatalogAndIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier$
- nonTimeWindowNotSupportedInStreamingError(Seq<String>, Seq<String>, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- nonZeroIterator() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns an iterator over all the non-zero elements of this vector.
- nonZeroIterator() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns an iterator over all the non-zero elements of this vector.
- NoopDialect - Class in org.apache.spark.sql.jdbc
-
NOOP dialect object, always returning the neutral element.
- NoopDialect() - Constructor for class org.apache.spark.sql.jdbc.NoopDialect
- noRecordsFromEmptyDataReaderError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- norm(Vector, double) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Returns the p-norm of this vector.
- norm(Vector, double) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Returns the p-norm of this vector.
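For example, in spark-shell (values illustrative):

    import org.apache.spark.ml.linalg.Vectors
    val v = Vectors.dense(3.0, 4.0)
    Vectors.norm(v, 1.0) // L1 norm: 7.0
    Vectors.norm(v, 2.0) // Euclidean (p = 2) norm: 5.0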
- NormalEquationSolver - Interface in org.apache.spark.ml.optim
-
Interface for classes that solve the normal equations locally.
- normalizeDuration(long) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
Find the best
TimeUnit
for converting milliseconds to a friendly string. - normalizePartitionSpec(Map<String, T>, StructType, String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.PartitioningUtils
-
Normalize the column names in a partition specification w.r.t. the real partition column names and case sensitivity.
- Normalizer - Class in org.apache.spark.ml.feature
-
Normalize a vector to have unit norm using the given p-norm.
- Normalizer - Class in org.apache.spark.mllib.feature
-
Normalizes samples individually to unit L^p norm.
- Normalizer() - Constructor for class org.apache.spark.ml.feature.Normalizer
- Normalizer() - Constructor for class org.apache.spark.mllib.feature.Normalizer
- Normalizer(double) - Constructor for class org.apache.spark.mllib.feature.Normalizer
- Normalizer(String) - Constructor for class org.apache.spark.ml.feature.Normalizer
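A minimal sketch of the ml.feature.Normalizer in spark-shell (column names and data are illustrative):

    import org.apache.spark.ml.feature.Normalizer
    import org.apache.spark.ml.linalg.Vectors
    val df = spark.createDataFrame(Seq((0, Vectors.dense(3.0, 4.0)))).toDF("id", "features")
    val normalizer = new Normalizer().setInputCol("features").setOutputCol("normFeatures").setP(2.0)
    normalizer.transform(df).show(false) // [3.0,4.0] becomes [0.6,0.8] under the L2 norm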
- normalizeToProbabilitiesInPlace(DenseVector) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
Normalize a vector of raw predictions to be a multinomial probability vector, in place.
- normalJavaRDD(JavaSparkContext, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.normalJavaRDD
with the default number of partitions and the default seed. - normalJavaRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.normalJavaRDD
with the default seed. - normalJavaRDD(JavaSparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of
RandomRDDs.normalRDD
. - normalJavaVectorRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.normalJavaVectorRDD
with the default number of partitions and the default seed. - normalJavaVectorRDD(JavaSparkContext, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.normalJavaVectorRDD
with the default seed. - normalJavaVectorRDD(JavaSparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of
RandomRDDs.normalVectorRDD
. - normalRDD(SparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of
i.i.d.
samples from the standard normal distribution. - normalVectorRDD(SparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing
i.i.d.
samples drawn from the standard normal distribution. - normL1() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
L1 norm of each dimension.
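As a sketch of the RandomRDDs.normalRDD entry above, runnable in spark-shell (where a SparkContext named sc is predefined; size, partition count, and seed are illustrative):

    import org.apache.spark.mllib.random.RandomRDDs
    // 1000 i.i.d. samples from the standard normal distribution, in 4 partitions, seed 42.
    val samples = RandomRDDs.normalRDD(sc, 1000L, 4, 42L)
    samples.mean() // close to 0.0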
- normL1() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
L1 norm of each column.
- normL1(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- normL1(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- normL2() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
L2 (Euclidean) norm of each dimension.
- normL2() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Euclidean magnitude of each column.
- normL2(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- normL2(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
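For the Summarizer.normL1/normL2 entries above, a spark-shell sketch (data illustrative):

    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.ml.stat.Summarizer
    import org.apache.spark.sql.functions.col
    val df = spark.createDataFrame(Seq((0, Vectors.dense(1.0, -2.0)), (1, Vectors.dense(3.0, 4.0)))).toDF("id", "features")
    // Per-dimension L1 and L2 norms aggregated over the "features" column.
    df.select(Summarizer.normL1(col("features")), Summarizer.normL2(col("features"))).show(false)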
- normPdf(double, double, double, double) - Static method in class org.apache.spark.mllib.stat.KernelDensity
-
Evaluates the PDF of a normal distribution.
- NoSuccess() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- noSuchElementError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- noSuchElementExceptionError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- noSuchFunctionError(FunctionIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchNamespaceError(String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchPartitionError(String, String, Map<String, String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchStructFieldInGivenFieldsError(String, StructField[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchTableError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchTableError(Identifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- noSuchTableError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- not(Column) - Static method in class org.apache.spark.sql.functions
-
Inversion of boolean expression, i.e. NOT.
- not(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
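A one-line sketch of functions.not in spark-shell (data illustrative):

    import org.apache.spark.sql.functions.{col, not}
    val df = Seq((1, true), (2, false)).toDF("id", "flag")
    df.filter(not(col("flag"))).show() // keeps only the row where flag is false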
- Not - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that evaluates to
true
iffchild
is evaluated tofalse
. - Not - Class in org.apache.spark.sql.sources
-
A filter that evaluates to
true
iffchild
is evaluated tofalse
. - Not(Predicate) - Constructor for class org.apache.spark.sql.connector.expressions.filter.Not
- Not(Filter) - Constructor for class org.apache.spark.sql.sources.Not
- notADatasourceRDDPartitionError(Partition) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notAllowedToAddDBPrefixForTempViewError(Seq<String>, SqlBaseParser.CreateViewContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- notAllowedToCreatePermanentViewByReferencingTempFuncError(TableIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notAllowedToCreatePermanentViewByReferencingTempVarError(TableIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notAllowedToCreatePermanentViewByReferencingTempViewError(TableIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notAllowedToCreatePermanentViewWithoutAssigningAliasForExpressionError(TableIdentifier, Attribute) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notEnoughMemoryToBuildAndBroadcastTableError(OutOfMemoryError, Seq<TableIdentifier>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notEnoughMemoryToLoadStore(String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notEqual(Object) - Method in class org.apache.spark.sql.Column
-
Inequality test.
- notExistPartitionError(Identifier, InternalRow, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notExpectedUnresolvedEncoderError(AttributeReference) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notifyPartitionCompletion(int, int) - Method in interface org.apache.spark.scheduler.TaskScheduler
- NoTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
-
No timeout.
- notNullAssertViolation(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notNullConstraintViolationArrayElementError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notNullConstraintViolationMapValueError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- notOverrideExpectedMethodsError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notPublicClassError(String) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- notPublicClassError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notSupportNonPrimitiveTypeError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notSupportTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- notUserDefinedTypeError(String, String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- now() - Static method in class org.apache.spark.sql.functions
-
Returns the current timestamp at the start of query evaluation.
- nth_value(Column, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is the
offset
th row of the window frame (counting from 1), and null
if the size of the window frame is less than offset
rows. - nth_value(Column, int, boolean) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is the
offset
th row of the window frame (counting from 1), and null
if the size of the window frame is less than offset
rows. - ntile(int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the ntile group id (from 1 to
n
inclusive) in an ordered window partition. - NULL - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
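A hedged sketch of the nth_value and ntile window functions in spark-shell (data and window spec are illustrative):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, nth_value, ntile}
    val df = Seq(("a", 1), ("a", 2), ("a", 3), ("a", 4)).toDF("k", "v")
    val w = Window.partitionBy("k").orderBy("v")
    // ntile(2) splits the 4 rows into group ids 1,1,2,2; nth_value(v, 2) is null on the
    // first row because the default frame has fewer than 2 rows there.
    df.select(col("v"), ntile(2).over(w), nth_value(col("v"), 2).over(w)).show()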
- NULL - Static variable in class org.apache.spark.types.variant.VariantUtil
- nullable() - Method in class org.apache.spark.sql.avro.SchemaConverters.SchemaType
- nullable() - Method in class org.apache.spark.sql.catalog.Column
- nullable() - Method in interface org.apache.spark.sql.connector.catalog.Column
-
Returns true if this column may produce null values.
- nullable() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnNullability
- nullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns true when the UDF can return a nullable value.
- nullable() - Method in class org.apache.spark.sql.types.StructField
- nullableColumnOrFieldError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nullableRowIdError(Seq<AttributeReference>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nullArgumentError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nullAsMapKeyNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- nullCount() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
- nullDataSourceOption(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- nullDeviance() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- nullHypothesis() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- nullHypothesis() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
- nullHypothesis() - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod
- nullHypothesis() - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- nullHypothesis() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
Null hypothesis of the test.
- nullHypothesis() - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- NullHypothesis$() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
- NullHypothesis$() - Constructor for class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
- nullif(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if
col1
equals col2
, or col1
otherwise. - nullifzero(Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if
col
is equal to zero, or col otherwise.
otherwise. - nullLiteralsCannotBeCastedError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- nullOrdering() - Method in interface org.apache.spark.sql.connector.expressions.SortOrder
-
Returns the null ordering.
- NullOrdering - Enum Class in org.apache.spark.sql.connector.expressions
-
A null order used in sorting expressions.
- NULLS_FIRST - Enum constant in enum class org.apache.spark.sql.connector.expressions.NullOrdering
- NULLS_LAST - Enum constant in enum class org.apache.spark.sql.connector.expressions.NullOrdering
- nullSQLStringExecuteImmediate(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- NullType - Class in org.apache.spark.sql.types
-
The data type representing
NULL
values. - NullType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the NullType object.
- NullType() - Constructor for class org.apache.spark.sql.types.NullType
- NUM_ACTIVE_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- NUM_ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- NUM_CACHED_PARTITIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- NUM_COMPLETE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_COMPLETED_INDICES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_COMPLETED_INDICES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_COMPLETED_JOBS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- NUM_COMPLETED_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- NUM_COMPLETED_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_COMPLETED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_COMPLETED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- NUM_FAILED_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- NUM_FAILED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_INPUT_ROWS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- NUM_KILLED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_KILLED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- NUM_KILLED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_OUTPUT_ROWS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- NUM_PARTITIONS() - Static method in class org.apache.spark.ui.UIWorkloadGenerator
- NUM_PARTITIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- NUM_ROWS_DROPPED_BY_WATERMARK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_ROWS_REMOVED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_ROWS_TOTAL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_ROWS_UPDATED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_SHUFFLE_PARTITIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_SKIPPED_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_SKIPPED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_STATE_STORE_INSTANCES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- NUM_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- NUM_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- NUM_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- NUM_VALUES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- numAccums() - Static method in class org.apache.spark.util.AccumulatorContext
-
Returns the number of accumulators registered.
- numActiveBatches() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numActiveOutputOps() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- numActiveReceivers() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numActives() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- numActives() - Method in class org.apache.spark.ml.linalg.DenseVector
- numActives() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Find the number of values stored explicitly.
- numActives() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- numActives() - Method in class org.apache.spark.ml.linalg.SparseVector
- numActives() - Method in interface org.apache.spark.ml.linalg.Vector
-
Number of active entries.
- numActives() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- numActives() - Method in class org.apache.spark.mllib.linalg.DenseVector
- numActives() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Find the number of values stored explicitly.
- numActives() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- numActives() - Method in class org.apache.spark.mllib.linalg.SparseVector
- numActives() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Number of active entries.
- numActiveStages() - Method in class org.apache.spark.status.api.v1.JobData
- numActiveTasks() - Method in interface org.apache.spark.SparkStageInfo
- numActiveTasks() - Method in class org.apache.spark.SparkStageInfoImpl
- numActiveTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numActiveTasks() - Method in class org.apache.spark.status.api.v1.SpeculationStageSummary
- numActiveTasks() - Method in class org.apache.spark.status.api.v1.StageData
- numActiveTasks() - Method in class org.apache.spark.status.LiveSpeculationStageSummary
- numAttributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
- numAvailableMapOutputs() - Method in class org.apache.spark.ShuffleStatus
-
Number of partitions that have shuffle map outputs.
- numAvailableMergeResults() - Method in class org.apache.spark.ShuffleStatus
-
Number of shuffle partitions that have already been merge-finalized when push-based shuffle is enabled.
- numberAndSizeOfPartitionsNotAllowedTogether() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- numberOfPartitionsNotAllowedWithUnspecifiedDistributionError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- numBins() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
Param for the number of bins to down-sample the curves (ROC curve, PR curve) in area computation.
- numBins() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
- numBuckets() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- numBuckets() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
-
Number of buckets (quantiles, or categories) into which data points are grouped.
- numBucketsArray() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- numBucketsArray() - Method in interface org.apache.spark.ml.feature.QuantileDiscretizerBase
-
Array of number of buckets (quantiles, or categories) into which data points are grouped.
- numCachedPartitions() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- numCachedPartitions() - Method in class org.apache.spark.storage.RDDInfo
- numCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
- numCategories() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
- numClasses() - Method in class org.apache.spark.ml.classification.ClassificationModel
-
Number of classes (values which the label can take).
- numClasses() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- numClasses() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- numClasses() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- numClasses() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- numClasses() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- numClasses() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- numClasses() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- numClasses() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- numClasses() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- numClasses() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- numClasses() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- numColBlocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- numCols() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- numCols() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Number of columns.
- numCols() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- numCols() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- numCols() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- numCols() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Gets or computes the number of columns.
- numCols() - Method in interface org.apache.spark.mllib.linalg.distributed.DistributedMatrix
-
Gets or computes the number of columns.
- numCols() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
- numCols() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Gets or computes the number of columns.
- numCols() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Number of columns.
- numCols() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- numCols() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Returns the number of columns that make up this batch.
- numCompletedIndices() - Method in class org.apache.spark.status.api.v1.JobData
- numCompletedIndices() - Method in class org.apache.spark.status.api.v1.StageData
- numCompletedOutputOps() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- numCompletedStages() - Method in class org.apache.spark.status.api.v1.JobData
- numCompletedTasks() - Method in interface org.apache.spark.SparkStageInfo
- numCompletedTasks() - Method in class org.apache.spark.SparkStageInfoImpl
- numCompletedTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numCompletedTasks() - Method in class org.apache.spark.status.api.v1.SpeculationStageSummary
- numCompletedTasks() - Method in class org.apache.spark.status.LiveSpeculationStageSummary
- numCompleteTasks() - Method in class org.apache.spark.status.api.v1.StageData
- numDocs() - Method in class org.apache.spark.ml.feature.IDFModel
-
Returns the number of documents evaluated to compute idf.
- numDocs() - Method in class org.apache.spark.mllib.feature.IDFModel
- numEdges() - Method in class org.apache.spark.graphx.GraphOps
- numElements() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- numElements() - Method in class org.apache.spark.sql.vectorized.ColumnarMap
- Numeric() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Numeric type.
- NumericAttribute - Class in org.apache.spark.ml.attribute
-
A numeric attribute with optional summary statistics.
- NumericHistogram - Class in org.apache.spark.sql.util
-
A generic, re-usable histogram class that supports partial aggregations.
- NumericHistogram() - Constructor for class org.apache.spark.sql.util.NumericHistogram
-
Creates a new histogram object.
- NumericHistogram.Coord - Class in org.apache.spark.sql.util
-
The Coord class defines a histogram bin, which is just an (x,y) pair.
- NumericParser - Class in org.apache.spark.mllib.util
-
Simple parser for a numeric structure consisting of three types: number, array, and tuple.
- NumericParser() - Constructor for class org.apache.spark.mllib.util.NumericParser
- numericPrecedence() - Static method in class org.apache.spark.sql.types.UpCastRule
- numericRDDToDoubleRDDFunctions(RDD<T>, Numeric<T>) - Static method in class org.apache.spark.rdd.RDD
- NumericType - Class in org.apache.spark.sql.types
-
Numeric data types.
- NumericType() - Constructor for class org.apache.spark.sql.types.NumericType
- NumericTypeExpression - Class in org.apache.spark.sql.types
- NumericTypeExpression() - Constructor for class org.apache.spark.sql.types.NumericTypeExpression
- numFailedOutputOps() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- numFailedStages() - Method in class org.apache.spark.status.api.v1.JobData
- numFailedTasks() - Method in interface org.apache.spark.SparkStageInfo
- numFailedTasks() - Method in class org.apache.spark.SparkStageInfoImpl
- numFailedTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numFailedTasks() - Method in class org.apache.spark.status.api.v1.SpeculationStageSummary
- numFailedTasks() - Method in class org.apache.spark.status.api.v1.StageData
- numFailedTasks() - Method in class org.apache.spark.status.LiveSpeculationStageSummary
- numFeatures() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- numFeatures() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- numFeatures() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- numFeatures() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- numFeatures() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- numFeatures() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- numFeatures() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- numFeatures() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- numFeatures() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- numFeatures() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- numFeatures() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- numFeatures() - Method in class org.apache.spark.ml.clustering.KMeansModel
- numFeatures() - Method in class org.apache.spark.ml.feature.FeatureHasher
- numFeatures() - Method in class org.apache.spark.ml.feature.HashingTF
- numFeatures() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- numFeatures() - Method in interface org.apache.spark.ml.param.shared.HasNumFeatures
-
Param for the number of features.
- numFeatures() - Method in class org.apache.spark.ml.PredictionModel
-
Returns the number of features the model was trained on.
- numFeatures() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- numFeatures() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- numFeatures() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- numFeatures() - Method in class org.apache.spark.mllib.feature.HashingTF
- numFields() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- numFields() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- numFolds() - Method in class org.apache.spark.ml.tuning.CrossValidator
- numFolds() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- numFolds() - Method in interface org.apache.spark.ml.tuning.CrossValidatorParams
-
Param for number of folds for cross validation.
- numHashTables() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- numHashTables() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- numHashTables() - Method in interface org.apache.spark.ml.feature.LSHParams
-
Param for the number of hash tables used in LSH OR-amplification.
- numInactiveReceivers() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numInputRows() - Method in class org.apache.spark.sql.streaming.SourceProgress
- numInputRows() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The aggregate (across all sources) number of records processed in a trigger.
- numInstances() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- numInstances() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- numItemBlocks() - Method in class org.apache.spark.ml.recommendation.ALS
- numItemBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for number of item blocks (positive).
- numIter() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- numIterations() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- numIterations() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- numKilledTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numKilledTasks() - Method in class org.apache.spark.status.api.v1.SpeculationStageSummary
- numKilledTasks() - Method in class org.apache.spark.status.api.v1.StageData
- numKilledTasks() - Method in class org.apache.spark.status.LiveSpeculationStageSummary
- numLeave() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
- numLocalityAwareTasksPerResourceProfileId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
- numMergersNeeded() - Method in class org.apache.spark.storage.BlockManagerMessages.GetShufflePushMergerLocations
- numNodes() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Number of nodes in the tree, including leaf nodes.
- numNodes() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Get the number of nodes in the tree, including leaf nodes.
- numNonzeros() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- numNonzeros() - Method in class org.apache.spark.ml.linalg.DenseVector
- numNonzeros() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Find the number of non-zero active values.
- numNonzeros() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- numNonzeros() - Method in class org.apache.spark.ml.linalg.SparseVector
- numNonzeros() - Method in interface org.apache.spark.ml.linalg.Vector
-
Number of nonzero elements.
- numNonzeros() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- numNonzeros() - Method in class org.apache.spark.mllib.linalg.DenseVector
- numNonzeros() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Find the number of non-zero active values.
- numNonzeros() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- numNonzeros() - Method in class org.apache.spark.mllib.linalg.SparseVector
- numNonzeros() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Number of nonzero elements.
- numNonzeros() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Number of nonzero elements in each dimension.
- numNonzeros() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Number of nonzero elements (including explicitly presented zero values) in each column.
- numNonZeros(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- numNonZeros(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- numNulls() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
- numNulls() - Method in class org.apache.spark.sql.vectorized.ColumnVector
-
Returns the number of nulls in this column vector.
- numOutputRows() - Method in class org.apache.spark.sql.streaming.SinkProgress
- numPartitions() - Method in class org.apache.spark.BarrierTaskContext
- numPartitions() - Method in class org.apache.spark.HashPartitioner
- numPartitions() - Method in class org.apache.spark.ml.feature.Word2Vec
- numPartitions() - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
Number of partitions for sentences of words.
- numPartitions() - Method in class org.apache.spark.ml.feature.Word2VecModel
- numPartitions() - Method in class org.apache.spark.ml.fpm.FPGrowth
- numPartitions() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- numPartitions() - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
-
Number of partitions (at least 1) used by parallel FP-growth.
- numPartitions() - Method in class org.apache.spark.Partitioner
- numPartitions() - Method in class org.apache.spark.RangePartitioner
- numPartitions() - Method in class org.apache.spark.rdd.PartitionGroup
- numPartitions() - Method in class org.apache.spark.sql.connector.read.partitioning.KeyGroupedPartitioning
- numPartitions() - Method in interface org.apache.spark.sql.connector.read.partitioning.Partitioning
-
Returns the number of partitions that the data is split across.
- numPartitions() - Method in class org.apache.spark.sql.connector.read.partitioning.UnknownPartitioning
- numPartitions() - Method in interface org.apache.spark.sql.connector.write.PhysicalWriteInfo
-
The number of partitions of the input data that is going to be written.
- numPartitions() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- numPartitions() - Method in class org.apache.spark.storage.RDDInfo
- numPartitions() - Method in class org.apache.spark.TaskContext
-
Total number of partitions in the stage that this task belongs to.
- numPartitions(int) - Method in class org.apache.spark.streaming.StateSpec
-
Set the number of partitions by which the state RDDs generated by
mapWithState
will be partitioned. - numPartitionsGreaterThanMaxNumConcurrentTasksError(int, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
- numProcessedRecords() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numReceivedRecords() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numReceivers() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numRecords() - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
- numRecords() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
The number of records received by the receivers in this batch.
- numRecords() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
- numRetainedCompletedBatches() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numRowBlocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- numRows() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- numRows() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Number of rows.
- numRows() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- numRows() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- numRows() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- numRows() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Gets or computes the number of rows.
- numRows() - Method in interface org.apache.spark.mllib.linalg.distributed.DistributedMatrix
-
Gets or computes the number of rows.
- numRows() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
- numRows() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Gets or computes the number of rows.
- numRows() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Number of rows.
- numRows() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- numRows() - Method in interface org.apache.spark.sql.columnar.CachedBatch
- numRows() - Method in interface org.apache.spark.sql.connector.read.HasPartitionStatistics
-
Returns the number of rows in the partition statistics associated with this partition.
- numRows() - Method in interface org.apache.spark.sql.connector.read.Statistics
- numRows() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Returns the number of rows to read, including filtered rows.
- numRowsDroppedByWatermark() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numRowsRemoved() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numRowsTotal() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numRowsUpdated() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numRunningTasks() - Method in interface org.apache.spark.SparkExecutorInfo
- numRunningTasks() - Method in class org.apache.spark.SparkExecutorInfoImpl
- numShufflePartitions() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numSkippedStages() - Method in class org.apache.spark.status.api.v1.JobData
- numSkippedTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numSpilledStages() - Method in class org.apache.spark.SpillListener
- numStateStoreInstances() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- numStreamBlocks() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
- numTasks() - Method in class org.apache.spark.scheduler.StageInfo
- numTasks() - Method in interface org.apache.spark.SparkStageInfo
- numTasks() - Method in class org.apache.spark.SparkStageInfoImpl
- numTasks() - Method in class org.apache.spark.status.api.v1.JobData
- numTasks() - Method in class org.apache.spark.status.api.v1.SpeculationStageSummary
- numTasks() - Method in class org.apache.spark.status.api.v1.StageData
- numTasks() - Method in class org.apache.spark.status.LiveSpeculationStageSummary
- numTopFeatures() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- numTopFeatures() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- numTopFeatures() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
Number of features that the selector will select, ordered by ascending p-value.
- numTopFeatures() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- numTotalCompletedBatches() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- numTotalOutputOps() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- numTrees() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- numTrees() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- numTrees() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- numTrees() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- numTrees() - Method in interface org.apache.spark.ml.tree.RandomForestParams
-
Number of trees to train (at least 1).
- numTrees() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Get the number of trees in the ensemble.
- numUserBlocks() - Method in class org.apache.spark.ml.recommendation.ALS
- numUserBlocks() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for number of user blocks (positive).
- numValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- numVertices() - Method in class org.apache.spark.graphx.GraphOps
- nvl(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns
col2
ifcol1
is null, orcol1
otherwise. - nvl2(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns
col2
ifcol1
is not null, orcol3
otherwise.
O
- OBJECT - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- OBJECT - Static variable in class org.apache.spark.types.variant.VariantUtil
- ObjectField(String, Variant) - Constructor for class org.apache.spark.types.variant.Variant.ObjectField
- objectFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
- objectFile(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
- objectFile(String, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
- objectHeader(boolean, int, int) - Static method in class org.apache.spark.types.variant.VariantUtil
- objectiveHistory() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.FMClassificationTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.LinearSVCTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.LogisticRegressionTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationTrainingSummaryImpl
- objectiveHistory() - Method in class org.apache.spark.ml.classification.RandomForestClassificationTrainingSummaryImpl
- objectiveHistory() - Method in interface org.apache.spark.ml.classification.TrainingSummary
-
Objective function (scaled loss + regularization) at each iteration.
- objectiveHistory() - Method in class org.apache.spark.ml.regression.LinearRegressionTrainingSummary
- objectName() - Method in interface org.apache.spark.QueryContext
- objectSize() - Method in class org.apache.spark.types.variant.Variant
- ObjectStreamClassMethods(ObjectStreamClass) - Constructor for class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
- ObjectStreamClassMethods$() - Constructor for class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods$
- objectType() - Method in interface org.apache.spark.QueryContext
- ObjectType - Class in org.apache.spark.sql.types
- ObjectType(Class<?>) - Constructor for class org.apache.spark.sql.types.ObjectType
- Observation - Class in org.apache.spark.sql
-
Helper class to simplify usage of
Dataset.observe(String, Column, Column*)
. - Observation() - Constructor for class org.apache.spark.sql.Observation
-
Create an Observation with a random name.
- Observation(String) - Constructor for class org.apache.spark.sql.Observation
- observe(String, Column, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Define (named) metrics to observe on the Dataset.
- observe(String, Column, Column...) - Method in class org.apache.spark.sql.Dataset
- observe(String, Column, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Define (named) metrics to observe on the Dataset.
- observe(String, Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- observe(Observation, Column, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Observe (named) metrics through an
org.apache.spark.sql.Observation
instance. - observe(Observation, Column, Column...) - Method in class org.apache.spark.sql.Dataset
- observe(Observation, Column, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Observe (named) metrics through an
org.apache.spark.sql.Observation
instance. - observe(Observation, Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- OBSERVED_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- observedMetrics() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- obtainDelegationTokens(Configuration, SparkConf, Credentials) - Method in interface org.apache.spark.security.HadoopDelegationTokenProvider
-
Obtain delegation tokens for this service and get the time of the next renewal.
- octet_length(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the byte length for the specified string column.
- ocvTypes() - Static method in class org.apache.spark.ml.image.ImageSchema
-
(Scala-specific) Supported OpenCV type mapping.
- of(long[]) - Static method in class org.apache.spark.shuffle.api.metadata.MapOutputCommitMessage
- of(long[], MapOutputMetadata) - Static method in class org.apache.spark.shuffle.api.metadata.MapOutputCommitMessage
- of(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.Identifier
- of(JavaRDD<? extends Product>) - Static method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Creates a
RankingMetrics
instance (for Java users). - of(RDD<Tuple2<Object, Object>>) - Static method in class org.apache.spark.mllib.evaluation.AreaUnderCurve
-
Returns the area under the given curve.
- of(Iterable<Tuple2<Object, Object>>) - Static method in class org.apache.spark.mllib.evaluation.AreaUnderCurve
-
Returns the area under the given curve.
- of(T) - Static method in class org.apache.spark.api.java.Optional
- OFF_HEAP - Enum constant in enum class org.apache.spark.storage.StorageLevelMapper
- OFF_HEAP - Static variable in class org.apache.spark.api.java.StorageLevels
- OFF_HEAP() - Static method in class org.apache.spark.storage.StorageLevel
- OFF_HEAP_MEMORY_REMAINING_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- OFF_HEAP_MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- OFFHEAP_MEM() - Static method in class org.apache.spark.resource.ResourceProfile
-
Built-in executor resource: offHeap.
- OffHeapExecutionMemory - Class in org.apache.spark.metrics
- OffHeapExecutionMemory() - Constructor for class org.apache.spark.metrics.OffHeapExecutionMemory
- offHeapMemory(String) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Specify off-heap memory.
- offHeapMemoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- offHeapMemoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- OffHeapStorageMemory - Class in org.apache.spark.metrics
- OffHeapStorageMemory() - Constructor for class org.apache.spark.metrics.OffHeapStorageMemory
- OffHeapUnifiedMemory - Class in org.apache.spark.metrics
- OffHeapUnifiedMemory() - Constructor for class org.apache.spark.metrics.OffHeapUnifiedMemory
- offHeapUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
- offset(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by skipping the first
n
rows. - offset(int) - Method in class org.apache.spark.sql.Dataset
- Offset - Class in org.apache.spark.sql.connector.read.streaming
-
An abstract representation of progress through a
MicroBatchStream
orContinuousStream
. - Offset() - Constructor for class org.apache.spark.sql.connector.read.streaming.Offset
- offsetBytes(String, long, long, long) - Static method in class org.apache.spark.util.Utils
-
Return a string containing part of a file from byte 'start' to 'end'.
- offsetBytes(Seq<File>, Seq<Object>, long, long) - Static method in class org.apache.spark.util.Utils
-
Return a string containing data across a set of files.
- offsetCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- offsetCol() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for offset column name.
- offsetCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- ofNullable(T) - Static method in class org.apache.spark.api.java.Optional
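A short sketch of org.apache.spark.api.java.Optional used from Scala (values illustrative):

    import org.apache.spark.api.java.Optional
    Optional.of("x").get                        // "x"; of(null) would throw
    Optional.absent[String]().isPresent         // false
    Optional.ofNullable(null: String).isPresent // false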
- ofRows(SparkSession, LogicalPlan) - Static method in class org.apache.spark.sql.Dataset
- ofRows(SparkSession, LogicalPlan, QueryPlanningTracker, ShuffleCleanupMode) - Static method in class org.apache.spark.sql.Dataset
-
A variant of ofRows that allows passing in a tracker so we can track query parsing time.
- ofRows(SparkSession, LogicalPlan, ShuffleCleanupMode) - Static method in class org.apache.spark.sql.Dataset
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- on(Function1<U, T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- ON_HEAP_MEMORY_REMAINING_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- ON_HEAP_MEMORY_USED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- onAddData(Object, Object) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
-
Called after a data item is added into the BlockGenerator.
- onApplicationEnd(SparkListenerApplicationEnd) - Method in class org.apache.spark.scheduler.SparkListener
- onApplicationEnd(SparkListenerApplicationEnd) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the application ends
- onApplicationEnd(SparkListenerApplicationEnd) - Method in class org.apache.spark.SparkFirehoseListener
- onApplicationStart(SparkListenerApplicationStart) - Method in class org.apache.spark.scheduler.SparkListener
- onApplicationStart(SparkListenerApplicationStart) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the application starts
- onApplicationStart(SparkListenerApplicationStart) - Method in class org.apache.spark.SparkFirehoseListener
- onBatchCompleted(JavaStreamingListenerBatchCompleted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when processing of a batch of jobs has completed.
- onBatchCompleted(StreamingListenerBatchCompleted) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
- onBatchCompleted(StreamingListenerBatchCompleted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when processing of a batch of jobs has completed.
- onBatchStarted(JavaStreamingListenerBatchStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when processing of a batch of jobs has started.
- onBatchStarted(StreamingListenerBatchStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when processing of a batch of jobs has started.
- onBatchSubmitted(JavaStreamingListenerBatchSubmitted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when a batch of jobs has been submitted for processing.
- onBatchSubmitted(StreamingListenerBatchSubmitted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when a batch of jobs has been submitted for processing.
- onBlockManagerAdded(SparkListenerBlockManagerAdded) - Method in class org.apache.spark.scheduler.SparkListener
- onBlockManagerAdded(SparkListenerBlockManagerAdded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a new block manager has joined
- onBlockManagerAdded(SparkListenerBlockManagerAdded) - Method in class org.apache.spark.SparkFirehoseListener
- onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - Method in class org.apache.spark.scheduler.SparkListener
- onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when an existing block manager has been removed
- onBlockManagerRemoved(SparkListenerBlockManagerRemoved) - Method in class org.apache.spark.SparkFirehoseListener
- onBlockUpdated(SparkListenerBlockUpdated) - Method in class org.apache.spark.scheduler.SparkListener
- onBlockUpdated(SparkListenerBlockUpdated) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver receives a block update info.
- onBlockUpdated(SparkListenerBlockUpdated) - Method in class org.apache.spark.SparkFirehoseListener
- Once() - Static method in class org.apache.spark.sql.streaming.Trigger
-
Deprecated. This is deprecated as of Spark 3.4.0. Use Trigger.AvailableNow() to leverage better guarantee of processing, fine-grained scale of batches, and better gradual processing of watermark advancement including no-data batch. See the NOTES in Trigger.AvailableNow() for details.
- OnceParser(Function1<Reader<Object>, Parsers.ParseResult<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
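A hedged sketch of the migration suggested by the Trigger.Once() deprecation note above; the streaming Dataset `events` and the paths are illustrative:

    // Instead of the deprecated Trigger.Once():
    events.writeStream()
          .format("parquet")
          .option("path", "/tmp/out")                  // illustrative output path
          .option("checkpointLocation", "/tmp/ckpt")   // illustrative checkpoint
          .trigger(Trigger.AvailableNow())             // process all available data, then stop
          .start();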
- onceStrategyIdempotenceIsBrokenForBatchError(String, TreeType, TreeType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- onComplete(TaskContext) - Method in class org.apache.spark.storage.ShuffleFetchCompletionListener
- onComplete(Function1<R, BoxedUnit>) - Method in class org.apache.spark.partial.PartialResult
-
Set a handler to be called when this PartialResult completes.
- onComplete(Function1<Try<T>, U>, ExecutionContext) - Method in class org.apache.spark.ComplexFutureAction
- onComplete(Function1<Try<T>, U>, ExecutionContext) - Method in interface org.apache.spark.FutureAction
-
When this action is completed, either through an exception, or a value, applies the provided function.
- onComplete(Function1<Try<T>, U>, ExecutionContext) - Method in class org.apache.spark.SimpleFutureAction
- onDataWriterCommit(WriterCommitMessage) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
Handles a commit message on receiving from a successful data writer.
- one() - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- one() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- one() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- one() - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- one() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- one() - Static method in class org.apache.spark.sql.types.LongExactNumeric
- one() - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- ONE_ENTIRE_RESOURCE() - Static method in class org.apache.spark.resource.ResourceAmountUtils
-
Using "double" to do the resource calculation may encounter a problem of precision loss.
- OneHotEncoder - Class in org.apache.spark.ml.feature
-
A one-hot encoder that maps a column of category indices to a column of binary vectors, with at most a single one-value per row that indicates the input category index.
- OneHotEncoder() - Constructor for class org.apache.spark.ml.feature.OneHotEncoder
- OneHotEncoder(String) - Constructor for class org.apache.spark.ml.feature.OneHotEncoder
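A minimal sketch of the OneHotEncoder described above; the column names and the input DataFrame df are illustrative:

    OneHotEncoder encoder = new OneHotEncoder()
        .setInputCol("categoryIndex")   // column of category indices
        .setOutputCol("categoryVec");   // column of one-hot binary vectors
    Dataset<Row> encoded = encoder.fit(df).transform(df);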
- OneHotEncoderBase - Interface in org.apache.spark.ml.feature
-
Private trait for params and common methods for OneHotEncoder and OneHotEncoderModel
- OneHotEncoderCommon - Class in org.apache.spark.ml.feature
-
Provides some helper methods used by OneHotEncoder.
- OneHotEncoderCommon() - Constructor for class org.apache.spark.ml.feature.OneHotEncoderCommon
- OneHotEncoderModel - Class in org.apache.spark.ml.feature
-
param: categorySizes Original number of categories for each feature being encoded.
- onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - Method in class org.apache.spark.scheduler.SparkListener
- onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when environment properties have been updated
- onEnvironmentUpdate(SparkListenerEnvironmentUpdate) - Method in class org.apache.spark.SparkFirehoseListener
- onError(String, Throwable) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
-
Called when an error has occurred in the BlockGenerator.
- ones(int, int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of ones.
- ones(int, int) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a DenseMatrix consisting of ones.
- ones(int, int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of ones.
- ones(int, int) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a DenseMatrix consisting of ones.
- OneSampleTwoSided() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
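A one-line sketch of the ones factory methods listed above, using the ml.linalg variant:

    // 2x3 dense matrix in which every entry is 1.0
    DenseMatrix m = DenseMatrix.ones(2, 3);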
- OneToOneDependency<T> - Class in org.apache.spark
-
:: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs.
- OneToOneDependency(RDD<T>) - Constructor for class org.apache.spark.OneToOneDependency
- onEvent(SparkListenerEvent) - Method in class org.apache.spark.SparkFirehoseListener
- OneVsRest - Class in org.apache.spark.ml.classification
-
Reduction of Multiclass Classification to Binary Classification.
- OneVsRest() - Constructor for class org.apache.spark.ml.classification.OneVsRest
- OneVsRest(String) - Constructor for class org.apache.spark.ml.classification.OneVsRest
- OneVsRestModel - Class in org.apache.spark.ml.classification
-
Model produced by OneVsRest.
- OneVsRestParams - Interface in org.apache.spark.ml.classification
-
Params for OneVsRest.
- onExecutorAdded(SparkListenerExecutorAdded) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorAdded(SparkListenerExecutorAdded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver registers a new executor.
- onExecutorAdded(SparkListenerExecutorAdded) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onExecutorExcluded instead. Since 3.1.0.
- onExecutorBlacklisted(SparkListenerExecutorBlacklisted) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onExecutorExcludedForStage instead. Since 3.1.0.
- onExecutorBlacklistedForStage(SparkListenerExecutorBlacklistedForStage) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorExcluded(SparkListenerExecutorExcluded) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorExcluded(SparkListenerExecutorExcluded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver excludes an executor for a Spark application.
- onExecutorExcluded(SparkListenerExecutorExcluded) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorExcludedForStage(SparkListenerExecutorExcludedForStage) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorExcludedForStage(SparkListenerExecutorExcludedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver excludes an executor for a stage.
- onExecutorExcludedForStage(SparkListenerExecutorExcludedForStage) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver receives task metrics from an executor in a heartbeat.
- onExecutorMetricsUpdate(SparkListenerExecutorMetricsUpdate) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorRemoved(SparkListenerExecutorRemoved) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorRemoved(SparkListenerExecutorRemoved) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver removes an executor.
- onExecutorRemoved(SparkListenerExecutorRemoved) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onExecutorUnexcluded instead. Since 3.1.0.
- onExecutorUnblacklisted(SparkListenerExecutorUnblacklisted) - Method in class org.apache.spark.SparkFirehoseListener
- onExecutorUnexcluded(SparkListenerExecutorUnexcluded) - Method in class org.apache.spark.scheduler.SparkListener
- onExecutorUnexcluded(SparkListenerExecutorUnexcluded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver re-enables a previously excluded executor.
- onExecutorUnexcluded(SparkListenerExecutorUnexcluded) - Method in class org.apache.spark.SparkFirehoseListener
- onFail(Function1<Exception, BoxedUnit>) - Method in class org.apache.spark.partial.PartialResult
-
Set a handler to be called if this PartialResult's job fails.
- onFailure(String, QueryExecution, Exception) - Method in interface org.apache.spark.sql.util.QueryExecutionListener
-
A callback function that will be called when a query execution failed.
- onGenerateBlock(StreamBlockId) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
-
Called when a new block of data is generated by the block generator.
- OnHeapExecutionMemory - Class in org.apache.spark.metrics
- OnHeapExecutionMemory() - Constructor for class org.apache.spark.metrics.OnHeapExecutionMemory
- onHeapMemoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- onHeapMemoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
- OnHeapStorageMemory - Class in org.apache.spark.metrics
- OnHeapStorageMemory() - Constructor for class org.apache.spark.metrics.OnHeapStorageMemory
- OnHeapUnifiedMemory - Class in org.apache.spark.metrics
- OnHeapUnifiedMemory() - Constructor for class org.apache.spark.metrics.OnHeapUnifiedMemory
- onHeapUsed() - Method in class org.apache.spark.status.LiveRDDDistribution
- onJobEnd(SparkListenerJobEnd) - Method in class org.apache.spark.scheduler.SparkListener
- onJobEnd(SparkListenerJobEnd) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a job ends
- onJobEnd(SparkListenerJobEnd) - Method in class org.apache.spark.SparkFirehoseListener
- onJobStart(SparkListenerJobStart) - Method in class org.apache.spark.scheduler.SparkListener
- onJobStart(SparkListenerJobStart) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a job starts
- onJobStart(SparkListenerJobStart) - Method in class org.apache.spark.SparkFirehoseListener
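The onJob* callbacks above are typically used by subclassing SparkListener; a minimal sketch, where the logging and the existing SparkSession `spark` are illustrative:

    SparkListener listener = new SparkListener() {
      @Override
      public void onJobStart(SparkListenerJobStart jobStart) {
        System.out.println("Job started: " + jobStart.jobId());
      }
      @Override
      public void onJobEnd(SparkListenerJobEnd jobEnd) {
        System.out.println("Job ended: " + jobEnd.jobId());
      }
    };
    spark.sparkContext().addSparkListener(listener);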
- OnlineLDAOptimizer - Class in org.apache.spark.mllib.clustering
-
An online optimizer for LDA.
- OnlineLDAOptimizer() - Constructor for class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
- onlySupportDataSourcesProvidingFileFormatError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onNodeExcluded instead. Since 3.1.0.
- onNodeBlacklisted(SparkListenerNodeBlacklisted) - Method in class org.apache.spark.SparkFirehoseListener
- onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onNodeExcludedForStage instead. Since 3.1.0.
- onNodeBlacklistedForStage(SparkListenerNodeBlacklistedForStage) - Method in class org.apache.spark.SparkFirehoseListener
- onNodeExcluded(SparkListenerNodeExcluded) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeExcluded(SparkListenerNodeExcluded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver excludes a node for a Spark application.
- onNodeExcluded(SparkListenerNodeExcluded) - Method in class org.apache.spark.SparkFirehoseListener
- onNodeExcludedForStage(SparkListenerNodeExcludedForStage) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeExcludedForStage(SparkListenerNodeExcludedForStage) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver excludes a node for a stage.
- onNodeExcludedForStage(SparkListenerNodeExcludedForStage) - Method in class org.apache.spark.SparkFirehoseListener
- onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Deprecated. Use onNodeUnexcluded instead. Since 3.1.0.
- onNodeUnblacklisted(SparkListenerNodeUnblacklisted) - Method in class org.apache.spark.SparkFirehoseListener
- onNodeUnexcluded(SparkListenerNodeUnexcluded) - Method in class org.apache.spark.scheduler.SparkListener
- onNodeUnexcluded(SparkListenerNodeUnexcluded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when the driver re-enables a previously excluded node.
- onNodeUnexcluded(SparkListenerNodeUnexcluded) - Method in class org.apache.spark.SparkFirehoseListener
- onOtherEvent(SparkListenerEvent) - Method in class org.apache.spark.scheduler.SparkListener
- onOtherEvent(SparkListenerEvent) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when other events like SQL-specific events are posted.
- onOtherEvent(SparkListenerEvent) - Method in class org.apache.spark.SparkFirehoseListener
- onOutputOperationCompleted(JavaStreamingListenerOutputOperationCompleted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when processing of a job of a batch has completed.
- onOutputOperationCompleted(StreamingListenerOutputOperationCompleted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when processing of a job of a batch has completed.
- onOutputOperationStarted(JavaStreamingListenerOutputOperationStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when processing of a job of a batch has started.
- onOutputOperationStarted(StreamingListenerOutputOperationStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when processing of a job of a batch has started.
- onParamChange(Param<?>) - Method in interface org.apache.spark.ml.param.Params
- onPushBlock(StreamBlockId, ArrayBuffer<?>) - Method in interface org.apache.spark.streaming.receiver.BlockGeneratorListener
-
Called when a new block is ready to be pushed.
- onQueryIdle(StreamingQueryListener.QueryIdleEvent) - Method in interface org.apache.spark.sql.streaming.PythonStreamingQueryListener
- onQueryIdle(StreamingQueryListener.QueryIdleEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
-
Called when the query is idle and waiting for new data to process.
- onQueryProgress(StreamingQueryListener.QueryProgressEvent) - Method in interface org.apache.spark.sql.streaming.PythonStreamingQueryListener
- onQueryProgress(StreamingQueryListener.QueryProgressEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
-
Called when there is some status update (ingestion rate updated, etc.)
- onQueryStarted(StreamingQueryListener.QueryStartedEvent) - Method in interface org.apache.spark.sql.streaming.PythonStreamingQueryListener
- onQueryStarted(StreamingQueryListener.QueryStartedEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
-
Called when a query is started.
- onQueryTerminated(StreamingQueryListener.QueryTerminatedEvent) - Method in interface org.apache.spark.sql.streaming.PythonStreamingQueryListener
- onQueryTerminated(StreamingQueryListener.QueryTerminatedEvent) - Method in class org.apache.spark.sql.streaming.StreamingQueryListener
-
Called when a query is stopped, with or without error.
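A minimal sketch wiring the onQuery* callbacks above into a session; `spark` is an existing SparkSession and the logging is illustrative:

    spark.streams().addListener(new StreamingQueryListener() {
      @Override
      public void onQueryStarted(StreamingQueryListener.QueryStartedEvent event) { }
      @Override
      public void onQueryProgress(StreamingQueryListener.QueryProgressEvent event) {
        System.out.println(event.progress().prettyJson());
      }
      @Override
      public void onQueryTerminated(StreamingQueryListener.QueryTerminatedEvent event) { }
    });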
- onReceiverError(JavaStreamingListenerReceiverError) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when a receiver has reported an error
- onReceiverError(StreamingListenerReceiverError) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when a receiver has reported an error
- onReceiverStarted(JavaStreamingListenerReceiverStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when a receiver has been started
- onReceiverStarted(StreamingListenerReceiverStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when a receiver has been started
- onReceiverStopped(JavaStreamingListenerReceiverStopped) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when a receiver has been stopped
- onReceiverStopped(StreamingListenerReceiverStopped) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when a receiver has been stopped
- onResourceProfileAdded(SparkListenerResourceProfileAdded) - Method in class org.apache.spark.scheduler.SparkListener
- onResourceProfileAdded(SparkListenerResourceProfileAdded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a Resource Profile is added to the manager.
- onResourceProfileAdded(SparkListenerResourceProfileAdded) - Method in class org.apache.spark.SparkFirehoseListener
- onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in class org.apache.spark.scheduler.SparkListener
- onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a speculative task is submitted
- onSpeculativeTaskSubmitted(SparkListenerSpeculativeTaskSubmitted) - Method in class org.apache.spark.SparkFirehoseListener
- onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.scheduler.SparkListener
- onStageCompleted(SparkListenerStageCompleted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a stage completes successfully or fails, with information on the completed stage.
- onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.scheduler.StatsReportListener
- onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.SparkFirehoseListener
- onStageCompleted(SparkListenerStageCompleted) - Method in class org.apache.spark.SpillListener
- onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in class org.apache.spark.scheduler.SparkListener
- onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called with the peak memory metrics for a given (executor, stage) combination.
- onStageExecutorMetrics(SparkListenerStageExecutorMetrics) - Method in class org.apache.spark.SparkFirehoseListener
- onStageSubmitted(SparkListenerStageSubmitted) - Method in class org.apache.spark.scheduler.SparkListener
- onStageSubmitted(SparkListenerStageSubmitted) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a stage is submitted
- onStageSubmitted(SparkListenerStageSubmitted) - Method in class org.apache.spark.SparkFirehoseListener
- onStart() - Method in class org.apache.spark.streaming.receiver.Receiver
-
This method is called by the system when the receiver is started.
- onStop() - Method in class org.apache.spark.streaming.receiver.Receiver
-
This method is called by the system when the receiver is stopped.
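A minimal sketch of a custom Receiver built around the onStart/onStop contract above; the data source is illustrative:

    class TickReceiver extends Receiver<String> {
      TickReceiver() { super(StorageLevel.MEMORY_AND_DISK_2()); }
      @Override
      public void onStart() {
        // onStart() must not block: push data from a separate thread.
        new Thread(() -> {
          while (!isStopped()) { store("tick"); }
        }).start();
      }
      @Override
      public void onStop() {
        // Nothing to clean up: the thread above exits once isStopped() is true.
      }
    }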
- onStreamingStarted(JavaStreamingListenerStreamingStarted) - Method in interface org.apache.spark.streaming.api.java.PythonStreamingListener
-
Called when the streaming has been started
- onStreamingStarted(StreamingListenerStreamingStarted) - Method in interface org.apache.spark.streaming.scheduler.StreamingListener
-
Called when the streaming has been started
- onSuccess(String, QueryExecution, long) - Method in interface org.apache.spark.sql.util.QueryExecutionListener
-
A callback function that will be called when a query executed successfully.
- onTaskCompletion(TaskContext) - Method in class org.apache.spark.storage.ShuffleFetchCompletionListener
- onTaskCompletion(TaskContext) - Method in interface org.apache.spark.util.TaskCompletionListener
- onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.scheduler.SparkListener
- onTaskEnd(SparkListenerTaskEnd) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a task ends
- onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.scheduler.StatsReportListener
- onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.SparkFirehoseListener
- onTaskEnd(SparkListenerTaskEnd) - Method in class org.apache.spark.SpillListener
- onTaskFailed(TaskFailedReason) - Method in interface org.apache.spark.api.plugin.ExecutorPlugin
-
Perform an action after a task completes with exceptions.
- onTaskFailure(TaskContext, Throwable) - Method in interface org.apache.spark.util.TaskFailureListener
- onTaskGettingResult(SparkListenerTaskGettingResult) - Method in class org.apache.spark.scheduler.SparkListener
- onTaskGettingResult(SparkListenerTaskGettingResult) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a task begins remotely fetching its result (will not be called for tasks that do not need to fetch the result remotely).
- onTaskGettingResult(SparkListenerTaskGettingResult) - Method in class org.apache.spark.SparkFirehoseListener
- onTaskStart() - Method in interface org.apache.spark.api.plugin.ExecutorPlugin
-
Perform any action before the task is run.
- onTaskStart(SparkListenerTaskStart) - Method in class org.apache.spark.scheduler.SparkListener
- onTaskStart(SparkListenerTaskStart) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a task starts
- onTaskStart(SparkListenerTaskStart) - Method in class org.apache.spark.SparkFirehoseListener
- onTaskSucceeded() - Method in interface org.apache.spark.api.plugin.ExecutorPlugin
-
Perform an action after a task completes without exceptions.
- onUnpersistRDD(SparkListenerUnpersistRDD) - Method in class org.apache.spark.scheduler.SparkListener
- onUnpersistRDD(SparkListenerUnpersistRDD) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when an RDD is manually unpersisted by the application
- onUnpersistRDD(SparkListenerUnpersistRDD) - Method in class org.apache.spark.SparkFirehoseListener
- onUnschedulableTaskSetAdded(SparkListenerUnschedulableTaskSetAdded) - Method in class org.apache.spark.scheduler.SparkListener
- onUnschedulableTaskSetAdded(SparkListenerUnschedulableTaskSetAdded) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when a taskset becomes unschedulable due to excludeOnFailure and dynamic allocation is enabled.
- onUnschedulableTaskSetAdded(SparkListenerUnschedulableTaskSetAdded) - Method in class org.apache.spark.SparkFirehoseListener
- onUnschedulableTaskSetRemoved(SparkListenerUnschedulableTaskSetRemoved) - Method in class org.apache.spark.scheduler.SparkListener
- onUnschedulableTaskSetRemoved(SparkListenerUnschedulableTaskSetRemoved) - Method in interface org.apache.spark.scheduler.SparkListenerInterface
-
Called when an unschedulable taskset becomes schedulable and dynamic allocation is enabled.
- onUnschedulableTaskSetRemoved(SparkListenerUnschedulableTaskSetRemoved) - Method in class org.apache.spark.SparkFirehoseListener
- OOM() - Static method in class org.apache.spark.util.SparkExitCode
-
The default uncaught exception handler was reached, and the uncaught exception was an OutOfMemoryError.
- open() - Method in class org.apache.spark.input.PortableDataStream
-
Create a new DataInputStream from the split and context.
- open(long, long) - Method in class org.apache.spark.sql.ForeachWriter
-
Called when starting to process one partition of new data in the executor.
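A minimal sketch of a ForeachWriter whose open(long, long) is invoked once per partition and epoch, as described above; the println sink is illustrative:

    ForeachWriter<Row> writer = new ForeachWriter<Row>() {
      @Override
      public boolean open(long partitionId, long epochId) {
        return true;  // accept this partition/epoch; returning false skips it
      }
      @Override
      public void process(Row row) { System.out.println(row); }
      @Override
      public void close(Throwable errorOrNull) { }
    };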
- open(File, M, SparkConf, boolean, ClassTag<M>) - Static method in class org.apache.spark.status.KVUtils
-
Open or create a disk-based KVStore.
- openChannelWrapper() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
-
Opens and returns a WritableByteChannelWrapper for transferring bytes from input byte channels to the underlying shuffle data store.
- openStream() - Method in interface org.apache.spark.shuffle.api.ShufflePartitionWriter
-
Open and return an OutputStream that can write bytes to the underlying data store.
- operationInHiveStyleCommandUnsupportedError(String, String, SqlBaseParser.StatementContext, Option<String>) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- operationNotAllowedError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- operationNotSupportClusteringError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- operationNotSupportClusteringError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- operationNotSupportPartitioningError(String) - Method in interface org.apache.spark.sql.errors.CompilationErrors
- operationNotSupportPartitioningError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- OPERATOR_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- operatorName() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- ops() - Method in class org.apache.spark.graphx.Graph
-
The associated GraphOps object.
- opt(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- optimalNumOfBits(long, double) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Computes m (total bits of Bloom filter) which is expected to achieve, for the specified expected insertions, the required false positive probability.
- optimalNumOfBits(long, long, long) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Computes m (total bits of Bloom filter) which is expected to achieve.
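The sizing described above is handled internally by BloomFilter.create; a minimal sketch:

    // 1,000,000 expected items at a 3% false-positive probability
    BloomFilter bf = BloomFilter.create(1_000_000L, 0.03);
    bf.putString("spark");
    boolean maybePresent = bf.mightContainString("spark");  // true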
- optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Runs gradient descent on the given training data.
- optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.LBFGS
- optimize(RDD<Tuple2<Object, Vector>>, Vector) - Method in interface org.apache.spark.mllib.optimization.Optimizer
-
Solve the provided convex optimization problem.
- optimizeDocConcentration() - Method in class org.apache.spark.ml.clustering.LDA
- optimizeDocConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
- optimizeDocConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
For Online optimizer only (currently): LDAParams.optimizer() = "online".
- optimizer() - Method in class org.apache.spark.ml.clustering.LDA
- optimizer() - Method in class org.apache.spark.ml.clustering.LDAModel
- optimizer() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Optimizer or inference algorithm used to estimate the LDA model.
- optimizer() - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
- optimizer() - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
- optimizer() - Method in class org.apache.spark.mllib.classification.SVMWithSGD
- optimizer() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
The optimizer to solve the problem.
- optimizer() - Method in class org.apache.spark.mllib.regression.LassoWithSGD
- optimizer() - Method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
- optimizer() - Method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
- Optimizer - Interface in org.apache.spark.mllib.optimization
-
Trait for optimization problem solvers.
- optimizerFailed(org.apache.spark.ml.util.Instrumentation, Class<?>) - Static method in class org.apache.spark.mllib.util.MLUtils
- optimizeWithLossReturned(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Runs gradient descent on the given training data.
- optimizeWithLossReturned(RDD<Tuple2<Object, Vector>>, Vector) - Method in class org.apache.spark.mllib.optimization.LBFGS
- option(String, boolean) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Adds an input option for the underlying data source.
- option(String, boolean) - Method in class org.apache.spark.sql.DataFrameReader
- option(String, boolean) - Method in class org.apache.spark.sql.DataFrameWriter
-
Adds an output option for the underlying data source.
- option(String, boolean) - Method in class org.apache.spark.sql.DataFrameWriterV2
- option(String, boolean) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Adds an input option for the underlying data source.
- option(String, boolean) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Adds an output option for the underlying data source.
- option(String, boolean) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add a boolean output option.
- option(String, double) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Adds an input option for the underlying data source.
- option(String, double) - Method in class org.apache.spark.sql.DataFrameReader
- option(String, double) - Method in class org.apache.spark.sql.DataFrameWriter
-
Adds an output option for the underlying data source.
- option(String, double) - Method in class org.apache.spark.sql.DataFrameWriterV2
- option(String, double) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Adds an input option for the underlying data source.
- option(String, double) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Adds an output option for the underlying data source.
- option(String, double) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add a double output option.
- option(String, long) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Adds an input option for the underlying data source.
- option(String, long) - Method in class org.apache.spark.sql.DataFrameReader
- option(String, long) - Method in class org.apache.spark.sql.DataFrameWriter
-
Adds an output option for the underlying data source.
- option(String, long) - Method in class org.apache.spark.sql.DataFrameWriterV2
- option(String, long) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Adds an input option for the underlying data source.
- option(String, long) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Adds an output option for the underlying data source.
- option(String, long) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add a long output option.
- option(String, String) - Method in class org.apache.spark.ml.util.MLWriter
-
Adds an option to the underlying MLWriter.
- option(String, String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Adds an input option for the underlying data source.
- option(String, String) - Method in class org.apache.spark.sql.DataFrameReader
- option(String, String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Adds an output option for the underlying data source.
- option(String, String) - Method in class org.apache.spark.sql.DataFrameWriterV2
- option(String, String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Adds an input option for the underlying data source.
- option(String, String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Adds an output option for the underlying data source.
- option(String, String) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add a write option.
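A minimal sketch chaining the typed option overloads above on a read and a write; the paths are illustrative and the keys are common CSV/Parquet options:

    Dataset<Row> df = spark.read()
        .option("header", true)        // boolean overload
        .option("inferSchema", true)
        .csv("/tmp/people.csv");
    df.write()
        .option("compression", "snappy")
        .parquet("/tmp/people_parquet");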
- OPTION_PREFIX - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A prefix used to pass OPTIONS in table properties
- Optional<T> - Class in org.apache.spark.api.java
-
Like java.util.Optional in Java 8, scala.Option in Scala, and com.google.common.base.Optional in Google Guava, this class represents a value of a given type that may or may not exist.
- optionMustBeConstant(String, Option<Throwable>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
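A minimal sketch of the Optional class above, using the ofNullable and orElse entries from this index; `maybeNull` is an illustrative String variable:

    Optional<String> name = Optional.ofNullable(maybeNull);
    String value = name.orElse("default");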
- optionMustBeLiteralString(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- options() - Method in interface org.apache.spark.sql.connector.write.LogicalWriteInfo
-
Returns the options that the user specified when writing the dataset.
- options() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationInfo
-
Returns options that the user specified when performing the row-level operation.
- options(Map<String, String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Adds input options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameReader
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Adds output options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriterV2
- options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
(Java-specific) Adds input options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Adds output options for the underlying data source.
- options(Map<String, String>) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add write options from a Java Map.
- options(Map<String, String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
(Scala-specific) Adds input options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameReader
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
(Scala-specific) Adds output options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.DataFrameWriterV2
- options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
(Scala-specific) Adds input options for the underlying data source.
- options(Map<String, String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
(Scala-specific) Adds output options for the underlying data source.
- options(Map<String, String>) - Method in interface org.apache.spark.sql.WriteConfigMethods
-
Add write options from a Scala Map.
- optionToOptional(Option<T>) - Static method in class org.apache.spark.api.java.JavaUtils
- or(Column) - Method in class org.apache.spark.sql.Column
-
Boolean OR.
- or(T) - Method in class org.apache.spark.api.java.Optional
- Or - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that evaluates to true iff at least one of left or right evaluates to true.
- Or - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff at least one of left or right evaluates to true.
- Or(Predicate, Predicate) - Constructor for class org.apache.spark.sql.connector.expressions.filter.Or
- Or(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.Or
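A minimal sketch of the data source Filter variant above, with illustrative attribute names:

    // Pushed-down predicate: city = 'NYC' OR city = 'SF'
    Filter cityFilter = new Or(new EqualTo("city", "NYC"), new EqualTo("city", "SF"));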
- OracleDialect - Class in org.apache.spark.sql.jdbc
- OracleDialect() - Constructor for class org.apache.spark.sql.jdbc.OracleDialect
- OracleDialect.OracleSQLBuilder - Class in org.apache.spark.sql.jdbc
- OracleDialect.OracleSQLQueryBuilder - Class in org.apache.spark.sql.jdbc
- OracleSQLBuilder() - Constructor for class org.apache.spark.sql.jdbc.OracleDialect.OracleSQLBuilder
- OracleSQLQueryBuilder(JdbcDialect, JDBCOptions) - Constructor for class org.apache.spark.sql.jdbc.OracleDialect.OracleSQLQueryBuilder
- orc(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads an ORC file and returns the result as a DataFrame.
- orc(String) - Method in class org.apache.spark.sql.DataFrameReader
- orc(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in ORC format at the specified path.
- orc(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads an ORC file stream, returning the result as a DataFrame.
- orc(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads ORC files and returns the result as a DataFrame.
- orc(String...) - Method in class org.apache.spark.sql.DataFrameReader
- orc(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads ORC files and returns the result as a DataFrame.
- orc(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
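A minimal round trip through the orc readers and writers above; the path and the DataFrame df are illustrative:

    df.write().orc("/tmp/people_orc");
    Dataset<Row> people = spark.read().orc("/tmp/people_orc");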
- orcNotUsedWithHiveEnabledError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- orderBy(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- orderBy(String, String...) - Method in class org.apache.spark.sql.Dataset
- orderBy(String, String...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the ordering defined.
- orderBy(String, String...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the ordering columns in a WindowSpec.
- orderBy(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- orderBy(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- orderBy(String, Seq<String>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the ordering defined.
- orderBy(String, Seq<String>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the ordering columns in a WindowSpec.
- orderBy(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- orderBy(Column...) - Method in class org.apache.spark.sql.Dataset
- orderBy(Column...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the ordering defined.
- orderBy(Column...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the ordering columns in a WindowSpec.
- orderBy(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- orderBy(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- orderBy(Seq<Column>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the ordering defined.
- orderBy(Seq<Column>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the ordering columns in a WindowSpec.
- orderByPositionRangeError(int, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
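A minimal sketch contrasting the two orderBy families above, sorting a Dataset versus defining window ordering; the column names are illustrative:

    import static org.apache.spark.sql.functions.col;

    Dataset<Row> sorted = df.orderBy(col("age").desc(), col("name"));
    WindowSpec bySalary = Window.partitionBy("dept").orderBy("salary");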
- ordered(SortOrder[]) - Static method in class org.apache.spark.sql.connector.distributions.Distributions
-
Creates a distribution where tuples have been ordered across partitions according to ordering expressions, but not necessarily within a given partition.
- ordered(SortOrder[]) - Static method in class org.apache.spark.sql.connector.distributions.LogicalDistributions
- OrderedDistribution - Interface in org.apache.spark.sql.connector.distributions
-
A distribution where tuples have been ordered across partitions according to ordering expressions, but not necessarily within a given partition.
- orderedOperationUnsupportedByDataTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- orderedOperationUnsupportedByDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- OrderedRDDFunctions<K, V, P extends scala.Product2<K, V>> - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion.
- OrderedRDDFunctions(RDD<P>, Ordering<K>, ClassTag<K>, ClassTag<V>, ClassTag<P>) - Constructor for class org.apache.spark.rdd.OrderedRDDFunctions
- ordering() - Method in interface org.apache.spark.sql.connector.distributions.OrderedDistribution
-
Returns ordering expressions.
- ordering() - Static method in class org.apache.spark.streaming.Time
- orderingWithinGroups() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
- ORDINAL() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- ordinalNumber(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ordinalNumber(int) - Method in interface org.apache.spark.sql.errors.QueryErrorsBase
- ordinalNumber(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- orElse(Ordering<T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- orElse(T) - Method in class org.apache.spark.api.java.Optional
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- orElseBy(Function1<T, S>, Ordering<S>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- org.apache.parquet.filter2.predicate - package org.apache.parquet.filter2.predicate
- org.apache.spark - package org.apache.spark
-
Core Spark classes in Scala.
- org.apache.spark.api.java - package org.apache.spark.api.java
-
Spark Java programming APIs.
- org.apache.spark.api.java.function - package org.apache.spark.api.java.function
-
Set of interfaces to represent functions in Spark's Java API.
- org.apache.spark.api.plugin - package org.apache.spark.api.plugin
- org.apache.spark.api.r - package org.apache.spark.api.r
- org.apache.spark.api.resource - package org.apache.spark.api.resource
- org.apache.spark.broadcast - package org.apache.spark.broadcast
-
Spark's broadcast variables, used to broadcast immutable datasets to all nodes.
- org.apache.spark.errors - package org.apache.spark.errors
- org.apache.spark.graphx - package org.apache.spark.graphx
-
ALPHA COMPONENT: GraphX is a graph processing framework built on top of Spark.
- org.apache.spark.graphx.impl - package org.apache.spark.graphx.impl
- org.apache.spark.graphx.lib - package org.apache.spark.graphx.lib
-
Various analytics functions for graphs.
- org.apache.spark.graphx.util - package org.apache.spark.graphx.util
-
Collections of utilities used by graphx.
- org.apache.spark.input - package org.apache.spark.input
- org.apache.spark.io - package org.apache.spark.io
-
IO codecs used for compression.
- org.apache.spark.kafka010 - package org.apache.spark.kafka010
- org.apache.spark.launcher - package org.apache.spark.launcher
-
Library for launching Spark applications programmatically.
- org.apache.spark.mapred - package org.apache.spark.mapred
- org.apache.spark.metrics - package org.apache.spark.metrics
- org.apache.spark.metrics.sink - package org.apache.spark.metrics.sink
- org.apache.spark.metrics.source - package org.apache.spark.metrics.source
- org.apache.spark.ml - package org.apache.spark.ml
-
DataFrame-based machine learning APIs to let users quickly assemble and configure practical machine learning pipelines.
- org.apache.spark.ml.ann - package org.apache.spark.ml.ann
- org.apache.spark.ml.attribute - package org.apache.spark.ml.attribute
-
ML attributes
- org.apache.spark.ml.classification - package org.apache.spark.ml.classification
- org.apache.spark.ml.clustering - package org.apache.spark.ml.clustering
- org.apache.spark.ml.evaluation - package org.apache.spark.ml.evaluation
- org.apache.spark.ml.feature - package org.apache.spark.ml.feature
-
Feature transformers: the `ml.feature` package provides common feature transformers that help convert raw data or features into more suitable forms for model fitting.
- org.apache.spark.ml.fpm - package org.apache.spark.ml.fpm
- org.apache.spark.ml.image - package org.apache.spark.ml.image
- org.apache.spark.ml.impl - package org.apache.spark.ml.impl
- org.apache.spark.ml.linalg - package org.apache.spark.ml.linalg
- org.apache.spark.ml.optim - package org.apache.spark.ml.optim
- org.apache.spark.ml.optim.aggregator - package org.apache.spark.ml.optim.aggregator
- org.apache.spark.ml.optim.loss - package org.apache.spark.ml.optim.loss
- org.apache.spark.ml.param - package org.apache.spark.ml.param
- org.apache.spark.ml.param.shared - package org.apache.spark.ml.param.shared
- org.apache.spark.ml.r - package org.apache.spark.ml.r
- org.apache.spark.ml.recommendation - package org.apache.spark.ml.recommendation
- org.apache.spark.ml.regression - package org.apache.spark.ml.regression
- org.apache.spark.ml.source.image - package org.apache.spark.ml.source.image
- org.apache.spark.ml.source.libsvm - package org.apache.spark.ml.source.libsvm
- org.apache.spark.ml.stat - package org.apache.spark.ml.stat
- org.apache.spark.ml.stat.distribution - package org.apache.spark.ml.stat.distribution
- org.apache.spark.ml.tree - package org.apache.spark.ml.tree
- org.apache.spark.ml.tree.impl - package org.apache.spark.ml.tree.impl
- org.apache.spark.ml.tuning - package org.apache.spark.ml.tuning
- org.apache.spark.ml.util - package org.apache.spark.ml.util
- org.apache.spark.mllib - package org.apache.spark.mllib
-
RDD-based machine learning APIs (in maintenance mode).
- org.apache.spark.mllib.classification - package org.apache.spark.mllib.classification
- org.apache.spark.mllib.classification.impl - package org.apache.spark.mllib.classification.impl
- org.apache.spark.mllib.clustering - package org.apache.spark.mllib.clustering
- org.apache.spark.mllib.evaluation - package org.apache.spark.mllib.evaluation
- org.apache.spark.mllib.evaluation.binary - package org.apache.spark.mllib.evaluation.binary
- org.apache.spark.mllib.feature - package org.apache.spark.mllib.feature
- org.apache.spark.mllib.fpm - package org.apache.spark.mllib.fpm
- org.apache.spark.mllib.linalg - package org.apache.spark.mllib.linalg
- org.apache.spark.mllib.linalg.distributed - package org.apache.spark.mllib.linalg.distributed
- org.apache.spark.mllib.optimization - package org.apache.spark.mllib.optimization
- org.apache.spark.mllib.pmml - package org.apache.spark.mllib.pmml
- org.apache.spark.mllib.pmml.export - package org.apache.spark.mllib.pmml.export
- org.apache.spark.mllib.random - package org.apache.spark.mllib.random
- org.apache.spark.mllib.rdd - package org.apache.spark.mllib.rdd
- org.apache.spark.mllib.recommendation - package org.apache.spark.mllib.recommendation
- org.apache.spark.mllib.regression - package org.apache.spark.mllib.regression
- org.apache.spark.mllib.regression.impl - package org.apache.spark.mllib.regression.impl
- org.apache.spark.mllib.stat - package org.apache.spark.mllib.stat
- org.apache.spark.mllib.stat.correlation - package org.apache.spark.mllib.stat.correlation
- org.apache.spark.mllib.stat.distribution - package org.apache.spark.mllib.stat.distribution
- org.apache.spark.mllib.stat.test - package org.apache.spark.mllib.stat.test
- org.apache.spark.mllib.tree - package org.apache.spark.mllib.tree
- org.apache.spark.mllib.tree.configuration - package org.apache.spark.mllib.tree.configuration
- org.apache.spark.mllib.tree.impurity - package org.apache.spark.mllib.tree.impurity
- org.apache.spark.mllib.tree.loss - package org.apache.spark.mllib.tree.loss
- org.apache.spark.mllib.tree.model - package org.apache.spark.mllib.tree.model
- org.apache.spark.mllib.util - package org.apache.spark.mllib.util
- org.apache.spark.partial - package org.apache.spark.partial
- org.apache.spark.paths - package org.apache.spark.paths
- org.apache.spark.rdd - package org.apache.spark.rdd
-
Provides implementations of various RDDs.
- org.apache.spark.resource - package org.apache.spark.resource
- org.apache.spark.scheduler - package org.apache.spark.scheduler
-
Spark's DAG scheduler.
- org.apache.spark.scheduler.cluster - package org.apache.spark.scheduler.cluster
- org.apache.spark.scheduler.local - package org.apache.spark.scheduler.local
- org.apache.spark.security - package org.apache.spark.security
- org.apache.spark.serializer - package org.apache.spark.serializer
-
Pluggable serializers for RDD and shuffle data.
- org.apache.spark.shuffle.api - package org.apache.spark.shuffle.api
- org.apache.spark.shuffle.api.metadata - package org.apache.spark.shuffle.api.metadata
- org.apache.spark.sql - package org.apache.spark.sql
- org.apache.spark.sql.api - package org.apache.spark.sql.api
- org.apache.spark.sql.api.java - package org.apache.spark.sql.api.java
-
Allows the execution of relational queries, including those expressed in SQL using Spark.
- org.apache.spark.sql.api.r - package org.apache.spark.sql.api.r
- org.apache.spark.sql.artifact - package org.apache.spark.sql.artifact
- org.apache.spark.sql.avro - package org.apache.spark.sql.avro
- org.apache.spark.sql.catalog - package org.apache.spark.sql.catalog
- org.apache.spark.sql.columnar - package org.apache.spark.sql.columnar
- org.apache.spark.sql.connector - package org.apache.spark.sql.connector
- org.apache.spark.sql.connector.catalog - package org.apache.spark.sql.connector.catalog
- org.apache.spark.sql.connector.catalog.functions - package org.apache.spark.sql.connector.catalog.functions
- org.apache.spark.sql.connector.catalog.index - package org.apache.spark.sql.connector.catalog.index
- org.apache.spark.sql.connector.catalog.procedures - package org.apache.spark.sql.connector.catalog.procedures
- org.apache.spark.sql.connector.distributions - package org.apache.spark.sql.connector.distributions
- org.apache.spark.sql.connector.expressions - package org.apache.spark.sql.connector.expressions
- org.apache.spark.sql.connector.expressions.aggregate - package org.apache.spark.sql.connector.expressions.aggregate
- org.apache.spark.sql.connector.expressions.filter - package org.apache.spark.sql.connector.expressions.filter
- org.apache.spark.sql.connector.metric - package org.apache.spark.sql.connector.metric
- org.apache.spark.sql.connector.read - package org.apache.spark.sql.connector.read
- org.apache.spark.sql.connector.read.colstats - package org.apache.spark.sql.connector.read.colstats
- org.apache.spark.sql.connector.read.partitioning - package org.apache.spark.sql.connector.read.partitioning
- org.apache.spark.sql.connector.read.streaming - package org.apache.spark.sql.connector.read.streaming
- org.apache.spark.sql.connector.util - package org.apache.spark.sql.connector.util
- org.apache.spark.sql.connector.write - package org.apache.spark.sql.connector.write
- org.apache.spark.sql.connector.write.streaming - package org.apache.spark.sql.connector.write.streaming
- org.apache.spark.sql.errors - package org.apache.spark.sql.errors
- org.apache.spark.sql.exceptions - package org.apache.spark.sql.exceptions
- org.apache.spark.sql.expressions - package org.apache.spark.sql.expressions
- org.apache.spark.sql.expressions.javalang - package org.apache.spark.sql.expressions.javalang
- org.apache.spark.sql.expressions.scalalang - package org.apache.spark.sql.expressions.scalalang
- org.apache.spark.sql.jdbc - package org.apache.spark.sql.jdbc
- org.apache.spark.sql.ml - package org.apache.spark.sql.ml
- org.apache.spark.sql.scripting - package org.apache.spark.sql.scripting
- org.apache.spark.sql.sources - package org.apache.spark.sql.sources
- org.apache.spark.sql.streaming - package org.apache.spark.sql.streaming
- org.apache.spark.sql.streaming.ui - package org.apache.spark.sql.streaming.ui
- org.apache.spark.sql.types - package org.apache.spark.sql.types
- org.apache.spark.sql.util - package org.apache.spark.sql.util
- org.apache.spark.sql.vectorized - package org.apache.spark.sql.vectorized
- org.apache.spark.status - package org.apache.spark.status
- org.apache.spark.status.api.v1 - package org.apache.spark.status.api.v1
- org.apache.spark.status.api.v1.sql - package org.apache.spark.status.api.v1.sql
- org.apache.spark.status.api.v1.streaming - package org.apache.spark.status.api.v1.streaming
- org.apache.spark.status.protobuf - package org.apache.spark.status.protobuf
- org.apache.spark.status.protobuf.sql - package org.apache.spark.status.protobuf.sql
- org.apache.spark.storage - package org.apache.spark.storage
- org.apache.spark.storage.memory - package org.apache.spark.storage.memory
- org.apache.spark.streaming - package org.apache.spark.streaming
- org.apache.spark.streaming.api.java - package org.apache.spark.streaming.api.java
-
Java APIs for Spark Streaming.
- org.apache.spark.streaming.dstream - package org.apache.spark.streaming.dstream
-
Various implementations of DStreams.
- org.apache.spark.streaming.kinesis - package org.apache.spark.streaming.kinesis
- org.apache.spark.streaming.receiver - package org.apache.spark.streaming.receiver
- org.apache.spark.streaming.scheduler - package org.apache.spark.streaming.scheduler
- org.apache.spark.streaming.scheduler.rate - package org.apache.spark.streaming.scheduler.rate
- org.apache.spark.streaming.ui - package org.apache.spark.streaming.ui
- org.apache.spark.streaming.util - package org.apache.spark.streaming.util
- org.apache.spark.types.variant - package org.apache.spark.types.variant
- org.apache.spark.ui - package org.apache.spark.ui
- org.apache.spark.ui.flamegraph - package org.apache.spark.ui.flamegraph
- org.apache.spark.ui.jobs - package org.apache.spark.ui.jobs
- org.apache.spark.ui.storage - package org.apache.spark.ui.storage
- org.apache.spark.unsafe.types - package org.apache.spark.unsafe.types
- org.apache.spark.util - package org.apache.spark.util
-
Spark utilities.
- org.apache.spark.util.logging - package org.apache.spark.util.logging
- org.apache.spark.util.random - package org.apache.spark.util.random
-
Utilities for random number generation.
- org.apache.spark.util.sketch - package org.apache.spark.util.sketch
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.graphx.GraphLoader
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.graphx.lib.PageRank
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.graphx.Pregel
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.graphx.util.GraphGenerators
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.kafka010.KafkaRedactionUtil
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.r.RWrapperUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.recommendation.ALS
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.stat.Summarizer
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.tree.impl.RandomForest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ml.util.DatasetUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.clustering.LocalKMeans
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.clustering.PowerIterationClustering
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.fpm.PrefixSpan
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.linalg.BLAS
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.optimization.GradientDescent
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.optimization.LBFGS
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.tree.DecisionTree
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.tree.RandomForest
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.util.DataValidators
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.mllib.util.MLUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.rdd.HadoopRDD
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.resource.ResourceProfile
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.resource.ResourceUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.scheduler.StatsReportListener
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.security.CryptoStreamUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.serializer.SerializationDebugger
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.serializer.SerializerHelper
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.SparkConf
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.SparkContext
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.SparkEnv
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.api.r.SQLUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.artifact.ArtifactManager
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.avro.AvroUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.avro.SchemaConverters
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.SparkSession
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.sql.types.UDTRegistration
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.status.KVUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.storage.StorageUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.CheckpointReader
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.util.RawTextSender
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ui.JettyUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.ui.UIUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.AccumulatorContext
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.ClosureCleaner
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.DependencyUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.HadoopFSUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.MavenUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.ShutdownHookManager
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.SignalUtils
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.SizeEstimator
- org$apache$spark$internal$Logging$$log_() - Static method in class org.apache.spark.util.Utils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.GraphLoader
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.lib.PageRank
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.Pregel
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.graphx.util.GraphGenerators
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaRedactionUtil
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.r.RWrapperUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.recommendation.ALS
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.stat.Summarizer
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ml.util.DatasetUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.clustering.LocalKMeans
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClustering
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.fpm.PrefixSpan
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.linalg.BLAS
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.optimization.LBFGS
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.DecisionTree
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.tree.RandomForest
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.util.DataValidators
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.mllib.util.MLUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.rdd.HadoopRDD
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.resource.ResourceProfile
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.resource.ResourceUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.scheduler.StatsReportListener
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.security.CryptoStreamUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.serializer.SerializationDebugger
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.serializer.SerializerHelper
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkConf
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkContext
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.SparkEnv
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.artifact.ArtifactManager
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.avro.AvroUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.avro.SchemaConverters
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.SparkSession
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.sql.types.UDTRegistration
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.status.KVUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.storage.StorageUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.CheckpointReader
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.util.RawTextSender
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ui.JettyUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.ui.UIUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.AccumulatorContext
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.ClosureCleaner
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.DependencyUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.HadoopFSUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.IndylambdaScalaClosures
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.MavenUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.ShutdownHookManager
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.SignalUtils
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.SizeEstimator
- org$apache$spark$internal$Logging$$log__$eq(Logger) - Static method in class org.apache.spark.util.Utils
- org$apache$spark$ml$util$BaseReadWrite$$optionSparkSession() - Static method in class org.apache.spark.ml.r.RWrappers
- org$apache$spark$ml$util$BaseReadWrite$$optionSparkSession_$eq(Option<SparkSession>) - Static method in class org.apache.spark.ml.r.RWrappers
- org$apache$spark$util$SparkTestUtils$$SOURCE() - Static method in class org.apache.spark.TestUtils
- origin() - Method in class org.apache.spark.ErrorStateInfo
- origin() - Method in exception org.apache.spark.sql.AnalysisException
- origin() - Method in exception org.apache.spark.sql.exceptions.SqlScriptingException
- origin() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
- original() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- original() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- originalMax() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- originalMin() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- orNull() - Method in class org.apache.spark.api.java.Optional
- other() - Method in class org.apache.spark.scheduler.RuntimePercentage
- otherVertexAttr(long) - Method in class org.apache.spark.graphx.EdgeTriplet
-
Given one vertex in the edge, return the other vertex.
- otherVertexId(long) - Method in class org.apache.spark.graphx.Edge
-
Given one vertex in the edge, return the other vertex.
- otherwise(Object) - Method in class org.apache.spark.sql.Column
-
Evaluates a list of conditions and returns one of multiple possible result expressions.
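A minimal Java sketch of when/otherwise chaining; the DataFrame df and its age column are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.when;

    // otherwise() supplies the result for rows that match none of the when() conditions.
    Dataset<Row> labeled = df.withColumn("ageGroup",
        when(col("age").lt(18), "minor")
            .when(col("age").lt(65), "adult")
            .otherwise("senior"));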
- Out() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges originating from a vertex.
- OUT - Enum constant in enum class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Mode
- outDegrees() - Method in class org.apache.spark.graphx.GraphOps
- outerJoinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, Option<U>, VD2>, ClassTag<U>, ClassTag<VD2>, $eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.Graph
-
Joins the vertices with entries in the table RDD and merges the results using mapFunc.
- outerJoinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, Option<U>, VD2>, ClassTag<U>, ClassTag<VD2>, $eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- outerScopeFailureForNewInstanceError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- OUTGOING_EDGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- outOfDecimalTypeRangeError(UTF8String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- outOfMemoryError(long, long) - Static method in class org.apache.spark.errors.SparkCoreErrors
- output() - Method in class org.apache.spark.ml.TransformEnd
- OUTPUT() - Static method in class org.apache.spark.ui.ToolTips
- OUTPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- OUTPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- OUTPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- OUTPUT_BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- OUTPUT_DETERMINISTIC_LEVEL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- OUTPUT_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- OUTPUT_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- OUTPUT_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- OUTPUT_RECORDS() - Static method in class org.apache.spark.status.TaskIndexNames
- OUTPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- OUTPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- OUTPUT_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- OUTPUT_RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- OUTPUT_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames
- output$() - Constructor for class org.apache.spark.InternalAccumulator.output$
- outputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- outputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- outputBytes() - Method in class org.apache.spark.status.api.v1.StageData
- outputCol() - Method in class org.apache.spark.ml.feature.Binarizer
- outputCol() - Method in class org.apache.spark.ml.feature.Bucketizer
- outputCol() - Method in class org.apache.spark.ml.feature.CountVectorizer
- outputCol() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- outputCol() - Method in class org.apache.spark.ml.feature.FeatureHasher
- outputCol() - Method in class org.apache.spark.ml.feature.HashingTF
- outputCol() - Method in class org.apache.spark.ml.feature.IDF
- outputCol() - Method in class org.apache.spark.ml.feature.IDFModel
- outputCol() - Method in class org.apache.spark.ml.feature.Imputer
- outputCol() - Method in class org.apache.spark.ml.feature.ImputerModel
- outputCol() - Method in class org.apache.spark.ml.feature.IndexToString
- outputCol() - Method in class org.apache.spark.ml.feature.Interaction
- outputCol() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- outputCol() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- outputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- outputCol() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- outputCol() - Method in class org.apache.spark.ml.feature.MinMaxScaler
- outputCol() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- outputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- outputCol() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- outputCol() - Method in class org.apache.spark.ml.feature.PCA
- outputCol() - Method in class org.apache.spark.ml.feature.PCAModel
- outputCol() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- outputCol() - Method in class org.apache.spark.ml.feature.RobustScaler
- outputCol() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- outputCol() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- outputCol() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- outputCol() - Method in class org.apache.spark.ml.feature.StandardScaler
- outputCol() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- outputCol() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- outputCol() - Method in class org.apache.spark.ml.feature.StringIndexer
- outputCol() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- outputCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- outputCol() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- outputCol() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- outputCol() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- outputCol() - Method in class org.apache.spark.ml.feature.VectorAssembler
- outputCol() - Method in class org.apache.spark.ml.feature.VectorIndexer
- outputCol() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- outputCol() - Method in class org.apache.spark.ml.feature.VectorSlicer
- outputCol() - Method in class org.apache.spark.ml.feature.Word2Vec
- outputCol() - Method in class org.apache.spark.ml.feature.Word2VecModel
- outputCol() - Method in interface org.apache.spark.ml.param.shared.HasOutputCol
-
Param for output column name.
- outputCol() - Method in class org.apache.spark.ml.UnaryTransformer
- outputCols() - Method in class org.apache.spark.ml.feature.Binarizer
- outputCols() - Method in class org.apache.spark.ml.feature.Bucketizer
- outputCols() - Method in class org.apache.spark.ml.feature.Imputer
- outputCols() - Method in class org.apache.spark.ml.feature.ImputerModel
- outputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- outputCols() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- outputCols() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- outputCols() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- outputCols() - Method in class org.apache.spark.ml.feature.StringIndexer
- outputCols() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- outputCols() - Method in interface org.apache.spark.ml.param.shared.HasOutputCols
-
Param for output column names.
- OutputCommitCoordinationMessage - Interface in org.apache.spark.scheduler
- outputCommitCoordinator() - Method in class org.apache.spark.SparkEnv
- outputDataTypeUnsupportedByNodeWithoutSerdeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- outputDeterministicLevel() - Method in class org.apache.spark.storage.RDDInfo
- outputEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Specifies the Encoder for the final output value type.
- OutputMetricDistributions - Class in org.apache.spark.status.api.v1
- outputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- outputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- OutputMetrics - Class in org.apache.spark.status.api.v1
- outputMode(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
- outputMode(OutputMode) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
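A sketch of setting the output mode on a streaming write in Java; counts is a hypothetical aggregated streaming Dataset, and the console sink is used only for illustration:

    import org.apache.spark.sql.streaming.OutputMode;
    import org.apache.spark.sql.streaming.StreamingQuery;

    // Complete mode re-emits the whole result table on every trigger;
    // note that start() declares a checked TimeoutException on recent Spark versions.
    StreamingQuery query = counts.writeStream()
        .outputMode(OutputMode.Complete())   // equivalently: .outputMode("complete")
        .format("console")
        .start();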
- OutputMode - Class in org.apache.spark.sql.streaming
-
OutputMode describes what data will be written to a streaming sink when there is new data available in a streaming DataFrame/Dataset.
- OutputMode() - Constructor for class org.apache.spark.sql.streaming.OutputMode
- outputOperationInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
- outputOperationInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
- OutputOperationInfo - Class in org.apache.spark.status.api.v1.streaming
- OutputOperationInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: Class having information on output operations.
- OutputOperationInfo(Time, int, String, String, Option<Object>, Option<Object>, Option<String>) - Constructor for class org.apache.spark.streaming.scheduler.OutputOperationInfo
- outputOperationInfos() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- outputOpId() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- outputOrdering() - Method in interface org.apache.spark.sql.connector.read.SupportsReportOrdering
-
Returns the order in each partition of this data source scan.
- outputPartitioning() - Method in interface org.apache.spark.sql.connector.read.SupportsReportPartitioning
-
Returns the output data partitioning that this reader guarantees.
- outputPathAlreadyExistsError(Path) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- outputRecords() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- outputRecords() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- outputRecords() - Method in class org.apache.spark.status.api.v1.StageData
- over() - Method in class org.apache.spark.sql.Column
-
Defines an empty analytic clause.
- over(WindowSpec) - Method in class org.apache.spark.sql.Column
-
Defines a windowing column.
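A minimal Java sketch of a windowing column; df and the dept and salary columns are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.rank;

    // Rank rows within each department by descending salary.
    WindowSpec byDept = Window.partitionBy("dept").orderBy(col("salary").desc());
    Dataset<Row> ranked = df.withColumn("rank", rank().over(byDept));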
- overallScore(Dataset<Row>, Column, Column) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
- overallScore(Dataset<Row>, Column, Column) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
- overflowInConvError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- overflowInIntegralDivideError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- overflowInSumOfDecimalError(QueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- OVERHEAD_MEM() - Static method in class org.apache.spark.resource.ResourceProfile
-
built-in executor resource: memoryOverhead
- overlay(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Overlay the specified portion of src with replace, starting from byte position pos of src.
- overlay(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Overlay the specified portion of src with replace, starting from byte position pos of src and proceeding for len bytes.
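A small Java sketch of both overloads; the text and patch columns are hypothetical, and positions are 1-based:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.*;

    // Replace bytes of "text" starting at position 7 with the value of "patch";
    // the four-argument form also bounds the replacement at 4 bytes.
    Dataset<Row> patched = df.select(
        overlay(col("text"), col("patch"), lit(7)).as("patched3"),
        overlay(col("text"), col("patch"), lit(7), lit(4)).as("patched4"));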
- overwrite() - Method in class org.apache.spark.ml.util.MLWriter
-
Overwrites if the output path already exists.
- overwrite(Column) - Method in class org.apache.spark.sql.DataFrameWriterV2
-
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
- overwrite(Predicate[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
- overwrite(Predicate[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwriteV2
-
Configures a write to replace data matching the filters with data committed in the write.
- overwrite(Filter[]) - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
-
Configures a write to replace data matching the filters with data committed in the write.
- Overwrite - Enum constant in enum class org.apache.spark.sql.SaveMode
-
Overwrite mode means that when saving a DataFrame to a data source, if data/table already exists, existing data is expected to be overwritten by the contents of the DataFrame.
- OVERWRITE_BY_FILTER - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table can replace existing data that matches a filter with appended data in a write operation.
- OVERWRITE_DYNAMIC - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table can dynamically replace existing data partitions with appended data in a write operation.
- overwriteDynamicPartitions() - Method in interface org.apache.spark.sql.connector.write.SupportsDynamicOverwrite
-
Configures a write to dynamically replace partitions with data committed in the write.
- overwritePartitions() - Method in class org.apache.spark.sql.DataFrameWriterV2
-
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.
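A Java sketch of the two V2 overwrite styles; the table name and day column are hypothetical, and both calls throw NoSuchTableException if the target table does not exist:

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.lit;

    // Replace only the rows matching the condition...
    df.writeTo("catalog.db.events").overwrite(col("day").equalTo(lit("2024-01-01")));

    // ...or dynamically replace every partition the data frame touches.
    df.writeTo("catalog.db.events").overwritePartitions();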
- overwriteTableByUnsupportedExpressionError(Table) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
P
- p() - Method in class org.apache.spark.ml.feature.Normalizer
-
Normalization in L^p space.
- PagedTable<T> - Interface in org.apache.spark.ui
-
A paged table that will generate an HTML table for a specified page and also the page navigation.
- pageLink(int) - Method in interface org.apache.spark.ui.PagedTable
-
Return a link to jump to a page.
- pageNavigation(int, int, int, String) - Method in interface org.apache.spark.ui.PagedTable
-
Return a page navigation.
- pageNumberFormField() - Method in interface org.apache.spark.ui.PagedTable
- pageRank(double, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- PageRank - Class in org.apache.spark.graphx.lib
-
PageRank algorithm implementation.
- PageRank() - Constructor for class org.apache.spark.graphx.lib.PageRank
- pageSizeFormField() - Method in interface org.apache.spark.ui.PagedTable
- PairDStreamFunctions<K, V> - Class in org.apache.spark.streaming.dstream
-
Extra functions available on DStream of (key, value) pairs through an implicit conversion.
- PairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.streaming.dstream.PairDStreamFunctions
- PairFlatMapFunction<T, K, V> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more key-value pair records from each input record.
- PairFunction<T, K, V> - Interface in org.apache.spark.api.java.function
-
A function that returns key-value pairs (Tuple2<K, V>), and can be used to construct PairRDDs.
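A minimal Java sketch of PairFunction with JavaRDD.mapToPair; words is a hypothetical JavaRDD<String>:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.PairFunction;
    import scala.Tuple2;

    // Turn each word into a (word, 1) pair; the cast picks the lambda's target type.
    JavaPairRDD<String, Integer> pairs = words.mapToPair(
        (PairFunction<String, String, Integer>) w -> new Tuple2<>(w, 1));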
- PairRDDFunctions<K, V> - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of (key, value) pairs through an implicit conversion.
- PairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.rdd.PairRDDFunctions
- pairUnsupportedAtFunctionError(ValueInterval, ValueInterval, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- PairwiseRRDD<T> - Class in org.apache.spark.api.r
-
Form an RDD[(Int, Array[Byte])] from key-value pairs returned from R.
- PairwiseRRDD(RDD<T>, int, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.PairwiseRRDD
- pandasUDFAggregateNotSupportedInPivotError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- parallelism() - Method in class org.apache.spark.ml.classification.OneVsRest
- parallelism() - Method in interface org.apache.spark.ml.param.shared.HasParallelism
-
The number of threads to use when running parallel algorithms.
- parallelism() - Method in class org.apache.spark.ml.tuning.CrossValidator
- parallelism() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- parallelize(List<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelize(List<T>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelize(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizeDoubles(List<Double>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizeDoubles(List<Double>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizePairs(List<Tuple2<K, V>>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizePairs(List<Tuple2<K, V>>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
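A short Java sketch of the parallelize family; jsc is a hypothetical, already-constructed JavaSparkContext:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaRDD;

    // Distribute a local list across 2 partitions and run a simple action on it.
    JavaRDD<Integer> nums = jsc.parallelize(Arrays.asList(1, 2, 3, 4), 2);
    long evens = nums.filter(n -> n % 2 == 0).count();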
- parallelListLeafFiles(SparkContext, Seq<Path>, Configuration, PathFilter, boolean, boolean, int, int) - Static method in class org.apache.spark.util.HadoopFSUtils
-
Lists a collection of paths recursively.
- param() - Method in class org.apache.spark.ml.param.ParamPair
- Param<T> - Class in org.apache.spark.ml.param
-
A param with self-contained documentation and, optionally, a default value.
- Param(String, String, String) - Constructor for class org.apache.spark.ml.param.Param
- Param(String, String, String, Function1<T, Object>) - Constructor for class org.apache.spark.ml.param.Param
- Param(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.Param
- Param(Identifiable, String, String, Function1<T, Object>) - Constructor for class org.apache.spark.ml.param.Param
- parameterMarkerNotAllowed(String, Origin) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- parameters() - Method in interface org.apache.spark.sql.connector.catalog.procedures.BoundProcedure
-
Returns parameters of this procedure.
- paramExceedOneCharError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- ParamGridBuilder - Class in org.apache.spark.ml.tuning
-
Builder for a param grid used in grid search-based model selection.
- ParamGridBuilder() - Constructor for class org.apache.spark.ml.tuning.ParamGridBuilder
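A Java sketch of building a 2x2 parameter grid for model selection; LinearRegression is used only as an example estimator:

    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.regression.LinearRegression;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    // Each ParamMap in the result is one point of the grid (4 combinations here).
    LinearRegression lr = new LinearRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[]{0.01, 0.1})
        .addGrid(lr.elasticNetParam(), new double[]{0.0, 1.0})
        .build();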
- paramIsNotBooleanValueError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- paramIsNotIntegerError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- paramMap() - Method in interface org.apache.spark.ml.param.Params
-
Internal param map for user-supplied values.
- ParamMap - Class in org.apache.spark.ml.param
-
A param to value map.
- ParamMap() - Constructor for class org.apache.spark.ml.param.ParamMap
-
Creates an empty param map.
- ParamPair<T> - Class in org.apache.spark.ml.param
-
A param and its value.
- ParamPair(Param<T>, T) - Constructor for class org.apache.spark.ml.param.ParamPair
- params() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- params() - Method in class org.apache.spark.ml.evaluation.Evaluator
- params() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- params() - Method in class org.apache.spark.ml.param.JavaParams
- params() - Method in interface org.apache.spark.ml.param.Params
-
Returns all params sorted by their names.
- params() - Method in class org.apache.spark.ml.PipelineStage
- Params - Interface in org.apache.spark.ml.param
-
Trait for components that take parameters.
- ParamValidators - Class in org.apache.spark.ml.param
-
Factory methods for common validation functions for Param.isValid.
- ParamValidators() - Constructor for class org.apache.spark.ml.param.ParamValidators
- parent() - Method in class org.apache.spark.ml.Model
-
The parent estimator that produced this model.
- parent() - Method in class org.apache.spark.ml.param.Param
- parent() - Method in interface org.apache.spark.scheduler.Schedulable
- ParentClassLoader - Class in org.apache.spark.util
-
A class loader which makes some protected methods in ClassLoader accessible.
- ParentClassLoader(ClassLoader) - Constructor for class org.apache.spark.util.ParentClassLoader
- parentIds() - Method in class org.apache.spark.scheduler.StageInfo
- parentIds() - Method in class org.apache.spark.storage.RDDInfo
- parentIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Get the parent index of the given node, or 0 if it is the root.
- parentSparkUIToAttachTabNotFoundError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- parmap(Seq<I>, String, int, Function1<I, O>) - Static method in class org.apache.spark.util.ThreadUtils
-
Transforms the input collection by applying the given function to each element in parallel.
- parquet(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(String) - Method in class org.apache.spark.sql.DataFrameReader
- parquet(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in Parquet format at the specified path.
- parquet(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads a Parquet file stream, returning the result as a DataFrame.
- parquet(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(String...) - Method in class org.apache.spark.sql.DataFrameReader
- parquet(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
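A Java sketch of a Parquet round trip; spark is a hypothetical SparkSession and the path is a placeholder:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // Write the frame out as Parquet, then read it back.
    df.write().mode("overwrite").parquet("/tmp/events.parquet");
    Dataset<Row> back = spark.read().parquet("/tmp/events.parquet");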
- parquetColumnDataTypeMismatchError(String, String, String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- parquetFile(String...) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. As of 1.4.0, replaced by read().parquet().
- parquetFile(Seq<String>) - Method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use read.parquet() instead. Since 1.4.0.
- parquetTypeUnsupportedYetError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- parse(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- parse(String) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Parses a string resulting from Vector.toString into a Vector.
- parse(String) - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
Parses a string resulting from LabeledPoint#toString into a LabeledPoint.
- parse(String) - Static method in class org.apache.spark.mllib.util.NumericParser
-
Parses a string into a Double, an Array[Double], or a Seq[Any].
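A minimal Java sketch of the mllib Vectors string round trip:

    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;

    // toString() produces e.g. "[1.0,2.0,3.0]", which parse() reads back.
    Vector v = Vectors.dense(1.0, 2.0, 3.0);
    Vector back = Vectors.parse(v.toString());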
- parse_json(Column) - Static method in class org.apache.spark.sql.functions
-
Parses a JSON string and constructs a Variant value.
- parse_url(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extracts a part from a URL.
- parse_url(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extracts a part from a URL.
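A Java sketch combining both extractors; the column names are hypothetical, and it assumes a Spark version where both helpers exist in org.apache.spark.sql.functions (parse_json ships with the Variant type in recent releases):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.*;

    // Pull the host out of a URL column and parse a JSON column into a Variant value.
    Dataset<Row> parts = df.select(
        parse_url(col("url"), lit("HOST")).as("host"),
        parse_json(col("payload")).as("payload_variant"));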
- parseAll(Parsers.Parser<T>, Reader) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- parseAll(Parsers.Parser<T>, CharSequence) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- parseAll(Parsers.Parser<T>, Reader<Object>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- parseAllocated(Option<String>, String) - Static method in class org.apache.spark.resource.ResourceUtils
- parseAllocatedFromJsonFile(String) - Static method in class org.apache.spark.resource.ResourceUtils
- parseAllResourceRequests(SparkConf, String) - Static method in class org.apache.spark.resource.ResourceUtils
- parseColumnPath(String) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseDelimitedFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseDelimitedFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
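The parseDelimitedFrom overloads listed above are the standard length-delimited parsers that protobuf generates for every StoreTypes message: each call reads one varint length prefix plus that many message bytes, so several messages can be written to and read back from a single stream. A minimal round-trip sketch (the empty builders are for illustration only; real code would populate the message fields):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import org.apache.spark.status.protobuf.StoreTypes;

    public final class DelimitedRoundTrip {
      public static void main(String[] args) throws Exception {
        // Write two length-delimited messages to one stream.
        StoreTypes.ApplicationInfo first = StoreTypes.ApplicationInfo.newBuilder().build();
        StoreTypes.ApplicationInfo second = StoreTypes.ApplicationInfo.newBuilder().build();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        first.writeDelimitedTo(out);   // varint length prefix + message bytes
        second.writeDelimitedTo(out);

        // Each parseDelimitedFrom call consumes exactly one framed message;
        // it returns null once the stream is exhausted.
        InputStream in = new ByteArrayInputStream(out.toByteArray());
        StoreTypes.ApplicationInfo a = StoreTypes.ApplicationInfo.parseDelimitedFrom(in);
        StoreTypes.ApplicationInfo b = StoreTypes.ApplicationInfo.parseDelimitedFrom(in);
        System.out.println(a != null && b != null);  // true
      }
    }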
- parsedPlan() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(byte[]) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
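In contrast to the delimited variants, the parseFrom(byte[]) family above expects the raw, unframed wire encoding of exactly one message, as produced by toByteArray(). A sketch of the round trip (again with an empty message, purely for illustration):

    import com.google.protobuf.InvalidProtocolBufferException;
    import org.apache.spark.status.protobuf.StoreTypes;

    public final class ByteArrayRoundTrip {
      public static void main(String[] args) throws InvalidProtocolBufferException {
        StoreTypes.JobData original = StoreTypes.JobData.newBuilder().build();
        byte[] wire = original.toByteArray();          // raw wire bytes, no length prefix
        // parseFrom throws InvalidProtocolBufferException on malformed input.
        StoreTypes.JobData parsed = StoreTypes.JobData.parseFrom(wire);
        System.out.println(parsed.equals(original));   // true
      }
    }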
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(byte[], ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
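Each parser above also comes in an ExtensionRegistryLite form. The registry tells the parser how to decode extension fields; since the StoreTypes messages do not appear to use extensions, passing the empty registry behaves the same as the one-argument overload, as in this sketch:

    import com.google.protobuf.ExtensionRegistryLite;
    import org.apache.spark.status.protobuf.StoreTypes;

    public final class RegistryParse {
      public static void main(String[] args) throws Exception {
        byte[] wire = StoreTypes.TaskMetrics.newBuilder().build().toByteArray();
        // For extension-free messages this is equivalent to parseFrom(wire).
        StoreTypes.TaskMetrics tm =
            StoreTypes.TaskMetrics.parseFrom(wire, ExtensionRegistryLite.getEmptyRegistry());
        System.out.println(tm.getSerializedSize());
      }
    }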
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(ByteString) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
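The ByteString overloads above accept protobuf's immutable byte container directly, which can avoid a defensive copy when the bytes already live in a ByteString (for example, taken from another message's bytes field or from toByteString()). A brief sketch:

    import com.google.protobuf.ByteString;
    import org.apache.spark.status.protobuf.StoreTypes;

    public final class ByteStringParse {
      public static void main(String[] args) throws Exception {
        ByteString bs = StoreTypes.StageData.newBuilder().build().toByteString();
        StoreTypes.StageData sd = StoreTypes.StageData.parseFrom(bs);
        System.out.println(sd.getSerializedSize());
      }
    }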
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(ByteString, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(CodedInputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(CodedInputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(InputStream) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(InputStream, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(ByteBuffer) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parseFrom(ByteBuffer, ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
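The parseFrom overloads above are the standard parser entry points that protoc generates for every StoreTypes message: each accepts raw bytes, a CodedInputStream, an InputStream, or a ByteBuffer, optionally with an ExtensionRegistryLite. A minimal round-trip sketch in Scala (the builder/serialize/parse calls are the generated protobuf surface; RuntimeInfo is used here only as a representative message):

  import org.apache.spark.status.protobuf.StoreTypes

  // Build a message, serialize it, and parse it back.
  val info: StoreTypes.RuntimeInfo = StoreTypes.RuntimeInfo.newBuilder().build()
  val bytes: Array[Byte] = info.toByteArray
  val parsed: StoreTypes.RuntimeInfo = StoreTypes.RuntimeInfo.parseFrom(bytes)
  assert(parsed == info)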
- parseFunctionName(String) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits
- parseHostPort(String) - Static method in class org.apache.spark.util.Utils
- parseIgnoreCase(Class<E>, String) - Static method in class org.apache.spark.util.EnumUtil
- parseJson(JsonParser, boolean) - Static method in class org.apache.spark.types.variant.VariantBuilder
-
Similar to
VariantBuilder.parseJson(String, boolean)
, but takes a JSON parser instead of string input. - parseJson(String) - Static method in class org.apache.spark.resource.ResourceInformation
-
Parses a JSON string into a
ResourceInformation
instance. - parseJson(String, boolean) - Static method in class org.apache.spark.types.variant.VariantBuilder
-
Parse a JSON string as a Variant value.
- parseJson(JValue) - Static method in class org.apache.spark.resource.ResourceInformation
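ResourceInformation.parseJson expects the same JSON shape that resource discovery scripts emit: a resource name plus an array of addresses. A small sketch (the JSON literal is illustrative):

  import org.apache.spark.resource.ResourceInformation

  val json = """{"name": "gpu", "addresses": ["0", "1"]}"""
  val info = ResourceInformation.parseJson(json)
  // info.name == "gpu"; info.addresses == Array("0", "1")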
- parseModeUnsupportedError(String, ParseMode) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- parseProgressTimestamp(String) - Static method in class org.apache.spark.sql.streaming.ui.UIUtils
- parseQueryParams(URI) - Static method in class org.apache.spark.util.MavenUtils
-
Parse URI query string's parameter values of
transitive
, exclude
and repos
. - parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- parser() - Static method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
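parser() returns the generated com.google.protobuf.Parser for the message type, which is what the parseFrom overloads above delegate to; it is useful when a parser has to be passed around as a value. A brief sketch:

  import org.apache.spark.status.protobuf.StoreTypes

  val taskMetricsParser = StoreTypes.TaskMetrics.parser()
  // Any Parser[T] can parse from bytes or streams:
  val metrics = taskMetricsParser.parseFrom(
    StoreTypes.TaskMetrics.newBuilder().build().toByteArray)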
- Parser(Function1<Reader<Object>, Parsers.ParseResult<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- parseReference(String) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- parseResourceRequest(SparkConf, ResourceID) - Static method in class org.apache.spark.resource.ResourceUtils
- parseResourceRequirements(SparkConf, String) - Static method in class org.apache.spark.resource.ResourceUtils
- parserStackOverflow(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- parseStandaloneMasterUrls(String) - Static method in class org.apache.spark.util.Utils
-
Split the comma-delimited string of master URLs into a list.
- parseString(String) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- parseString(String) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- parseString(String) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- parseString(String) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- parseString(String) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- parseString(String) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- parseString(String) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- parseString(String) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- parseString(String) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- parseString(String) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- parseTypeWithFallback(String, Function1<String, DataType>, Function1<String, DataType>) - Static method in class org.apache.spark.sql.types.DataType
-
Parses data type from a string with schema.
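parseTypeWithFallback tries the primary parser first and falls back to the second on failure, which is how a DDL string and a legacy JSON representation can both be accepted. A hedged sketch using the public DataType.fromDDL and DataType.fromJson parsers:

  import org.apache.spark.sql.types.DataType

  val schema = DataType.parseTypeWithFallback(
    "struct<a:int,b:string>",
    DataType.fromDDL,   // primary parser
    DataType.fromJson)  // fallback parser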
- PartialResult<R> - Class in org.apache.spark.partial
- PartialResult(R, boolean) - Constructor for class org.apache.spark.partial.PartialResult
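A PartialResult is what approximate actions such as RDD.countApprox return: an initial estimate that may later be upgraded to a final value. A sketch, assuming an active SparkContext named sc:

  val pr = sc.parallelize(1 to 1000000).countApprox(timeout = 500L)
  val estimate = pr.initialValue   // BoundedDouble available within the timeout
  // pr.getFinalValue() would block until the exact count is ready.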
- partition() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- partition(String) - Method in class org.apache.spark.status.LiveRDD
- Partition - Interface in org.apache.spark
-
An identifier for a partition in an RDD.
- PARTITION_DEFINED - Enum constant in enum class org.apache.spark.sql.connector.read.Scan.ColumnarSupportMode
- PARTITION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- PARTITION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- partitionBy(String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Partitions the output by the given columns on the file system.
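In the writer, partitionBy lays the output out as one directory per distinct value of each partition column (Hive-style paths such as year=2024/month=01). A minimal sketch, assuming a DataFrame df with year and month columns:

  df.write
    .partitionBy("year", "month")
    .parquet("/tmp/events")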
- partitionBy(String...) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Partitions the output by the given columns on the file system.
- partitionBy(String, String...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a
WindowSpec
with the partitioning defined. - partitionBy(String, String...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the partitioning columns in a
WindowSpec
. - partitionBy(String, Seq<String>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a
WindowSpec
with the partitioning defined. - partitionBy(String, Seq<String>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the partitioning columns in a
WindowSpec
. - partitionBy(PartitionStrategy) - Method in class org.apache.spark.graphx.Graph
-
Repartitions the edges in the graph according to
partitionStrategy
. - partitionBy(PartitionStrategy) - Method in class org.apache.spark.graphx.impl.GraphImpl
- partitionBy(PartitionStrategy, int) - Method in class org.apache.spark.graphx.Graph
-
Repartitions the edges in the graph according to
partitionStrategy
. - partitionBy(PartitionStrategy, int) - Method in class org.apache.spark.graphx.impl.GraphImpl
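For GraphX, partitionBy re-shuffles edges according to the chosen PartitionStrategy before edge-heavy operations. A sketch, assuming an existing Graph named graph:

  import org.apache.spark.graphx.PartitionStrategy

  val repartitioned = graph.partitionBy(PartitionStrategy.EdgePartition2D)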
- partitionBy(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a copy of the RDD partitioned using the specified partitioner.
- partitionBy(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a copy of the RDD partitioned using the specified partitioner.
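On pair RDDs, partitionBy materializes a copy of the data shuffled by the given Partitioner, which later co-partitioned operations such as joins can exploit. A sketch with the built-in HashPartitioner, assuming an active SparkContext named sc:

  import org.apache.spark.HashPartitioner

  val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
  val byKey = pairs.partitionBy(new HashPartitioner(8))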
- partitionBy(Column...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a
WindowSpec
with the partitioning defined. - partitionBy(Column...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the partitioning columns in a
WindowSpec
. - partitionBy(Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Partitions the output by the given columns on the file system.
- partitionBy(Seq<String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Partitions the output by the given columns on the file system.
- partitionBy(Seq<Column>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a
WindowSpec
with the partitioning defined. - partitionBy(Seq<Column>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the partitioning columns in a
WindowSpec
. - partitionByDoesNotAllowedWhenUsingInsertIntoError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
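Window.partitionBy is the starting point for window functions: rows are grouped per partition key and a function is evaluated over each group. A sketch, assuming a DataFrame df with dept and salary columns:

  import org.apache.spark.sql.expressions.Window
  import org.apache.spark.sql.functions.{col, rank}

  val byDept = Window.partitionBy("dept").orderBy(col("salary").desc)
  val ranked = df.withColumn("rank", rank().over(byDept))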
- PartitionCoalescer - Interface in org.apache.spark.rdd
-
::DeveloperApi:: A PartitionCoalescer defines how to coalesce the partitions of a given RDD.
- partitionColumnNotFoundInSchemaError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- partitionColumnNotFoundInSchemaError(String, StructType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- partitionColumnNotSpecifiedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- partitionedBy(Column, Column...) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Partition the output table created by
create
, createOrReplace
, or replace
using the given columns or transforms. - partitionedBy(Column, Column...) - Method in class org.apache.spark.sql.DataFrameWriterV2
- partitionedBy(Column, Seq<Column>) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Partition the output table created by
create
, createOrReplace
, or replace
using the given columns or transforms. - partitionedBy(Column, Seq<Column>) - Method in class org.apache.spark.sql.DataFrameWriterV2
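With the v2 writer, partitionedBy declares the partitioning of the table being created rather than a directory layout for a single write. A sketch, where the table name and date column are illustrative:

  import org.apache.spark.sql.functions.col

  df.writeTo("catalog.db.events")
    .partitionedBy(col("date"))
    .create()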
- partitioner() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The partitioner of this RDD.
- partitioner() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
If
partitionsRDD
already has a partitioner, use it. - partitioner() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- partitioner() - Method in class org.apache.spark.rdd.CoGroupedRDD
- partitioner() - Method in class org.apache.spark.rdd.RDD
-
Optionally overridden by subclasses to specify how they are partitioned.
- partitioner() - Method in class org.apache.spark.rdd.ShuffledRDD
- partitioner() - Method in class org.apache.spark.ShuffleDependency
- partitioner(Partitioner) - Method in class org.apache.spark.streaming.StateSpec
-
Set the partitioner by which the state RDDs generated by
mapWithState
will be partitioned. - Partitioner - Class in org.apache.spark
-
An object that defines how the elements in a key-value pair RDD are partitioned by key.
- Partitioner() - Constructor for class org.apache.spark.Partitioner
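A custom Partitioner only has to define numPartitions and getPartition. A minimal sketch that routes keys by hash code (illustrative, not a library class):

  import org.apache.spark.Partitioner

  class ModPartitioner(override val numPartitions: Int) extends Partitioner {
    // Non-negative modulo, so even Int.MinValue hash codes map to a valid bucket.
    def getPartition(key: Any): Int = {
      val mod = key.hashCode % numPartitions
      if (mod < 0) mod + numPartitions else mod
    }
  }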
- PartitionEvaluator<T,U> - Interface in org.apache.spark
-
An evaluator for computing RDD partitions.
- PartitionEvaluatorFactory<T,U> - Interface in org.apache.spark
-
A factory to create
PartitionEvaluator
. - partitionExists(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Test whether a partition exists using an
ident
from the table. - PartitionGroup - Class in org.apache.spark.rdd
-
::DeveloperApi:: A group of
Partition
s. param: prefLoc preferred location for the partition group - PartitionGroup(Option<String>) - Constructor for class org.apache.spark.rdd.PartitionGroup
- partitionGroupOrdering() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Accessor for nested Scala object.
- partitionGroupOrdering$() - Constructor for class org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering$
- partitionId() - Method in class org.apache.spark.BarrierTaskContext
- partitionId() - Method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- partitionId() - Method in class org.apache.spark.scheduler.TaskInfo
-
The actual RDD partition ID in this task.
- partitionId() - Method in class org.apache.spark.status.api.v1.TaskData
- partitionId() - Method in class org.apache.spark.TaskContext
-
The ID of the RDD partition that is computed by this task.
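TaskContext.get() exposes the running task's context inside closures, so the partition being computed can be observed directly. A sketch tagging each element with its partition ID, assuming an existing RDD named rdd:

  import org.apache.spark.TaskContext

  val tagged = rdd.mapPartitions { iter =>
    val pid = TaskContext.get().partitionId()
    iter.map(x => (pid, x))
  }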
- partitionID() - Method in class org.apache.spark.TaskCommitDenied
- partitioning() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
Returns the physical partitioning of this table.
- Partitioning - Interface in org.apache.spark.sql.connector.read.partitioning
-
An interface to represent the output data partitioning for a data source, which is returned by
SupportsReportPartitioning.outputPartitioning()
. - partitioning$() - Constructor for class org.apache.spark.sql.functions.partitioning$
- PartitioningUtils - Class in org.apache.spark.sql.util
- PartitioningUtils() - Constructor for class org.apache.spark.sql.util.PartitioningUtils
- partitionKey() - Method in interface org.apache.spark.sql.connector.read.HasPartitionKey
-
Returns the value of the partition key(s) associated to this partition.
- partitionNotSpecifyLocationUriError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- partitionNumMismatchError(int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- PartitionOffset - Interface in org.apache.spark.sql.connector.read.streaming
-
Used for per-partition offsets in continuous processing.
- PartitionPruningRDD<T> - Class in org.apache.spark.rdd
-
:: DeveloperApi :: An RDD used to prune RDD partitions so we can avoid launching tasks on all partitions.
- PartitionPruningRDD(RDD<T>, Function1<Object, Object>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.PartitionPruningRDD
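PartitionPruningRDD.create builds an RDD over just the parent partitions that pass the filter, so a job launches tasks only for those partitions. A hedged sketch using the companion object's create factory, which avoids spelling out the ClassTag, assuming an existing RDD named rdd:

  import org.apache.spark.rdd.PartitionPruningRDD

  // Keep only the first partition of the parent RDD.
  val pruned = PartitionPruningRDD.create(rdd, partitionIndex => partitionIndex == 0)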
- PartitionReader<T> - Interface in org.apache.spark.sql.connector.read
-
A partition reader returned by
PartitionReaderFactory.createReader(InputPartition)
orPartitionReaderFactory.createColumnarReader(InputPartition)
. - PartitionReaderFactory - Interface in org.apache.spark.sql.connector.read
-
A factory used to create
PartitionReader
instances. - partitions() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Set of partitions in this RDD.
- partitions() - Method in class org.apache.spark.rdd.PartitionGroup
- partitions() - Method in class org.apache.spark.rdd.RDD
-
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
- partitions() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- PARTITIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- partitionSchema() - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Get the partition schema of the table; this must be consistent with
Table.partitioning()
. - partitionSizeNotAllowedWithUnspecifiedDistributionError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- partitionsRDD() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- partitionsRDD() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- PartitionStrategy - Interface in org.apache.spark.graphx
-
Represents the way edges are assigned to edge partitions based on their source and destination vertex IDs.
- PartitionStrategy.CanonicalRandomVertexCut$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions by hashing the source and destination vertex IDs in a canonical direction, resulting in a random vertex cut that colocates all edges between two vertices, regardless of direction.
- PartitionStrategy.EdgePartition1D$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions using only the source vertex ID, colocating edges with the same source.
- PartitionStrategy.EdgePartition2D$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions using a 2D partitioning of the sparse edge adjacency matrix, guaranteeing a
2 * sqrt(numParts)
bound on vertex replication. - PartitionStrategy.RandomVertexCut$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions by hashing the source and destination vertex IDs, resulting in a random vertex cut that colocates all same-direction edges between two vertices.
- partitionTransformNotExpectedError(String, String, SqlBaseParser.ApplyTransformContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- PartitionTypeHelper(Seq<String>) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper
- path() - Method in class org.apache.spark.ml.LoadInstanceStart
- path() - Method in class org.apache.spark.ml.SaveInstanceEnd
- path() - Method in class org.apache.spark.ml.SaveInstanceStart
- path() - Method in class org.apache.spark.scheduler.InputFormatInfo
- path() - Method in class org.apache.spark.scheduler.SplitInfo
- pathNotSupportedError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- pathOptionNotSetCorrectlyWhenReadingError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- pathOptionNotSetCorrectlyWhenWritingError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- pattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Regex pattern used to match delimiters if RegexTokenizer.gaps() is true or tokens if RegexTokenizer.gaps() is false.
- pc() - Method in class org.apache.spark.ml.feature.PCAModel
- pc() - Method in class org.apache.spark.mllib.feature.PCAModel
- PCA - Class in org.apache.spark.ml.feature
-
PCA trains a model to project vectors to a lower dimensional space of the top PCA.k principal components.
- PCA - Class in org.apache.spark.mllib.feature
-
A feature transformer that projects vectors to a low-dimensional space using PCA.
- PCA() - Constructor for class org.apache.spark.ml.feature.PCA
- PCA(int) - Constructor for class org.apache.spark.mllib.feature.PCA
- PCA(String) - Constructor for class org.apache.spark.ml.feature.PCA
- PCAModel - Class in org.apache.spark.ml.feature
-
Model fitted by PCA.
- PCAModel - Class in org.apache.spark.mllib.feature
-
Model fitted by PCA that can project vectors to a low-dimensional space using PCA.
- PCAParams - Interface in org.apache.spark.ml.feature
- PCAUtil - Class in org.apache.spark.mllib.feature
- PCAUtil() - Constructor for class org.apache.spark.mllib.feature.PCAUtil
- pdf(Vector) - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
Returns the density of this multivariate Gaussian at the given point x.
- pdf(Vector) - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
Returns the density of this multivariate Gaussian at the given point x.
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.InternalAccumulator
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.ui.ToolTips
- PEAK_EXECUTION_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- PEAK_EXECUTION_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- PEAK_EXECUTION_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- PEAK_EXECUTION_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- PEAK_EXECUTION_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- PEAK_EXECUTOR_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- PEAK_MEM() - Static method in class org.apache.spark.status.TaskIndexNames
- PEAK_MEMORY_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- PEAK_MEMORY_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- PEAK_MEMORY_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- PEAK_OFF_HEAP_EXECUTION_MEMORY() - Static method in class org.apache.spark.InternalAccumulator
- PEAK_ON_HEAP_EXECUTION_MEMORY() - Static method in class org.apache.spark.InternalAccumulator
- peakExecutionMemory() - Method in class org.apache.spark.status.api.v1.StageData
- peakExecutionMemory() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- peakExecutionMemory() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- peakExecutorMetrics() - Method in class org.apache.spark.status.api.v1.StageData
- peakExecutorMetrics() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- peakExecutorMetrics() - Method in class org.apache.spark.status.LiveStage
- peakMemoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- peakMemoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- peakMemoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- PEARSON() - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
- PearsonCorrelation - Class in org.apache.spark.mllib.stat.correlation
-
Compute Pearson correlation for two RDDs of the type RDD[Double] or the correlation matrix for an RDD of the type RDD[Vector].
- PearsonCorrelation() - Constructor for class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
- PENDING - Enum constant in enum class org.apache.spark.status.api.v1.StageStatus
- percent_rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the relative rank (i.e. percentile) of rows within a window partition.
- percentile() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- percentile() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- percentile() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
Percentile of features that the selector will select, ordered by ascending p-value.
- percentile() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- percentile(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the exact percentile(s) of numeric column expr at the given percentage(s), each in the range [0.0, 1.0].
- percentile(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the exact percentile(s) of numeric column expr at the given percentage(s), each in the range [0.0, 1.0].
- percentile_approx(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate percentile of the numeric column col, i.e. the smallest value in the ordered col values (sorted from least to greatest) such that no more than percentage of col values is less than or equal to that value.
- percentiles() - Static method in class org.apache.spark.scheduler.StatsReportListener
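A hedged sketch of the percentile and percentile_approx functions indexed above; the DataFrame `sales` and its column `amount` are made-up names:
    import org.apache.spark.sql.functions.{col, lit, percentile, percentile_approx}

    sales.agg(
      percentile(col("amount"), lit(0.5)).as("exact_median"),
      // 10000 is the accuracy knob: larger is more precise but costlier.
      percentile_approx(col("amount"), lit(0.5), lit(10000)).as("approx_median"))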
- percentilesHeader() - Static method in class org.apache.spark.scheduler.StatsReportListener
- PERIOD() - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder that serializes instances of the java.time.Period class to the internal representation of nullable Catalyst's YearMonthIntervalType.
- permanentViewNotSupportedByStreamingReadingAPIError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- persist() - Method in class org.apache.spark.rdd.RDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- persist() - Method in class org.apache.spark.sql.api.Dataset
-
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
- persist() - Method in class org.apache.spark.sql.Dataset
- persist() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- persist() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- persist() - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Set this RDD's storage level to persist its values across operations after the first time it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Set this RDD's storage level to persist its values across operations after the first time it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaRDD
-
Set this RDD's storage level to persist its values across operations after the first time it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.graphx.Graph
-
Caches the vertices and edges associated with this graph at the specified storage level, ignoring any target storage levels previously set.
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
Persists the edge partitions at the specified storage level, ignoring any existing target storage level.
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.GraphImpl
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
Persists the vertex partitions at the specified storage level, ignoring any existing target storage level.
- persist(StorageLevel) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Persists the underlying RDD with the specified storage level.
- persist(StorageLevel) - Method in class org.apache.spark.rdd.HadoopRDD
- persist(StorageLevel) - Method in class org.apache.spark.rdd.NewHadoopRDD
- persist(StorageLevel) - Method in class org.apache.spark.rdd.RDD
-
Set this RDD's storage level to persist its values across operations after the first time it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.sql.api.Dataset
-
Persist this Dataset with the given storage level.
- persist(StorageLevel) - Method in class org.apache.spark.sql.Dataset
- persist(StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist the RDDs of this DStream with the given storage level.
- persist(StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist the RDDs of this DStream with the given storage level.
- persist(StorageLevel) - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist the RDDs of this DStream with the given storage level.
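A hedged sketch of the persist overloads: RDDs default to MEMORY_ONLY and Datasets to MEMORY_AND_DISK, and an explicit StorageLevel overrides either (`sc` is an assumed SparkContext; the path is a placeholder):
    import org.apache.spark.storage.StorageLevel

    val lines = sc.textFile("/tmp/input.txt")            // placeholder path
      .persist(StorageLevel.MEMORY_AND_DISK_SER)
    lines.count()  // the first action materializes the persisted partitions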
- personalizedPageRank(long, double, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run personalized PageRank for a given vertex, such that all random walks are started relative to the source node.
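A hedged sketch of GraphOps.personalizedPageRank(long, double, double); the source vertex ID and tolerances are made up:
    // Ranks are biased toward random walks restarting at vertex 42L.
    val ranks = graph.personalizedPageRank(42L, tol = 0.001, resetProb = 0.15).vertices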
- phrase(Parsers.Parser<T>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- PHYSICAL_PLAN_DESCRIPTION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- PhysicalWriteInfo - Interface in org.apache.spark.sql.connector.write
-
This interface contains physical write information that data sources can use when generating a
DataWriterFactory
or aStreamingDataWriterFactory
. - pi() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- pi() - Static method in class org.apache.spark.sql.functions
-
Returns Pi.
- pickBin(Partition, RDD<?>, double, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Takes a parent RDD partition and decides which partition group to put it in. Takes locality into account, but also uses power-of-two choices to load balance. It strikes a balance between the two using the balanceSlack variable.
- pickRandomVertex() - Method in class org.apache.spark.graphx.GraphOps
-
Picks a random vertex from the graph and returns its ID.
- pipe(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(String) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
- pipe(String, Map<String, String>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>, boolean, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>, boolean, int, String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
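A hedged sketch of RDD.pipe: each partition's elements are written to the external command's stdin, and its stdout lines form the result RDD:
    val upper = sc.parallelize(Seq("a", "b", "c"))
      .pipe("tr '[:lower:]' '[:upper:]'")
    upper.collect()  // Array("A", "B", "C")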
- Pipeline - Class in org.apache.spark.ml
-
A simple pipeline, which acts as an estimator.
- Pipeline() - Constructor for class org.apache.spark.ml.Pipeline
- Pipeline(String) - Constructor for class org.apache.spark.ml.Pipeline
- Pipeline.SharedReadWrite$ - Class in org.apache.spark.ml
- PipelineModel - Class in org.apache.spark.ml
-
Represents a fitted pipeline.
- PipelineStage - Class in org.apache.spark.ml
-
A stage in a pipeline, either an
Estimator
or aTransformer
. - PipelineStage() - Constructor for class org.apache.spark.ml.PipelineStage
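A hedged sketch of Pipeline as an estimator: fit() runs the stages in order and yields a PipelineModel (`training` is an assumed DataFrame with "text" and "label" columns):
    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)
    val model = new Pipeline().setStages(Array(tokenizer, hashingTF, lr)).fit(training)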
- pipeOperatorSelectContainsAggregateFunction(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- pivot(String) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(String) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- pivot(String, List<Object>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
(Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(String, List<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- pivot(String, Seq<Object>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(String, Seq<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- pivot(Column) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(Column) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- pivot(Column, List<Object>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
(Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(Column, List<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- pivot(Column, Seq<Object>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(Column, Seq<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
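A hedged sketch of RelationalGroupedDataset.pivot; passing explicit pivot values avoids an extra pass over the data to compute them (`df` and the column names are made up):
    import org.apache.spark.sql.functions.sum

    df.groupBy("year")
      .pivot("course", Seq("dotNET", "Java"))  // one output column per value
      .agg(sum("earnings"))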
- pivotColumnUnsupportedError(Object, Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- pivotNotAfterGroupByUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- PivotType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.PivotType$
- pivotValDataTypeMismatchError(Expression, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- planDescription() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- planInputPartitions() - Method in interface org.apache.spark.sql.connector.read.Batch
-
Returns a list of input partitions.
- planInputPartitions(Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.ContinuousStream
-
Returns a list of input partitions given the start offset.
- planInputPartitions(Offset, Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
-
Returns a list of input partitions given the start and end offsets.
- PluginContext - Interface in org.apache.spark.api.plugin
-
:: DeveloperApi :: Context information and operations for plugins loaded by Spark.
- plus(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- plus(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- plus(double, double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- plus(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- plus(float, float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- plus(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- plus(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- plus(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- plus(Object) - Method in class org.apache.spark.sql.Column
-
Sum of this expression and another expression.
- plus(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- plus(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- plus(Duration) - Method in class org.apache.spark.streaming.Duration
- plus(Duration) - Method in class org.apache.spark.streaming.Time
- pmml() - Method in interface org.apache.spark.mllib.pmml.export.PMMLModelExport
-
Holder of the exported model in PMML format.
- PMMLExportable - Interface in org.apache.spark.mllib.pmml
-
Export model to the PMML format. Predictive Model Markup Language (PMML) is an XML-based file format developed by the Data Mining Group (www.dmg.org).
- PMMLKMeansModelWriter - Class in org.apache.spark.ml.clustering
-
A writer for KMeans that handles the "pmml" format.
- PMMLKMeansModelWriter() - Constructor for class org.apache.spark.ml.clustering.PMMLKMeansModelWriter
- PMMLLinearRegressionModelWriter - Class in org.apache.spark.ml.regression
-
A writer for LinearRegression that handles the "pmml" format.
- PMMLLinearRegressionModelWriter() - Constructor for class org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
- PMMLModelExport - Interface in org.apache.spark.mllib.pmml.export
- PMMLModelExportFactory - Class in org.apache.spark.mllib.pmml.export
- PMMLModelExportFactory() - Constructor for class org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
- pmod(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the positive value of dividend mod divisor.
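A hedged one-liner contrasting pmod with the % operator, which can return negative values:
    import org.apache.spark.sql.functions.{lit, pmod}

    df.select(pmod(lit(-7), lit(3)))  // yields 2, whereas -7 % 3 is -1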
- point() - Method in class org.apache.spark.mllib.feature.VocabWord
- POINTS() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
- pointSilhouetteCoefficient(Set<Object>, double, double, double, Function1<Object, Object>) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
- pointSilhouetteCoefficient(Set<Object>, double, double, double, Function1<Object, Object>) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
- POISON_PILL() - Static method in class org.apache.spark.scheduler.AsyncEventQueue
- Poisson$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- PoissonBounds - Class in org.apache.spark.util.random
-
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact sample sizes with high confidence when sampling with replacement.
- PoissonBounds() - Constructor for class org.apache.spark.util.random.PoissonBounds
- PoissonGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d. samples from the Poisson distribution with the given mean.
- PoissonGenerator(double) - Constructor for class org.apache.spark.mllib.random.PoissonGenerator
- poissonJavaRDD(JavaSparkContext, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaRDD with the default number of partitions and the default seed.
- poissonJavaRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaRDD with the default seed.
- poissonJavaRDD(JavaSparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.poissonRDD.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaVectorRDD with the default number of partitions and the default seed.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaVectorRDD with the default seed.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.poissonVectorRDD.
- poissonRDD(SparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
- PoissonSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi :: A sampler for sampling with replacement, based on values drawn from Poisson distribution.
- PoissonSampler(double) - Constructor for class org.apache.spark.util.random.PoissonSampler
- PoissonSampler(double, boolean) - Constructor for class org.apache.spark.util.random.PoissonSampler
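A hedged sketch of RandomRDDs.poissonRDD, the Scala counterpart of the Java-friendly variants indexed above (`sc` is an assumed SparkContext):
    import org.apache.spark.mllib.random.RandomRDDs

    // 1,000,000 i.i.d. Poisson(mean = 4.0) samples in 10 partitions, seed 11L.
    val samples = RandomRDDs.poissonRDD(sc, 4.0, 1000000L, 10, 11L)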
- poissonVectorRDD(SparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the Poisson distribution with the input mean.
- PolynomialExpansion - Class in org.apache.spark.ml.feature
-
Perform feature expansion in a polynomial space.
- PolynomialExpansion() - Constructor for class org.apache.spark.ml.feature.PolynomialExpansion
- PolynomialExpansion(String) - Constructor for class org.apache.spark.ml.feature.PolynomialExpansion
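A hedged sketch of PolynomialExpansion (`df` with a Vector column "features" is assumed):
    import org.apache.spark.ml.feature.PolynomialExpansion

    val expanded = new PolynomialExpansion()
      .setInputCol("features")
      .setOutputCol("polyFeatures")
      .setDegree(2)        // expand into a degree-2 polynomial space
      .transform(df)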
- pool() - Method in class org.apache.spark.serializer.KryoSerializer
- popStdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population standard deviation of this RDD's elements.
- popStdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population standard deviation of this RDD's elements.
- popStdev() - Method in class org.apache.spark.util.StatCounter
-
Return the population standard deviation of the values.
- popVariance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population variance of this RDD's elements.
- popVariance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population variance of this RDD's elements.
- popVariance() - Method in class org.apache.spark.util.StatCounter
-
Return the population variance of the values.
- port() - Method in interface org.apache.spark.SparkExecutorInfo
- port() - Method in class org.apache.spark.SparkExecutorInfoImpl
- port() - Method in class org.apache.spark.storage.BlockManagerId
- PortableDataStream - Class in org.apache.spark.input
-
A class that allows DataStreams to be serialized and moved around by not creating them until they need to be read.
- PortableDataStream(CombineFileSplit, TaskAttemptContext, Integer) - Constructor for class org.apache.spark.input.PortableDataStream
- portMaxRetries(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Maximum number of retries when binding to a port before giving up.
- portOptionNotSetError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- posexplode(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element with position in the given array or map column.
- posexplode_outer(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element with position in the given array or map column.
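A hedged sketch contrasting posexplode with posexplode_outer (`df` with an array column "items" is assumed):
    import org.apache.spark.sql.functions.{col, posexplode, posexplode_outer}

    df.select(posexplode(col("items")))        // columns: pos, col; drops null/empty arrays
    df.select(posexplode_outer(col("items")))  // keeps rows with null/empty arrays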
- position() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
- position() - Method in class org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnPosition
- position() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
- position(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the position of the first occurrence of substr in str after position 1.
- position(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the position of the first occurrence of substr in str after position start.
- positionalAndNamedArgumentDoubleReference(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- positioned(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- positive(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value.
- post(SparkListenerEvent) - Method in class org.apache.spark.scheduler.AsyncEventQueue
- Postfix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
- PostgresDialect - Class in org.apache.spark.sql.jdbc
- PostgresDialect() - Constructor for class org.apache.spark.sql.jdbc.PostgresDialect
- postStartHook() - Method in interface org.apache.spark.scheduler.TaskScheduler
- postToAll(E) - Method in interface org.apache.spark.util.ListenerBus
-
Post the event to all registered listeners.
- pow(double, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(double, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, double) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(Column, double) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- POW_10() - Static method in class org.apache.spark.sql.types.Decimal
- power(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- PowerIterationClustering - Class in org.apache.spark.ml.clustering
-
Power Iteration Clustering (PIC), a scalable graph clustering algorithm developed by Lin and Cohen.
- PowerIterationClustering - Class in org.apache.spark.mllib.clustering
-
Power Iteration Clustering (PIC), a scalable graph clustering algorithm developed by Lin and Cohen.
- PowerIterationClustering() - Constructor for class org.apache.spark.ml.clustering.PowerIterationClustering
- PowerIterationClustering() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Constructs a PIC instance with default parameters: {k: 2, maxIterations: 100, initMode: "random"}.
- PowerIterationClustering.Assignment - Class in org.apache.spark.mllib.clustering
-
Cluster assignment.
- PowerIterationClustering.Assignment$ - Class in org.apache.spark.mllib.clustering
- PowerIterationClusteringModel - Class in org.apache.spark.mllib.clustering
-
Model produced by PowerIterationClustering.
- PowerIterationClusteringModel(int, RDD<PowerIterationClustering.Assignment>) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
- PowerIterationClusteringModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
- PowerIterationClusteringParams - Interface in org.apache.spark.ml.clustering
-
Common params for PowerIterationClustering
- PowerIterationClusteringWrapper - Class in org.apache.spark.ml.r
- PowerIterationClusteringWrapper() - Constructor for class org.apache.spark.ml.r.PowerIterationClusteringWrapper
- pr() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Returns the precision-recall curve, which is a DataFrame containing two fields (recall, precision), with (0.0, 1.0) prepended to it.
- pr() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- pr() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- pr() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- pr() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- pr() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the precision-recall curve, which is an RDD of (recall, precision), NOT (precision, recall), with (0.0, p) prepended to it, where p is the precision associated with the lowest recall on the curve.
- preciseSize() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
- precision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns document-based precision averaged by the number of documents.
- precision() - Method in class org.apache.spark.sql.types.Decimal
- precision() - Method in class org.apache.spark.sql.types.DecimalType
- precision(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns precision for a given label (category).
- precision(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns precision for a given label (category).
- Precision - Class in org.apache.spark.mllib.evaluation.binary
-
Precision.
- Precision() - Constructor for class org.apache.spark.mllib.evaluation.binary.Precision
- precisionAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Compute the average precision of all the queries, truncated at ranking position k.
- precisionByLabel() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns precision for each label (category).
- precisionByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Returns a DataFrame with two fields (threshold, precision) forming the precision-threshold curve.
- precisionByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- precisionByThreshold() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- precisionByThreshold() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- precisionByThreshold() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- precisionByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, precision) curve.
- Predicate - Class in org.apache.spark.sql.connector.expressions.filter
-
The general representation of predicate expressions, which contains the upper-cased expression name and all the children expressions.
- Predicate(String, Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.filter.Predicate
- predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- predict() - Method in class org.apache.spark.mllib.tree.model.Node
- predict() - Method in class org.apache.spark.mllib.tree.model.Predict
- predict(double) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- predict(double) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict a single label.
- predict(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Predict the rating of one user for one product.
- predict(FeaturesType) - Method in class org.apache.spark.ml.classification.ClassificationModel
-
Predict label for the given features.
- predict(FeaturesType) - Method in class org.apache.spark.ml.PredictionModel
-
Predict label for the given features.
- predict(JavaDoubleRDD) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict labels for provided features.
- predict(JavaPairRDD<Integer, Integer>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Java-friendly version of MatrixFactorizationModel.predict.
- predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for examples stored in a JavaRDD.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Java-friendly version of predict().
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Java-friendly version of predict().
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Maps given points to their cluster indices.
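A hedged sketch of the mllib KMeansModel.predict overloads indexed above (`kmeansModel` and the RDD `points` are assumed):
    import org.apache.spark.mllib.linalg.Vectors

    val clusterId = kmeansModel.predict(Vectors.dense(1.0, 2.0))  // single point
    val clusterIds = kmeansModel.predict(points)                  // RDD[Vector] => RDD[Int]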
- predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for examples stored in a JavaRDD.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for the given data set using the model trained.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Java-friendly version of org.apache.spark.mllib.tree.model.TreeEnsembleModel.predict.
- predict(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Prediction of the model.
- predict(Vector) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- predict(Vector) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- predict(Vector) - Method in class org.apache.spark.ml.classification.LinearSVCModel
- predict(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Predict label for the given feature vector.
- predict(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
Predict label for the given features.
- predict(Vector) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- predict(Vector) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- predict(Vector) - Method in class org.apache.spark.ml.clustering.KMeansModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.FMRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- predict(Vector) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- predict(Vector) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for a single data point using the model trained.
- predict(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Predicts the index of the cluster that the input point belongs to.
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Maps given point to its cluster index.
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Returns the cluster index that a given point belongs to.
- predict(Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Predict values for a single data point using the model trained.
- predict(Vector) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for a single data point using the model trained.
- predict(Vector) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for a single data point using the model trained.
- predict(Vector) - Method in class org.apache.spark.mllib.tree.model.Node
-
Predict value if node is not leaf.
- predict(Vector) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Predict values for a single data point using the model trained.
- predict(RDD<Object>) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict labels for provided features.
- predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for the given data set using the model trained.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Predicts the indices of the clusters that the input points belong to.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Maps given points to their cluster indices.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Maps given points to their cluster indices.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Predict values for the given data set using the model trained.
- predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for the given data set using the model trained.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for the given data set using the model trained.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Predict values for the given data set.
- predict(RDD<Tuple2<Object, Object>>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Predict the rating of many users for many products.
- Predict - Class in org.apache.spark.mllib.tree.model
-
Predicted value for a node. param: predict - predicted value; param: prob - probability of the label (classification only).
- Predict(double, double) - Constructor for class org.apache.spark.mllib.tree.model.Predict
- PredictData(double, double) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- PredictData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
- prediction() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- prediction() - Method in class org.apache.spark.ml.tree.InternalNode
- prediction() - Method in class org.apache.spark.ml.tree.LeafNode
- prediction() - Method in class org.apache.spark.ml.tree.Node
-
Prediction a leaf node makes, or the prediction an internal node would make if it were a leaf node.
- predictionCol() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Field in "predictions" which gives the prediction of each class.
- predictionCol() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- predictionCol() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- predictionCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- predictionCol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationSummaryImpl
- predictionCol() - Method in class org.apache.spark.ml.classification.OneVsRest
- predictionCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- predictionCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationSummaryImpl
- predictionCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- predictionCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- predictionCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- predictionCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- predictionCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- predictionCol() - Method in class org.apache.spark.ml.clustering.KMeans
- predictionCol() - Method in class org.apache.spark.ml.clustering.KMeansModel
- predictionCol() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- predictionCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- predictionCol() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- predictionCol() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- predictionCol() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- predictionCol() - Method in class org.apache.spark.ml.fpm.FPGrowth
- predictionCol() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- predictionCol() - Method in interface org.apache.spark.ml.param.shared.HasPredictionCol
-
Param for prediction column name.
- predictionCol() - Method in class org.apache.spark.ml.PredictionModel
- predictionCol() - Method in class org.apache.spark.ml.Predictor
- predictionCol() - Method in class org.apache.spark.ml.recommendation.ALS
- predictionCol() - Method in class org.apache.spark.ml.recommendation.ALSModel
- predictionCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Field in "predictions" which gives the predicted value of each instance.
- predictionCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- predictionCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- predictionCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- PredictionModel<FeaturesType, M extends PredictionModel<FeaturesType, M>> - Class in org.apache.spark.ml
-
Abstraction for a model for prediction tasks (regression and classification).
- PredictionModel() - Constructor for class org.apache.spark.ml.PredictionModel
- predictions() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Dataframe output by the model's transform method.
- predictions() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- predictions() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- predictions() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- predictions() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationSummaryImpl
- predictions() - Method in class org.apache.spark.ml.classification.RandomForestClassificationSummaryImpl
- predictions() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
- predictions() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Predictions output by the model's transform method.
- predictions() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
Predictions associated with the boundaries at the same index, monotone because of isotonic regression.
- predictions() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- predictions() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
- predictLeaf(Vector) - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
- predictLeaf(Vector) - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
- predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of predictOn.
- predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of predictOn.
- predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Use the clustering model to make predictions on batches of data from a DStream.
- predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Use the model to make predictions on batches of data from a DStream.
- predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of predictOnValues.
- predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of predictOnValues.
- predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Use the model to make predictions on the values of a DStream and carry over its keys.
- predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Use the model to make predictions on the values of a DStream and carry over its keys.
- Predictor<FeaturesType, Learner extends Predictor<FeaturesType, Learner, M>, M extends PredictionModel<FeaturesType, M>> - Class in org.apache.spark.ml
-
Abstraction for prediction problems (regression and classification).
- Predictor() - Constructor for class org.apache.spark.ml.Predictor
- PredictorParams - Interface in org.apache.spark.ml
-
(private[ml]) Trait for parameters for prediction (regression and classification).
- predictProbabilities(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
Predict posterior class probabilities for a single data point using the model trained.
- predictProbabilities(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
Predict values for the given data set using the model trained.
- predictProbability(FeaturesType) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
Predict the probability of each class given the features.
- predictProbability(Vector) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- predictQuantiles(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- predictRaw(FeaturesType) - Method in class org.apache.spark.ml.classification.ClassificationModel
-
Raw prediction for each possible label.
- predictRaw(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Raw prediction of the model.
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.FMClassificationModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.LinearSVCModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- predictRaw(Vector) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- predictSoft(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Given the input vector, return the membership values to all mixture components.
- predictSoft(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Given the input vectors, return the membership value of each vector to all mixture components.
- preferIPv6() - Static method in class org.apache.spark.util.Utils
-
Whether the underlying JVM prefers IPv6 addresses.
- preferredLocation() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Override this to specify a preferred location (hostname).
- preferredLocations() - Method in interface org.apache.spark.sql.connector.read.InputPartition
-
The preferred locations where the input partition reader returned by this partition can run faster, but Spark does not guarantee to run the input partition reader on these locations.
- preferredLocations(Partition) - Method in class org.apache.spark.rdd.RDD
-
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
- Prefix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
- prefixesToRewrite() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- PrefixSpan - Class in org.apache.spark.ml.fpm
-
A parallel PrefixSpan algorithm to mine frequent sequential patterns.
- PrefixSpan - Class in org.apache.spark.mllib.fpm
-
A parallel PrefixSpan algorithm to mine frequent sequential patterns.
- PrefixSpan() - Constructor for class org.apache.spark.ml.fpm.PrefixSpan
- PrefixSpan() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan
-
Constructs a default instance with default parameters {minSupport: 0.1, maxPatternLength: 10, maxLocalProjDBSize: 32000000L}.
- PrefixSpan(String) - Constructor for class org.apache.spark.ml.fpm.PrefixSpan
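A hedged sketch of ml.fpm.PrefixSpan; the input `df` is assumed to have a "sequence" column of Seq[Seq[_]] itemsets:
    import org.apache.spark.ml.fpm.PrefixSpan

    val patterns = new PrefixSpan()
      .setMinSupport(0.5)
      .setMaxPatternLength(5)
      .findFrequentSequentialPatterns(df)  // DataFrame with sequence and freq columns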
- PrefixSpan.FreqSequence<Item> - Class in org.apache.spark.mllib.fpm
-
Represents a frequent sequence.
- PrefixSpan.Postfix$ - Class in org.apache.spark.mllib.fpm
- PrefixSpan.Prefix$ - Class in org.apache.spark.mllib.fpm
- PrefixSpanModel<Item> - Class in org.apache.spark.mllib.fpm
-
Model fitted by PrefixSpan. param: freqSequences - frequent sequences.
- PrefixSpanModel(RDD<PrefixSpan.FreqSequence<Item>>) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpanModel
- PrefixSpanModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm
- PrefixSpanWrapper - Class in org.apache.spark.ml.r
- PrefixSpanWrapper() - Constructor for class org.apache.spark.ml.r.PrefixSpanWrapper
- prefLoc() - Method in class org.apache.spark.rdd.PartitionGroup
- pregel(A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<A>) - Method in class org.apache.spark.graphx.GraphOps
-
Execute a Pregel-like iterative vertex-parallel abstraction.
- Pregel - Class in org.apache.spark.graphx
-
Implements a Pregel-like bulk-synchronous message-passing API.
- Pregel() - Constructor for class org.apache.spark.graphx.Pregel
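A hedged sketch of the Pregel API following the classic single-source shortest-paths pattern; `graph` with Double edge weights is assumed:
    import org.apache.spark.graphx._

    val sourceId: VertexId = 0L
    val init = graph.mapVertices((id, _) =>
      if (id == sourceId) 0.0 else Double.PositiveInfinity)
    val sssp = Pregel(init, Double.PositiveInfinity)(
      (id, dist, newDist) => math.min(dist, newDist),        // vertex program
      triplet =>                                             // send message
        if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
          Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
        else Iterator.empty,
      (a, b) => math.min(a, b))                              // merge messages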
- prepareForTriggerAvailableNow() - Method in interface org.apache.spark.sql.connector.read.streaming.SupportsTriggerAvailableNow
-
This will be called at the beginning of streaming queries with Trigger.AvailableNow, to let the source record the offset for the current latest data at the time (a.k.a. the target offset for the query).
- prepareWrite(SQLConf, Job, Map<String, String>, StructType) - Static method in class org.apache.spark.sql.avro.AvroUtils
- prependBaseUri(HttpServletRequest, String, String) - Static method in class org.apache.spark.ui.UIUtils
- prepOutputField(StructType, int[], String, String, boolean) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
Prepare the output column field, including per-feature metadata.
- PRETTY() - Static method in class org.apache.spark.ErrorMessageFormat
- prettyJson() - Method in interface org.apache.spark.sql.Row
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Method in class org.apache.spark.sql.streaming.SinkProgress
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Static method in class org.apache.spark.sql.types.BinaryType
- prettyJson() - Static method in class org.apache.spark.sql.types.BooleanType
- prettyJson() - Static method in class org.apache.spark.sql.types.ByteType
- prettyJson() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- prettyJson() - Method in class org.apache.spark.sql.types.DataType
-
The pretty (i.e. indented) JSON representation.
- prettyJson() - Static method in class org.apache.spark.sql.types.DateType
- prettyJson() - Static method in class org.apache.spark.sql.types.DoubleType
- prettyJson() - Static method in class org.apache.spark.sql.types.FloatType
- prettyJson() - Static method in class org.apache.spark.sql.types.IntegerType
- prettyJson() - Static method in class org.apache.spark.sql.types.LongType
- prettyJson() - Static method in class org.apache.spark.sql.types.NullType
- prettyJson() - Static method in class org.apache.spark.sql.types.ShortType
- prettyJson() - Static method in class org.apache.spark.sql.types.StringType
- prettyJson() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- prettyJson() - Static method in class org.apache.spark.sql.types.TimestampType
- prettyJson() - Static method in class org.apache.spark.sql.types.VariantType
- prettyPrint() - Method in class org.apache.spark.streaming.Duration
- prev() - Method in class org.apache.spark.rdd.ShuffledRDD
- prev() - Method in class org.apache.spark.status.LiveRDDPartition
- primaryConstructorNotFoundError(Class<?>) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- primaryConstructorNotFoundError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- PRIMITIVE - Static variable in class org.apache.spark.types.variant.VariantUtil
- primitiveHeader(int) - Static method in class org.apache.spark.types.variant.VariantUtil
- primitiveTypesNotSupportedError() - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- primitiveTypesNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- print() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Print the first ten elements of each RDD generated in this DStream.
- print() - Method in class org.apache.spark.streaming.dstream.DStream
-
Print the first ten elements of each RDD generated in this DStream.
- print(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Print the first num elements of each RDD generated in this DStream.
- print(int) - Method in class org.apache.spark.streaming.dstream.DStream
-
Print the first num elements of each RDD generated in this DStream.
- printErrorAndExit(String) - Method in interface org.apache.spark.util.CommandLineLoggingUtils
- printf(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Formats the arguments in printf-style and returns the result as a string column.
- printMessage(String) - Method in interface org.apache.spark.util.CommandLineLoggingUtils
- printSchema() - Method in class org.apache.spark.sql.api.Dataset
-
Prints the schema to the console in a nice tree format.
- printSchema(int) - Method in class org.apache.spark.sql.api.Dataset
-
Prints the schema up to the given level to the console in a nice tree format.
- printStats() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
- printStream() - Method in interface org.apache.spark.util.CommandLineLoggingUtils
- printTreeString() - Method in class org.apache.spark.sql.types.StructType
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.BasicBlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block manager.
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in interface org.apache.spark.storage.BlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block.
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.RandomBlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block.
- priority() - Method in interface org.apache.spark.scheduler.Schedulable
- priority() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- prob() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- prob() - Method in class org.apache.spark.mllib.tree.model.Predict
- ProbabilisticClassificationModel<FeaturesType, M extends ProbabilisticClassificationModel<FeaturesType, M>> - Class in org.apache.spark.ml.classification
-
Model produced by a ProbabilisticClassifier.
- ProbabilisticClassificationModel() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- ProbabilisticClassifier<FeaturesType, E extends ProbabilisticClassifier<FeaturesType, E, M>, M extends ProbabilisticClassificationModel<FeaturesType, M>> - Class in org.apache.spark.ml.classification
-
Single-label binary or multiclass classifier which can output class conditional probabilities.
- ProbabilisticClassifier() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassifier
- ProbabilisticClassifierParams - Interface in org.apache.spark.ml.classification
-
(private[classification]) Params for probabilistic classification.
- probabilities() - Static method in class org.apache.spark.scheduler.StatsReportListener
- probability() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
- probabilityCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Field in "predictions" which gives the probability of each class as a vector.
- probabilityCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- probabilityCol() - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- probabilityCol() - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
- probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
- probabilityCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- probabilityCol() - Method in interface org.apache.spark.ml.param.shared.HasProbabilityCol
-
Param for the column name for predicted class conditional probabilities.
- Probit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
- Procedure - Interface in org.apache.spark.sql.connector.catalog.procedures
-
A base interface for all procedures.
- ProcedureCatalog - Interface in org.apache.spark.sql.connector.catalog
-
A catalog API for working with procedures.
- ProcedureParameter - Interface in org.apache.spark.sql.connector.catalog.procedures
-
A procedure parameter.
- ProcedureParameter.Builder - Class in org.apache.spark.sql.connector.catalog.procedures
- ProcedureParameter.Mode - Enum Class in org.apache.spark.sql.connector.catalog.procedures
-
An enum representing procedure parameter modes.
- process(T) - Method in class org.apache.spark.sql.ForeachWriter
-
Called to process the data on the executor side.
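A minimal sketch of the surrounding ForeachWriter lifecycle (the println sink is illustrative only):

    import org.apache.spark.sql.ForeachWriter

    class ConsoleSinkWriter extends ForeachWriter[String] {
      // Return true to accept this partition/epoch; open connections here.
      def open(partitionId: Long, epochId: Long): Boolean = true
      // Runs on executors, once per record.
      def process(value: String): Unit = println(value)
      // Release resources; errorOrNull is non-null if the epoch failed.
      def close(errorOrNull: Throwable): Unit = ()
    }
    // Usage sketch: ds.writeStream.foreach(new ConsoleSinkWriter).start()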
- PROCESS_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
- PROCESS_LOGS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- processAllAvailable() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Blocks until all available data in the source has been processed and committed to the sink.
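Commonly used in tests to make a streaming query deterministic; a sketch using the rate source and the memory sink (the query name "tbl" is made up):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val query = spark.readStream.format("rate").load()
      .writeStream.format("memory").queryName("tbl").start()
    query.processAllAvailable()           // blocks until current input is committed
    println(spark.table("tbl").count())   // sink is now consistent with the source
    query.stop()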
- PROCESSED_ROWS_PER_SECOND_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.SourceProgress
- processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The aggregate (across all sources) rate at which Spark is processing data.
- processId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded
- processId() - Method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- PROCESSING - Enum constant in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
- processingDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for all the jobs of this batch to finish processing, from the time they started processing.
- processingEndTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- processingStartTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- processingTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- ProcessingTime() - Static method in class org.apache.spark.sql.streaming.TimeMode
-
Time mode in which a stateful processor uses query processing time to register timers and calculate TTL expiration.
- ProcessingTime(long) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
-
(Java-friendly) A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(String) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
-
(Scala-friendly) A trigger policy that runs a query periodically based on an interval in processing time.
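The four overloads above express the same 10-second interval:

    import java.util.concurrent.TimeUnit
    import scala.concurrent.duration._
    import org.apache.spark.sql.streaming.Trigger

    Trigger.ProcessingTime(10000L)               // interval in milliseconds
    Trigger.ProcessingTime(10, TimeUnit.SECONDS) // Java-friendly
    Trigger.ProcessingTime("10 seconds")         // interval string
    Trigger.ProcessingTime(10.seconds)           // Scala Duration
    // Usage sketch: df.writeStream.trigger(Trigger.ProcessingTime("10 seconds"))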
- ProcessingTimeTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
-
Timeout based on processing time.
- processLogs() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- processStreamByLine(String, InputStream, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Return and start a daemon thread that processes the content of the input stream line by line.
- ProcessSummary - Class in org.apache.spark.status.api.v1
- ProcessTreeMetrics - Class in org.apache.spark.metrics
- ProcessTreeMetrics() - Constructor for class org.apache.spark.metrics.ProcessTreeMetrics
- produceResult(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.functions.ScalarFunction
-
Applies the function to an input row to produce a value.
- produceResult(S) - Method in interface org.apache.spark.sql.connector.catalog.functions.AggregateFunction
-
Produce the aggregation result based on intermediate state.
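A hedged sketch of a ScalarFunction implementing produceResult; treat the exact member set as an assumption drawn from the entries above:

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.connector.catalog.functions.ScalarFunction
    import org.apache.spark.sql.types.{DataType, IntegerType}

    // Adds its two int arguments, row by row.
    class IntAdd extends ScalarFunction[Integer] {
      override def inputTypes(): Array[DataType] = Array(IntegerType, IntegerType)
      override def resultType(): DataType = IntegerType
      override def name(): String = "int_add"
      override def produceResult(input: InternalRow): Integer =
        input.getInt(0) + input.getInt(1)
    }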
- product() - Method in class org.apache.spark.mllib.recommendation.Rating
- product(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the product of all numerical elements in a group.
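Illustrative usage, assuming a Spark version that ships functions.product (the column names are made up):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, product}

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", 2.0), ("a", 3.0), ("b", 4.0)).toDF("k", "v")
    df.groupBy("k").agg(product(col("v"))).show()  // a -> 6.0, b -> 4.0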
- product(TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's product type (tuples, case classes, etc.).
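For illustration, deriving an encoder for a case class (a Scala Product):

    import org.apache.spark.sql.{Encoder, Encoders}

    case class Point(x: Double, y: Double)
    // Makes Dataset[Point] operations possible without spark.implicits._
    implicit val pointEnc: Encoder[Point] = Encoders.product[Point]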
- productArity() - Static method in class org.apache.spark.ExpireDeadHosts
- productArity() - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productArity() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productArity() - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productArity() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productArity() - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productArity() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productArity() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productArity() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productArity() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productArity() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productArity() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productArity() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productArity() - Static method in class org.apache.spark.ml.feature.Dot
- productArity() - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productArity() - Static method in class org.apache.spark.Resubmitted
- productArity() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productArity() - Static method in class org.apache.spark.scheduler.JobSucceeded
- productArity() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productArity() - Static method in class org.apache.spark.scheduler.StopCoordinator
- productArity() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productArity() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productArity() - Static method in class org.apache.spark.sql.types.BinaryType
- productArity() - Static method in class org.apache.spark.sql.types.BooleanType
- productArity() - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.ByteType
- productArity() - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productArity() - Static method in class org.apache.spark.sql.types.DateType
- productArity() - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productArity() - Static method in class org.apache.spark.sql.types.DoubleType
- productArity() - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.FloatType
- productArity() - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.IntegerType
- productArity() - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.LongType
- productArity() - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.NullType
- productArity() - Static method in class org.apache.spark.sql.types.ShortType
- productArity() - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.StringType
- productArity() - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productArity() - Static method in class org.apache.spark.sql.types.TimestampType
- productArity() - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productArity() - Static method in class org.apache.spark.sql.types.VariantType
- productArity() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productArity() - Static method in class org.apache.spark.StopMapOutputTracker
- productArity() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productArity() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productArity() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productArity() - Static method in class org.apache.spark.Success
- productArity() - Static method in class org.apache.spark.TaskResultLost
- productArity() - Static method in class org.apache.spark.TaskSchedulerIsSet
- productArity() - Static method in class org.apache.spark.UnknownReason
- productElement(int) - Static method in class org.apache.spark.ExpireDeadHosts
- productElement(int) - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productElement(int) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productElement(int) - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productElement(int) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productElement(int) - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productElement(int) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productElement(int) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productElement(int) - Static method in class org.apache.spark.ml.feature.Dot
- productElement(int) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productElement(int) - Static method in class org.apache.spark.Resubmitted
- productElement(int) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productElement(int) - Static method in class org.apache.spark.scheduler.JobSucceeded
- productElement(int) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productElement(int) - Static method in class org.apache.spark.scheduler.StopCoordinator
- productElement(int) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productElement(int) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productElement(int) - Static method in class org.apache.spark.sql.types.BinaryType
- productElement(int) - Static method in class org.apache.spark.sql.types.BooleanType
- productElement(int) - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.ByteType
- productElement(int) - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productElement(int) - Static method in class org.apache.spark.sql.types.DateType
- productElement(int) - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productElement(int) - Static method in class org.apache.spark.sql.types.DoubleType
- productElement(int) - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.FloatType
- productElement(int) - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.IntegerType
- productElement(int) - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.LongType
- productElement(int) - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.NullType
- productElement(int) - Static method in class org.apache.spark.sql.types.ShortType
- productElement(int) - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.StringType
- productElement(int) - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productElement(int) - Static method in class org.apache.spark.sql.types.TimestampType
- productElement(int) - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productElement(int) - Static method in class org.apache.spark.sql.types.VariantType
- productElement(int) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productElement(int) - Static method in class org.apache.spark.StopMapOutputTracker
- productElement(int) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productElement(int) - Static method in class org.apache.spark.Success
- productElement(int) - Static method in class org.apache.spark.TaskResultLost
- productElement(int) - Static method in class org.apache.spark.TaskSchedulerIsSet
- productElement(int) - Static method in class org.apache.spark.UnknownReason
- productElementName(int) - Static method in class org.apache.spark.ExpireDeadHosts
- productElementName(int) - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productElementName(int) - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productElementName(int) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productElementName(int) - Static method in class org.apache.spark.ml.feature.Dot
- productElementName(int) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productElementName(int) - Static method in class org.apache.spark.Resubmitted
- productElementName(int) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productElementName(int) - Static method in class org.apache.spark.scheduler.JobSucceeded
- productElementName(int) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productElementName(int) - Static method in class org.apache.spark.scheduler.StopCoordinator
- productElementName(int) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productElementName(int) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productElementName(int) - Static method in class org.apache.spark.sql.types.BinaryType
- productElementName(int) - Static method in class org.apache.spark.sql.types.BooleanType
- productElementName(int) - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.ByteType
- productElementName(int) - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productElementName(int) - Static method in class org.apache.spark.sql.types.DateType
- productElementName(int) - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productElementName(int) - Static method in class org.apache.spark.sql.types.DoubleType
- productElementName(int) - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.FloatType
- productElementName(int) - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.IntegerType
- productElementName(int) - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.LongType
- productElementName(int) - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.NullType
- productElementName(int) - Static method in class org.apache.spark.sql.types.ShortType
- productElementName(int) - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.StringType
- productElementName(int) - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productElementName(int) - Static method in class org.apache.spark.sql.types.TimestampType
- productElementName(int) - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productElementName(int) - Static method in class org.apache.spark.sql.types.VariantType
- productElementName(int) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productElementName(int) - Static method in class org.apache.spark.StopMapOutputTracker
- productElementName(int) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productElementName(int) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productElementName(int) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productElementName(int) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productElementName(int) - Static method in class org.apache.spark.Success
- productElementName(int) - Static method in class org.apache.spark.TaskResultLost
- productElementName(int) - Static method in class org.apache.spark.TaskSchedulerIsSet
- productElementName(int) - Static method in class org.apache.spark.UnknownReason
- productElementNames() - Static method in class org.apache.spark.ExpireDeadHosts
- productElementNames() - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productElementNames() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productElementNames() - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productElementNames() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productElementNames() - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productElementNames() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productElementNames() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productElementNames() - Static method in class org.apache.spark.ml.feature.Dot
- productElementNames() - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productElementNames() - Static method in class org.apache.spark.Resubmitted
- productElementNames() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productElementNames() - Static method in class org.apache.spark.scheduler.JobSucceeded
- productElementNames() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productElementNames() - Static method in class org.apache.spark.scheduler.StopCoordinator
- productElementNames() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productElementNames() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productElementNames() - Static method in class org.apache.spark.sql.types.BinaryType
- productElementNames() - Static method in class org.apache.spark.sql.types.BooleanType
- productElementNames() - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.ByteType
- productElementNames() - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productElementNames() - Static method in class org.apache.spark.sql.types.DateType
- productElementNames() - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productElementNames() - Static method in class org.apache.spark.sql.types.DoubleType
- productElementNames() - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.FloatType
- productElementNames() - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.IntegerType
- productElementNames() - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.LongType
- productElementNames() - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.NullType
- productElementNames() - Static method in class org.apache.spark.sql.types.ShortType
- productElementNames() - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.StringType
- productElementNames() - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productElementNames() - Static method in class org.apache.spark.sql.types.TimestampType
- productElementNames() - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productElementNames() - Static method in class org.apache.spark.sql.types.VariantType
- productElementNames() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productElementNames() - Static method in class org.apache.spark.StopMapOutputTracker
- productElementNames() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productElementNames() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productElementNames() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productElementNames() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productElementNames() - Static method in class org.apache.spark.Success
- productElementNames() - Static method in class org.apache.spark.TaskResultLost
- productElementNames() - Static method in class org.apache.spark.TaskSchedulerIsSet
- productElementNames() - Static method in class org.apache.spark.UnknownReason
- productFeatures() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
- productIterator() - Static method in class org.apache.spark.ExpireDeadHosts
- productIterator() - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productIterator() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productIterator() - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productIterator() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productIterator() - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productIterator() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productIterator() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productIterator() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productIterator() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productIterator() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productIterator() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productIterator() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productIterator() - Static method in class org.apache.spark.ml.feature.Dot
- productIterator() - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productIterator() - Static method in class org.apache.spark.Resubmitted
- productIterator() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productIterator() - Static method in class org.apache.spark.scheduler.JobSucceeded
- productIterator() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productIterator() - Static method in class org.apache.spark.scheduler.StopCoordinator
- productIterator() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productIterator() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productIterator() - Static method in class org.apache.spark.sql.types.BinaryType
- productIterator() - Static method in class org.apache.spark.sql.types.BooleanType
- productIterator() - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.ByteType
- productIterator() - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productIterator() - Static method in class org.apache.spark.sql.types.DateType
- productIterator() - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productIterator() - Static method in class org.apache.spark.sql.types.DoubleType
- productIterator() - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.FloatType
- productIterator() - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.IntegerType
- productIterator() - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.LongType
- productIterator() - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.NullType
- productIterator() - Static method in class org.apache.spark.sql.types.ShortType
- productIterator() - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.StringType
- productIterator() - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productIterator() - Static method in class org.apache.spark.sql.types.TimestampType
- productIterator() - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productIterator() - Static method in class org.apache.spark.sql.types.VariantType
- productIterator() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productIterator() - Static method in class org.apache.spark.StopMapOutputTracker
- productIterator() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productIterator() - Static method in class org.apache.spark.Success
- productIterator() - Static method in class org.apache.spark.TaskResultLost
- productIterator() - Static method in class org.apache.spark.TaskSchedulerIsSet
- productIterator() - Static method in class org.apache.spark.UnknownReason
- productPrefix() - Static method in class org.apache.spark.ExpireDeadHosts
- productPrefix() - Static method in class org.apache.spark.metrics.DirectPoolMemory
- productPrefix() - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
- productPrefix() - Static method in class org.apache.spark.metrics.JVMHeapMemory
- productPrefix() - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
- productPrefix() - Static method in class org.apache.spark.metrics.MappedPoolMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
- productPrefix() - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
- productPrefix() - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
- productPrefix() - Static method in class org.apache.spark.ml.feature.Dot
- productPrefix() - Static method in class org.apache.spark.ml.feature.EmptyTerm
- productPrefix() - Static method in class org.apache.spark.Resubmitted
- productPrefix() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
- productPrefix() - Static method in class org.apache.spark.scheduler.JobSucceeded
- productPrefix() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
- productPrefix() - Static method in class org.apache.spark.scheduler.StopCoordinator
- productPrefix() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- productPrefix() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- productPrefix() - Static method in class org.apache.spark.sql.types.BinaryType
- productPrefix() - Static method in class org.apache.spark.sql.types.BooleanType
- productPrefix() - Static method in class org.apache.spark.sql.types.BooleanTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.ByteType
- productPrefix() - Static method in class org.apache.spark.sql.types.ByteTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- productPrefix() - Static method in class org.apache.spark.sql.types.DateType
- productPrefix() - Static method in class org.apache.spark.sql.types.DateTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- productPrefix() - Static method in class org.apache.spark.sql.types.DoubleType
- productPrefix() - Static method in class org.apache.spark.sql.types.DoubleTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.FloatType
- productPrefix() - Static method in class org.apache.spark.sql.types.FloatTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.IntegerType
- productPrefix() - Static method in class org.apache.spark.sql.types.IntegerTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.LongType
- productPrefix() - Static method in class org.apache.spark.sql.types.LongTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.NullType
- productPrefix() - Static method in class org.apache.spark.sql.types.ShortType
- productPrefix() - Static method in class org.apache.spark.sql.types.ShortTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.StringType
- productPrefix() - Static method in class org.apache.spark.sql.types.StringTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- productPrefix() - Static method in class org.apache.spark.sql.types.TimestampType
- productPrefix() - Static method in class org.apache.spark.sql.types.TimestampTypeExpression
- productPrefix() - Static method in class org.apache.spark.sql.types.VariantType
- productPrefix() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- productPrefix() - Static method in class org.apache.spark.StopMapOutputTracker
- productPrefix() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
- productPrefix() - Static method in class org.apache.spark.Success
- productPrefix() - Static method in class org.apache.spark.TaskResultLost
- productPrefix() - Static method in class org.apache.spark.TaskSchedulerIsSet
- productPrefix() - Static method in class org.apache.spark.UnknownReason
- progress() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
- PROGRESS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- PROP_COMMENT - Static variable in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
A reserved property to specify the description of the namespace.
- PROP_COMMENT - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to specify the description of the table.
- PROP_COMMENT - Static variable in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
A reserved property to specify the description of the view.
- PROP_CREATE_ENGINE_VERSION - Static variable in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
A reserved property to specify the software version used to create the view.
- PROP_ENGINE_VERSION - Static variable in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
A reserved property to specify the software version used to change the view.
- PROP_EXTERNAL - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to specify a table was created with EXTERNAL.
- PROP_IS_MANAGED_LOCATION - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to indicate that the table location is managed, not user-specified.
- PROP_LOCATION - Static variable in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
A reserved property to specify the location of the namespace.
- PROP_LOCATION - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to specify the location of the table.
- PROP_OWNER - Static variable in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
A reserved property to specify the owner of the namespace.
- PROP_OWNER - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to specify the owner of the table.
- PROP_OWNER - Static variable in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
A reserved property to specify the owner of the view.
- PROP_PROVIDER - Static variable in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
A reserved property to specify the provider of the table.
- PROP_TYPE - Static variable in interface org.apache.spark.sql.connector.catalog.index.SupportsIndex
-
A reserved property to specify the index type.
- properties() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
- properties() - Method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
- properties() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
-
Returns the index properties.
- properties() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
Returns the string map of table properties.
- properties() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The view properties.
- properties() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- propertiesAndDbPropertiesBothSpecifiedError(SqlBaseParser.CreateNamespaceContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- propertiesFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- propertiesToJson(Properties, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- property() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.RemoveProperty
- property() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.SetProperty
- property() - Method in class org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty
- property() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty
- property() - Method in class org.apache.spark.sql.connector.catalog.ViewChange.RemoveProperty
- property() - Method in class org.apache.spark.sql.connector.catalog.ViewChange.SetProperty
- protobufClassLoadError(String, String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- protobufDescriptorDependencyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- protobufFieldMatchError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- protobufFieldTypeMismatchError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- protobufNotLoadedSqlFunctionsUnusable(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ProtobufSerDe<T> - Interface in org.apache.spark.status.protobuf
-
:: DeveloperApi :: ProtobufSerDe represents the API for serializing and deserializing the Protobuf data related to the UI.
- protobufTypeUnsupportedYetError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- provider() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
- provider() - Method in interface org.apache.spark.streaming.kinesis.SparkAWSCredentials
-
Return an AWSCredentialProvider instance that can be used by the Kinesis Client Library to authenticate to AWS services (Kinesis, CloudWatch and DynamoDB).
- proxyBase() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
- ProxyRedirectHandler - Class in org.apache.spark.ui
-
A Jetty handler to handle redirects to a proxy server.
- ProxyRedirectHandler(String) - Constructor for class org.apache.spark.ui.ProxyRedirectHandler
- pruneColumns(StructType) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownRequiredColumns
-
Applies column pruning w.r.t. the given required schema.
- PrunedFilteredScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as Row objects.
- PrunedScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can eliminate unneeded columns before producing an RDD containing all of its tuples as Row objects.
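A hedged sketch of a relation implementing PrunedFilteredScan; the one-column schema and the empty scan are placeholders:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    class TinyRelation(override val sqlContext: SQLContext)
        extends BaseRelation with PrunedFilteredScan {
      override def schema: StructType = StructType(Seq(StructField("name", StringType)))
      override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
        // A real source would read only requiredColumns and apply filters early;
        // here we simply return an empty scan.
        sqlContext.sparkContext.emptyRDD[Row]
      }
    }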
- Pseudorandom - Interface in org.apache.spark.util.random
-
:: DeveloperApi :: A class with pseudorandom behavior.
- purgePartition(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Drop a partition from the table and completely remove the partition data by skipping the trash, even if it is supported.
- purgePartitions(InternalRow[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
-
Drop an array of partitions atomically from the table, and completely remove the partitions' data by skipping the trash, even if it is supported.
- purgeTable(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- purgeTable(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Drop a table in the catalog and completely remove its data by skipping the trash, even if it is supported.
- pushAggregation(Aggregation) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownAggregates
-
Pushes down Aggregation to the data source.
- PushBasedFetchHelper - Class in org.apache.spark.storage
-
Helper class for ShuffleBlockFetcherIterator that encapsulates all the push-based functionality to fetch push-merged block meta and shuffle chunks.
- PushBasedFetchHelper(ShuffleBlockFetcherIterator, BlockStoreClient, BlockManager, org.apache.spark.MapOutputTracker, ShuffleReadMetricsReporter) - Constructor for class org.apache.spark.storage.PushBasedFetchHelper
- pushedFilters() - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownFilters
-
Returns the filters that are pushed to the data source via SupportsPushDownFilters.pushFilters(Filter[]).
- pushedPredicates() - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownV2Filters
-
Returns the predicates that are pushed to the data source via SupportsPushDownV2Filters.pushPredicates(Predicate[]).
- pushFilters(Filter[]) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownFilters
-
Pushes down filters, and returns filters that need to be evaluated after scanning.
- pushLimit(int) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownLimit
-
Pushes down LIMIT to the data source.
- pushOffset(int) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownOffset
-
Pushes down OFFSET to the data source.
- pushPredicates(Predicate[]) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownV2Filters
-
Pushes down predicates, and returns predicates that need to be evaluated after scanning.
- pushTableSample(double, double, boolean, long) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownTableSample
-
Pushes down SAMPLE to the data source.
- pushTopN(SortOrder[], int) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownTopN
-
Pushes down top N to the data source.
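A hedged sketch tying several of these push-down mix-ins together in one ScanBuilder; which filters count as "supported" here (only IsNotNull) is an arbitrary assumption:

    import org.apache.spark.sql.connector.read.{Scan, ScanBuilder, SupportsPushDownFilters, SupportsPushDownLimit}
    import org.apache.spark.sql.sources.{Filter, IsNotNull}

    class MyScanBuilder extends ScanBuilder
        with SupportsPushDownFilters with SupportsPushDownLimit {
      private var pushed: Array[Filter] = Array.empty
      private var limit: Option[Int] = None

      // Keep what the source can evaluate; hand the rest back to Spark.
      override def pushFilters(filters: Array[Filter]): Array[Filter] = {
        val (supported, unsupported) = filters.partition(_.isInstanceOf[IsNotNull])
        pushed = supported
        unsupported
      }
      override def pushedFilters(): Array[Filter] = pushed

      override def pushLimit(n: Int): Boolean = { limit = Some(n); true }

      override def build(): Scan = ??? // build the actual Scan from pushed + limit
    }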
- put(Object) - Method in interface org.apache.spark.sql.streaming.ListState
-
Update the value of the list.
- put(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Puts an item into this BloomFilter.
- put(String, String) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
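Illustrative usage of BloomFilter.put and its specialized variants (the sizing numbers are made up):

    import org.apache.spark.util.sketch.BloomFilter

    val bf = BloomFilter.create(1000000L, 0.03)  // ~1M items, 3% false-positive rate
    bf.put("spark")
    bf.putLong(42L)
    println(bf.mightContain("spark"))  // true
    println(bf.mightContain("flink"))  // usually false; false positives are possible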
- put(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a (param, value) pair (overwrites if the input param exists).
- put(ParamPair<?>...) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a list of param pairs (overwrites if the input params exist).
- put(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a list of param pairs (overwrites if the input params exist).
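For illustration, collecting overrides to pass to fit() (the estimator and values are made up):

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap

    val lr = new LogisticRegression()
    val overrides = new ParamMap()
      .put(lr.maxIter, 25)                                  // single (param, value) pair
      .put(lr.regParam -> 0.01, lr.elasticNetParam -> 0.5)  // a list of param pairs
    // Usage sketch: lr.fit(training, overrides)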
- putAll(Map<? extends String, ? extends String>) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- putAllAttributes(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- putAllCustomMetrics(Map<String, Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- putAllDurationMs(Map<String, Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- putAllEventTime(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- putAllExecutorLogs(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- putAllExecutorLogs(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- putAllExecutorResources(Map<String, StoreTypes.ExecutorResourceRequest>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- putAllExecutorSummary(Map<String, StoreTypes.ExecutorStageSummary>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- putAllJobs(Map<Long, StoreTypes.JobExecutionStatus>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- putAllJobsValue(Map<Long, Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- putAllKilledTasksSummary(Map<String, Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- putAllKillTasksSummary(Map<String, Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- putAllLocality(Map<String, Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- putAllMetrics(Map<String, Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- putAllMetrics(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- putAllMetrics(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- putAllMetricValues(Map<Long, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- putAllModifiedConfigs(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- putAllObservedMetrics(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- putAllProcessLogs(Map<String, String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- putAllResources(Map<String, StoreTypes.ResourceInformation>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- putAllTaskResources(Map<String, StoreTypes.TaskResourceRequest>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- putAllTasks(Map<Long, StoreTypes.TaskData>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- putAttributes(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- putBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of BloomFilter.put(Object) that only supports byte array items.
- putBoolean(String, boolean) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Boolean.
- putBooleanArray(String, boolean[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Boolean array.
- putCustomMetrics(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- putDouble(String, double) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Double.
- putDoubleArray(String, double[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Double array.
- putDurationMs(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- putEventTime(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- putExecutorLogs(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- putExecutorLogs(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- putExecutorResources(String, StoreTypes.ExecutorResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- putExecutorResourcesBuilderIfAbsent(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- putExecutorSummary(String, StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- putExecutorSummaryBuilderIfAbsent(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- putJobs(long, StoreTypes.JobExecutionStatus) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- putJobsValue(long, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- putKilledTasksSummary(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- putKillTasksSummary(String, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- putLocality(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- putLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of BloomFilter.put(Object) that only supports long items.
- putLong(String, long) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Long.
- putLongArray(String, long[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Long array.
- putMetadata(String, Metadata) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Metadata.
- putMetadataArray(String, Metadata[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Metadata array.
- putMetrics(String, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- putMetrics(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- putMetrics(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- putMetricValues(long, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- putModifiedConfigs(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- putNull(String) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a null.
- putObservedMetrics(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- putProcessLogs(String, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- putResources(String, StoreTypes.ResourceInformation) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- putResourcesBuilderIfAbsent(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- putString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
-
A specialized variant of
BloomFilter.put(Object)
that only supports String
items. - putString(String, String) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a String.
- putStringArray(String, String[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a String array.
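Taken together, the MetadataBuilder put* methods above compose into column metadata; a minimal Scala sketch (key names and the field are illustrative, not from this index):

    import org.apache.spark.sql.types.{Metadata, MetadataBuilder, StringType, StructField}

    // Compose key/value metadata and attach it to a column's StructField.
    val meta: Metadata = new MetadataBuilder()
      .putString("source", "sensor-feed") // hypothetical key/value
      .putLong("version", 3L)             // hypothetical key/value
      .putNull("deprecatedKey")           // hypothetical key
      .build()
    val field = StructField("reading", StringType, nullable = true, metadata = meta)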
- putTaskResources(String, StoreTypes.TaskResourceRequest) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- putTaskResourcesBuilderIfAbsent(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- putTasks(long, StoreTypes.TaskData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- putTasksBuilderIfAbsent(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- pValue() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- pValue() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
- pValue() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
The probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
- pValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- pValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- PYSPARK_MEM() - Static method in class org.apache.spark.resource.ResourceProfile
-
built-in executor resource: pyspark.memory
- pysparkMemory(String) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Specify pyspark memory.
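A sketch of how pysparkMemory and the PYSPARK_MEM built-in resource fit into a resource profile (the sizes are illustrative):

    import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder}

    // Request executor memory plus the built-in pyspark.memory resource.
    val execReqs = new ExecutorResourceRequests()
      .memory("6g")
      .pysparkMemory("2g") // corresponds to the PYSPARK_MEM built-in resource
    val profile = new ResourceProfileBuilder().require(execReqs).build()
    // The profile can then be attached to an RDD with rdd.withResources(profile).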
- PYTHON_STREAM() - Static method in class org.apache.spark.storage.BlockId
- pythonDataSourceError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- PythonStreamBlockId - Class in org.apache.spark.storage
- PythonStreamBlockId(int, long) - Constructor for class org.apache.spark.storage.PythonStreamBlockId
- pythonStreamingDataSourceRuntimeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- PythonStreamingListener - Interface in org.apache.spark.streaming.api.java
- PythonStreamingQueryListener - Interface in org.apache.spark.sql.streaming
-
Py4J allows a pure interface so this proxy is required.
- pyUDT() - Method in class org.apache.spark.mllib.linalg.VectorUDT
- pyUDT() - Method in class org.apache.spark.sql.types.UserDefinedType
-
Paired Python UDT class, if exists.
Q
- Q() - Method in class org.apache.spark.mllib.linalg.QRDecomposition
- QRDecomposition<QType, RType> - Class in org.apache.spark.mllib.linalg
-
Represents QR factors.
- QRDecomposition(QType, RType) - Constructor for class org.apache.spark.mllib.linalg.QRDecomposition
- QUANTILE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- quantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- QuantileDiscretizer - Class in org.apache.spark.ml.feature
-
QuantileDiscretizer
takes a column with continuous features and outputs a column with binned categorical features. - QuantileDiscretizer() - Constructor for class org.apache.spark.ml.feature.QuantileDiscretizer
- QuantileDiscretizer(String) - Constructor for class org.apache.spark.ml.feature.QuantileDiscretizer
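A minimal QuantileDiscretizer sketch, assuming a DataFrame df with a numeric hour column (the column names and bucket count are illustrative):

    import org.apache.spark.ml.feature.QuantileDiscretizer

    // Bin the continuous "hour" column into 3 categorical buckets.
    val discretizer = new QuantileDiscretizer()
      .setInputCol("hour")        // assumed input column
      .setOutputCol("hourBucket") // illustrative output column
      .setNumBuckets(3)
    val binned = discretizer.fit(df).transform(df)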
- QuantileDiscretizerBase - Interface in org.apache.spark.ml.feature
-
Params for
QuantileDiscretizer
. - quantileProbabilities() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- quantileProbabilities() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- quantileProbabilities() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Param for quantile probabilities array.
- quantiles() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- quantiles() - Method in class org.apache.spark.status.api.v1.ExecutorPeakMetricsDistributions
- quantiles() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- QUANTILES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- QUANTILES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- QUANTILES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- quantilesCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- quantilesCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- quantilesCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Param for quantiles column name.
- QuantileStrategy - Class in org.apache.spark.mllib.tree.configuration
-
Enum for selecting the quantile calculation strategy
- QuantileStrategy() - Constructor for class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- quarter(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the quarter as an integer from a given date/timestamp/string.
- query() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The view query SQL text.
- queryColumnNames() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The output column names of the query that creates this view.
- queryColumnNames() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- QueryCompilationErrors - Class in org.apache.spark.sql.errors
-
Object for grouping error messages from exceptions thrown during query compilation.
- QueryCompilationErrors() - Constructor for class org.apache.spark.sql.errors.QueryCompilationErrors
- QueryContext - Interface in org.apache.spark
-
Query context of a
SparkThrowable
. - QueryContextType - Enum Class in org.apache.spark
-
The type of
QueryContext
. - QueryErrorsBase - Interface in org.apache.spark.sql.errors
-
The trait exposes util methods for preparing error messages such as quoting of error elements.
- queryExecution() - Method in class org.apache.spark.sql.Dataset
- queryExecution() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- QueryExecutionErrors - Class in org.apache.spark.sql.errors
-
Object for grouping error messages from (most) exceptions thrown during query execution.
- QueryExecutionErrors() - Constructor for class org.apache.spark.sql.errors.QueryExecutionErrors
- QueryExecutionListener - Interface in org.apache.spark.sql.util
-
The interface of query execution listener that can be used to analyze execution metrics.
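A sketch of a QueryExecutionListener, assuming an active SparkSession named spark (the println bodies are illustrative):

    import org.apache.spark.sql.execution.QueryExecution
    import org.apache.spark.sql.util.QueryExecutionListener

    // Print each action's name and duration; swap println for real logging as needed.
    val listener = new QueryExecutionListener {
      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
        println(s"$funcName finished in ${durationNs / 1e6} ms")
      override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
        println(s"$funcName failed: ${exception.getMessage}")
    }
    spark.listenerManager.register(listener)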
- queryFromRawFilesIncludeCorruptRecordColumnError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- queryId() - Method in interface org.apache.spark.sql.connector.write.LogicalWriteInfo
-
queryId
is a unique string of the query. - QueryIdleEvent$() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent$
- QueryInfo - Interface in org.apache.spark.sql.streaming
-
Represents the query info provided to the stateful processor used in the arbitrary state API v2 to easily identify task retries on the same partition.
- queryName(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Specifies the name of the
StreamingQuery
that can be started with start()
. - queryNameNotSpecifiedForMemorySinkError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- QueryParsingErrors - Class in org.apache.spark.sql.errors
-
Object for grouping all error messages of the query parsing.
- QueryParsingErrors() - Constructor for class org.apache.spark.sql.errors.QueryParsingErrors
- QueryProgressEvent$() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent$
- QueryStartedEvent$() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent$
- QueryTerminatedEvent(UUID, UUID, Option<String>) - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- QueryTerminatedEvent$() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent$
- QUEUED - Enum constant in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
- queueStream(Queue<JavaRDD<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from a queue of RDDs.
- queueStream(Queue<JavaRDD<T>>, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from a queue of RDDs.
- queueStream(Queue<JavaRDD<T>>, boolean, JavaRDD<T>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from a queue of RDDs.
- queueStream(Queue<RDD<T>>, boolean, RDD<T>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream from a queue of RDDs.
- queueStream(Queue<RDD<T>>, boolean, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream from a queue of RDDs.
- quot(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- quot(double, double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral
- quot(float, float) - Method in interface org.apache.spark.sql.types.FloatType.FloatAsIfIntegral
- quot(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- quot(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- quot(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- quot(Decimal) - Method in class org.apache.spark.sql.types.Decimal
- quot(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
- quoteByDefault(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
- quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.NamespaceHelper
- quoted() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TableIdentifierHelper
- quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
- quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Quotes the identifier.
- quoteIdentifier(String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- quoteIdentifier(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
R
- R() - Method in class org.apache.spark.mllib.linalg.QRDecomposition
- r2() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns R^2, the coefficient of determination.
- r2() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns R^2, the unadjusted coefficient of determination.
- r2adj() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns Adjusted R^2, the adjusted coefficient of determination.
- RACK_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
- radians(String) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
- radians(Column) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
- raise_error(Column) - Static method in class org.apache.spark.sql.functions
-
Throws an exception with the provided error message.
- raiseError(UTF8String, MapData) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- rand() - Static method in class org.apache.spark.sql.functions
-
Generate a random column with independent and identically distributed (i.i.d.) samples uniformly distributed in [0.0, 1.0).
- rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a
DenseMatrix
consisting of i.i.d.
uniform random numbers. - rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a
DenseMatrix
consisting of i.i.d.
uniform random numbers. - rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a
DenseMatrix
consisting of i.i.d.
uniform random numbers. - rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a
DenseMatrix
consisting of i.i.d.
uniform random numbers. - rand(long) - Static method in class org.apache.spark.sql.functions
-
Generate a random column with independent and identically distributed (i.i.d.) samples uniformly distributed in [0.0, 1.0).
- randn() - Static method in class org.apache.spark.sql.functions
-
Generate a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
- randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a
DenseMatrix
consisting of i.i.d.
gaussian random numbers. - randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a
DenseMatrix
consisting of i.i.d.
gaussian random numbers. - randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a
DenseMatrix
consisting of i.i.d.
gaussian random numbers. - randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a
DenseMatrix
consisting of i.i.d.
gaussian random numbers. - randn(long) - Static method in class org.apache.spark.sql.functions
-
Generate a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
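A sketch of the seeded rand/randn column functions, assuming an existing DataFrame df (the column names are illustrative):

    import org.apache.spark.sql.functions.{rand, randn}

    // Fixed seeds make the generated columns reproducible across runs.
    val withNoise = df
      .withColumn("u", rand(42L))  // uniform in [0.0, 1.0)
      .withColumn("n", randn(42L)) // standard normal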
- random() - Method in class org.apache.spark.ml.image.SamplePathFilter
- random() - Static method in class org.apache.spark.sql.functions
-
Returns a random value with independent and identically distributed (i.i.d.) uniformly distributed values in [0, 1).
- random() - Method in interface org.apache.spark.util.SparkClassUtils
- random() - Static method in class org.apache.spark.util.Utils
- random(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a random value with independent and identically distributed (i.i.d.) uniformly distributed values in [0, 1).
- RANDOM() - Static method in class org.apache.spark.mllib.clustering.KMeans
- RandomBlockReplicationPolicy - Class in org.apache.spark.storage
- RandomBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.RandomBlockReplicationPolicy
- RandomDataGenerator<T> - Interface in org.apache.spark.mllib.random
-
Trait for random data generators that generate i.i.d.
- RandomForest - Class in org.apache.spark.ml.tree.impl
-
ALGORITHM
- RandomForest - Class in org.apache.spark.mllib.tree
-
A class that implements a Random Forest learning algorithm for classification and regression.
- RandomForest() - Constructor for class org.apache.spark.ml.tree.impl.RandomForest
- RandomForest(Strategy, int, String, int) - Constructor for class org.apache.spark.mllib.tree.RandomForest
- RandomForestClassificationModel - Class in org.apache.spark.ml.classification
-
Random Forest model for classification.
- RandomForestClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multiclass RandomForestClassification results for a given model.
- RandomForestClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
Multiclass RandomForestClassification results for a given model.
- RandomForestClassificationSummaryImpl(Dataset<Row>, String, String, String) - Constructor for class org.apache.spark.ml.classification.RandomForestClassificationSummaryImpl
- RandomForestClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multiclass RandomForestClassification training results.
- RandomForestClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Multiclass RandomForestClassification training results.
- RandomForestClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.RandomForestClassificationTrainingSummaryImpl
- RandomForestClassifier - Class in org.apache.spark.ml.classification
-
Random Forest learning algorithm for classification.
- RandomForestClassifier() - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier
- RandomForestClassifier(String) - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier
- RandomForestClassifierParams - Interface in org.apache.spark.ml.tree
- RandomForestModel - Class in org.apache.spark.mllib.tree.model
-
Represents a random forest model.
- RandomForestModel(Enumeration.Value, DecisionTreeModel[]) - Constructor for class org.apache.spark.mllib.tree.model.RandomForestModel
- RandomForestParams - Interface in org.apache.spark.ml.tree
-
Parameters for Random Forest algorithms.
- RandomForestRegressionModel - Class in org.apache.spark.ml.regression
-
Random Forest model for regression.
- RandomForestRegressor - Class in org.apache.spark.ml.regression
-
Random Forest learning algorithm for regression.
- RandomForestRegressor() - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor
- RandomForestRegressor(String) - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor
- RandomForestRegressorParams - Interface in org.apache.spark.ml.tree
- randomize(IterableOnce<T>, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
-
Shuffle the elements of a collection into a random order, returning the result in a new collection.
- randomizeInPlace(Object, Random) - Static method in class org.apache.spark.util.Utils
-
Shuffle the elements of an array into a random order, modifying the original array.
- randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.randomJavaRDD
with the default seed & numPartitions - randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.randomJavaRDD
with the default seed. - randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of
i.i.d.
samples produced by the input RandomDataGenerator. - randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.randomJavaVectorRDD
with the default number of partitions and the default seed. - randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.randomJavaVectorRDD
with the default seed. - randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of
RandomRDDs.randomVectorRDD
. - randomRDD(SparkContext, RandomDataGenerator<T>, long, int, long, ClassTag<T>) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of
i.i.d.
samples produced by the input RandomDataGenerator. - RandomRDDs - Class in org.apache.spark.mllib.random
-
Generator methods for creating RDDs comprised of
i.i.d.
samples from some distribution. - RandomRDDs() - Constructor for class org.apache.spark.mllib.random.RandomRDDs
- RandomSampler<T, U> - Interface in org.apache.spark.util.random
-
:: DeveloperApi :: A pseudorandom sampler.
- randomSplit(double[]) - Method in class org.apache.spark.api.java.JavaRDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[]) - Method in class org.apache.spark.sql.api.Dataset
-
Randomly splits this Dataset with the provided weights.
- randomSplit(double[]) - Method in class org.apache.spark.sql.Dataset
- randomSplit(double[], long) - Method in class org.apache.spark.api.java.JavaRDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[], long) - Method in class org.apache.spark.rdd.RDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[], long) - Method in class org.apache.spark.sql.api.Dataset
-
Randomly splits this Dataset with the provided weights.
- randomSplit(double[], long) - Method in class org.apache.spark.sql.Dataset
- randomSplitAsList(double[], long) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a Java list that contains randomly split Dataset with the provided weights.
- randomSplitAsList(double[], long) - Method in class org.apache.spark.sql.Dataset
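A sketch of randomSplit on a Dataset, assuming an existing DataFrame df (the weights and seed are illustrative):

    // Weights are normalized if they do not sum to 1; the seed fixes the split.
    val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42L)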
- randomVectorRDD(SparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing
i.i.d.
samples produced by the input RandomDataGenerator. - RandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
- range() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- range(long) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end
(exclusive) with step value 1. - range(long) - Method in class org.apache.spark.sql.SparkSession
- range(long) - Method in class org.apache.spark.sql.SQLContext
- range(long, long) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end
(exclusive) with step value 1. - range(long, long) - Method in class org.apache.spark.sql.SparkSession
- range(long, long) - Method in class org.apache.spark.sql.SQLContext
- range(long, long, long) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end
(exclusive) with a step value. - range(long, long, long) - Method in class org.apache.spark.sql.SparkSession
- range(long, long, long) - Method in class org.apache.spark.sql.SQLContext
- range(long, long, long, int) - Method in class org.apache.spark.SparkContext
-
Creates a new RDD[Long] containing elements from start to end (exclusive), increased by step
every element. - range(long, long, long, int) - Method in class org.apache.spark.sql.api.SparkSession
-
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end
(exclusive) with a step value, with partition number specified. - range(long, long, long, int) - Method in class org.apache.spark.sql.SparkSession
- range(long, long, long, int) - Method in class org.apache.spark.sql.SQLContext
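A sketch of the four-argument range variant, assuming an active SparkSession named spark:

    // A single LongType column named id: 0, 2, 4, 6, 8, spread over 2 partitions.
    val ids = spark.range(0L, 10L, 2L, numPartitions = 2)
    ids.show()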
- rangeBetween(long, long) - Static method in class org.apache.spark.sql.expressions.Window
- rangeBetween(long, long) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the frame boundaries, from start (inclusive) to end
(inclusive). - RangeDependency<T> - Class in org.apache.spark
-
:: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.
- RangeDependency(RDD<T>, int, int, int) - Constructor for class org.apache.spark.RangeDependency
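Returning to rangeBetween above, a sketch of a value-based window frame, assuming a DataFrame df with a numeric epoch-seconds ts column and an amount column (all names illustrative):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, sum}

    // Sum "amount" over rows whose ts value falls within the last hour of the current row.
    val w = Window.orderBy(col("ts")).rangeBetween(-3600L, 0L)
    val rolling = df.withColumn("rollingSum", sum(col("amount")).over(w))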
- RangePartitioner<K, V> - Class in org.apache.spark
-
A
Partitioner
that partitions sortable records by range into roughly equal ranges. - RangePartitioner(int, RDD<? extends Product2<K, V>>, boolean, int, Ordering<K>, ClassTag<K>) - Constructor for class org.apache.spark.RangePartitioner
- RangePartitioner(int, RDD<? extends Product2<K, V>>, boolean, Ordering<K>, ClassTag<K>) - Constructor for class org.apache.spark.RangePartitioner
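A RangePartitioner sketch, assuming an existing RDD[(String, Int)] named pairs (the partition count is illustrative):

    import org.apache.spark.RangePartitioner

    // Partition sortable keys into 4 roughly equal, contiguous ranges.
    val partitioner = new RangePartitioner(4, pairs)
    val byRange = pairs.partitionBy(partitioner)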
- rank() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
- rank() - Method in class org.apache.spark.ml.recommendation.ALS
- rank() - Method in class org.apache.spark.ml.recommendation.ALSModel
- rank() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for rank of the matrix factorization (positive).
- rank() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- rank() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
- rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the rank of rows within a window partition.
- RankingEvaluator - Class in org.apache.spark.ml.evaluation
-
:: Experimental :: Evaluator for ranking, which expects two input columns: prediction and label.
- RankingEvaluator() - Constructor for class org.apache.spark.ml.evaluation.RankingEvaluator
- RankingEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.RankingEvaluator
- RankingMetrics<T> - Class in org.apache.spark.mllib.evaluation
-
Evaluator for ranking algorithms.
- RankingMetrics(RDD<? extends Product>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.evaluation.RankingMetrics
- RateEstimator - Interface in org.apache.spark.streaming.scheduler.rate
-
A component that estimates the rate at which an
InputDStream
should ingest records, based on updates at every batch completion. - rating() - Method in class org.apache.spark.ml.recommendation.ALS.Rating
- rating() - Method in class org.apache.spark.mllib.recommendation.Rating
- Rating - Class in org.apache.spark.mllib.recommendation
-
A more compact class to represent a rating than Tuple3[Int, Int, Double].
- Rating(int, int, double) - Constructor for class org.apache.spark.mllib.recommendation.Rating
- Rating(ID, ID, float) - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating
- Rating$() - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating$
- RatingBlock$() - Constructor for class org.apache.spark.ml.recommendation.ALS.RatingBlock$
- ratingCol() - Method in class org.apache.spark.ml.recommendation.ALS
- ratingCol() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for the column name for ratings.
- ratioParam() - Static method in class org.apache.spark.ml.image.SamplePathFilter
- raw2ProbabilityInPlace(Vector) - Method in interface org.apache.spark.ml.ann.TopologyModel
-
Probability of the model.
- rawCount() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- rawPredictionCol() - Method in class org.apache.spark.ml.classification.ClassificationModel
- rawPredictionCol() - Method in class org.apache.spark.ml.classification.Classifier
- rawPredictionCol() - Method in class org.apache.spark.ml.classification.OneVsRest
- rawPredictionCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- rawPredictionCol() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- rawPredictionCol() - Method in interface org.apache.spark.ml.param.shared.HasRawPredictionCol
-
Param for raw prediction (a.k.a.
- rawSocketStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
- rawSocketStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
- rawSocketStream(String, int, StorageLevel, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them.
- RawTextHelper - Class in org.apache.spark.streaming.util
- RawTextHelper() - Constructor for class org.apache.spark.streaming.util.RawTextHelper
- RawTextSender - Class in org.apache.spark.streaming.util
-
A helper program that sends blocks of Kryo-serialized text strings out on a socket at a specified rate.
- RawTextSender() - Constructor for class org.apache.spark.streaming.util.RawTextSender
- RBackendAuthHandler - Class in org.apache.spark.api.r
-
Authentication handler for connections from the R process.
- RBackendAuthHandler(String) - Constructor for class org.apache.spark.api.r.RBackendAuthHandler
- rdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD
- rdd() - Method in class org.apache.spark.api.java.JavaPairRDD
- rdd() - Method in class org.apache.spark.api.java.JavaRDD
- rdd() - Method in interface org.apache.spark.api.java.JavaRDDLike
- rdd() - Method in class org.apache.spark.Dependency
- rdd() - Method in class org.apache.spark.NarrowDependency
- rdd() - Method in class org.apache.spark.ShuffleDependency
- rdd() - Method in class org.apache.spark.sql.Dataset
- RDD<T> - Class in org.apache.spark.rdd
-
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
- RDD(RDD<?>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD
-
Construct an RDD with just a one-to-one dependency on one parent
- RDD(SparkContext, Seq<Dependency<?>>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD
- RDD() - Static method in class org.apache.spark.api.r.RRunnerModes
- RDD() - Static method in class org.apache.spark.storage.BlockId
- RDD_BLOCKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- RDD_IDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- RDD_NAME() - Static method in class org.apache.spark.ui.storage.ToolTips
- RDDBarrier<T> - Class in org.apache.spark.rdd
-
:: Experimental :: Wraps an RDD in a barrier stage, which forces Spark to launch tasks of this stage together.
- RDDBlockId - Class in org.apache.spark.storage
- RDDBlockId(int, int) - Constructor for class org.apache.spark.storage.RDDBlockId
- rddBlockNotFoundError(BlockId, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
- rddBlocks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- rddCleaned(int) - Method in interface org.apache.spark.CleanerListener
- RDDDataDistribution - Class in org.apache.spark.status.api.v1
- RDDFunctions<T> - Class in org.apache.spark.mllib.rdd
-
Machine learning specific RDD functions.
- RDDFunctions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.rdd.RDDFunctions
- rddId() - Method in class org.apache.spark.CleanCheckpoint
- rddId() - Method in class org.apache.spark.CleanRDD
- rddId() - Method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
- rddId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd
- rddId() - Method in class org.apache.spark.storage.RDDBlockId
- rddIds() - Method in class org.apache.spark.status.api.v1.StageData
- RDDInfo - Class in org.apache.spark.storage
- RDDInfo(int, String, int, StorageLevel, boolean, Seq<Object>, String, Option<org.apache.spark.rdd.RDDOperationScope>, Enumeration.Value) - Constructor for class org.apache.spark.storage.RDDInfo
- rddInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- rddInfos() - Method in class org.apache.spark.scheduler.StageInfo
- rddInfoToJson(RDDInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- rddLacksSparkContextError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- RDDPartitionInfo - Class in org.apache.spark.status.api.v1
- RDDPartitionSeq - Class in org.apache.spark.status
-
A custom sequence of partitions based on a mutable linked list.
- RDDPartitionSeq() - Constructor for class org.apache.spark.status.RDDPartitionSeq
- rdds() - Method in class org.apache.spark.rdd.CoGroupedRDD
- rdds() - Method in class org.apache.spark.rdd.UnionRDD
- RDDStorageInfo - Class in org.apache.spark.status.api.v1
- rddToAsyncRDDActions(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.rdd.RDD
- rddToDatasetHolder(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
-
Creates a
Dataset
from an RDD. - rddToOrderedRDDFunctions(RDD<Tuple2<K, V>>, Ordering<K>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.rdd.RDD
- rddToPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.rdd.RDD
- rddToSequenceFileRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, <any>, <any>) - Static method in class org.apache.spark.rdd.RDD
- read() - Method in class org.apache.spark.io.NioBufferedFileInputStream
- read() - Method in class org.apache.spark.io.ReadAheadInputStream
- read() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- read() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- read() - Static method in class org.apache.spark.ml.classification.FMClassificationModel
- read() - Static method in class org.apache.spark.ml.classification.FMClassifier
- read() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
- read() - Static method in class org.apache.spark.ml.classification.GBTClassifier
- read() - Static method in class org.apache.spark.ml.classification.LinearSVC
- read() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
- read() - Static method in class org.apache.spark.ml.classification.LogisticRegression
- read() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
- read() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- read() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- read() - Static method in class org.apache.spark.ml.classification.NaiveBayes
- read() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
- read() - Static method in class org.apache.spark.ml.classification.OneVsRest
- read() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
- read() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- read() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
- read() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
- read() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- read() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
- read() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
- read() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- read() - Static method in class org.apache.spark.ml.clustering.KMeans
- read() - Static method in class org.apache.spark.ml.clustering.KMeansModel
- read() - Static method in class org.apache.spark.ml.clustering.LDA
- read() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
- read() - Static method in class org.apache.spark.ml.clustering.PowerIterationClustering
- read() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- read() - Static method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- read() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- read() - Static method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- read() - Static method in class org.apache.spark.ml.evaluation.RankingEvaluator
- read() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- read() - Static method in class org.apache.spark.ml.feature.Binarizer
- read() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- read() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- read() - Static method in class org.apache.spark.ml.feature.Bucketizer
- read() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- read() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- read() - Static method in class org.apache.spark.ml.feature.ColumnPruner
- read() - Static method in class org.apache.spark.ml.feature.CountVectorizer
- read() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
- read() - Static method in class org.apache.spark.ml.feature.DCT
- read() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
- read() - Static method in class org.apache.spark.ml.feature.FeatureHasher
- read() - Static method in class org.apache.spark.ml.feature.HashingTF
- read() - Static method in class org.apache.spark.ml.feature.IDF
- read() - Static method in class org.apache.spark.ml.feature.IDFModel
- read() - Static method in class org.apache.spark.ml.feature.Imputer
- read() - Static method in class org.apache.spark.ml.feature.ImputerModel
- read() - Static method in class org.apache.spark.ml.feature.IndexToString
- read() - Static method in class org.apache.spark.ml.feature.Interaction
- read() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
- read() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- read() - Static method in class org.apache.spark.ml.feature.MinHashLSH
- read() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
- read() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
- read() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
- read() - Static method in class org.apache.spark.ml.feature.NGram
- read() - Static method in class org.apache.spark.ml.feature.Normalizer
- read() - Static method in class org.apache.spark.ml.feature.OneHotEncoder
- read() - Static method in class org.apache.spark.ml.feature.OneHotEncoderModel
- read() - Static method in class org.apache.spark.ml.feature.PCA
- read() - Static method in class org.apache.spark.ml.feature.PCAModel
- read() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
- read() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
- read() - Static method in class org.apache.spark.ml.feature.RegexTokenizer
- read() - Static method in class org.apache.spark.ml.feature.RFormula
- read() - Static method in class org.apache.spark.ml.feature.RFormulaModel
- read() - Static method in class org.apache.spark.ml.feature.RobustScaler
- read() - Static method in class org.apache.spark.ml.feature.RobustScalerModel
- read() - Static method in class org.apache.spark.ml.feature.SQLTransformer
- read() - Static method in class org.apache.spark.ml.feature.StandardScaler
- read() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
- read() - Static method in class org.apache.spark.ml.feature.StopWordsRemover
- read() - Static method in class org.apache.spark.ml.feature.StringIndexer
- read() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
- read() - Static method in class org.apache.spark.ml.feature.Tokenizer
- read() - Static method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- read() - Static method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- read() - Static method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- read() - Static method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- read() - Static method in class org.apache.spark.ml.feature.VectorAssembler
- read() - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- read() - Static method in class org.apache.spark.ml.feature.VectorIndexer
- read() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
- read() - Static method in class org.apache.spark.ml.feature.VectorSizeHint
- read() - Static method in class org.apache.spark.ml.feature.VectorSlicer
- read() - Static method in class org.apache.spark.ml.feature.Word2Vec
- read() - Static method in class org.apache.spark.ml.feature.Word2VecModel
- read() - Static method in class org.apache.spark.ml.fpm.FPGrowth
- read() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
- read() - Static method in class org.apache.spark.ml.Pipeline
- read() - Static method in class org.apache.spark.ml.PipelineModel
- read() - Static method in class org.apache.spark.ml.recommendation.ALS
- read() - Static method in class org.apache.spark.ml.recommendation.ALSModel
- read() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- read() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- read() - Static method in class org.apache.spark.ml.regression.FMRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.FMRegressor
- read() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.GBTRegressor
- read() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- read() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
- read() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.LinearRegression
- read() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- read() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
- read() - Static method in class org.apache.spark.ml.tuning.CrossValidator
- read() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
- read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
- read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- read() - Method in interface org.apache.spark.ml.util.DefaultParamsReadable
- read() - Method in interface org.apache.spark.ml.util.MLReadable
-
Returns an
MLReader
instance for this class. - read() - Method in class org.apache.spark.sql.api.SparkSession
-
Returns a
DataFrameReader
that can be used to read non-streaming data in as a DataFrame. - read() - Method in class org.apache.spark.sql.SparkSession
- read() - Method in class org.apache.spark.sql.SQLContext
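A sketch of the batch read entry point, assuming an active SparkSession named spark (the format and path are illustrative):

    // Load a JSON file into a DataFrame through the DataFrameReader.
    val people = spark.read
      .format("json")
      .load("/tmp/people.json")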
- read() - Method in class org.apache.spark.storage.BufferReleasingInputStream
- read(byte[]) - Method in class org.apache.spark.storage.BufferReleasingInputStream
- read(byte[], int, int) - Method in class org.apache.spark.io.NioBufferedFileInputStream
- read(byte[], int, int) - Method in class org.apache.spark.io.ReadAheadInputStream
- read(byte[], int, int) - Method in class org.apache.spark.storage.BufferReleasingInputStream
- read(Kryo, Input, Class<Iterable<?>>) - Method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- read(String) - Static method in class org.apache.spark.streaming.CheckpointReader
-
Read checkpoint files present in the given checkpoint directory.
- read(String, SparkConf, Configuration, boolean) - Static method in class org.apache.spark.streaming.CheckpointReader
-
Read checkpoint files present in the given checkpoint directory.
- read(WriteAheadLogRecordHandle) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Read a written record based on the given record handle.
- READ_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- READ_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- ReadableChannelFileRegion - Class in org.apache.spark.storage
- ReadableChannelFileRegion(ReadableByteChannel, long) - Constructor for class org.apache.spark.storage.ReadableChannelFileRegion
- ReadAheadInputStream - Class in org.apache.spark.io
-
InputStream
implementation which asynchronously reads ahead from the underlying input stream when a specified amount of data has been read from the current buffer. - ReadAheadInputStream(InputStream, int) - Constructor for class org.apache.spark.io.ReadAheadInputStream
-
Creates a
ReadAheadInputStream
with the specified buffer size and read-ahead threshold - readAll() - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Read and return an iterator of all the records that have been written but not yet cleaned up.
- ReadAllAvailable - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the MicroBatchStream
must scan all the data available at the streaming source. - readArray(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- readArrowStreamFromFile(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
R callable function to read a file in Arrow stream format and create an
RDD
using each serialized ArrowRecordBatch as a partition. - readBoolean(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readBooleanArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readBytes() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- readBytes(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readBytesArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readDate(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readDouble(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readDoubleArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- reader() - Method in class org.apache.spark.ml.LoadInstanceEnd
- reader() - Method in class org.apache.spark.ml.LoadInstanceStart
- readExternal(ObjectInput) - Method in class org.apache.spark.serializer.JavaSerializer
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerId
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.StorageLevel
- readFrom(byte[]) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Reads in a
BloomFilter
from a byte array. - readFrom(byte[]) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
Reads in a
CountMinSketch
from a byte array. - readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Reads in a
BloomFilter
from an input stream. - readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
Reads in a
CountMinSketch
from an input stream. - readInt(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
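A sketch tying together the BloomFilter put*, writeTo, and readFrom entries above (the size and items are illustrative):

    import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
    import org.apache.spark.util.sketch.BloomFilter

    // Build a filter, serialize it with writeTo, and restore it with readFrom.
    val bf = BloomFilter.create(10000) // expected number of items
    bf.putString("spark")
    bf.putLong(42L)
    val out = new ByteArrayOutputStream()
    bf.writeTo(out)
    val restored = BloomFilter.readFrom(new ByteArrayInputStream(out.toByteArray))
    assert(restored.mightContainString("spark"))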
- readIntArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readKey(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
Reads the object representing the key of a key-value pair.
- ReadLimit - Interface in org.apache.spark.sql.connector.read.streaming
-
Interface representing limits on how much to read from a
MicroBatchStream
when it implements SupportsAdmissionControl
. - readList(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- readLockedBlockNotFoundError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- readMap(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- ReadMaxBytes - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the MicroBatchStream
should scan files whose total size doesn't go beyond a given maximum total size. - ReadMaxFiles - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the MicroBatchStream
should scan approximately the given maximum number of files. - ReadMaxRows - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the MicroBatchStream
should scan approximately the given maximum number of rows. - ReadMinRows - Class in org.apache.spark.sql.connector.read.streaming
-
Represents a
ReadLimit
where the MicroBatchStream
should scan approximately at least the given minimum number of rows. - readNonStreamingTempViewError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- readObject(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- readObject(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
The most general-purpose method to read an object.
- readObjectType(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readRecords() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- readSchema() - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns the actual schema of this data source scan, which may be different from the physical schema of the underlying storage, as column pruning or other optimizations may happen.
- readSqlObject(DataInputStream, char) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- readStream() - Method in class org.apache.spark.sql.SparkSession
-
Returns a
DataStreamReader
that can be used to read streaming data in as a DataFrame. - readStream() - Method in class org.apache.spark.sql.SQLContext
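A sketch of the streaming read entry point, assuming an active SparkSession named spark (the rate source and option are illustrative):

    // The rate source generates (timestamp, value) rows for testing streams.
    val stream = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "5")
      .load()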
- readString(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readStringArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readStringBytes(DataInputStream, int) - Static method in class org.apache.spark.api.r.SerDe
- readTime(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
- readTypedObject(DataInputStream, char, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- readValue(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
Reads the object representing the value of a key-value pair.
- ready(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction
- ready(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
-
Blocks until this action completes.
- ready(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction
- reason() - Method in class org.apache.spark.ExecutorLostFailure
- reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
- reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
- reason() - Method in class org.apache.spark.scheduler.local.KillTask
- reason() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- reason() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- reason() - Method in class org.apache.spark.TaskKilled
- reason() - Method in exception org.apache.spark.TaskKilledException
- recall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns document-based recall averaged by the number of documents
- recall(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns recall for a given label (category)
- recall(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns recall for a given label (category)
- Recall - Class in org.apache.spark.mllib.evaluation.binary
-
Recall.
- Recall() - Constructor for class org.apache.spark.mllib.evaluation.binary.Recall
- recallAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Compute the average recall of all the queries, truncated at ranking position k.
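A recallAt sketch, assuming an active SparkContext named sc (the rankings are illustrative):

    import org.apache.spark.mllib.evaluation.RankingMetrics

    // Each element pairs a predicted ranking with the ground-truth relevant ids.
    val predictionAndLabels = sc.parallelize(Seq(
      (Array(1, 2, 3, 4, 5), Array(1, 3, 6)),
      (Array(4, 1, 2), Array(2, 5))
    ))
    val metrics = new RankingMetrics(predictionAndLabels)
    println(metrics.recallAt(3))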
- recallByLabel() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns recall for each label (category).
- recallByThreshold() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Returns a dataframe with two fields (threshold, recall) curve.
- recallByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- recallByThreshold() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- recallByThreshold() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- recallByThreshold() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- recallByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, recall) curve.
- receive(Object) - Method in interface org.apache.spark.api.plugin.DriverPlugin
-
RPC message handler.
- ReceivedBlock - Interface in org.apache.spark.streaming.receiver
-
Trait representing a received block
- ReceivedBlockHandler - Interface in org.apache.spark.streaming.receiver
-
Trait that represents a class that handles the storage of blocks received by receiver
- ReceivedBlockStoreResult - Interface in org.apache.spark.streaming.receiver
-
Trait that represents the metadata related to storage of blocks
- ReceivedBlockTrackerLogEvent - Interface in org.apache.spark.streaming.scheduler
-
Trait representing any event in the ReceivedBlockTracker that updates its state.
- Receiver<T> - Class in org.apache.spark.streaming.receiver
-
:: DeveloperApi :: Abstract class of a receiver that can be run on worker nodes to receive external data.
- Receiver(StorageLevel) - Constructor for class org.apache.spark.streaming.receiver.Receiver
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
- ReceiverInfo - Class in org.apache.spark.status.api.v1.streaming
- ReceiverInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: Class having information about a receiver
- ReceiverInfo(int, String, boolean, String, String, String, String, long) - Constructor for class org.apache.spark.streaming.scheduler.ReceiverInfo
- receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
- receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
- ReceiverInputDStream<T> - Class in org.apache.spark.streaming.dstream
-
Abstract class for defining any
InputDStream
that has to start a receiver on worker nodes to receive external data. - ReceiverInputDStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ReceiverInputDStream
- ReceiverMessage - Interface in org.apache.spark.streaming.receiver
-
Messages sent to the Receiver.
- ReceiverState - Class in org.apache.spark.streaming.scheduler
-
Enumeration to identify current state of a Receiver
- ReceiverState() - Constructor for class org.apache.spark.streaming.scheduler.ReceiverState
- receiverStream(Receiver<T>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream with any arbitrary user implemented receiver.
- receiverStream(Receiver<T>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream with any arbitrary user implemented receiver.
- ReceiverTrackerLocalMessage - Interface in org.apache.spark.streaming.scheduler
-
Messages used by the driver and ReceiverTrackerEndpoint to communicate locally.
- ReceiverTrackerMessage - Interface in org.apache.spark.streaming.scheduler
-
Messages used by the NetworkReceiver and the ReceiverTracker to communicate with each other.
- recentProgress() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns an array of the most recent
StreamingQueryProgress
updates for this query.
- recommendForAllItems(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top
numUsers
users recommended for each item, for all items.
- recommendForAllUsers(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top
numItems
items recommended for each user, for all users.
- recommendForItemSubset(Dataset<?>, int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top
numUsers
users recommended for each item id in the input data set.
- recommendForUserSubset(Dataset<?>, int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top
numItems
items recommended for each user id in the input data set.
- recommendProducts(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends products to a user.
- recommendProductsForUsers(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends top products for all users.
- recommendUsers(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends users to a product.
- recommendUsersForProducts(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends top users for all products.
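The ml-package calls above follow the same shape; a sketch assuming a fitted ALSModel named model and a DataFrame users holding a subset of user ids:

    val topItemsPerUser = model.recommendForAllUsers(5)          // top 5 items per user
    val topUsersPerItem = model.recommendForAllItems(5)          // top 5 users per item
    val subsetRecs      = model.recommendForUserSubset(users, 5) // only the given user ids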
- RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.input$
- RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
- RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
- RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
- recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
- recordsRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
- recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
- recordsWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
- recoverPartitions(String) - Method in class org.apache.spark.sql.api.Catalog
-
Recovers all the partitions in the directory of a table and updates the catalog.
- recoverQueryFromCheckpointUnsupportedError(Path) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- RecursiveFlag - Class in org.apache.spark.ml.image
- RecursiveFlag() - Constructor for class org.apache.spark.ml.image.RecursiveFlag
- recursiveList(File) - Static method in class org.apache.spark.TestUtils
-
Lists files recursively.
- recursiveList(File) - Method in interface org.apache.spark.util.SparkFileUtils
-
Lists files recursively.
- recursiveList(File) - Static method in class org.apache.spark.util.Utils
- recursiveViewDetectedError(TableIdentifier, Seq<TableIdentifier>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- redact(SparkConf, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive values in the given map.
- redact(Map<String, String>) - Static method in class org.apache.spark.util.Utils
-
Looks up the redaction regex from within the key value pairs and uses it to redact the rest of the key value pairs.
- redact(Option<Regex>, String) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive information in the given string.
- redact(Option<Regex>, Seq<Tuple2<K, V>>) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive values in the given map.
- redactCommandLineArgs(SparkConf, Seq<String>) - Static method in class org.apache.spark.util.Utils
- redactEvent(E) - Method in interface org.apache.spark.util.ListenerBus
- REDIRECT_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils
- redirectableStream() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- redirectError() - Method in class org.apache.spark.launcher.SparkLauncher
-
Specifies that stderr in spark-submit should be redirected to stdout.
- redirectError(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects error output to the specified File.
- redirectError(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects error output to the specified Redirect.
- redirectOutput(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects standard output to the specified File.
- redirectOutput(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects standard output to the specified Redirect.
- redirectToLog(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Sets all output to be logged and redirected to a logger with the specified name.
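A sketch combining the redirect methods above; the jar path, main class, and log file are placeholders:

    import java.io.File
    import org.apache.spark.launcher.SparkLauncher

    val proc = new SparkLauncher()
      .setAppResource("/path/to/app.jar")
      .setMainClass("com.example.Main")
      .setMaster("local[*]")
      .redirectError()                          // merge stderr into stdout
      .redirectOutput(new File("/tmp/app.log")) // send merged output to a file
      .launch()
    proc.waitFor()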
- redirectUrl(URL, String, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.TestUtils
-
Returns the Location header from an HTTP(S) URL.
- reduce(BUF, IN) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Combine two values to produce a new value.
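A minimal Aggregator whose reduce step folds each input into the buffer; SumAgg is an invented example, not a library class:

    import org.apache.spark.sql.expressions.Aggregator
    import org.apache.spark.sql.{Encoder, Encoders}

    object SumAgg extends Aggregator[Long, Long, Long] {
      def zero: Long = 0L
      def reduce(buf: Long, in: Long): Long = buf + in  // combine buffer and new value
      def merge(b1: Long, b2: Long): Long = b1 + b2     // combine partial buffers
      def finish(buf: Long): Long = buf
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }

It can then be applied to a Dataset[Long] via SumAgg.toColumn, or registered for SQL use with functions.udaf(SumAgg).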
- reduce(I) - Method in interface org.apache.spark.sql.connector.catalog.functions.Reducer
- reduce(Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Reduces the elements of this RDD using the specified commutative and associative binary operator.
- reduce(Function2<T, T, T>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.
- reduce(ReduceFunction<T>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Reduces the elements of this Dataset using the specified binary function.
- reduce(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
- reduce(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
-
Reduces the elements of this RDD using the specified commutative and associative binary operator.
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Reduces the elements of this Dataset using the specified binary function.
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.sql.Dataset
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.
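For example, assuming sc, the RDD variant reduces with a commutative, associative operator; the Dataset variant is analogous:

    val total = sc.parallelize(1 to 100).reduce(_ + _)  // 5050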
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
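A word-count style sketch of reduceByKey, assuming sc:

    val counts = sc.parallelize(Seq("a", "b", "a"))
      .map(w => (w, 1))
      .reduceByKey(_ + _)  // values are merged per key map-side before the shuffle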
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
to each RDD.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by reducing over a sliding window using incremental computation.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying incremental
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying incremental
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Create a new DStream by applying
reduceByKey
over a sliding window on
this
DStream.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
over a sliding window on
this
DStream.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying incremental
reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying incremental
reduceByKey
over a sliding window.
- reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function, but return the result immediately to the master as a Map.
- reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function, but return the results immediately to the master as a Map.
- reduceByKeyLocallyNotSupportArrayKeysError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
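A sketch of the incremental form of reduceByKeyAndWindow, assuming a DStream[(String, Int)] named pairs and a StreamingContext with checkpointing enabled (the inverse function requires it):

    import org.apache.spark.streaming.Seconds

    val windowedCounts = pairs.reduceByKeyAndWindow(
      _ + _,         // add values entering the window
      _ - _,         // subtract values leaving the window (inverse function)
      Seconds(30),   // window duration
      Seconds(10))   // slide duration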
- ReduceFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for function used in Dataset's reduce.
- reduceGroups(ReduceFunction<V>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Java-specific) Reduces the elements of each group of data using the specified binary function.
- reduceGroups(ReduceFunction<V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- reduceGroups(Function2<V, V, V>) - Method in class org.apache.spark.sql.api.KeyValueGroupedDataset
-
(Scala-specific) Reduces the elements of each group of data using the specified binary function.
- reduceGroups(Function2<V, V, V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
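A sketch of reduceGroups, assuming a SparkSession named spark:

    import spark.implicits._

    val ds = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()
    // Keep the largest value per key; yields a Dataset of (key, reduced element).
    val maxPerKey = ds.groupByKey(_._1)
      .reduceGroups((x, y) => if (x._2 >= y._2) x else y)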
- reduceId() - Method in class org.apache.spark.FetchFailed
- reduceId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
- reduceId() - Method in class org.apache.spark.storage.ShuffleBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleChecksumBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleMergedBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- reduceId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- reduceId() - Method in class org.apache.spark.storage.ShufflePushBlockId
- reducer(int, ReducibleFunction<?, ?>, int) - Method in interface org.apache.spark.sql.connector.catalog.functions.ReducibleFunction
-
This method is for the bucket function.
- reducer(ReducibleFunction<?, ?>) - Method in interface org.apache.spark.sql.connector.catalog.functions.ReducibleFunction
-
This method is for all other functions.
- Reducer<I, O> - Interface in org.apache.spark.sql.connector.catalog.functions
-
A 'reducer' for output of user-defined functions.
- ReducibleFunction<I, O> - Interface in org.apache.spark.sql.connector.catalog.functions
-
Base class for user-defined functions that can be 'reduced' on another function.
- Ref - Class in org.apache.spark.sql.connector.expressions
-
Convenience extractor for any NamedReference.
- Ref() - Constructor for class org.apache.spark.sql.connector.expressions.Ref
- reference(Seq<String>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- referenceColNotFoundForAlterTableChangesError(String, String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- references() - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- references() - Method in interface org.apache.spark.sql.connector.expressions.Expression
-
List of fields or columns that are referenced by this expression.
- references() - Method in interface org.apache.spark.sql.connector.expressions.NamedReference
- references() - Method in class org.apache.spark.sql.sources.AlwaysFalse
- references() - Method in class org.apache.spark.sql.sources.AlwaysTrue
- references() - Method in class org.apache.spark.sql.sources.And
- references() - Method in class org.apache.spark.sql.sources.CollatedFilter
- references() - Method in class org.apache.spark.sql.sources.EqualNullSafe
- references() - Method in class org.apache.spark.sql.sources.EqualTo
- references() - Method in class org.apache.spark.sql.sources.Filter
-
List of columns that are referenced by this filter.
- references() - Method in class org.apache.spark.sql.sources.GreaterThan
- references() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- references() - Method in class org.apache.spark.sql.sources.In
- references() - Method in class org.apache.spark.sql.sources.IsNotNull
- references() - Method in class org.apache.spark.sql.sources.IsNull
- references() - Method in class org.apache.spark.sql.sources.LessThan
- references() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
- references() - Method in class org.apache.spark.sql.sources.Not
- references() - Method in class org.apache.spark.sql.sources.Or
- references() - Method in class org.apache.spark.sql.sources.StringContains
- references() - Method in class org.apache.spark.sql.sources.StringEndsWith
- references() - Method in class org.apache.spark.sql.sources.StringStartsWith
- reflect(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Calls a method with reflection.
- refreshByPath(String) - Method in class org.apache.spark.sql.api.Catalog
-
Invalidates and refreshes all the cached data (and the associated metadata) for any
Dataset
that contains the given data source path.
- refreshTable(String) - Method in class org.apache.spark.sql.api.Catalog
-
Invalidates and refreshes all the cached data and metadata of the given table.
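For example, assuming a SparkSession named spark and an existing partitioned table (the names are placeholders):

    spark.catalog.recoverPartitions("my_db.events") // sync partitions found on disk
    spark.catalog.refreshTable("my_db.events")      // invalidate cached data/metadata
    spark.catalog.refreshByPath("/data/events")     // refresh by data source path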
- regex(Regex) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- regexp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if
str
matches
regexp, or false otherwise.
- regexp_count(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a count of the number of times that the regular expression pattern
regexp
is matched in the string
str.
- regexp_extract(Column, String, int) - Static method in class org.apache.spark.sql.functions
-
Extract a specific group matched by a Java regex, from the specified string column.
- regexp_extract_all(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extract all strings in the
str
that match the
regexp
expression, corresponding to the first regex group index.
- regexp_extract_all(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Extract all strings in the
str
that match the
regexp
expression, corresponding to the regex group index.
- regexp_instr(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Searches a string for a regular expression and returns an integer that indicates the beginning position of the matched substring.
- regexp_instr(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Searches a string for a regular expression and returns an integer that indicates the beginning position of the matched substring.
- regexp_like(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if
str
matches
regexp, or false otherwise.
- regexp_replace(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Replace all substrings of the specified string value that match regexp with rep.
- regexp_replace(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Replace all substrings of the specified string value that match regexp with rep.
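A sketch of a few of the regexp functions above; df and its column s are assumptions:

    import org.apache.spark.sql.functions._

    df.select(
      regexp_extract(col("s"), "(\\d+)-(\\d+)", 1), // first capture group
      regexp_replace(col("s"), "\\s+", " "),        // collapse runs of whitespace
      regexp_like(col("s"), lit("^[a-z]+$"))        // boolean match
    )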
- regexp_substr(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the substring that matches the regular expression
regexp
within the string
str.
- RegexTokenizer - Class in org.apache.spark.ml.feature
-
A regex based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or repeatedly matching the regex (if
gaps
is false).
- RegexTokenizer() - Constructor for class org.apache.spark.ml.feature.RegexTokenizer
- RegexTokenizer(String) - Constructor for class org.apache.spark.ml.feature.RegexTokenizer
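A sketch configuring the tokenizer to split on non-word characters; the column names are assumptions:

    import org.apache.spark.ml.feature.RegexTokenizer

    val tokenizer = new RegexTokenizer()
      .setInputCol("text")
      .setOutputCol("tokens")
      .setPattern("\\W+")  // with gaps=true (the default) the regex is a delimiter
      .setGaps(true)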
- register(String, String) - Static method in class org.apache.spark.sql.types.UDTRegistration
-
Registers a UserDefinedType to a user class.
- register(String, MessageWithContext, boolean, Function0<Object>) - Static method in class org.apache.spark.util.SignalUtils
-
Adds an action to be run when a given signal is received by this process.
- register(String, UDF0<?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF0 instance as user-defined function (UDF).
- register(String, UDF1<?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF1 instance as user-defined function (UDF).
- register(String, UDF10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF10 instance as user-defined function (UDF).
- register(String, UDF11<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF11 instance as user-defined function (UDF).
- register(String, UDF12<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF12 instance as user-defined function (UDF).
- register(String, UDF13<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF13 instance as user-defined function (UDF).
- register(String, UDF14<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF14 instance as user-defined function (UDF).
- register(String, UDF15<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF15 instance as user-defined function (UDF).
- register(String, UDF16<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF16 instance as user-defined function (UDF).
- register(String, UDF17<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF17 instance as user-defined function (UDF).
- register(String, UDF18<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF18 instance as user-defined function (UDF).
- register(String, UDF19<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF19 instance as user-defined function (UDF).
- register(String, UDF2<?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF2 instance as user-defined function (UDF).
- register(String, UDF20<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF20 instance as user-defined function (UDF).
- register(String, UDF21<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF21 instance as user-defined function (UDF).
- register(String, UDF22<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF22 instance as user-defined function (UDF).
- register(String, UDF3<?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF3 instance as user-defined function (UDF).
- register(String, UDF4<?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF4 instance as user-defined function (UDF).
- register(String, UDF5<?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF5 instance as user-defined function (UDF).
- register(String, UDF6<?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF6 instance as user-defined function (UDF).
- register(String, UDF7<?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF7 instance as user-defined function (UDF).
- register(String, UDF8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF8 instance as user-defined function (UDF).
- register(String, UDF9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Register a deterministic Java UDF9 instance as user-defined function (UDF).
- register(String, UserDefinedAggregateFunction) - Method in class org.apache.spark.sql.UDFRegistration
-
Deprecated. This method and the use of UserDefinedAggregateFunction are deprecated. Aggregator[IN, BUF, OUT] should now be registered as a UDF via the functions.udaf(agg) method.
- register(String, UserDefinedFunction) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a user-defined function (UDF), for a UDF that's already defined using the Dataset API (i.e.
- register(String, Function0<Object>) - Static method in class org.apache.spark.util.SignalUtils
-
Adds an action to be run when a given signal is received by this process.
- register(String, Function0<RT>, TypeTags.TypeTag<RT>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 0 arguments as user-defined function (UDF).
- register(String, Function1<A1, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 1 arguments as user-defined function (UDF).
- register(String, Function10<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 10 arguments as user-defined function (UDF).
- register(String, Function11<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 11 arguments as user-defined function (UDF).
- register(String, Function12<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 12 arguments as user-defined function (UDF).
- register(String, Function13<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 13 arguments as user-defined function (UDF).
- register(String, Function14<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 14 arguments as user-defined function (UDF).
- register(String, Function15<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 15 arguments as user-defined function (UDF).
- register(String, Function16<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 16 arguments as user-defined function (UDF).
- register(String, Function17<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 17 arguments as user-defined function (UDF).
- register(String, Function18<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 18 arguments as user-defined function (UDF).
- register(String, Function19<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 19 arguments as user-defined function (UDF).
- register(String, Function2<A1, A2, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 2 arguments as user-defined function (UDF).
- register(String, Function20<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 20 arguments as user-defined function (UDF).
- register(String, Function21<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 21 arguments as user-defined function (UDF).
- register(String, Function22<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>, TypeTags.TypeTag<A22>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 22 arguments as user-defined function (UDF).
- register(String, Function3<A1, A2, A3, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 3 arguments as user-defined function (UDF).
- register(String, Function4<A1, A2, A3, A4, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 4 arguments as user-defined function (UDF).
- register(String, Function5<A1, A2, A3, A4, A5, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 5 arguments as user-defined function (UDF).
- register(String, Function6<A1, A2, A3, A4, A5, A6, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 6 arguments as user-defined function (UDF).
- register(String, Function7<A1, A2, A3, A4, A5, A6, A7, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 7 arguments as user-defined function (UDF).
- register(String, Function8<A1, A2, A3, A4, A5, A6, A7, A8, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 8 arguments as user-defined function (UDF).
- register(String, Function9<A1, A2, A3, A4, A5, A6, A7, A8, A9, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>) - Method in class org.apache.spark.sql.api.UDFRegistration
-
Registers a deterministic Scala closure of 9 arguments as user-defined function (UDF).
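For example, assuming a SparkSession named spark, registering a 1-argument Scala closure makes it callable from SQL:

    spark.udf.register("plusOne", (x: Int) => x + 1)
    spark.sql("SELECT plusOne(41)").show()  // 42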
- register(SparkContext, Map<String, DoubleAccumulator>) - Static method in class org.apache.spark.metrics.source.DoubleAccumulatorSource
- register(SparkContext, Map<String, LongAccumulator>) - Static method in class org.apache.spark.metrics.source.LongAccumulatorSource
- register(QueryExecutionListener) - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
Registers the specified
QueryExecutionListener.
- register(AccumulatorV2<?, ?>) - Method in class org.apache.spark.SparkContext
-
Register the given accumulator.
- register(AccumulatorV2<?, ?>) - Static method in class org.apache.spark.util.AccumulatorContext
-
Registers an
AccumulatorV2
created on the driver such that it can be used on the executors.
- register(AccumulatorV2<?, ?>, String) - Method in class org.apache.spark.SparkContext
-
Register the given accumulator with the given name.
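A sketch, assuming sc: registering the accumulator under a name makes it visible in the web UI:

    import org.apache.spark.util.LongAccumulator

    val errorCount = new LongAccumulator
    sc.register(errorCount, "errorCount")
    sc.parallelize(1 to 10).foreach(i => if (i % 2 == 0) errorCount.add(1))
    println(errorCount.value)  // 5, read back on the driver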
- registerAllExtensions(ExtensionRegistry) - Static method in class org.apache.spark.status.protobuf.StoreTypes
- registerAllExtensions(ExtensionRegistryLite) - Static method in class org.apache.spark.status.protobuf.StoreTypes
- registerAvroSchemas(Seq<Schema>) - Method in class org.apache.spark.SparkConf
-
Use Kryo serialization and register the given set of Avro schemas so that the generic record serializer can decrease network IO.
- RegisterBlockManager(BlockManagerId, String[], long, long, RpcEndpointRef, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- RegisterBlockManager$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$
- registerClasses(Kryo) - Method in interface org.apache.spark.serializer.KryoRegistrator
- RegisterClusterManager(RpcEndpointRef) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
- RegisterClusterManager$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$
- registerDialect(JdbcDialect) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
-
Register a dialect for use on all new matching jdbc
org.apache.spark.sql.DataFrame.
- RegisterExecutor(String, RpcEndpointRef, String, int, Map<String, String>, Map<String, String>, Map<String, ResourceInformation>, int) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- RegisterExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$
- registeringStreamingQueryListenerError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- registerKryoClasses(Class<?>[]) - Method in class org.apache.spark.SparkConf
-
Use Kryo serialization and register the given set of classes with Kryo.
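A sketch; MyRecord stands in for an application class:

    import org.apache.spark.SparkConf

    case class MyRecord(id: Long, name: String)

    // Also switches spark.serializer to KryoSerializer.
    val conf = new SparkConf()
      .setAppName("example")
      .registerKryoClasses(Array(classOf[MyRecord]))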
- registerKryoClasses(SparkConf) - Static method in class org.apache.spark.graphx.GraphXUtils
-
Registers classes that GraphX uses with Kryo.
- registerKryoClasses(SparkContext) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
-
This method registers the class
SquaredEuclideanSilhouette.ClusterStats
for Kryo serialization.
- registerLogger(Logger) - Static method in class org.apache.spark.util.SignalUtils
-
Register a signal handler to log signals on UNIX-like systems.
- registerMetrics(String, PluginContext) - Method in interface org.apache.spark.api.plugin.DriverPlugin
-
Register metrics published by the plugin with Spark's metrics system.
- registerShuffle(int) - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Called once per shuffle id when the shuffle id is first generated for a shuffle stage.
- registerShuffleMergerLocations(Seq<BlockManagerId>) - Method in class org.apache.spark.ShuffleStatus
- registerShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
- registerStream(JavaDStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Register a
JavaDStream
of values for significance testing.
- registerStream(DStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Register a
DStream
of values for significance testing.
- registerTempTable(String) - Method in class org.apache.spark.sql.api.Dataset
-
Deprecated. Use createOrReplaceTempView(viewName) instead. Since 2.0.0.
- registerTimer(long) - Method in interface org.apache.spark.sql.streaming.StatefulProcessorHandle
-
Function to register a processing-time or event-time timer for the given implicit grouping key and the provided timestamp
- registrationTime() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- regParam() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- regParam() - Method in class org.apache.spark.ml.classification.FMClassifier
- regParam() - Method in class org.apache.spark.ml.classification.LinearSVC
- regParam() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- regParam() - Method in class org.apache.spark.ml.classification.LogisticRegression
- regParam() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- regParam() - Method in interface org.apache.spark.ml.optim.loss.DifferentiableRegularization
-
Magnitude of the regularization penalty.
- regParam() - Method in interface org.apache.spark.ml.param.shared.HasRegParam
-
Param for regularization parameter (>= 0).
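For example, the estimators listed above all expose the shared param through a setter:

    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression()
      .setRegParam(0.1)         // regularization strength (>= 0)
      .setElasticNetParam(0.0)  // 0.0 = pure L2 penalty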
- regParam() - Method in class org.apache.spark.ml.recommendation.ALS
- regParam() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- regParam() - Method in class org.apache.spark.ml.regression.FMRegressor
- regParam() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- regParam() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- regParam() - Method in class org.apache.spark.ml.regression.LinearRegression
- regParam() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- regr_avgx(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the independent variable for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_avgy(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the dependent variable for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_count(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of non-null number pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_intercept(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the intercept of the univariate linear regression line for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_r2(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the coefficient of determination for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_slope(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the slope of the linear regression line for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_sxx(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(x) for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_sxy(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns REGR_COUNT(y, x) * COVAR_POP(y, x) for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- regr_syy(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(y) for non-null pairs in a group, where
y
is the dependent variable and
x
is the independent variable.
- Regression() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
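A sketch applying the regr_* aggregates above; df with numeric columns y and x is an assumption:

    import org.apache.spark.sql.functions._

    df.agg(
      regr_slope(col("y"), col("x")),
      regr_intercept(col("y"), col("x")),
      regr_r2(col("y"), col("x"))
    )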
- RegressionEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for regression, which expects input columns prediction, label and an optional weight column.
- RegressionEvaluator() - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator
- RegressionEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator
- RegressionMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for regression.
- RegressionMetrics(RDD<? extends Product>) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics
- RegressionMetrics(RDD<? extends Product>, boolean) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics
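A sketch, assuming a DataFrame named predictions with "prediction" and "label" columns:

    import org.apache.spark.ml.evaluation.RegressionEvaluator

    val rmse = new RegressionEvaluator()
      .setMetricName("rmse")  // also supports "mse", "r2", "mae"
      .evaluate(predictions)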
- RegressionModel<FeaturesType, M extends RegressionModel<FeaturesType, M>> - Class in org.apache.spark.ml.regression
-
Model produced by a
Regressor.
- RegressionModel - Interface in org.apache.spark.mllib.regression
- RegressionModel() - Constructor for class org.apache.spark.ml.regression.RegressionModel
- Regressor<FeaturesType, Learner extends Regressor<FeaturesType, Learner, M>, M extends RegressionModel<FeaturesType, M>> - Class in org.apache.spark.ml.regression
-
Single-label regression
- Regressor() - Constructor for class org.apache.spark.ml.regression.Regressor
- reindex() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- reindex() - Method in class org.apache.spark.graphx.VertexRDD
-
Construct a new VertexRDD that is indexed by only the visible vertices.
- RelationalGroupedDataset - Class in org.apache.spark.sql.api
- RelationalGroupedDataset - Class in org.apache.spark.sql
- RelationalGroupedDataset() - Constructor for class org.apache.spark.sql.api.RelationalGroupedDataset
- RelationalGroupedDataset.CubeType$ - Class in org.apache.spark.sql
-
To indicate it's the CUBE
- RelationalGroupedDataset.GroupByType$ - Class in org.apache.spark.sql
-
To indicate it's the GroupBy
- RelationalGroupedDataset.GroupingSetsType$ - Class in org.apache.spark.sql
- RelationalGroupedDataset.GroupType - Interface in org.apache.spark.sql
-
The Grouping Type
- RelationalGroupedDataset.PivotType$ - Class in org.apache.spark.sql
- RelationalGroupedDataset.RollupType$ - Class in org.apache.spark.sql
-
To indicate it's the ROLLUP
- RelationProvider - Interface in org.apache.spark.sql.sources
-
Implemented by objects that produce relations for a specific kind of data source.
- relativeDirection(long) - Method in class org.apache.spark.graphx.Edge
-
Return the relative direction of the edge to the corresponding vertex.
- relativeError() - Method in class org.apache.spark.ml.feature.Imputer
- relativeError() - Method in class org.apache.spark.ml.feature.ImputerModel
- relativeError() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- relativeError() - Method in class org.apache.spark.ml.feature.RobustScaler
- relativeError() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- relativeError() - Method in interface org.apache.spark.ml.param.shared.HasRelativeError
-
Param for the relative target precision for the approximate quantile algorithm.
- relativeError() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Returns the relative error (or
eps
) of this
CountMinSketch.
- release(Map<String, Object>) - Method in interface org.apache.spark.resource.ResourceAllocator
-
Release a sequence of resource addresses; these addresses must have been assigned.
- rem(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- rem(double, double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral
- rem(float, float) - Method in interface org.apache.spark.sql.types.FloatType.FloatAsIfIntegral
- rem(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- rem(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- rem(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- rem(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
- remainder(Decimal) - Method in class org.apache.spark.sql.types.Decimal
- remember(Duration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Sets each DStream in this context to remember RDDs it generated in the last given duration.
- remember(Duration) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Set each DStream in this context to remember RDDs it generated in the last given duration.
- REMOTE_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- REMOTE_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- REMOTE_BYTES_READ_TO_DISK() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_BYTES_READ_TO_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- REMOTE_BYTES_READ_TO_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- REMOTE_MERGED_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_MERGED_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- REMOTE_MERGED_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- REMOTE_MERGED_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_MERGED_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- REMOTE_MERGED_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- REMOTE_MERGED_CHUNKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_MERGED_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- REMOTE_MERGED_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- REMOTE_MERGED_REQS_DURATION() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_MERGED_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- REMOTE_MERGED_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- REMOTE_REQS_DURATION() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
- REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- remoteBytesReadToDisk() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- remoteBytesReadToDisk() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- remoteMergedBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- remoteMergedBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- remoteMergedBytesRead() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- remoteMergedBytesRead() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- remoteMergedChunksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- remoteMergedChunksFetched() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- remoteMergedReqsDuration() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetricDistributions
- remoteMergedReqsDuration() - Method in class org.apache.spark.status.api.v1.ShufflePushReadMetrics
- remoteReqsDuration() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- remoteReqsDuration() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- remove() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Remove this state.
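For illustration, a minimal sketch (hypothetical updateCount function, Scala) of the GroupState entry above: inside a mapGroupsWithState update function, state.remove() drops the state kept for the current key.
  import org.apache.spark.sql.streaming.GroupState

  // Count events per key; drop the state once it times out.
  def updateCount(key: String, events: Iterator[Long], state: GroupState[Long]): Long = {
    if (state.hasTimedOut) {
      state.remove()                      // discard this key's state entirely
      0L
    } else {
      val total = state.getOption.getOrElse(0L) + events.size
      state.update(total)
      state.setTimeoutDuration("1 hour")  // needs ProcessingTimeTimeout enabled
      total
    }
  }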
- remove() - Method in class org.apache.spark.streaming.State
-
Remove the state if it exists.
- remove(long) - Static method in class org.apache.spark.util.AccumulatorContext
-
Unregisters the
AccumulatorV2
with the given ID, if any. - remove(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- remove(String) - Method in class org.apache.spark.SparkConf
-
Remove a parameter from the configuration.
- remove(String) - Method in class org.apache.spark.sql.types.MetadataBuilder
- remove(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Removes a key from this map and returns its previously associated value, if any, as an Option.
- REMOVE_REASON_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- REMOVE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- REMOVE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- removeAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- removeAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- removeAccumulatorUpdates(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- removeAllListeners() - Method in interface org.apache.spark.util.ListenerBus
-
Remove all listeners; they will no longer receive any events.
- removeAttempts(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- removeAttributes(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> attributes = 27;
- RemoveBlock(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
- RemoveBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$
- RemoveBroadcast(long, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
- RemoveBroadcast$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$
- removeChildClusters(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- removeChildNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- removeChunk(ShuffleBlockChunkId) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the
iterator.next()
is invoked and the iterator processes a response of type ShuffleBlockFetcherIterator.SuccessFetchResult
. - removeClasspathEntries(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- removeCustomMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
map<string, int64> custom_metrics = 12;
- removeDataDistribution(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- removedClassInSpark2Error(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- removeDistribution(LiveExecutor) - Method in class org.apache.spark.status.LiveRDD
- removeDurationMs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, int64> duration_ms = 7;
- removeEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- removeEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- removeEventTime(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> event_time = 8;
- RemoveExecutor(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
- RemoveExecutor(String, org.apache.spark.scheduler.ExecutorLossReason) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
- RemoveExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$
- RemoveExecutor$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$
- removeExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, string> executor_logs = 23;
- removeExecutorLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
map<string, string> executor_logs = 16;
- removeExecutorMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- removeExecutorResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorResourceRequest> executor_resources = 2;
- removeExecutorSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, .org.apache.spark.status.protobuf.ExecutorStageSummary> executor_summary = 46;
- removeFromDriver() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
- removeHadoopProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- removeIncomingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- removeJobs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, .org.apache.spark.status.protobuf.JobExecutionStatus> jobs = 11;
- removeJobTag(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Remove a tag previously added to be assigned to all the jobs started by this thread.
- removeJobTag(String) - Method in class org.apache.spark.SparkContext
-
Remove a tag previously added to be assigned to all the jobs started by this thread.
- removeJobTags(Set<String>) - Method in class org.apache.spark.SparkContext
-
Remove multiple tags previously added to be assigned to all the jobs started by this thread.
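A minimal sketch (sc assumed to be an active SparkContext) of how the job-tag methods above compose with cancellation:
  sc.addJobTag("nightly-backfill")
  try {
    sc.range(0, 1000000L).count()         // jobs from this thread carry the tag
  } finally {
    sc.removeJobTag("nightly-backfill")   // stop tagging subsequent jobs
  }
  // From another thread: sc.cancelJobsWithTag("nightly-backfill")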
- removeKey(K) - Method in interface org.apache.spark.sql.streaming.MapState
-
Remove the user key from the map state.
- removeKilledTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<string, int32> killed_tasks_summary = 48;
- removeKillTasksSummary(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
map<string, int32> kill_tasks_summary = 20;
- removeListener(L) - Method in interface org.apache.spark.util.ListenerBus
-
Remove a listener; it will no longer receive any events.
- removeListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Deregister a
StreamingQueryListener
. - removeListenerOnError(L) - Method in interface org.apache.spark.util.ListenerBus
-
This can be overridden by subclasses if there is any extra cleanup to do when removing a listener.
- removeListenerOnError(SparkListenerInterface) - Method in class org.apache.spark.scheduler.AsyncEventQueue
- removeLocality(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
map<string, int64> locality = 3;
- removeMapOutput(int, BlockManagerId) - Method in class org.apache.spark.ShuffleStatus
-
Remove the map output which was served by the specified block manager.
- removeMergeResult(int, BlockManagerId) - Method in class org.apache.spark.ShuffleStatus
-
Remove the merge result which was served by the specified block manager.
- removeMergeResultsByFilter(Function1<BlockManagerId, Object>) - Method in class org.apache.spark.ShuffleStatus
-
Removes all shuffle merge results that satisfy the filter.
- removeMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- removeMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- removeMetrics(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- removeMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
map<string, int64> metrics = 1;
- removeMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
map<string, string> metrics = 3;
- removeMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
map<string, string> metrics = 8;
- removeMetricsProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- removeMetricValues(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<int64, string> metric_values = 14;
- removeModifiedConfigs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
map<string, string> modified_configs = 6;
- removeNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- removeNodes(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- removeObservedMetrics(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
map<string, string> observed_metrics = 12;
- removeOutgoingEdges(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- removeOutputsByFilter(Function1<BlockManagerId, Object>) - Method in class org.apache.spark.ShuffleStatus
-
Removes all shuffle outputs that satisfy the filter.
- removeOutputsOnExecutor(String) - Method in class org.apache.spark.ShuffleStatus
-
Removes all map outputs associated with the specified executor.
- removeOutputsOnHost(String) - Method in class org.apache.spark.ShuffleStatus
-
Removes all shuffle outputs associated with this host.
- removePartition(String) - Method in class org.apache.spark.status.LiveRDD
- removePartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq
- removePartitions(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- removeProcessLogs(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
map<string, string> process_logs = 7;
- removeProperty(String) - Static method in interface org.apache.spark.sql.connector.catalog.NamespaceChange
-
Create a NamespaceChange for removing a namespace property.
- removeProperty(String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for removing a table property.
- removeProperty(String) - Static method in interface org.apache.spark.sql.connector.catalog.ViewChange
-
Create a ViewChange for removing a view property.
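A minimal sketch (catalog, namespace, and table name assumed) of applying such a change through a TableCatalog:
  import org.apache.spark.sql.connector.catalog.{Identifier, TableChange}

  val dropComment = TableChange.removeProperty("comment")
  catalog.alterTable(Identifier.of(Array("db"), "events"), dropComment)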
- RemoveRdd(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd
- RemoveRdd$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$
- removeReason() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- removeResourceProfiles(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- removeResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
map<string, .org.apache.spark.status.protobuf.ResourceInformation> resources = 28;
- removeSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable
- removeSchemaCommentQuery(String) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- removeSchemaCommentQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- removeSchemaCommentQuery(String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- removeSchemaCommentQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- removeSelfEdges() - Method in class org.apache.spark.graphx.GraphOps
-
Remove self edges.
- removeShuffle(int, boolean) - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Removes shuffle data associated with the given shuffle.
- RemoveShuffle(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
- RemoveShuffle$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$
- removeShuffleMergerLocations() - Method in class org.apache.spark.ShuffleStatus
- RemoveShufflePushMergerLocation(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShufflePushMergerLocation
- RemoveShufflePushMergerLocation$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShufflePushMergerLocation$
- removeShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
- removeShutdownHook(Object) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Remove a previously installed shutdown hook.
- removeSources(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- removeSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Deregister the listener from Spark's listener bus.
- removeSparkProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- removeStateOperators(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- removeStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- removeSystemProperties(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- removeTag(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Remove a tag previously added to be assigned to all the operations started by this thread in this session.
- removeTag(String) - Method in class org.apache.spark.sql.SparkSession
- removeTaskResources(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
map<string, .org.apache.spark.status.protobuf.TaskResourceRequest> task_resources = 3;
- removeTasks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
map<int64, .org.apache.spark.status.protobuf.TaskData> tasks = 45;
- removeTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- removeTime() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- RemoveWorker(String, String, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
- RemoveWorker$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker$
- renameAsExistsPathError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- renameColumn(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for renaming a field.
- renameColumnUnsupportedForOlderMySQLError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- renamePartition(InternalRow, InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Rename an existing partition of the table.
- renamePathAsExistsPathError(Path, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- renameSrcPathNotFoundError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- renameTable(String, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Deprecated.Please override the renameTable method that takes identifiers. Since 3.5.0.
- renameTable(String, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- renameTable(Identifier, Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Renames a table in the catalog.
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.DerbyDialect
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Rename an existing table.
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- renameTable(Identifier, Identifier) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- renameTable(Identifier, Identifier) - Method in class org.apache.spark.sql.jdbc.TeradataDialect
- renameTableSourceAndDestinationMismatchError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- renameTempViewToExistingViewError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- renameView(Identifier, Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Rename a view in the catalog.
- rep(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- rep1(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- rep1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- rep1sep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- repairTableNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- repartition(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset that has exactly
numPartitions
partitions. - repartition(int) - Method in class org.apache.spark.sql.Dataset
- repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition(int) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition(int, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into
numPartitions
. - repartition(int, Column...) - Method in class org.apache.spark.sql.Dataset
- repartition(int, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into
numPartitions
. - repartition(int, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- repartition(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using
spark.sql.shuffle.partitions
as number of partitions. - repartition(Column...) - Method in class org.apache.spark.sql.Dataset
- repartition(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using
spark.sql.shuffle.partitions
as number of partitions. - repartition(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
- repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
- repartitionAndSortWithinPartitions(Partitioner, Comparator<K>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
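A minimal sketch (sc assumed) of why this beats calling repartition followed by a per-partition sort: the sort is pushed into the shuffle machinery.
  import org.apache.spark.HashPartitioner

  val pairs  = sc.parallelize(Seq(("b", 2), ("a", 1), ("b", 1)))
  val sorted = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))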
- repartitionByRange(int, Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into
numPartitions
. - repartitionByRange(int, Column...) - Method in class org.apache.spark.sql.Dataset
- repartitionByRange(int, Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into
numPartitions
. - repartitionByRange(int, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- repartitionByRange(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using
spark.sql.shuffle.partitions
as number of partitions. - repartitionByRange(Column...) - Method in class org.apache.spark.sql.Dataset
- repartitionByRange(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using
spark.sql.shuffle.partitions
as number of partitions. - repartitionByRange(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
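A minimal sketch (df assumed to be a DataFrame) contrasting the Dataset repartition variants indexed above:
  import org.apache.spark.sql.functions.col

  val evenly = df.repartition(8)                          // exactly 8 partitions, round-robin
  val byKey  = df.repartition(8, col("userId"))           // hash-partitioned on userId
  val ranged = df.repartitionByRange(8, col("eventTime")) // range-partitioned, preserves ordering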
- repeat(Column, int) - Static method in class org.apache.spark.sql.functions
-
Repeats a string column n times, and returns it as a new string column.
- repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Repeats a string column n times, and returns it as a new string column.
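For example (df assumed), the int and Column forms are interchangeable:
  import org.apache.spark.sql.functions.{col, lit, repeat}

  df.select(repeat(col("code"), 3))        // literal count
  df.select(repeat(col("code"), lit(3)))   // count supplied as a Column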
- repeatedPivotsUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- RepeatStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for RepeatStatement.
- RepeatStatementExec(SingleStatementExec, CompoundBodyExec, Option<String>, SparkSession) - Constructor for class org.apache.spark.sql.scripting.RepeatStatementExec
- repetitiveWindowDefinitionError(String, SqlBaseParser.WindowClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- replace() - Method in interface org.apache.spark.sql.CreateTableWriter
-
Replace an existing table with the contents of the data frame.
- replace(String[], Map<T, T>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Replaces values matching keys in
replacement
map with the corresponding values. - replace(String[], Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- replace(String, Map<T, T>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
Replaces values matching keys in
replacement
map with the corresponding values. - replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- replace(String, Map<T, T>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Replaces values matching keys in
replacement
map. - replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
- replace(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Replaces all occurrences of
search
with replace
. - replace(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Replaces all occurrences of
search
with replace
. - replace(Seq<String>, Map<T, T>) - Method in class org.apache.spark.sql.api.DataFrameNaFunctions
-
(Scala-specific) Replaces values matching keys in
replacement
map. - replace(Seq<String>, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
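A minimal sketch (df assumed) of recoding sentinel values with DataFrameNaFunctions.replace:
  val cleaned = df.na.replace("age", Map(-1 -> 0))                       // one column
  val widened = df.na.replace(Seq("height", "weight"), Map(-1.0 -> 0.0)) // several columns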
- replaceCollatedStringWithString(DataType) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Recursively replaces any collated string type with non-collated StringType in the given data type.
- replacePartitionMetadata(InternalRow, Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Replace the partition metadata of the existing partition.
- replaceView(ViewInfo, boolean) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Replace a view in the catalog.
- replicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
- ReplicateBlock(BlockId, Seq<BlockManagerId>, int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
- ReplicateBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$
- replicatedVertexView() - Method in class org.apache.spark.graphx.impl.GraphImpl
- replication() - Method in class org.apache.spark.storage.StorageLevel
- repN(int, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- repNM(int, int, Parsers.Parser<T>, Parsers.Parser<Object>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- repNM$default$4() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- report() - Method in interface org.apache.spark.metrics.sink.Sink
- reportDriverMetrics() - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns an array of custom metrics which are collected with values at the driver side only.
- reportError(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Report an error encountered while receiving data.
- reportLatestOffset() - Method in interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
-
Returns the most recent offset available.
- ReportsSinkMetrics - Interface in org.apache.spark.sql.connector.read.streaming
-
A mix-in interface for streaming sinks to signal that they can report metrics.
- ReportsSourceMetrics - Interface in org.apache.spark.sql.connector.read.streaming
-
A mix-in interface for
SparkDataStream
streaming sources to signal that they can report metrics. - representUpdateAsDeleteAndInsert() - Method in interface org.apache.spark.sql.connector.write.SupportsDelta
-
Controls whether to represent updates as deletes and inserts.
- repsep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- requestedPartitionsMismatchTablePartitionsError(String, Map<String, Option<String>>, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- requestedPartitionsMismatchTablePartitionsError(CatalogTable, Map<String, Option<String>>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- requesterHost() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
- requestExecutors(int) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Request an additional number of executors from the cluster manager.
- RequestExecutors(Map<ResourceProfile, Object>, Map<Object, Object>, Map<Object, Map<String, Object>>, Set<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
- RequestExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$
- RequestMethod - Class in org.apache.spark
- RequestMethod() - Constructor for class org.apache.spark.RequestMethod
- requests() - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Returns all the resource requests for the executor.
- requests() - Method in class org.apache.spark.resource.TaskResourceRequests
-
Returns all the resource requests for the task.
- requestsJMap() - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
(Java-specific) Returns all the resource requests for the executor.
- requestsJMap() - Method in class org.apache.spark.resource.TaskResourceRequests
-
(Java-specific) Returns all the resource requests for the task.
- requestTime() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- requestTotalExecutors(int, int, Map<String, Object>) - Method in class org.apache.spark.SparkContext
-
Update the cluster manager on our scheduling needs.
- require(boolean, String, Function0<Map<String, String>>) - Static method in exception org.apache.spark.SparkException
-
This is like the Scala require precondition, except that it throws SparkIllegalArgumentException.
- require(ExecutorResourceRequests) - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
Add executor resource requests.
- require(TaskResourceRequests) - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
Add task resource requests.
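A minimal sketch (rdd assumed) of stage-level scheduling with the builder methods indexed above:
  import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

  val execReqs = new ExecutorResourceRequests().cores(4).memory("8g")
  val taskReqs = new TaskResourceRequests().cpus(2)
  val profile  = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()
  rdd.withResources(profile)   // stages computing this RDD use the profile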
- requiredDistribution() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns the distribution required by this write.
- requiredMetadataAttributes() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
-
Returns metadata attributes that are required to perform this row-level operation.
- requiredNumPartitions() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns the number of partitions required by this write.
- requiredOrdering() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns the ordering required by this write.
- requiredParameterNotFound(String, String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- requireExactMatchedPartitionSpec(String, Map<String, String>, Seq<String>) - Static method in class org.apache.spark.sql.util.PartitioningUtils
-
Verify that the input partition spec exactly matches the existing defined partition spec. The columns must be the same, but their order may differ.
- RequiresDistributionAndOrdering - Interface in org.apache.spark.sql.connector.write
-
A write that requires a specific distribution and ordering of data.
- requiresSinglePartNamespaceError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- res() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- RESERVED_PROPERTIES - Static variable in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
All reserved properties of the view.
- reservoirSampleAndCount(Iterator<T>, int, long, ClassTag<T>) - Static method in class org.apache.spark.util.random.SamplingUtils
-
Reservoir sampling implementation that also returns the input size.
- reset() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Resets the values of all metrics to zero.
- reset() - Method in class org.apache.spark.sql.scripting.CaseStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.CompoundNestedStatementIteratorExec
- reset() - Method in interface org.apache.spark.sql.scripting.CompoundStatementExec
-
Reset execution of the current node.
- reset() - Method in class org.apache.spark.sql.scripting.IfElseStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.IterateStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.LeaveStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.RepeatStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.SingleStatementExec
- reset() - Method in class org.apache.spark.sql.scripting.WhileStatementExec
- reset() - Method in class org.apache.spark.sql.util.MapperRowCounter
- reset() - Method in class org.apache.spark.sql.util.NumericHistogram
-
Resets a histogram object to its initial state.
- reset() - Method in class org.apache.spark.storage.BufferReleasingInputStream
- reset() - Method in class org.apache.spark.util.AccumulatorV2
-
Resets this accumulator to its zero value.
- reset() - Method in class org.apache.spark.util.CollectionAccumulator
- reset() - Method in class org.apache.spark.util.DoubleAccumulator
- reset() - Method in class org.apache.spark.util.LongAccumulator
- resetStructuredLogging() - Static method in class org.apache.spark.util.Utils
-
Utility function to enable or disable structured logging based on system properties.
- resetTerminated() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Forget about past terminated queries so that
awaitAnyTermination()
can be used again to wait for new terminations. - residualDegreeOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- residualDegreeOfFreedomNull() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
- residuals() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Get the default residuals (deviance residuals) of the fitted model.
- residuals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- residuals(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Get the residuals of the fitted model by type.
- resolveAndDownloadJars(String, String, SparkConf, Configuration) - Static method in class org.apache.spark.util.DependencyUtils
- resolveCannotHandleNestedSchema(LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- resolveGlobPaths(String, Configuration) - Static method in class org.apache.spark.util.DependencyUtils
- resolveMavenCoordinates(String, IvySettings, Option<IvySettings>, boolean, Seq<String>, boolean, PrintStream) - Static method in class org.apache.spark.util.MavenUtils
-
Resolves any dependencies that were supplied through Maven coordinates.
- resolveMavenDependencies(boolean, String, String, String, String, Option<String>) - Static method in class org.apache.spark.util.DependencyUtils
- resolveMavenDependencies(URI) - Static method in class org.apache.spark.util.DependencyUtils
-
Download the dependency jars specified by an Ivy URI.
- resolveURI(String) - Method in interface org.apache.spark.util.SparkFileUtils
-
Return a well-formed URI for the file described by a user input string.
- resolveURI(String) - Static method in class org.apache.spark.util.Utils
- resolveURIs(String) - Static method in class org.apache.spark.util.Utils
-
Resolve a comma-separated list of paths.
- resource(String, double) - Method in class org.apache.spark.resource.TaskResourceRequests
-
Amount of a particular custom resource (GPU, FPGA, etc.) to use.
- resource(String, long, String, String) - Method in class org.apache.spark.resource.ExecutorResourceRequests
-
Amount of a particular custom resource (GPU, FPGA, etc.) to use.
- RESOURCE_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- RESOURCE_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- RESOURCE_PREFIX() - Static method in class org.apache.spark.resource.ResourceUtils
- RESOURCE_PROFILE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- RESOURCE_PROFILE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- RESOURCE_PROFILES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- resourceAddresses() - Method in interface org.apache.spark.resource.ResourceAllocator
- ResourceAllocator - Interface in org.apache.spark.resource
-
Trait used to help executor/worker allocate resources.
- ResourceAmountUtils - Class in org.apache.spark.resource
- ResourceAmountUtils() - Constructor for class org.apache.spark.resource.ResourceAmountUtils
- ResourceDiscoveryPlugin - Interface in org.apache.spark.api.resource
-
:: DeveloperApi :: A plugin that can be dynamically loaded into a Spark application to control how custom resources are discovered.
- ResourceDiscoveryScriptPlugin - Class in org.apache.spark.resource
-
The default plugin that is loaded into a Spark application to control how custom resources are discovered.
- ResourceDiscoveryScriptPlugin() - Constructor for class org.apache.spark.resource.ResourceDiscoveryScriptPlugin
- ResourceID - Class in org.apache.spark.resource
-
Resource identifier.
- ResourceID(String, String) - Constructor for class org.apache.spark.resource.ResourceID
- ResourceInformation - Class in org.apache.spark.resource
-
Class to hold information about a type of Resource.
- ResourceInformation(String, String[]) - Constructor for class org.apache.spark.resource.ResourceInformation
- ResourceInformationJson - Class in org.apache.spark.resource
-
A case class to simplify JSON serialization of
ResourceInformation
. - ResourceInformationJson(String, Seq<String>) - Constructor for class org.apache.spark.resource.ResourceInformationJson
- resourceName() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- resourceName() - Method in interface org.apache.spark.resource.ResourceAllocator
- resourceName() - Method in class org.apache.spark.resource.ResourceID
- resourceName() - Method in class org.apache.spark.resource.TaskResourceRequest
- resourceProfile() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- resourceProfile() - Method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
- ResourceProfile - Class in org.apache.spark.resource
-
Resource profile to associate with an RDD.
- ResourceProfile(Map<String, ExecutorResourceRequest>, Map<String, TaskResourceRequest>) - Constructor for class org.apache.spark.resource.ResourceProfile
- ResourceProfile.DefaultProfileExecutorResources$ - Class in org.apache.spark.resource
- ResourceProfile.ExecutorResourcesOrDefaults$ - Class in org.apache.spark.resource
- resourceProfileAddedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- resourceProfileAddedToJson(SparkListenerResourceProfileAdded, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- ResourceProfileBuilder - Class in org.apache.spark.resource
-
Resource profile builder to build a
ResourceProfile
to associate with an RDD. - ResourceProfileBuilder() - Constructor for class org.apache.spark.resource.ResourceProfileBuilder
- resourceProfileId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- resourceProfileId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
- resourceProfileId() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- resourceProfileId() - Method in class org.apache.spark.scheduler.StageInfo
- resourceProfileId() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- resourceProfileId() - Method in class org.apache.spark.status.api.v1.StageData
- resourceProfileId() - Method in class org.apache.spark.status.LiveResourceProfile
- ResourceProfileInfo - Class in org.apache.spark.status.api.v1
- resourceProfiles() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- resourceProfileToTotalExecs() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
- ResourceRequest - Class in org.apache.spark.resource
-
Class that represents a resource request.
- ResourceRequest(ResourceID, long, Optional<String>, Optional<String>) - Constructor for class org.apache.spark.resource.ResourceRequest
- resources() - Method in class org.apache.spark.api.java.JavaSparkContext
- resources() - Method in interface org.apache.spark.api.plugin.PluginContext
-
The custom resources (GPUs, FPGAs, etc.) allocated to the driver or executor.
- resources() - Method in class org.apache.spark.BarrierTaskContext
- resources() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
- resources() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- resources() - Method in class org.apache.spark.SparkContext
- resources() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- resources() - Method in class org.apache.spark.TaskContext
-
Resources allocated to the task.
- RESOURCES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- resourcesAmounts() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Get the amounts of resources that have been multiplied by ONE_ENTIRE_RESOURCE.
- resourcesInfo() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- resourcesJMap() - Method in class org.apache.spark.BarrierTaskContext
- resourcesJMap() - Method in class org.apache.spark.TaskContext
-
(Java-specific) Resources allocated to the task.
- resourcesMapFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- resourcesMeetRequirements(Map<String, Object>, Seq<ResourceRequirement>) - Static method in class org.apache.spark.resource.ResourceUtils
- resourceTypeNotSupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ResourceUtils - Class in org.apache.spark.resource
- ResourceUtils() - Constructor for class org.apache.spark.resource.ResourceUtils
- responder() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
- responseFromBackup(String) - Static method in class org.apache.spark.util.Utils
-
Return true if the response message is sent from a backup Master on standby.
- restart(String) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
- restart(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
- restart(String, Throwable, int) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
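A hedged sketch of a custom receiver (hypothetical SocketLineReceiver) using restart(); the call is asynchronous: it schedules onStop() and a later onStart() in a background thread and returns immediately.
  import org.apache.spark.storage.StorageLevel
  import org.apache.spark.streaming.receiver.Receiver

  class SocketLineReceiver(host: String, port: Int)
      extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

    def onStart(): Unit = new Thread("socket-receiver") {
      override def run(): Unit =
        try {
          // open the socket and call store(line) for each received line ...
        } catch {
          case e: java.io.IOException =>
            restart("Connection lost, restarting receiver", e)
        }
    }.start()

    def onStop(): Unit = ()  // close the socket here
  }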
- restoreOriginalOutputNames(Seq<NamedExpression>, Seq<String>) - Static method in class org.apache.spark.sql.util.SchemaUtils
- ResubmitFailedStages - Class in org.apache.spark.scheduler
- ResubmitFailedStages() - Constructor for class org.apache.spark.scheduler.ResubmitFailedStages
- Resubmitted - Class in org.apache.spark
-
:: DeveloperApi :: A
org.apache.spark.scheduler.ShuffleMapTask
that completed successfully earlier, but we lost the executor before the stage completed. - Resubmitted() - Constructor for class org.apache.spark.Resubmitted
- result() - Method in class org.apache.spark.types.variant.VariantBuilder
- result(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction
- result(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
-
Awaits and returns the result (of type T) of this action.
- result(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction
- RESULT_FETCH_START_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- RESULT_FETCH_START_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.InternalAccumulator
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.ToolTips
- RESULT_SERIALIZATION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- RESULT_SERIALIZATION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- RESULT_SERIALIZATION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- RESULT_SERIALIZATION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- RESULT_SERIALIZATION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- RESULT_SIZE() - Static method in class org.apache.spark.InternalAccumulator
- RESULT_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames
- RESULT_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- RESULT_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- RESULT_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- RESULT_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- RESULT_SIZE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- resultFetchStart() - Method in class org.apache.spark.status.api.v1.TaskData
- resultSerializationTime() - Method in class org.apache.spark.status.api.v1.StageData
- resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- resultSetToObjectArray(ResultSet) - Static method in class org.apache.spark.rdd.JdbcRDD
- resultSize() - Method in class org.apache.spark.status.api.v1.StageData
- resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- resultType() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
-
Returns the
data type
of values produced by this function. - RetrieveDelegationTokens$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveDelegationTokens$
- RetrieveLastAllocatedExecutorId$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$
- RetrieveSparkAppConfig(int) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
- RetrieveSparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$
- ReturnStatementFinder - Class in org.apache.spark.util
- ReturnStatementFinder(Option<String>) - Constructor for class org.apache.spark.util.ReturnStatementFinder
- reverse() - Method in class org.apache.spark.graphx.EdgeDirection
-
Reverse the direction of an edge.
- reverse() - Method in class org.apache.spark.graphx.EdgeRDD
-
Reverse all the edges in this RDD.
- reverse() - Method in class org.apache.spark.graphx.Graph
-
Reverses all edges in the graph.
- reverse() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- reverse() - Method in class org.apache.spark.graphx.impl.GraphImpl
- reverse() - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.LongExactNumeric
- reverse() - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- reverse(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a reversed string or an array with reverse order of elements.
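For example (df assumed), the same function covers both strings and arrays:
  import org.apache.spark.sql.functions.{col, reverse}

  df.select(reverse(col("word")))    // "spark" -> "kraps"
  df.select(reverse(col("scores")))  // [1, 2, 3] -> [3, 2, 1]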
- reversed() - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.LongExactNumeric
- reversed() - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- reverseRoutingTables() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- reverseRoutingTables() - Method in class org.apache.spark.graphx.VertexRDD
-
Returns a new
VertexRDD
reflecting a reversal of all edge directions in the correspondingEdgeRDD
. - reviveOffers() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Update the current offers and schedule tasks.
- ReviveOffers - Class in org.apache.spark.scheduler.local
- ReviveOffers() - Constructor for class org.apache.spark.scheduler.local.ReviveOffers
- ReviveOffers$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$
- RewritableTransform - Interface in org.apache.spark.sql.connector.expressions
-
Allows Spark to rewrite the given references of the transform during analysis.
- RFormula - Class in org.apache.spark.ml.feature
-
Implements the transforms required for fitting a dataset against an R model formula.
- RFormula() - Constructor for class org.apache.spark.ml.feature.RFormula
- RFormula(String) - Constructor for class org.apache.spark.ml.feature.RFormula
- RFormulaBase - Interface in org.apache.spark.ml.feature
-
Base trait for
RFormula
andRFormulaModel
. - RFormulaModel - Class in org.apache.spark.ml.feature
-
Model fitted by
RFormula
. - RFormulaParser - Class in org.apache.spark.ml.feature
-
Limited implementation of R formula parsing.
- RFormulaParser() - Constructor for class org.apache.spark.ml.feature.RFormulaParser
- RidgeRegressionModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using RidgeRegression.
- RidgeRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.RidgeRegressionModel
- RidgeRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train a regression model with L2-regularization using Stochastic Gradient Descent.
- right() - Method in class org.apache.spark.sql.connector.expressions.filter.And
- right() - Method in class org.apache.spark.sql.connector.expressions.filter.Or
- right() - Method in class org.apache.spark.sql.sources.And
- right() - Method in class org.apache.spark.sql.sources.Or
- right(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the rightmost len (len can be string type) characters from the string str; if len is less than or equal to 0 the result is an empty string. - rightCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
Get sorted categories which split to the right
- rightChild() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- rightChild() - Method in class org.apache.spark.ml.tree.InternalNode
- rightChildIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the right child of this node.
- rightImpurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- rightNode() - Method in class org.apache.spark.mllib.tree.model.Node
- rightNodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- rightOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
- rightOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
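For illustration alongside the entries above: a minimal Scala sketch of rightOuterJoin on pair RDDs, assuming an existing SparkContext (the sample data and the demo wrapper are hypothetical).

```scala
import org.apache.spark.SparkContext

// Every key of `other` is kept; keys missing on the left yield None.
def demo(sc: SparkContext): Unit = {
  val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
  val other = sc.parallelize(Seq(("b", 20), ("c", 30)))
  // Result type: RDD[(String, (Option[Int], Int))]
  left.rightOuterJoin(other).collect().foreach(println)
  // expected elements: ("b",(Some(2),20)), ("c",(None,30))
}
```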
- rightPredict() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- rint(String) - Static method in class org.apache.spark.sql.functions
-
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
- rint(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
- rlike(String) - Method in class org.apache.spark.sql.Column
-
SQL RLIKE expression (LIKE with Regex).
- rlike(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if str matches regexp, or false otherwise.
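As a hedged illustration of the Column.rlike entry above, assuming an existing SparkSession (the DataFrame contents are made up):

```scala
import org.apache.spark.sql.SparkSession

def demo(spark: SparkSession): Unit = {
  import spark.implicits._
  val df = Seq("spark-3.5", "flink-1.17", "spark-4.0").toDF("name")
  // Keep rows whose name matches the Java regex pattern.
  df.filter($"name".rlike("spark-\\d+\\.\\d+")).show()
}
```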
- RMATa() - Static method in class org.apache.spark.graphx.util.GraphGenerators
- RMATb() - Static method in class org.apache.spark.graphx.util.GraphGenerators
- RMATc() - Static method in class org.apache.spark.graphx.util.GraphGenerators
- RMATd() - Static method in class org.apache.spark.graphx.util.GraphGenerators
- rmatGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
A random graph generator using the R-MAT model, proposed in "R-MAT: A Recursive Model for Graph Mining" by Chakrabarti et al.
- rnd() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- RobustScaler - Class in org.apache.spark.ml.feature
-
Scale features using statistics that are robust to outliers.
- RobustScaler() - Constructor for class org.apache.spark.ml.feature.RobustScaler
- RobustScaler(String) - Constructor for class org.apache.spark.ml.feature.RobustScaler
- RobustScalerModel - Class in org.apache.spark.ml.feature
-
Model fitted by RobustScaler.
- RobustScalerParams - Interface in org.apache.spark.ml.feature
-
Params for RobustScaler and RobustScalerModel.
- roc() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Returns the receiver operating characteristic (ROC) curve, which is a DataFrame having two fields (FPR, TPR) with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
- roc() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
- roc() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- roc() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- roc() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- roc() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the receiver operating characteristic (ROC) curve, which is an RDD of (false positive rate, true positive rate) with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
- rolledOver() - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Notify that rollover has occurred
- RollingPolicy - Interface in org.apache.spark.util.logging
-
Defines the policy based on which RollingFileAppender will generate rolling files.
- rollup(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
- rollup(String, String...) - Method in class org.apache.spark.sql.Dataset
- rollup(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
- rollup(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- rollup(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
- rollup(Column...) - Method in class org.apache.spark.sql.Dataset
- rollup(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns, so we can run aggregation on them.
- rollup(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
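A short sketch of the rollup variants listed above, assuming an existing SparkSession (column names and data are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

def demo(spark: SparkSession): Unit = {
  import spark.implicits._
  val sales = Seq(("EU", "DE", 10), ("EU", "FR", 20), ("US", "CA", 30))
    .toDF("region", "country", "amount")
  // Subtotals per (region, country), per region, and a grand total;
  // nulls in the output mark the rolled-up levels.
  sales.rollup($"region", $"country").agg(sum($"amount").as("total")).show()
}
```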
- RollupType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.RollupType$
- ROOT_CLUSTER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- ROOT_EXECUTION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- rootAllocator() - Static method in class org.apache.spark.sql.util.ArrowUtils
- rootConverterReturnNullError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- rootMeanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the root mean squared error, which is defined as the square root of the mean squared error.
- rootMeanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the root mean squared error, which is defined as the square root of the mean squared error.
- rootNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- rootNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- rootNode() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Root of the decision tree
- rootPool() - Method in interface org.apache.spark.scheduler.SchedulableBuilder
- rootPool() - Method in interface org.apache.spark.scheduler.TaskScheduler
- round(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the column e rounded to 0 decimal places with HALF_UP round mode.
- round(Column, int) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_UP round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
- round(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_UP round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
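A minimal sketch of the two-argument round above, assuming an existing SparkSession (sample values are made up):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, round}

def demo(spark: SparkSession): Unit = {
  import spark.implicits._
  val df = Seq(2.345, 1234.5).toDF("x")
  // scale >= 0 rounds decimal places (2.345 -> 2.35);
  // a negative scale rounds the integral part (1234.5 -> 1200.0).
  df.select(round(col("x"), 2), round(col("x"), -2)).show()
}
```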
- ROUND_CEILING() - Static method in class org.apache.spark.sql.types.Decimal
- ROUND_FLOOR() - Static method in class org.apache.spark.sql.types.Decimal
- ROUND_HALF_EVEN() - Static method in class org.apache.spark.sql.types.Decimal
- ROUND_HALF_UP() - Static method in class org.apache.spark.sql.types.Decimal
- row(StructType) - Static method in class org.apache.spark.sql.Encoders
-
Creates a Row encoder for the given schema.
- row(T) - Method in interface org.apache.spark.ui.PagedTable
- Row - Interface in org.apache.spark.sql
-
Represents one row of output from a relational operator.
- ROW() - Static method in class org.apache.spark.api.r.SerializationFormats
- row_number() - Static method in class org.apache.spark.sql.functions
-
Window function: returns a sequential number starting at 1 within a window partition.
- RowFactory - Class in org.apache.spark.sql
-
A factory class used to construct Row objects.
- RowFactory() - Constructor for class org.apache.spark.sql.RowFactory
- rowFormatNotUsedWithStoredAsError(SqlBaseParser.CreateTableLikeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- rowId - Variable in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- rowId() - Method in interface org.apache.spark.sql.connector.write.SupportsDelta
-
Returns the row ID column references that should be used for row equality.
- rowIdSchema() - Method in interface org.apache.spark.sql.connector.write.LogicalWriteInfo
-
the schema of the ID columns from Spark to data source.
- rowIndices() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- rowIndices() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- rowIter() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns an iterator of row vectors.
- rowIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Returns an iterator of row vectors.
- rowIterator() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Returns an iterator over the rows in this batch.
- rowLargerThan256MUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- RowLevelOperation - Interface in org.apache.spark.sql.connector.write
-
A logical representation of a data source DELETE, UPDATE, or MERGE operation that requires rewriting data.
- RowLevelOperation.Command - Enum Class in org.apache.spark.sql.connector.write
-
A row-level SQL command.
- RowLevelOperationBuilder - Interface in org.apache.spark.sql.connector.write
-
An interface for building a RowLevelOperation.
- RowLevelOperationInfo - Interface in org.apache.spark.sql.connector.write
-
An interface with logical information for a row-level operation such as DELETE, UPDATE, MERGE.
- RowMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a row-oriented distributed Matrix with no meaningful row indices.
- RowMatrix(RDD<Vector>) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Alternative constructor leaving matrix dimensions to be determined automatically.
- RowMatrix(RDD<Vector>, long, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix
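A compact sketch of constructing a RowMatrix, assuming an existing SparkContext (the vectors are made up):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

def demo(sc: SparkContext): Unit = {
  val rows = sc.parallelize(Seq(Vectors.dense(1.0, 2.0), Vectors.dense(3.0, 4.0)))
  val mat = new RowMatrix(rows) // dimensions are determined automatically
  println(s"${mat.numRows()} x ${mat.numCols()}")
}
```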
- rows() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
- rows() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
- rows() - Method in interface org.apache.spark.sql.connector.read.LocalScan
- rowsBetween(long, long) - Static method in class org.apache.spark.sql.expressions.Window
- rowsBetween(long, long) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the frame boundaries, from start (inclusive) to end (inclusive).
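A brief sketch of rowsBetween used for a running sum, assuming an existing SparkSession (data and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum

def demo(spark: SparkSession): Unit = {
  import spark.implicits._
  val df = Seq(("a", 1), ("a", 2), ("a", 3)).toDF("k", "v")
  // Frame: from the start of the partition up to the current row, both inclusive.
  val w = Window.partitionBy("k").orderBy("v")
    .rowsBetween(Window.unboundedPreceding, Window.currentRow)
  df.withColumn("running_sum", sum("v").over(w)).show()
}
```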
- rowsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
- RP_INFO_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- rPackages() - Static method in class org.apache.spark.api.r.RUtils
- rpad(Column, int, byte[]) - Static method in class org.apache.spark.sql.functions
-
Right-pad the binary column with pad to a byte length of len.
- rpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
-
Right-pad the string column with pad to a length of len.
- RpcUtils - Class in org.apache.spark.util
- RpcUtils() - Constructor for class org.apache.spark.util.RpcUtils
- RRDD<T> - Class in org.apache.spark.api.r
-
An RDD that stores serialized R objects as Array[Byte].
- RRDD(RDD<T>, byte[], String, String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.RRDD
- RRunnerModes - Class in org.apache.spark.api.r
- RRunnerModes() - Constructor for class org.apache.spark.api.r.RRunnerModes
- rtrim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from the right end of the specified string value.
- rtrim(Column, String) - Static method in class org.apache.spark.sql.functions
-
Trim the specified character string from the right end of the specified string column.
- ruleIdNotFoundForRuleError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- run() - Method in class org.apache.spark.util.SparkShutdownHook
- run(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LDA
-
Java-friendly version of run().
- run(JavaRDD<Basket>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Java-friendly version of run.
- run(JavaRDD<FPGrowth.FreqItemset<Item>>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Java-friendly version of run.
- run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Java-friendly version of run().
- run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Java-friendly version of run().
- run(JavaRDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Java-friendly version of ALS.run.
- run(JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.run.
- run(JavaRDD<Tuple3<Double, Double, Double>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Run the pool adjacent violators algorithm to obtain an isotonic regression model.
- run(JavaRDD<Tuple3<Long, Long, Double>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
A Java-friendly version of PowerIterationClustering.run.
- run(JavaRDD<Sequence>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
A Java-friendly version of run() that reads sequences from a JavaRDD and returns frequent sequences in a PrefixSpanModel.
- run(Graph<Object, Object>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Run the PIC algorithm on Graph.
- run(Graph<VD, ED>, int, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
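For illustration of the PageRank runners listed here: a minimal sketch using the staticPageRank shorthand on a generated graph, assuming an existing SparkContext:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx.util.GraphGenerators

def demo(sc: SparkContext): Unit = {
  val graph = GraphGenerators.logNormalGraph(sc, numVertices = 100)
  // 10 iterations, reset probability 0.15; vertex attributes become PageRank scores.
  val ranks = graph.staticPageRank(10, 0.15).vertices
  ranks.take(5).foreach(println)
}
```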
- run(Graph<VD, ED>, int, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.LabelPropagation
-
Run static Label Propagation for detecting communities in networks.
- run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
-
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
- run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.StronglyConnectedComponents
-
Compute the strongly connected component (SCC) of each vertex and return a graph with the vertex value containing the lowest vertex id in the SCC containing that vertex.
- run(Graph<VD, ED>, Seq<Object>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ShortestPaths
-
Computes shortest paths to the given set of landmark vertices.
- run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
-
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
- run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount
- run(RDD<Object[]>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
- run(RDD<Object>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Computes an FP-Growth model that contains frequent itemsets.
- run(RDD<Edge<Object>>, SVDPlusPlus.Conf) - Static method in class org.apache.spark.graphx.lib.SVDPlusPlus
-
Implement SVD++ based on "Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model", available here.
- run(RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to train a gradient boosting model
- run(RDD<org.apache.spark.ml.feature.Instance>, Strategy, int, String, long, Option<org.apache.spark.ml.util.Instrumentation>, boolean, Option<String>) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
-
Train a random forest.
- run(RDD<FPGrowth.FreqItemset<Item>>, Map<Item, Object>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Computes the association rules with confidence above minConfidence.
- run(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Computes the association rules with confidence above minConfidence.
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Runs the bisecting k-means algorithm.
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Perform expectation maximization
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Train a K-means model on the given set of points; data should be cached for high performance, because this is an iterative algorithm.
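A compact sketch of training with KMeans.run, assuming an existing SparkContext (the points are made up); the input is cached because run() iterates over it:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

def demo(sc: SparkContext): Unit = {
  val data = sc.parallelize(Seq(
    Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
    Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1))).cache()
  val model = new KMeans().setK(2).setMaxIterations(10).run(data)
  model.clusterCenters.foreach(println)
}
```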
- run(RDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Run ALS with the configured parameters on an input RDD of Rating objects.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Run Logistic Regression with the configured parameters on an input RDD of LabeledPoint entries.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model over an RDD
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to train a gradient boosting model
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model over an RDD
- run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Run Logistic Regression with the configured parameters on an input RDD of LabeledPoint entries starting from the initial weights provided.
- run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries starting from the initial weights provided.
- run(RDD<LabeledPoint>, Strategy, int, String, long) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
-
Train a random forest.
- run(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LDA
-
Learn an LDA model using the given dataset.
- run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Run the PIC algorithm.
- run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Run the IsotonicRegression algorithm to obtain an isotonic regression model.
- RUN_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- RUN_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- runApproximateJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, ApproximateEvaluator<U, R>, long) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi :: Run a job that can return approximate results.
- runBagged(RDD<BaggedPoint<TreePoint>>, DecisionTreeMetadata, Broadcast<Split[][]>, Strategy, int, String, long, Option<org.apache.spark.ml.util.Instrumentation>, boolean, Option<String>) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
-
Train a random forest with metadata and splits.
- runId() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the unique id of this run of the query.
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- runInNewThread(String, boolean, Function0<T>) - Static method in class org.apache.spark.util.ThreadUtils
-
Run a piece of code in a new thread and return the result.
- runJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and return the results as an array.
- runJob(RDD<T>, Function1<Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and pass the results to a handler function.
- runJob(RDD<T>, Function1<Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and return the results in an array.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and pass the results to the given handler function.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and return the results as an array.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and pass the results to a handler function.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and return the results in an array.
- runLBFGS(RDD<Tuple2<Object, Vector>>, Gradient, Updater, int, double, int, double, Vector) - Static method in class org.apache.spark.mllib.optimization.LBFGS
-
Run Limited-memory BFGS (L-BFGS) in parallel.
- runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
-
Alias of runMiniBatchSGD with convergenceTol set to the default value of 0.001.
- runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector, double) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
-
Run stochastic gradient descent (SGD) in parallel using mini batches.
- running() - Method in class org.apache.spark.scheduler.TaskInfo
- RUNNING - Enum constant in enum class org.apache.spark.JobExecutionStatus
- RUNNING - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application is running.
- RUNNING - Enum constant in enum class org.apache.spark.status.api.v1.ApplicationStatus
- RUNNING - Enum constant in enum class org.apache.spark.status.api.v1.TaskStatus
- RUNNING() - Static method in class org.apache.spark.TaskState
- runningJobIds() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- runningTasks() - Method in interface org.apache.spark.scheduler.Schedulable
- runParallelPersonalizedPageRank(Graph<VD, ED>, int, double, long[], ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run Personalized PageRank for a fixed number of iterations, for a set of starting nodes in parallel.
- runPreCanonicalized(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount
- runtime() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- RUNTIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- RuntimeConfig - Class in org.apache.spark.sql
-
Runtime configuration interface for Spark.
- RuntimeConfig() - Constructor for class org.apache.spark.sql.RuntimeConfig
- RuntimeInfo - Class in org.apache.spark.status.api.v1
- RuntimePercentage - Class in org.apache.spark.scheduler
- RuntimePercentage(double, Option<Object>, double) - Constructor for class org.apache.spark.scheduler.RuntimePercentage
- runUntilConvergence(Graph<VD, ED>, double, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runUntilConvergenceWithOptions(Graph<VD, ED>, double, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runWithOptions(Graph<VD, ED>, int, double, Option<Object>, boolean, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runWithOptions(Graph<VD, ED>, int, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runWithOptionsWithPreviousPageRank(Graph<VD, ED>, int, double, Option<Object>, boolean, Graph<Object, Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runWithOptionsWithPreviousPageRank(Graph<VD, ED>, int, double, Option<Object>, Graph<Object, Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- runWithValidation(JavaRDD<LabeledPoint>, JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.runWithValidation.
- runWithValidation(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to validate a gradient boosting model
- runWithValidation(RDD<LabeledPoint>, RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to validate a gradient boosting model
- RUtils - Class in org.apache.spark.api.r
- RUtils() - Constructor for class org.apache.spark.api.r.RUtils
- RWrappers - Class in org.apache.spark.ml.r
-
This is the Scala stub of SparkR read.ml.
- RWrappers() - Constructor for class org.apache.spark.ml.r.RWrappers
- RWrapperUtils - Class in org.apache.spark.ml.r
- RWrapperUtils() - Constructor for class org.apache.spark.ml.r.RWrapperUtils
S
- s() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
- safeDoubleToJValue(double) - Static method in class org.apache.spark.sql.streaming.SafeJsonSerializer
- SafeJsonSerializer - Class in org.apache.spark.sql.streaming
- SafeJsonSerializer() - Constructor for class org.apache.spark.sql.streaming.SafeJsonSerializer
- safeMapToJValue(Map<String, T>, Function1<T, JValue>) - Static method in class org.apache.spark.sql.streaming.SafeJsonSerializer
-
Convert map to JValue while handling empty maps.
- sameSemantics(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns true when the logical query plans inside both Datasets are equal and therefore return the same results.
- sameSemantics(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- sameThread() - Static method in class org.apache.spark.util.ThreadUtils
-
An ExecutionContextExecutor that runs each task in the thread that invokes execute/submit.
- sameThreadExecutorService() - Static method in class org.apache.spark.util.ThreadUtils
- sample() - Method in class org.apache.spark.util.random.BernoulliCellSampler
- sample() - Method in class org.apache.spark.util.random.BernoulliSampler
- sample() - Method in class org.apache.spark.util.random.PoissonSampler
- sample() - Method in interface org.apache.spark.util.random.RandomSampler
-
Whether to sample the next item or not.
- sample(boolean, double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a sampled subset of this RDD.
- sample(boolean, double) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a sampled subset of this RDD with a random seed.
- sample(boolean, double) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by sampling a fraction of rows, using a random seed.
- sample(boolean, double) - Method in class org.apache.spark.sql.Dataset
- sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a sampled subset of this RDD.
- sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a sampled subset of this RDD, with a user-supplied seed.
- sample(boolean, double, long) - Method in class org.apache.spark.rdd.RDD
-
Return a sampled subset of this RDD.
- sample(boolean, double, long) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by sampling a fraction of rows, using a user-supplied seed.
- sample(boolean, double, long) - Method in class org.apache.spark.sql.Dataset
- sample(boolean, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a sampled subset of this RDD.
- sample(boolean, Double, long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a sampled subset of this RDD.
- sample(double) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by sampling a fraction of rows (without replacement), using a random seed.
- sample(double) - Method in class org.apache.spark.sql.Dataset
- sample(double, long) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by sampling a fraction of rows (without replacement), using a user-supplied seed.
- sample(double, long) - Method in class org.apache.spark.sql.Dataset
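A small sketch of the seeded, without-replacement Dataset.sample overload, assuming an existing SparkSession; note the fraction is a per-row probability, not an exact row-count guarantee:

```scala
import org.apache.spark.sql.SparkSession

def demo(spark: SparkSession): Unit = {
  val ds = spark.range(0, 1000)
  val approxTenPercent = ds.sample(0.1, 42L) // without replacement, reproducible via seed
  println(approxTenPercent.count())
}
```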
- sample(Iterator<T>) - Method in class org.apache.spark.util.random.PoissonSampler
- sample(Iterator<T>) - Method in interface org.apache.spark.util.random.RandomSampler
-
Take a random sample.
- sampleBy(String, Map<T, Double>, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleBy(String, Map<T, Double>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- sampleBy(String, Map<T, Object>, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleBy(String, Map<T, Object>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- sampleBy(Column, Map<T, Double>, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
(Java-specific) Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleBy(Column, Map<T, Double>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- sampleBy(Column, Map<T, Object>, long) - Method in class org.apache.spark.sql.api.DataFrameStatFunctions
-
Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleBy(Column, Map<T, Object>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
- sampleByKey(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKey(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKey(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKeyExact(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
- sampleByKeyExact(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
- sampleByKeyExact(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
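A brief sketch of stratified sampling with sampleByKey, assuming an existing SparkContext (keys and fractions are made up):

```scala
import org.apache.spark.SparkContext

def demo(sc: SparkContext): Unit = {
  val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3), ("b", 4)))
  val fractions = Map("a" -> 0.5, "b" -> 1.0) // one sampling fraction per stratum (key)
  pairs.sampleByKey(withReplacement = false, fractions, seed = 7L)
    .collect().foreach(println)
}
```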
- SamplePathFilter - Class in org.apache.spark.ml.image
-
Filter that allows loading a fraction of HDFS files.
- SamplePathFilter() - Constructor for class org.apache.spark.ml.image.SamplePathFilter
- samplePointsPerPartitionHint() - Method in class org.apache.spark.RangePartitioner
- sampleRatio() - Method in class org.apache.spark.ml.image.SamplePathFilter
- sampleStdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).
- sampleStdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).
- sampleStdev() - Method in class org.apache.spark.util.StatCounter
-
Return the sample standard deviation of the values, which corrects for bias in estimating the variance by dividing by N-1 instead of N.
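A tiny sketch of these sample-statistics helpers on an RDD of doubles, assuming an existing SparkContext:

```scala
import org.apache.spark.SparkContext

def demo(sc: SparkContext): Unit = {
  val xs = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))
  // Provided by the implicit DoubleRDDFunctions; both divide by N-1 to correct bias.
  println(xs.sampleStdev())
  println(xs.sampleVariance())
}
```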
- sampleVariance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).
- sampleVariance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).
- sampleVariance() - Method in class org.apache.spark.util.StatCounter
-
Return the sample variance, which corrects for bias in estimating the variance by dividing by N-1 instead of N.
- SamplingUtils - Class in org.apache.spark.util.random
- SamplingUtils() - Constructor for class org.apache.spark.util.random.SamplingUtils
- sanitizeDirName(String) - Static method in class org.apache.spark.util.Utils
- save() - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame as the specified table.
- save(String) - Method in class org.apache.spark.ml.feature.HashingTF
- save(String) - Method in interface org.apache.spark.ml.util.MLWritable
-
Saves this ML instance to the input path, a shortcut of write.save(path).
- save(String) - Method in class org.apache.spark.ml.util.MLWriter
-
Saves the ML instances to the input path.
- save(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame at the specified path.
- save(FPGrowthModel<?>, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
- save(PrefixSpanModel<?>, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
- save(MatrixFactorizationModel, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
Saves a MatrixFactorizationModel, where user features are saved under data/users and product features are saved under data/products.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.SVMModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.Word2VecModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
Save this model to the given path.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel
-
Save this model to the given path.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Save this model to the given path.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LassoModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
- save(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Saveable
-
Save this model to the given path.
- save(SparkContext, String, String, int, int, Vector, double, Option<Object>) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Helper method for saving GLM classification model metadata and data.
- save(SparkContext, String, String, Vector, double) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Helper method for saving GLM regression model metadata and data.
- save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
- save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
- save(SparkContext, String, DecisionTreeModel) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
- save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
- save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
- save(SparkContext, KMeansModel, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
- save(SparkContext, KMeansModel, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
- save(SparkContext, PowerIterationClusteringModel, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
- save(SparkContext, ChiSqSelectorModel, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
- Saveable - Interface in org.apache.spark.mllib.util
-
Trait for models and transformers which may be saved as files.
- saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system.
- saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf, Option<Class<? extends CompressionCodec>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsHadoopFile(String, Class<? extends CompressionCodec>, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
- saveAsHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this DStream as a Hadoop file.
- saveAsLibSVMFile(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Save labeled data in LIBSVM format.
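A short sketch of MLUtils.saveAsLibSVMFile, assuming an existing SparkContext; the output directory is hypothetical:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils

def demo(sc: SparkContext): Unit = {
  val points = sc.parallelize(Seq(
    LabeledPoint(1.0, Vectors.dense(0.5, 1.5)),
    LabeledPoint(0.0, Vectors.sparse(2, Seq((1, 2.0))))))
  MLUtils.saveAsLibSVMFile(points, "/tmp/libsvm-out") // one "label index:value ..." line per point
}
```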
- saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported storage system, using a Configuration object for that storage system.
- saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported storage system with new Hadoop API, using a Hadoop Configuration object for that storage system.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsNewAPIHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
- saveAsNewAPIHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this DStream as a Hadoop file.
- saveAsObjectFile(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a SequenceFile of serialized objects.
- saveAsObjectFile(String) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a SequenceFile of serialized objects.
- saveAsObjectFiles(String, String) - Method in class org.apache.spark.streaming.dstream.DStream
-
Save each RDD in this DStream as a Sequence file of serialized objects.
- saveAsSequenceFile(String, Option<Class<? extends CompressionCodec>>) - Method in class org.apache.spark.rdd.SequenceFileRDDFunctions
-
Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types.
- saveAsTable(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame as the specified table.
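A hedged sketch of DataFrameWriter.save/saveAsTable, assuming an existing SparkSession; the path and table name are hypothetical:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

def demo(spark: SparkSession): Unit = {
  val df = spark.range(10).toDF("id")
  df.write.mode(SaveMode.Overwrite).parquet("/tmp/ids")  // format-specific shortcut around save
  df.write.mode(SaveMode.Append).saveAsTable("demo_ids") // persists through the catalog
}
```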
- saveAsTextFile(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a text file, using string representations of elements.
- saveAsTextFile(String) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a text file, using string representations of elements.
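A minimal sketch of saveAsTextFile and its compressed variant, assuming an existing SparkContext; the output paths are hypothetical:

```scala
import org.apache.hadoop.io.compress.GzipCodec
import org.apache.spark.SparkContext

def demo(sc: SparkContext): Unit = {
  val rdd = sc.parallelize(Seq("one", "two", "three"))
  rdd.saveAsTextFile("/tmp/plain-output")                    // one part file per partition
  rdd.saveAsTextFile("/tmp/gzip-output", classOf[GzipCodec]) // compressed with the supplied codec
}
```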
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a compressed text file, using string representations of elements.
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a compressed text file, using string representations of elements.
- saveAsTextFiles(String, String) - Method in class org.apache.spark.streaming.dstream.DStream
-
Save each RDD in this DStream as a text file, using string representations of elements.
- saveDataIntoViewNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- savedTasks() - Method in class org.apache.spark.status.LiveStage
- saveImpl(M, String, SparkSession, JObject) - Static method in class org.apache.spark.ml.tree.EnsembleModelReadWrite
-
Helper method for saving a tree ensemble to disk.
- saveImpl(Params, PipelineStage[], SparkContext, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Deprecated. Use saveImpl with SparkSession. Since 4.0.0.
- saveImpl(Params, PipelineStage[], SparkSession, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
- SaveInstanceEnd - Class in org.apache.spark.ml
-
Event fired after MLWriter.save.
- SaveInstanceEnd(String) - Constructor for class org.apache.spark.ml.SaveInstanceEnd
- SaveInstanceStart - Class in org.apache.spark.ml
-
Event fired before MLWriter.save.
- SaveInstanceStart(String) - Constructor for class org.apache.spark.ml.SaveInstanceStart
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- SaveLoadV2_0$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
- SaveLoadV2_0$() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0$
- SaveLoadV2_0$() - Constructor for class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0$
- SaveLoadV3_0$() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0$
- SaveMode - Enum Class in org.apache.spark.sql
-
SaveMode is used to specify the expected behavior of saving a DataFrame to a data source.
- saveModeUnsupportedError(Object, boolean) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- sc() - Method in class org.apache.spark.api.java.JavaSparkContext
- sc() - Method in interface org.apache.spark.ml.util.BaseReadWrite
-
Returns the underlying `SparkContext`.
- sc() - Method in class org.apache.spark.sql.SQLImplicits.StringToColumn
- scal(double, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
x = a * x
- scal(double, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
x = a * x
- SCALA_VERSION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- scalaBoolean() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive boolean type.
- scalaByte() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive byte type.
- scalaDouble() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive double type.
- scalaFloat() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive float type.
- scalaInt() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive int type.
- scalaIntToJavaLong(DStream<Object>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
- scalaLong() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive long type.
- ScalarFunction<R> - Interface in org.apache.spark.sql.connector.catalog.functions
-
Interface for a function that produces a result value for each input row.
- scalarSubqueryReturnsMultipleRows() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- scalaShort() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive short type.
- scalaToJavaLong(JavaPairDStream<K, Object>, ClassTag<K>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
- scalaVersion() - Method in class org.apache.spark.status.api.v1.RuntimeInfo
- scale() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- scale() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- scale() - Method in class org.apache.spark.mllib.random.GammaGenerator
- scale() - Method in class org.apache.spark.sql.types.Decimal
- scale() - Method in class org.apache.spark.sql.types.DecimalType
- scalingVec() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
-
the vector to multiply with input vectors
- scalingVec() - Method in class org.apache.spark.mllib.feature.ElementwiseProduct
- Scan - Interface in org.apache.spark.sql.connector.read
-
A logical representation of a data source scan.
- Scan.ColumnarSupportMode - Enum Class in org.apache.spark.sql.connector.read
-
This enum defines how the columnar support for the partitions of the data source should be determined.
- ScanBuilder - Interface in org.apache.spark.sql.connector.read
-
An interface for building the Scan.
- Schedulable - Interface in org.apache.spark.scheduler
-
An interface for schedulable entities.
- SchedulableBuilder - Interface in org.apache.spark.scheduler
-
An interface to build a Schedulable tree: buildPools builds the tree nodes (pools), and addTaskSetManager builds the leaf nodes (TaskSetManagers).
- schedulableQueue() - Method in interface org.apache.spark.scheduler.Schedulable
- SCHEDULED() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- SCHEDULER_DELAY() - Static method in class org.apache.spark.status.TaskIndexNames
- SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.ToolTips
- SCHEDULER_DELAY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SCHEDULER_DELAY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- SCHEDULER_DELAY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- SchedulerBackend - Interface in org.apache.spark.scheduler
-
A backend interface for scheduling systems that allows plugging in different ones under TaskSchedulerImpl.
- SchedulerBackendUtils - Class in org.apache.spark.scheduler.cluster
- SchedulerBackendUtils() - Constructor for class org.apache.spark.scheduler.cluster.SchedulerBackendUtils
- schedulerDelay() - Method in class org.apache.spark.status.api.v1.TaskData
- schedulerDelay() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- schedulerDelay(long, long, long, long, long, long) - Static method in class org.apache.spark.status.AppStatusUtils
- schedulerDelay(TaskData) - Static method in class org.apache.spark.status.AppStatusUtils
- SchedulerPool - Class in org.apache.spark.status
- SchedulerPool(String) - Constructor for class org.apache.spark.status.SchedulerPool
- SCHEDULING_POOL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SchedulingAlgorithm - Interface in org.apache.spark.scheduler
-
An interface for sort algorithms: FIFO is the FIFO algorithm between TaskSetManagers; FS is the fair-sharing algorithm between Pools, with FIFO or FS within Pools.
- schedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- schedulingDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for the first job of this batch to start processing from the time this batch was submitted to the streaming scheduler.
- schedulingMode() - Method in interface org.apache.spark.scheduler.Schedulable
- schedulingMode() - Method in interface org.apache.spark.scheduler.TaskScheduler
- SchedulingMode - Class in org.apache.spark.scheduler
-
"FAIR" and "FIFO" determines which policy is used to order tasks amongst a Schedulable's sub-queues "NONE" is used when the a Schedulable has no sub-queues.
- SchedulingMode() - Constructor for class org.apache.spark.scheduler.SchedulingMode
- schedulingPool() - Method in class org.apache.spark.status.api.v1.StageData
- schedulingPool() - Method in class org.apache.spark.status.LiveStage
- schema() - Method in class org.apache.spark.sql.api.Dataset
-
Returns the schema of this Dataset.
- schema() - Method in interface org.apache.spark.sql.connector.catalog.Table
-
Deprecated. Please override Table.columns() instead.
- schema() - Method in interface org.apache.spark.sql.connector.catalog.View
-
The schema for the view when the view is created after applying column aliases.
- schema() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- schema() - Method in interface org.apache.spark.sql.connector.write.LogicalWriteInfo
-
The schema of the input data from Spark to the data source.
- schema() - Method in class org.apache.spark.sql.Dataset
- schema() - Method in interface org.apache.spark.sql.Encoder
-
Returns the schema of encoding this type of object as a Row.
- schema() - Method in interface org.apache.spark.sql.Row
-
Schema for the row.
- schema() - Method in class org.apache.spark.sql.sources.BaseRelation
- schema(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Specifies the schema by using the input DDL-formatted string.
- schema(String) - Method in class org.apache.spark.sql.DataFrameReader
- schema(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Specifies the schema by using the input DDL-formatted string.
- schema(StructType) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Specifies the input schema.
- schema(StructType) - Method in class org.apache.spark.sql.DataFrameReader
- schema(StructType) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Specifies the input schema.
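A brief Scala sketch of the two overloads above, assuming an active SparkSession named `spark` (the path and column names are hypothetical):

    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    // Fix the input schema up front instead of paying for schema inference.
    val byDdl = spark.read.schema("name STRING, age INT").csv("/path/to/people.csv")

    // The same schema expressed as a StructType.
    val byStruct = spark.read
      .schema(StructType(Seq(
        StructField("name", StringType),
        StructField("age", IntegerType))))
      .csv("/path/to/people.csv")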
- schema_of_csv(String) - Static method in class org.apache.spark.sql.functions
-
Parses a CSV string and infers its schema in DDL format.
- schema_of_csv(Column) - Static method in class org.apache.spark.sql.functions
-
Parses a CSV string and infers its schema in DDL format.
- schema_of_csv(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses a CSV string and infers its schema in DDL format using options.
- schema_of_json(String) - Static method in class org.apache.spark.sql.functions
-
Parses a JSON string and infers its schema in DDL format.
- schema_of_json(Column) - Static method in class org.apache.spark.sql.functions
-
Parses a JSON string and infers its schema in DDL format.
- schema_of_json(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses a JSON string and infers its schema in DDL format using options.
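For illustration, a minimal Scala sketch of schema inference with these functions, assuming an active SparkSession `spark` (the sample literals are hypothetical):

    import org.apache.spark.sql.functions.{schema_of_csv, schema_of_json}

    // Infer a DDL-formatted schema string from a sample JSON document.
    val jsonSchema = spark.range(1)
      .select(schema_of_json("""{"a": 1, "b": [1.5, 2.5]}"""))
      .head().getString(0)   // e.g. "STRUCT<a: BIGINT, b: ARRAY<DOUBLE>>"

    // Same idea for a sample CSV line.
    val csvSchema = spark.range(1)
      .select(schema_of_csv("1,abc"))
      .head().getString(0)   // e.g. "STRUCT<_c0: INT, _c1: STRING>"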
- schema_of_variant(Column) - Static method in class org.apache.spark.sql.functions
-
Returns schema in the SQL format of a variant.
- schema_of_variant_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the merged schema in the SQL format of a variant column.
- schema_of_xml(String) - Static method in class org.apache.spark.sql.functions
-
Parses an XML string and infers its schema in DDL format.
- schema_of_xml(Column) - Static method in class org.apache.spark.sql.functions
-
Parses an XML string and infers its schema in DDL format.
- schema_of_xml(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses an XML string and infers its schema in DDL format using options.
- SchemaConverters - Class in org.apache.spark.sql.avro
-
This object contains methods used to convert Spark SQL schemas to Avro schemas and vice versa.
- SchemaConverters() - Constructor for class org.apache.spark.sql.avro.SchemaConverters
- SchemaConverters.SchemaType - Class in org.apache.spark.sql.avro
-
Internal wrapper for SQL data type and nullability.
- SchemaConverters.SchemaType$ - Class in org.apache.spark.sql.avro
- schemaFailToParseError(String, Throwable) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- schemaIsNotStructTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- schemaIsNotStructTypeError(Expression, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- schemaNotSpecifiedForSchemaRelationProviderError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- SchemaRelationProvider - Interface in org.apache.spark.sql.sources
-
Implemented by objects that produce relations for a specific kind of data source with a given schema.
- schemasExists(Connection, JDBCOptions, String) - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- schemasExists(Connection, JDBCOptions, String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Checks whether a schema exists.
- schemasExists(Connection, JDBCOptions, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- schemasExists(Connection, JDBCOptions, String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- SchemaType(DataType, boolean) - Constructor for class org.apache.spark.sql.avro.SchemaConverters.SchemaType
- SchemaType$() - Constructor for class org.apache.spark.sql.avro.SchemaConverters.SchemaType$
- SchemaUtils - Class in org.apache.spark.ml.util
-
Utils for handling schemas.
- SchemaUtils - Class in org.apache.spark.sql.util
-
Utils for handling schemas.
- SchemaUtils() - Constructor for class org.apache.spark.ml.util.SchemaUtils
- SchemaUtils() - Constructor for class org.apache.spark.sql.util.SchemaUtils
- scope() - Method in class org.apache.spark.storage.RDDInfo
- scoreAndLabels() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
- scoreCol() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Field in "predictions" which gives the probability or rawPrediction of each class as a vector.
- scoreCol() - Method in interface org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
- scoreCol() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
- scoreCol() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- scoreCol() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- scratch() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- sec(Column) - Static method in class org.apache.spark.sql.functions
- second(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the seconds as an integer from a given date/timestamp/string.
- SECOND() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- secondArgumentInFunctionIsNotBooleanLiteralError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- secondArgumentNotDoubleLiteralError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- secondArgumentOfFunctionIsNotIntegerError(String, NumberFormatException) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- seconds() - Static method in class org.apache.spark.scheduler.StatsReportListener
- seconds(long) - Static method in class org.apache.spark.streaming.Durations
- Seconds - Class in org.apache.spark.streaming
-
Helper object that creates instances of Duration representing a given number of seconds.
- Seconds() - Constructor for class org.apache.spark.streaming.Seconds
- SecurityConfigurationLock - Class in org.apache.spark.security
-
There are cases when global JVM security configuration must be modified.
- SecurityConfigurationLock() - Constructor for class org.apache.spark.security.SecurityConfigurationLock
- securityManager() - Method in class org.apache.spark.SparkEnv
- securityManager() - Method in interface org.apache.spark.status.api.v1.UIRoot
- SecurityUtils - Class in org.apache.spark.util
-
Various utility methods used by Spark Security.
- SecurityUtils() - Constructor for class org.apache.spark.util.SecurityUtils
- seed() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- seed() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- seed() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- seed() - Method in class org.apache.spark.ml.classification.FMClassifier
- seed() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- seed() - Method in class org.apache.spark.ml.classification.GBTClassifier
- seed() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- seed() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- seed() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- seed() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- seed() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- seed() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- seed() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- seed() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- seed() - Method in class org.apache.spark.ml.clustering.KMeans
- seed() - Method in class org.apache.spark.ml.clustering.KMeansModel
- seed() - Method in class org.apache.spark.ml.clustering.LDA
- seed() - Method in class org.apache.spark.ml.clustering.LDAModel
- seed() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- seed() - Method in class org.apache.spark.ml.feature.MinHashLSH
- seed() - Method in class org.apache.spark.ml.feature.Word2Vec
- seed() - Method in class org.apache.spark.ml.feature.Word2VecModel
- seed() - Method in interface org.apache.spark.ml.param.shared.HasSeed
-
Param for random seed.
- seed() - Method in class org.apache.spark.ml.recommendation.ALS
- seed() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- seed() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- seed() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- seed() - Method in class org.apache.spark.ml.regression.FMRegressor
- seed() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- seed() - Method in class org.apache.spark.ml.regression.GBTRegressor
- seed() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- seed() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- seed() - Method in class org.apache.spark.ml.tuning.CrossValidator
- seed() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- seed() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- seed() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- seedParam() - Static method in class org.apache.spark.ml.image.SamplePathFilter
- select(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of columns.
- select(String, String...) - Method in class org.apache.spark.sql.Dataset
- select(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of columns.
- select(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- select(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of column-based expressions.
- select(Column...) - Method in class org.apache.spark.sql.Dataset
- select(TypedColumn<T, U1>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by computing the given Column expression for each element.
- select(TypedColumn<T, U1>) - Method in class org.apache.spark.sql.Dataset
- select(TypedColumn<T, U1>, TypedColumn<T, U2>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by computing the given Column expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>) - Method in class org.apache.spark.sql.Dataset
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by computing the given Column expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>) - Method in class org.apache.spark.sql.Dataset
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by computing the given Column expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>) - Method in class org.apache.spark.sql.Dataset
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>, TypedColumn<T, U5>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by computing the given Column expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>, TypedColumn<T, U5>) - Method in class org.apache.spark.sql.Dataset
- select(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of column-based expressions.
- select(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
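A short Scala sketch contrasting the untyped and typed select variants above, assuming an active SparkSession `spark` (the Person case class and data are hypothetical):

    import org.apache.spark.sql.functions.col
    import spark.implicits._

    case class Person(name: String, age: Int)
    val people = Seq(Person("Alice", 29), Person("Bob", 31)).toDS()

    // Untyped, Column-based selection returns a DataFrame.
    val df = people.select(col("name"), col("age") + 1)

    // Typed selection with TypedColumn returns a Dataset[(String, Int)].
    val ds = people.select(people("name").as[String], people("age").as[Int])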
- selectedFeatures() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- selectedFeatures() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- selectedFeatures() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- selectedFeatures() - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
- selectExpr(String...) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of SQL expressions.
- selectExpr(String...) - Method in class org.apache.spark.sql.Dataset
- selectExpr(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Selects a set of SQL expressions.
- selectExpr(Seq<String>) - Method in class org.apache.spark.sql.Dataset
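As a sketch, the same kind of projection written with selectExpr, which parses SQL expression strings instead of taking Column objects (the data is hypothetical):

    import spark.implicits._   // assumes an active SparkSession `spark`

    val people = Seq(("Alice", 29), ("Bob", 31)).toDF("name", "age")
    val projected = people.selectExpr("name", "age + 1 AS next_age", "upper(name) AS shout")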
- selectExprNotInGroupByError(Expression, Seq<Alias>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- selectionMode() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- selectionMode() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- selectionMode() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
-
The selection mode.
- selectionThreshold() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- selectionThreshold() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- selectionThreshold() - Method in interface org.apache.spark.ml.feature.UnivariateFeatureSelectorParams
-
The upper bound of the number of features that the selector will select.
- SelectorParams - Interface in org.apache.spark.ml.feature
-
Params for Selector and SelectorModel.
- selectorType() - Method in class org.apache.spark.ml.feature.ChiSqSelector
- selectorType() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- selectorType() - Method in interface org.apache.spark.ml.feature.SelectorParams
-
The selector type.
- selectorType() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- semanticHash() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a hashCode of the logical query plan against this Dataset.
- semanticHash() - Method in class org.apache.spark.sql.Dataset
- send(Object) - Method in interface org.apache.spark.api.plugin.PluginContext
-
Send a message to the plugin's driver-side component.
- sender() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
- sendFetchMergedStatusRequest(ShuffleBlockFetcherIterator.FetchRequest) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the iterator is initialized and only if it has push-merged blocks for which it needs to fetch the metadata.
- sendResubmittedTaskStatusForShuffleMapStagesOnlyError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- sendToDst(A) - Method in class org.apache.spark.graphx.EdgeContext
-
Sends a message to the destination vertex.
- sendToDst(A) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- sendToSrc(A) - Method in class org.apache.spark.graphx.EdgeContext
-
Sends a message to the source vertex.
- sendToSrc(A) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
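A minimal GraphX sketch of sendToSrc/sendToDst inside aggregateMessages, assuming an active SparkContext `sc` (the triangle graph is hypothetical):

    import org.apache.spark.graphx.{Edge, Graph, VertexRDD}

    // A tiny triangle graph: 1 -> 2 -> 3 -> 1.
    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1)))
    val graph = Graph.fromEdges(edges, defaultValue = 0)

    // Count degrees by sending the message 1 to both endpoints of every edge.
    val degrees: VertexRDD[Int] = graph.aggregateMessages[Int](
      ctx => { ctx.sendToSrc(1); ctx.sendToDst(1) },  // EdgeContext callbacks above
      _ + _)                                          // merge messages per vertex
    degrees.collect().foreach(println)                // every vertex has degree 2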
- sentences(Column) - Static method in class org.apache.spark.sql.functions
-
Splits a string into arrays of sentences, where each sentence is an array of words.
- sentences(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Splits a string into arrays of sentences, where each sentence is an array of words.
- sentences(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Splits a string into arrays of sentences, where each sentence is an array of words.
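A quick sketch of sentences(), assuming an active SparkSession `spark` (the sample text is hypothetical):

    import org.apache.spark.sql.functions.sentences
    import spark.implicits._

    val doc = Seq("Hi there! The quick brown fox.").toDF("text")
    // Yields an array of sentences, each itself an array of words:
    // [[Hi, there], [The, quick, brown, fox]]
    doc.select(sentences(doc("text"))).show(false)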
- sequence() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
- sequence(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Generate a sequence of integers from start to stop, incrementing by 1 if start is less than or equal to stop, otherwise -1.
- sequence(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Generate a sequence of integers from start to stop, incrementing by step.
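A small sketch of both sequence() overloads above, assuming an active SparkSession `spark`:

    import org.apache.spark.sql.functions.{lit, sequence}

    spark.range(1).select(
      sequence(lit(1), lit(5)),            // implicit step:  [1, 2, 3, 4, 5]
      sequence(lit(10), lit(0), lit(-5))   // explicit step:  [10, 5, 0]
    ).show(false)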
- sequenceCol() - Method in class org.apache.spark.ml.fpm.PrefixSpan
-
Param for the name of the sequence column in the dataset (default: "sequence"); rows with nulls in this column are ignored.
- sequenceFile(String, int, ClassTag<K>, ClassTag<V>, Function0<WritableConverter<K>>, Function0<WritableConverter<V>>) - Method in class org.apache.spark.SparkContext
-
Version of sequenceFile() for types implicitly convertible to Writables through a WritableConverter.
- sequenceFile(String, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop SequenceFile.
- sequenceFile(String, Class<K>, Class<V>) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
- sequenceFile(String, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
- sequenceFile(String, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
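A hedged sketch of reading a SequenceFile, assuming an active SparkContext `sc` and a hypothetical path holding IntWritable/Text pairs:

    import org.apache.hadoop.io.{IntWritable, Text}

    val raw = sc.sequenceFile("/data/example.seq", classOf[IntWritable], classOf[Text])
    // Hadoop reuses Writable instances, so convert to plain values before
    // caching or collecting.
    val pairs = raw.map { case (k, v) => (k.get, v.toString) }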
- SequenceFileRDDFunctions<K,
V> - Class in org.apache.spark.rdd -
Extra functions available on RDDs of (key, value) pairs to create a Hadoop SequenceFile, through an implicit conversion.
- SequenceFileRDDFunctions(RDD<Tuple2<K, V>>, Class<? extends Writable>, Class<? extends Writable>, Function1<K, Writable>, ClassTag<K>, Function1<V, Writable>, ClassTag<V>) - Constructor for class org.apache.spark.rdd.SequenceFileRDDFunctions
- SER_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- SerDe - Class in org.apache.spark.api.r
-
Utility functions to serialize and deserialize objects to/from R.
- SerDe() - Constructor for class org.apache.spark.api.r.SerDe
- serDeInterfaceNotFoundError(NoClassDefFoundError) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- SerializableConfiguration - Class in org.apache.spark.util
-
Hadoop configuration but serializable.
- SerializableConfiguration(Configuration) - Constructor for class org.apache.spark.util.SerializableConfiguration
- SerializableMapWrapper(Map<A, B>) - Constructor for class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
- SerializableWritable<T extends org.apache.hadoop.io.Writable> - Class in org.apache.spark
- SerializableWritable(T) - Constructor for class org.apache.spark.SerializableWritable
- SerializationDebugger - Class in org.apache.spark.serializer
- SerializationDebugger() - Constructor for class org.apache.spark.serializer.SerializationDebugger
- SerializationDebugger.ObjectStreamClassMethods - Class in org.apache.spark.serializer
-
An implicit class that allows us to call private methods of ObjectStreamClass.
- SerializationDebugger.ObjectStreamClassMethods$ - Class in org.apache.spark.serializer
- SerializationFormats - Class in org.apache.spark.api.r
- SerializationFormats() - Constructor for class org.apache.spark.api.r.SerializationFormats
- serializationStream() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- SerializationStream - Class in org.apache.spark.serializer
-
:: DeveloperApi :: A stream for writing serialized objects.
- SerializationStream() - Constructor for class org.apache.spark.serializer.SerializationStream
- serialize(ExecutorMetrics) - Static method in class org.apache.spark.status.protobuf.ExecutorMetricsSerializer
- serialize(JobExecutionStatus) - Static method in class org.apache.spark.status.protobuf.JobExecutionStatusSerializer
- serialize(Vector) - Method in class org.apache.spark.mllib.linalg.VectorUDT
- serialize(SQLPlanMetric) - Static method in class org.apache.spark.status.protobuf.sql.SQLPlanMetricSerializer
- serialize(SinkProgress) - Static method in class org.apache.spark.status.protobuf.sql.SinkProgressSerializer
- serialize(SourceProgress) - Static method in class org.apache.spark.status.protobuf.sql.SourceProgressSerializer
- serialize(StateOperatorProgress) - Static method in class org.apache.spark.status.protobuf.sql.StateOperatorProgressSerializer
- serialize(StreamingQueryProgress) - Static method in class org.apache.spark.status.protobuf.sql.StreamingQueryProgressSerializer
- serialize(AccumulableInfo) - Static method in class org.apache.spark.status.protobuf.AccumulableInfoSerializer
- serialize(ExecutorStageSummary) - Static method in class org.apache.spark.status.protobuf.ExecutorStageSummarySerializer
- serialize(StageStatus) - Static method in class org.apache.spark.status.protobuf.StageStatusSerializer
- serialize(Enumeration.Value) - Static method in class org.apache.spark.status.protobuf.DeterministicLevelSerializer
- serialize(T) - Method in interface org.apache.spark.status.protobuf.ProtobufSerDe
-
Serializes the input data of type T to Array[Byte].
- serialize(T) - Method in interface org.apache.spark.util.SparkSerDeUtils
-
Serialize an object using Java serialization.
- serialize(T) - Static method in class org.apache.spark.util.Utils
- serialize(T, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
- serialize(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
- serialize(UserType) - Method in class org.apache.spark.sql.types.UserDefinedType
-
Convert the user type to a SQL datum.
- SERIALIZED_R_DATA_SCHEMA() - Static method in class org.apache.spark.sql.api.r.SQLUtils
- serializedData() - Method in class org.apache.spark.scheduler.local.StatusUpdate
- serializedMapAndMergeStatus(org.apache.spark.broadcast.BroadcastManager, boolean, int, SparkConf) - Method in class org.apache.spark.ShuffleStatus
-
Serializes the mapStatuses and mergeStatuses array into an efficient compressed format.
- serializedMapStatus(org.apache.spark.broadcast.BroadcastManager, boolean, int, SparkConf) - Method in class org.apache.spark.ShuffleStatus
-
Serializes the mapStatuses array into an efficient compressed format.
- SerializedMemoryEntry<T> - Class in org.apache.spark.storage.memory
- SerializedMemoryEntry(ChunkedByteBuffer, MemoryMode, ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.SerializedMemoryEntry
- serializedPyClass() - Method in class org.apache.spark.sql.types.UserDefinedType
-
Serialized Python UDT class, if it exists.
- SerializedValuesHolder<T> - Class in org.apache.spark.storage.memory
-
A holder for storing the serialized values.
- SerializedValuesHolder(BlockId, int, ClassTag<T>, MemoryMode, org.apache.spark.serializer.SerializerManager) - Constructor for class org.apache.spark.storage.memory.SerializedValuesHolder
- serializer() - Method in class org.apache.spark.ShuffleDependency
- serializer() - Method in class org.apache.spark.SparkEnv
- Serializer - Class in org.apache.spark.serializer
-
:: DeveloperApi :: A serializer.
- Serializer() - Constructor for class org.apache.spark.serializer.Serializer
- serializerForHistoryServer(SparkConf) - Static method in class org.apache.spark.status.KVUtils
- SerializerHelper - Class in org.apache.spark.serializer
- SerializerHelper() - Constructor for class org.apache.spark.serializer.SerializerHelper
- SerializerInstance - Class in org.apache.spark.serializer
-
:: DeveloperApi :: An instance of a serializer, for use by one thread at a time.
- SerializerInstance() - Constructor for class org.apache.spark.serializer.SerializerInstance
- serializerManager() - Method in class org.apache.spark.SparkEnv
- serializeStream(OutputStream) - Method in class org.apache.spark.serializer.DummySerializerInstance
- serializeStream(OutputStream) - Method in class org.apache.spark.serializer.SerializerInstance
- serializeToChunkedBuffer(SerializerInstance, T, long, ClassTag<T>) - Static method in class org.apache.spark.serializer.SerializerHelper
- serializeViaNestedStream(OutputStream, SerializerInstance, Function1<SerializationStream, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Serialize via a nested stream using a specific serializer.
- serviceName() - Method in interface org.apache.spark.security.HadoopDelegationTokenProvider
-
Name of the service to provide delegation tokens.
- servletContext() - Method in interface org.apache.spark.status.api.v1.ApiRequestContext
- ServletParams(Function1<HttpServletRequest, T>, String, Function1<T, String>) - Constructor for class org.apache.spark.ui.JettyUtils.ServletParams
- ServletParams$() - Constructor for class org.apache.spark.ui.JettyUtils.ServletParams$
- session(SparkSession) - Static method in class org.apache.spark.ml.r.RWrappers
- session(SparkSession) - Method in interface org.apache.spark.ml.util.BaseReadWrite
-
Sets the Spark Session to use for saving/loading.
- session(SparkSession) - Method in class org.apache.spark.ml.util.GeneralMLWriter
- session(SparkSession) - Method in class org.apache.spark.ml.util.MLReader
- session(SparkSession) - Method in class org.apache.spark.ml.util.MLWriter
- session_user() - Static method in class org.apache.spark.sql.functions
-
Returns the user name of the current execution context.
- session_window(Column, String) - Static method in class org.apache.spark.sql.functions
-
Generates a session window given a timestamp-specifying column.
- session_window(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Generates a session window given a timestamp-specifying column.
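A minimal Scala sketch of session_window(), assuming an active SparkSession `spark` (the event data is hypothetical):

    import java.sql.Timestamp
    import org.apache.spark.sql.functions.{col, count, session_window}
    import spark.implicits._

    val events = Seq(
      ("alice", Timestamp.valueOf("2024-01-01 10:00:00")),
      ("alice", Timestamp.valueOf("2024-01-01 10:02:00")),
      ("alice", Timestamp.valueOf("2024-01-01 10:30:00"))
    ).toDF("user", "ts")

    // Events closer together than the 5-minute gap share a session window;
    // here the first two events form one session and the third its own.
    events
      .groupBy(col("user"), session_window(col("ts"), "5 minutes"))
      .agg(count("*").as("events_in_session"))
      .show(false)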
- SessionCatalogAndIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier
- SessionCatalogAndIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
- SessionCatalogAndIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier$
- SessionConfigSupport - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface for TableProvider.
- sessionState() - Method in class org.apache.spark.sql.SparkSession
- sessionUUID() - Method in class org.apache.spark.storage.CacheId
- sessionWindowGapDurationDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- set(int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Int.
- set(long) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Long.
- set(long, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given unscaled Long, with a given precision and scale.
- set(long, long, int, int, VD, VD, ED) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- set(String, boolean) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
- set(String, long) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
- set(String, long, long) - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Sets the thread-local input block.
- set(String, Object) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter (by name) in the embedded param map.
- set(String, String) - Method in class org.apache.spark.SparkConf
-
Set a configuration variable.
- set(String, String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
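A short sketch contrasting SparkConf.set (static, applied when the app starts) with RuntimeConfig.set (on a live session); the specific values are illustrative only:

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Static configuration, applied when the session is built.
    val conf = new SparkConf().set("spark.executor.memory", "2g")
    val spark = SparkSession.builder().config(conf).getOrCreate()

    // Runtime configuration on the live session (String, long and boolean overloads).
    spark.conf.set("spark.sql.shuffle.partitions", 64L)
    spark.conf.set("spark.sql.adaptive.enabled", true)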
- set(BigInteger) - Method in class org.apache.spark.sql.types.Decimal
-
If the value is not in the range of a Long, convert it to a BigDecimal; the precision and scale are based on the converted value.
- set(Param<T>, T) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter in the embedded param map.
- set(ParamPair<?>) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter in the embedded param map.
- set(SparkEnv) - Static method in class org.apache.spark.SparkEnv
- set(Decimal) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Decimal value.
- set(BigDecimal) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given BigDecimal value, inheriting its precision and scale.
- set(BigDecimal, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given BigDecimal value, with a given precision and scale.
- Set() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType
- setAccumulatorId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
int64 accumulator_id = 2;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- setAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- setActive(SQLContext) - Static method in class org.apache.spark.sql.SQLContext
-
Deprecated. Use SparkSession.setActiveSession instead. Since 2.0.0.
- setActiveSession(SparkSession) - Static method in class org.apache.spark.sql.SparkSession
-
Changes the SparkSession that will be returned in this thread and its children when SparkSession.getOrCreate() is called.
- setActiveTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 active_tasks = 9;
- setAddress(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- setAddressBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional string address = 1;
- setAddresses(int, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- setAddTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 add_time = 20;
- setAddTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int64 add_time = 5;
- setAggregationDepth(int) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setAggregationDepth(int) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- setAggregationDepth(int) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregator(Aggregator<K, V, C>) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set aggregator for RDD's shuffle.
- setAlgo(String) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Sets Algorithm using a String.
- setAlgo(Enumeration.Value) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setAll(Iterable<Tuple2<String, String>>) - Method in class org.apache.spark.SparkConf
-
Set multiple parameters together.
- setAllRemovalsTimeMs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_removals_time_ms = 6;
- setAllUpdatesTimeMs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 all_updates_time_ms = 4;
- setAlpha(double) - Method in class org.apache.spark.ml.recommendation.ALS
- setAlpha(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setDocConcentration()
- setAlpha(double) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets the constant used in computing confidence in implicit ALS.
- setAlpha(Vector) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setDocConcentration()
- setAmount(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
double amount = 2;
- setAmount(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
int64 amount = 2;
- setAppName(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set the application name.
- setAppName(String) - Method in class org.apache.spark.launcher.SparkLauncher
- setAppName(String) - Method in class org.apache.spark.SparkConf
-
Set a name for your application.
- setAppResource(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set the main application resource.
- setAppResource(String) - Method in class org.apache.spark.launcher.SparkLauncher
- setAppSparkVersion(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- setAppSparkVersionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string app_spark_version = 8;
- setAttempt(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 attempt = 3;
- setAttempt(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 attempt = 3;
- setAttemptId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 attempt_id = 3;
- setAttemptId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- setAttemptIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string attempt_id = 1;
- setAttempts(int, StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- setAttempts(int, StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- setBandwidth(double) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the bandwidth (standard deviation) of the Gaussian kernel (default: 1.0).
- setBarrier(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool barrier = 4;
- setBatchDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_duration = 6;
- setBatchId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
int64 batch_id = 5;
- setBeta(double) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setBeta(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setTopicConcentration()
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.HashingTF
- setBinary(boolean) - Method in class org.apache.spark.mllib.feature.HashingTF
-
If true, the term frequency vector will be binary, such that non-zero term counts are set to 1 (default: false).
- setBlacklistedInStages(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- setBlockName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- setBlockNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string block_name = 1;
- setBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of blocks for both user blocks and product blocks to parallelize the computation into; pass -1 for an auto-configured number of blocks.
- setBlockSize(int) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param MultilayerPerceptronClassifier.blockSize().
- setBlockSize(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
Set block size for stacking input data in matrices.
- setBlockSize(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Set block size for stacking input data in matrices.
- setBootstrap(boolean) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setBootstrap(boolean) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setBucketLength(double) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- setBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_read = 18;
- setBytesRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- setBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 bytes_read = 1;
- setBytesWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double bytes_written = 20;
- setBytesWritten(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- setBytesWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 bytes_written = 1;
- setBytesWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 bytes_written = 1;
- setCached(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
bool cached = 3;
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setCallsite(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- setCallSite(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Pass-through to SparkContext.setCallSite.
- setCallSite(String) - Method in class org.apache.spark.SparkContext
-
Set the thread-local property for overriding the call sites of actions and RDDs.
- setCallsiteBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string callsite = 5;
- setCaseSensitive(boolean) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setCategoricalCols(String[]) - Method in class org.apache.spark.ml.feature.FeatureHasher
- setCategoricalFeaturesInfo(Map<Integer, Integer>) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Sets categoricalFeaturesInfo using a Java Map.
- setCategoricalFeaturesInfo(Map<Object, Object>) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setCensorCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- setCheckpointDir(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set the directory under which RDDs are going to be checkpointed.
- setCheckpointDir(String) - Method in class org.apache.spark.SparkContext
-
Set the directory under which RDDs are going to be checkpointed.
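A brief sketch of checkpointing, assuming an active SparkContext `sc`; the directory is hypothetical and on a cluster should point at fault-tolerant storage such as HDFS:

    sc.setCheckpointDir("/tmp/spark-checkpoints")

    val rdd = sc.parallelize(1 to 1000).map(_ * 2)
    rdd.checkpoint()   // marks the RDD; lineage is truncated after materialization
    rdd.count()        // the first action actually writes the checkpoint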
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.clustering.LDA
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.recommendation.ALS
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Parameter to set the checkpoint interval (greater than or equal to 1) or disable checkpointing (-1).
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set period (in iterations) between checkpoints (default = 10).
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setChildClusters(int, StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- setChildClusters(int, StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- setChildNodes(int, StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- setChildNodes(int, StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- setClassifier(Classifier<?, ?, ?>) - Method in class org.apache.spark.ml.classification.OneVsRest
- setClasspathEntries(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- setClasspathEntries(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- setCluster(StoreTypes.SparkPlanGraphClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- setCluster(StoreTypes.SparkPlanGraphClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper cluster = 2;
- setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
- setCollectSubModels(boolean) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
Whether to collect submodels when fitting.
- setCollectSubModels(boolean) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
Whether to collect submodels when fitting.
- setCommitTimeMs(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 commit_time_ms = 7;
- setCompleted(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
bool completed = 7;
- setCompletedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 completed_tasks = 11;
- setCompletionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 completion_time = 5;
- setCompletionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional int64 completion_time = 9;
- setCompletionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 completion_time = 12;
- setConf(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set a single configuration value for the application.
- setConf(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
- setConf(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Set the given Spark SQL configuration property.
- setConf(Properties) - Method in class org.apache.spark.sql.SQLContext
-
Set Spark SQL configuration properties.
- setConf(Configuration) - Method in interface org.apache.spark.input.Configurable
- setConf(Configuration) - Method in class org.apache.spark.ml.image.SamplePathFilter
- setConfig(String, String) - Static method in class org.apache.spark.launcher.SparkLauncher
-
Set a configuration value for the launcher library.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the largest change in log-likelihood at which convergence is considered to have occurred.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the convergence tolerance.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the convergence tolerance of iterations for L-BFGS.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the convergence tolerance.
- setCoresGranted(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_granted = 3;
- setCoresPerExecutor(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 cores_per_executor = 5;
- setCorruptMergedBlockChunks(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- setCorruptMergedBlockChunks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 corrupt_merged_block_chunks = 1;
- setCurrentCatalog(String) - Method in class org.apache.spark.sql.api.Catalog
-
Sets the current catalog in this session.
- setCurrentDatabase(String) - Method in class org.apache.spark.sql.api.Catalog
-
Sets the current database (namespace) in this session.
- setCustomHostname(String) - Static method in class org.apache.spark.util.Utils
-
Allow setting a custom host name.
- setDAGScheduler(DAGScheduler) - Method in interface org.apache.spark.scheduler.TaskScheduler
- setDataDistribution(int, StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- setDataDistribution(int, StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- setDecayFactor(double) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the forgetfulness of the previous centroids.
- setDefault(Param<T>, T) - Method in interface org.apache.spark.ml.param.Params
-
Sets a default value for a param.
- setDefault(Seq<ParamPair<?>>) - Method in interface org.apache.spark.ml.param.Params
-
Sets default values for a list of params.
- setDefaultClassLoader(ClassLoader) - Method in class org.apache.spark.serializer.KryoSerializer
- setDefaultClassLoader(ClassLoader) - Method in class org.apache.spark.serializer.Serializer
-
Sets a class loader for the serializer to use in deserialization.
- setDefaultSession(SparkSession) - Static method in class org.apache.spark.sql.SparkSession
-
Sets the default SparkSession that is returned by the builder.
- setDegree(int) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
- setDelegateCatalog(CatalogPlugin) - Method in interface org.apache.spark.sql.connector.catalog.CatalogExtension
-
This will be called only once by Spark to pass in the Spark built-in session catalog, after CatalogPlugin.initialize(String, CaseInsensitiveStringMap) is called.
- setDelegateCatalog(CatalogPlugin) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- setDeployMode(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set the deploy mode for the application.
- setDeployMode(String) - Method in class org.apache.spark.launcher.SparkLauncher
- setDesc(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- setDesc(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- setDescBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string desc = 3;
- setDescBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string desc = 3;
- setDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- setDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- setDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- setDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- setDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- setDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string description = 3;
- setDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
optional string description = 1;
- setDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string description = 1;
- setDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string description = 3;
- setDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string description = 40;
- setDeserialized(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool deserialized = 7;
- setDest(long, int, VD, ED) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- setDetails(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- setDetails(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- setDetailsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string details = 4;
- setDetailsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string details = 41;
- setDiscoveryScript(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- setDiscoveryScriptBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string discovery_script = 3;
- setDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double disk_bytes_spilled = 17;
- setDiskBytesSpilled(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- setDiskBytesSpilled(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- setDiskBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 disk_bytes_spilled = 14;
- setDiskBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 disk_bytes_spilled = 22;
- setDiskBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 disk_bytes_spilled = 24;
- setDiskBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 disk_bytes_spilled = 9;
- setDiskSize(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 disk_size = 9;
- setDiskUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 disk_used = 6;
- setDiskUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 disk_used = 4;
- setDiskUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 disk_used = 4;
- setDiskUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 disk_used = 7;
- setDistanceMeasure(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setDistanceMeasure(String) - Method in class org.apache.spark.ml.clustering.KMeans
- setDistanceMeasure(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- setDistanceMeasure(String) - Method in class org.apache.spark.ml.evaluation.ClusteringMetrics
- setDistanceMeasure(String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Set the distance measure used by the algorithm.
- setDistanceMeasure(String) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the distance measure used by the algorithm.
- setDocConcentration(double) - Method in class org.apache.spark.ml.clustering.LDA
- setDocConcentration(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Replicates a Double docConcentration to create a symmetric prior.
- setDocConcentration(double[]) - Method in class org.apache.spark.ml.clustering.LDA
- setDocConcentration(Vector) - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
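For illustration, a minimal Java sketch of the two mllib setDocConcentration overloads above (k = 3 and all concentration values are hypothetical):

    import org.apache.spark.mllib.clustering.LDA;
    import org.apache.spark.mllib.linalg.Vectors;

    // Symmetric prior: a single value replicated across all k topics.
    LDA symmetric = new LDA().setK(3).setDocConcentration(1.1);
    // Asymmetric prior: one value per topic, so the vector length must equal k.
    LDA asymmetric = new LDA().setK(3)
        .setDocConcentration(Vectors.dense(1.1, 0.9, 0.5));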
- setDropLast(boolean) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setDropLast(boolean) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setDstCol(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double duration = 5;
- setDuration(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- setDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 duration = 5;
- setDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 duration = 7;
- setDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 duration = 7;
- setEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- setEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- setEdges(int, StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- setEdges(int, StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- setElasticNetParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the ElasticNet mixing parameter.
- setElasticNetParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the ElasticNet mixing parameter.
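A hedged Java sketch of the ElasticNet mixing parameter (the values are hypothetical): 0.0 gives a pure L2 penalty, 1.0 a pure L1 penalty, and anything in between mixes the two.

    import org.apache.spark.ml.classification.LogisticRegression;

    LogisticRegression lr = new LogisticRegression()
        .setRegParam(0.1)          // overall regularization strength
        .setElasticNetParam(0.5);  // equal mix of L1 and L2 penalties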
- setEndOffset(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- setEndOffsetBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string end_offset = 3;
- setEndTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 end_time = 3;
- setEndTimestamp(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional int64 end_timestamp = 7;
- setEps(double) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setEpsilon(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Sets the value of param
LinearRegression.epsilon()
. - setEpsilon(double) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the distance threshold within which we've consider centers to have converged.
- setErrorMessage(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- setErrorMessage(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- setErrorMessage(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- setErrorMessageBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string error_message = 10;
- setErrorMessageBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string error_message = 14;
- setErrorMessageBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string error_message = 14;
- setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
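The setEstimator, setEstimatorParamMaps, and setEvaluator setters above are typically wired together; a minimal sketch (the estimator choice, grid values, and fold count are hypothetical):

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.CrossValidator;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    LogisticRegression lr = new LogisticRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[]{0.01, 0.1})
        .build();
    CrossValidator cv = new CrossValidator()
        .setEstimator(lr)
        .setEstimatorParamMaps(grid)
        .setEvaluator(new BinaryClassificationEvaluator())
        .setNumFolds(3);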
- setException(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- setExceptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string exception = 5;
- setExcludedInStages(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- setExecutionId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
int64 execution_id = 1;
- setExecutionId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 execution_id = 1;
- setExecutorCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_cpu_time = 9;
- setExecutorCpuTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- setExecutorCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_cpu_time = 17;
- setExecutorCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_cpu_time = 19;
- setExecutorCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_cpu_time = 4;
- setExecutorDeserializeCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_cpu_time = 7;
- setExecutorDeserializeCpuTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- setExecutorDeserializeCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_cpu_time = 15;
- setExecutorDeserializeCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_cpu_time = 17;
- setExecutorDeserializeCpuTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_cpu_time = 2;
- setExecutorDeserializeTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_deserialize_time = 6;
- setExecutorDeserializeTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- setExecutorDeserializeTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_deserialize_time = 14;
- setExecutorDeserializeTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_deserialize_time = 16;
- setExecutorDeserializeTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_deserialize_time = 1;
- setExecutorEnv(String, String) - Method in class org.apache.spark.SparkConf
-
Set an environment variable to be used when launching executors for this application.
- setExecutorEnv(Seq<Tuple2<String, String>>) - Method in class org.apache.spark.SparkConf
-
Set multiple environment variables to be used when launching executors.
- setExecutorEnv(Tuple2<String, String>[]) - Method in class org.apache.spark.SparkConf
-
Set multiple environment variables to be used when launching executors.
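A minimal sketch of setExecutorEnv on SparkConf (the app name, variable name, and path are made up):

    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf()
        .setAppName("env-example")                    // hypothetical app name
        .setExecutorEnv("MY_LIB_DIR", "/opt/native"); // hypothetical variable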
- setExecutorId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- setExecutorId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- setExecutorId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- setExecutorId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- setExecutorIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
optional string executor_id = 3;
- setExecutorIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string executor_id = 2;
- setExecutorIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string executor_id = 8;
- setExecutorIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string executor_id = 8;
- setExecutorMetrics(int, StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- setExecutorMetrics(int, StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- setExecutorMetricsDistributions(StoreTypes.ExecutorMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- setExecutorMetricsDistributions(StoreTypes.ExecutorMetricsDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetricsDistributions executor_metrics_distributions = 52;
- setExecutorRunTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double executor_run_time = 8;
- setExecutorRunTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- setExecutorRunTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 executor_run_time = 16;
- setExecutorRunTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 executor_run_time = 18;
- setExecutorRunTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 executor_run_time = 3;
- setExecutors(int, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- setFactorSize(int) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the dimensionality of the factors.
- setFactorSize(int) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the dimensionality of the factors.
- setFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 failed_tasks = 2;
- setFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 failed_tasks = 10;
- setFailedTasks(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- setFailureReason(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- setFailureReasonBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string failure_reason = 13;
- setFamily(String) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Sets the value of param LogisticRegression.family().
- setFamily(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.family().
- setFdr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setFdr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
- setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDA
-
The features for LDA should be a Vector representing the word counts in a document.
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDAModel
-
The features for LDA should be a Vector representing the word counts in a document.
- setFeaturesCol(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.RFormula
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.PredictionModel
- setFeaturesCol(String) - Method in class org.apache.spark.ml.Predictor
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setFeatureType(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setFetchWaitTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- setFetchWaitTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 fetch_wait_time = 3;
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- setField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- setFinalRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets storage level for final RDDs (user/product used in MatrixFactorizationModel).
- setFinalStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setFirstTaskLaunchedTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 first_task_launched_time = 11;
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set whether to fit intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Whether to fit an intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Whether to fit an intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set if we should fit the intercept. Default is true.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set whether to fit intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets if we should fit the intercept.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set if we should fit the intercept.
- setFitLinear(boolean) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set whether to fit linear term.
- setFitLinear(boolean) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set whether to fit linear term.
- setFoldCol(String) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setForceIndexLabel(boolean) - Method in class org.apache.spark.ml.feature.RFormula
- setFormula(String) - Method in class org.apache.spark.ml.feature.RFormula
-
Sets the formula to use for this transformer.
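A small sketch of setFormula on RFormula (the column names "clicked", "country", and "hour" are hypothetical):

    import org.apache.spark.ml.feature.RFormula;

    RFormula formula = new RFormula()
        .setFormula("clicked ~ country + hour")
        .setFeaturesCol("features")
        .setLabelCol("label");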
- setFpr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setFpr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setFromId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 from_id = 1;
- setFromId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 from_id = 1;
- setFwe(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setFwe(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setGaps(boolean) - Method in class org.apache.spark.ml.feature.RegexTokenizer
- setGettingResultTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double getting_result_time = 13;
- setGettingResultTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- setGettingResultTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 getting_result_time = 18;
- setGlobalKrbDebug(boolean) - Static method in class org.apache.spark.util.SecurityUtils
- setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the gradient function (of the loss function of a single data example) to be used for SGD.
- setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the gradient function (of the loss function of a single data example) to be used for L-BFGS.
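A sketch of swapping the gradient on an optimizer; this uses the developer-API LBFGS constructor, and the choice of HingeGradient is illustrative only:

    import org.apache.spark.mllib.optimization.HingeGradient;
    import org.apache.spark.mllib.optimization.LBFGS;
    import org.apache.spark.mllib.optimization.LogisticGradient;
    import org.apache.spark.mllib.optimization.SquaredL2Updater;

    LBFGS optimizer = new LBFGS(new LogisticGradient(), new SquaredL2Updater());
    optimizer.setGradient(new HingeGradient()); // switch the per-example loss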
- setHadoopProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- setHadoopProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- setHalfLife(double, String) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the half life and time unit ("batches" or "points").
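A sketch of a decaying StreamingKMeans (k, the half life, the point dimension, and the seed are hypothetical):

    import org.apache.spark.mllib.clustering.StreamingKMeans;

    StreamingKMeans skm = new StreamingKMeans()
        .setK(3)
        .setHalfLife(5.0, "batches")    // past data loses half its weight every 5 batches
        .setRandomCenters(2, 0.0, 42L); // 2-dimensional points, zero initial weight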
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.Bucketizer
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.RFormula
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexer
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorAssembler
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- setHashAlgorithm(String) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Set the hash algorithm used when mapping terms to integers.
- setHasMetrics(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool has_metrics = 15;
- setHost(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- setHost(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- setHostBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string host = 9;
- setHostBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string host = 9;
- setHostPort(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- setHostPort(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- setHostPort(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- setHostPortBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string host_port = 2;
- setHostPortBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string host_port = 2;
- setHostPortBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string host_port = 3;
- setId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
int32 id = 1;
- setId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 id = 1;
- setId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
int32 id = 1;
- setId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
int64 id = 1;
- setId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
int64 id = 1;
- setId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
int64 id = 1;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- setId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string id = 1;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string id = 1;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional string id = 1;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string id = 1;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string id = 2;
- setIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string id = 1;
- setIfMissing(String, String) - Method in class org.apache.spark.SparkConf
-
Set a parameter if it isn't already configured.
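A sketch showing that setIfMissing never overrides an existing value (the keys and values are illustrative):

    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf();
    conf.set("spark.ui.port", "4040");
    conf.setIfMissing("spark.ui.port", "4041");           // no effect: already set
    conf.setIfMissing("spark.app.name", "fallback-name"); // applied if unset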
- setImplicitPrefs(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
- setImplicitPrefs(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets whether to use implicit preference.
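A sketch of implicit-feedback ALS in the RDD-based API (the rank, iteration count, and alpha are hypothetical):

    import org.apache.spark.mllib.recommendation.ALS;

    ALS als = new ALS()
        .setRank(10)
        .setIterations(10)
        .setImplicitPrefs(true) // treat ratings as confidence values, not scores
        .setAlpha(40.0);        // confidence scaling for implicit feedback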
- setImpurity(String) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setImpurity(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
The impurity setting is ignored for GBT models.
- setImpurity(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setImpurity(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setImpurity(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
The impurity setting is ignored for GBT models.
- setImpurity(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setImpurity(Impurity) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setIncomingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- setIncomingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- setIndex(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 index = 2;
- setIndex(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 index = 2;
- setIndices(int[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
- setInfo(StoreTypes.ApplicationEnvironmentInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- setInfo(StoreTypes.ApplicationEnvironmentInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationEnvironmentInfo info = 1;
- setInfo(StoreTypes.ApplicationInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- setInfo(StoreTypes.ApplicationInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.ApplicationInfo info = 1;
- setInfo(StoreTypes.ExecutorStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- setInfo(StoreTypes.ExecutorStageSummary.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorStageSummary info = 4;
- setInfo(StoreTypes.ExecutorSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- setInfo(StoreTypes.ExecutorSummary.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ExecutorSummary info = 1;
- setInfo(StoreTypes.JobData) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- setInfo(StoreTypes.JobData.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
.org.apache.spark.status.protobuf.JobData info = 1;
- setInfo(StoreTypes.ProcessSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- setInfo(StoreTypes.ProcessSummary.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.ProcessSummary info = 1;
- setInfo(StoreTypes.RDDStorageInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- setInfo(StoreTypes.RDDStorageInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDStorageInfo info = 1;
- setInfo(StoreTypes.SpeculationStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- setInfo(StoreTypes.SpeculationStageSummary.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
.org.apache.spark.status.protobuf.SpeculationStageSummary info = 3;
- setInfo(StoreTypes.StageData) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- setInfo(StoreTypes.StageData.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
.org.apache.spark.status.protobuf.StageData info = 1;
- setInitialCenters(Vector[], double[]) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Specify initial centers directly.
- setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the initialization algorithm.
- setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the initialization mode.
- setInitializationSteps(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the number of steps for the k-means|| initialization mode.
- setInitialModel(LogisticRegressionModel) - Method in class org.apache.spark.ml.classification.LogisticRegression
- setInitialModel(GaussianMixtureModel) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the initial GMM starting point, bypassing the random initialization.
- setInitialModel(KMeansModel) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the initial starting point, bypassing the random initialization or k-means||. The condition model.k == this.k must be met; failure results in an IllegalArgumentException.
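A warm-start sketch for the entry above; `data` is an assumed RDD<Vector> of training points, and the k and iteration values are made up:

    import org.apache.spark.mllib.clustering.KMeans;
    import org.apache.spark.mllib.clustering.KMeansModel;

    KMeansModel previous = KMeans.train(data, 5, 20); // k = 5, 20 iterations
    KMeansModel refined = new KMeans()
        .setK(5)                    // must equal previous.k()
        .setInitialModel(previous)  // skip random / k-means|| initialization
        .run(data);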
- setInitialWeights(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param MultilayerPerceptronClassifier.initialWeights().
- setInitialWeights(Vector) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the initial weights.
- setInitialWeights(Vector) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the initial weights.
- setInitMode(String) - Method in class org.apache.spark.ml.clustering.KMeans
- setInitMode(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setInitStd(double) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the standard deviation of initial coefficients.
- setInitStd(double) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the standard deviation of initial coefficients.
- setInitSteps(int) - Method in class org.apache.spark.ml.clustering.KMeans
- setInputBytes(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- setInputBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_bytes = 5;
- setInputBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_bytes = 24;
- setInputBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_bytes_read = 26;
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- setInputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IDF
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Imputer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.ImputerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setInputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.PCA
- setInputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.RobustScaler
- setInputCol(String) - Method in class org.apache.spark.ml.feature.RobustScalerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
- setInputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Binarizer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Bucketizer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.FeatureHasher
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Interaction
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexer
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.VectorAssembler
- setInputCols(Seq<String>) - Method in class org.apache.spark.ml.feature.FeatureHasher
- setInputMetrics(StoreTypes.InputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- setInputMetrics(StoreTypes.InputMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.InputMetricDistributions input_metrics = 15;
- setInputMetrics(StoreTypes.InputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- setInputMetrics(StoreTypes.InputMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.InputMetrics input_metrics = 11;
- setInputRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- setInputRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 input_records = 6;
- setInputRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 input_records = 25;
- setInputRecordsRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 input_records_read = 27;
- setInputRowsPerSecond(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double input_rows_per_second = 6;
- setIntercept(boolean) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Set if the algorithm should add an intercept.
- setIntermediateRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets storage level for intermediate RDDs (user/product in/out links).
- setIntermediateStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setInterruptOnCancel(boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set the behavior of job cancellation from jobs started in this thread.
- setInterruptOnCancel(boolean) - Method in class org.apache.spark.SparkContext
-
Set the behavior of job cancellation from jobs started in this thread.
- setInverse(boolean) - Method in class org.apache.spark.ml.feature.DCT
- setIsActive(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_active = 3;
- setIsActive(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
bool is_active = 3;
- setIsActive(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
bool is_active = 4;
- setIsBlacklisted(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_blacklisted = 18;
- setIsBlacklistedForStage(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_blacklisted_for_stage = 15;
- setIsExcluded(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
bool is_excluded = 30;
- setIsExcludedForStage(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
bool is_excluded_for_stage = 17;
- setIsotonic(boolean) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setIsotonic(boolean) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Sets the isotonic parameter.
- setIsShufflePushEnabled(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
bool is_shuffle_push_enabled = 63;
- setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
- setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
- setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- setIterations(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of iterations to run.
- setJars(String[]) - Method in class org.apache.spark.SparkConf
-
Set JAR files to distribute to the cluster.
- setJars(Seq<String>) - Method in class org.apache.spark.SparkConf
-
Set JAR files to distribute to the cluster.
- setJavaHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a custom JAVA_HOME for launching the Spark application.
- setJavaHome(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- setJavaHomeBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_home = 2;
- setJavaVersion(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- setJavaVersionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string java_version = 1;
- setJMapField(Map<K, V>, Function1<Map<K, V>, Object>) - Static method in class org.apache.spark.status.protobuf.Utils
- setJobDescription(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set a human readable description of the current job.
- setJobDescription(String) - Method in class org.apache.spark.SparkContext
-
Set a human readable description of the current job.
- setJobGroup(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- setJobGroup(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
- setJobGroup(String, String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
- setJobGroup(String, String, boolean) - Method in class org.apache.spark.SparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
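A sketch of grouping and cancelling jobs; `jsc` is an assumed JavaSparkContext and the group ID is hypothetical:

    jsc.setJobGroup("nightly-etl", "Nightly ETL jobs", true); // interruptOnCancel
    try {
        // run one or more actions here; they all inherit the group ID
    } finally {
        jsc.clearJobGroup();
    }
    // From another thread, everything in the group can be cancelled:
    // jsc.cancelJobGroup("nightly-etl");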
- setJobGroupBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string job_group = 7;
- setJobId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
All IDs are int64 for extendability, even when they are currently int32 in Spark.
- setJobIds(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- setJobTags(int, String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- setJvmGcTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double jvm_gc_time = 11;
- setJvmGcTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- setJvmGcTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 jvm_gc_time = 19;
- setJvmGcTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 jvm_gc_time = 21;
- setJvmGcTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 jvm_gc_time = 6;
- setK(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setK(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setK(int) - Method in class org.apache.spark.ml.clustering.KMeans
- setK(int) - Method in class org.apache.spark.ml.clustering.LDA
- setK(int) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setK(int) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- setK(int) - Method in class org.apache.spark.ml.feature.PCA
- setK(int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the desired number of leaf clusters (default: 4).
- setK(int) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the number of Gaussians in the mixture model.
- setK(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the number of clusters to create (k).
- setK(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the number of topics to infer, i.e., the number of soft cluster centers.
- setK(int) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the number of clusters.
- setK(int) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the number of clusters.
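A hedged sketch of how setK composes with other setters on the spark.ml KMeans estimator; the Dataset `training` with a "features" vector column is an assumption:

    import org.apache.spark.ml.clustering.KMeans;
    import org.apache.spark.ml.clustering.KMeansModel;

    KMeans kmeans = new KMeans()
        .setK(3)           // number of clusters
        .setMaxIter(20)    // see setMaxIter(int) below
        .setSeed(1L);
    KMeansModel model = kmeans.fit(training);  // `training` assumed to exist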
- setKappa(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Learning rate: an exponential decay rate that should lie in (0.5, 1.0] to guarantee asymptotic convergence.
- setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.ml.clustering.LDA
- setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
-
If using checkpointing, this indicates whether to keep the last checkpoint (versus cleaning it up).
- setKeyOrdering(Ordering<K>) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set key ordering for RDD's shuffle.
- setKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 killed_tasks = 4;
- setKilledTasks(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- setLabelCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- setLabelCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setLabelCol(String) - Method in class org.apache.spark.ml.feature.RFormula
- setLabelCol(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setLabelCol(String) - Method in class org.apache.spark.ml.Predictor
- setLabelCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setLabels(String[]) - Method in class org.apache.spark.ml.feature.IndexToString
- setLabelType(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setLambda(double) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Set the smoothing parameter.
- setLambda(double) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the regularization parameter, lambda.
- setLastUpdated(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 last_updated = 4;
- setLatestOffset(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- setLatestOffsetBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string latest_offset = 4;
- setLatestSeenOffset(Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.AcceptsLatestSeenOffset
-
Callback method to receive the latest seen offset information from streaming execution.
- setLaunchTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 launch_time = 5;
- setLaunchTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 launch_time = 5;
- setLayers(int[]) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param MultilayerPerceptronClassifier.layers().
- setLeafCol(String) - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
- setLearningDecay(double) - Method in class org.apache.spark.ml.clustering.LDA
- setLearningOffset(double) - Method in class org.apache.spark.ml.clustering.LDA
- setLearningRate(double) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets initial learning rate (default: 0.025).
- setLearningRate(double) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- setLink(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.link().
- setLinkPower(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.linkPower().
- setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the link prediction (linear predictor) column name.
- setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Sets the link prediction (linear predictor) column name.
- setLocalBlocksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- setLocalBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_blocks_fetched = 2;
- setLocalBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 local_bytes_read = 6;
- setLocale(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setLocalMergedBlocksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- setLocalMergedBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_blocks_fetched = 4;
- setLocalMergedBytesRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- setLocalMergedBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_bytes_read = 8;
- setLocalMergedChunksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- setLocalMergedChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 local_merged_chunks_fetched = 6;
- setLocalProperty(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.
- setLocalProperty(String, String) - Method in class org.apache.spark.SparkContext
-
Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.
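A minimal sketch, assuming a SparkContext `sc` and a fair-scheduler pool named "production" defined in the scheduler allocation file:

    // Jobs submitted from this thread will run in the named pool.
    sc.setLocalProperty("spark.scheduler.pool", "production");
    // ... submit work ...
    sc.setLocalProperty("spark.scheduler.pool", null);  // setting null removes the property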
- setLogLevel(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Control our logLevel.
- setLogLevel(String) - Method in class org.apache.spark.SparkContext
-
Control our logLevel.
- setLogLevel(Level) - Static method in class org.apache.spark.util.Utils
-
Configure a new log4j level.
- setLogLevelIfNeeded(String) - Static method in class org.apache.spark.util.Utils
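For example, assuming an existing JavaSparkContext `sc` (level names follow log4j):

    sc.setLogLevel("WARN");  // e.g. ALL, DEBUG, INFO, WARN, ERROR, OFF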
- setLoss(String) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Sets the value of param LinearRegression.loss().
- setLoss(Loss) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- setLossType(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setLossType(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setLower(double) - Method in class org.apache.spark.ml.feature.RobustScaler
- setLowerBoundsOnCoefficients(Matrix) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the lower bounds on coefficients if fitting under bound constrained optimization.
- setLowerBoundsOnIntercepts(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the lower bounds on intercepts if fitting under bound constrained optimization.
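A hedged sketch of bound-constrained fitting; the feature count (3) and the zero lower bounds are illustrative assumptions. For binomial logistic regression the bound matrix is 1 x numFeatures:

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.linalg.Matrices;
    import org.apache.spark.ml.linalg.Vectors;

    LogisticRegression lr = new LogisticRegression()
        .setLowerBoundsOnCoefficients(Matrices.dense(1, 3, new double[]{0.0, 0.0, 0.0}))
        .setLowerBoundsOnIntercepts(Vectors.dense(0.0));  // forces nonnegative coefficients and intercept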
- setMainClass(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Sets the application class name for Java/Scala applications.
- setMainClass(String) - Method in class org.apache.spark.launcher.SparkLauncher
- setMapSideCombine(boolean) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set mapSideCombine flag for RDD's shuffle.
- setMaster(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set the Spark master for the application.
- setMaster(String) - Method in class org.apache.spark.launcher.SparkLauncher
- setMaster(String) - Method in class org.apache.spark.SparkConf
-
The master URL to connect to, such as "local" to run locally with one thread, "local[4]" to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
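For example:

    import org.apache.spark.SparkConf;

    // "local[4]" runs locally with 4 threads; an application name is also required.
    SparkConf conf = new SparkConf()
        .setMaster("local[4]")
        .setAppName("IndexExamples");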
- setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMaxBins(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMaxBlockSizeInMB(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Sets the value of param LinearSVC.maxBlockSizeInMB().
- setMaxBlockSizeInMB(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Sets the value of param LogisticRegression.maxBlockSizeInMB().
- setMaxBlockSizeInMB(double) - Method in class org.apache.spark.ml.clustering.KMeans
-
Sets the value of param KMeans.maxBlockSizeInMB().
- setMaxBlockSizeInMB(double) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Sets the value of param AFTSurvivalRegression.maxBlockSizeInMB().
- setMaxBlockSizeInMB(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Sets the value of param LinearRegression.maxBlockSizeInMB().
- setMaxCategories(int) - Method in class org.apache.spark.ml.feature.VectorIndexer
- setMaxCores(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 max_cores = 4;
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMaxDepth(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMaxDF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.KMeans
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.LDA
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setMaxIter(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setMaxIter(int) - Method in class org.apache.spark.ml.recommendation.ALS
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the maximum number of iterations (applicable for solver "irls").
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the maximum number of iterations.
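A brief sketch on LinearRegression; the regularization value and the Dataset `training` are illustrative assumptions:

    import org.apache.spark.ml.regression.LinearRegression;
    import org.apache.spark.ml.regression.LinearRegressionModel;

    LinearRegression lr = new LinearRegression()
        .setMaxIter(100)    // upper bound on optimizer iterations
        .setRegParam(0.1);  // regularization strength (illustrative)
    LinearRegressionModel model = lr.fit(training);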
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the max number of k-means iterations to split clusters (default: 20).
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the maximum number of iterations of the power iteration loop.
- setMaxLocalProjDBSize(long) - Method in class org.apache.spark.ml.fpm.PrefixSpan
- setMaxLocalProjDBSize(long) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets the maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing (default: 32000000L).
- setMaxMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 max_memory = 19;
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMaxMemoryInMB(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMaxPatternLength(int) - Method in class org.apache.spark.ml.fpm.PrefixSpan
- setMaxPatternLength(int) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets maximal pattern length (default: 10).
- setMaxSentenceLength(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setMaxSentenceLength(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the maximum length (in words) of each sentence in the input data.
- setMaxTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 max_tasks = 8;
- setMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double memory_bytes_spilled = 16;
- setMemoryBytesSpilled(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- setMemoryBytesSpilled(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- setMemoryBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 memory_bytes_spilled = 13;
- setMemoryBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 memory_bytes_spilled = 21;
- setMemoryBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 memory_bytes_spilled = 23;
- setMemoryBytesSpilled(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 memory_bytes_spilled = 8;
- setMemoryMetrics(StoreTypes.MemoryMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- setMemoryMetrics(StoreTypes.MemoryMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.MemoryMetrics memory_metrics = 24;
- setMemoryPerExecutorMb(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional int32 memory_per_executor_mb = 6;
- setMemoryRemaining(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_remaining = 3;
- setMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 memory_used = 5;
- setMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
int64 memory_used = 2;
- setMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
int64 memory_used = 3;
- setMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int64 memory_used = 6;
- setMemoryUsedBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 memory_used_bytes = 8;
- setMemSize(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
int64 mem_size = 8;
- setMergedFetchFallbackCount(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- setMergedFetchFallbackCount(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 merged_fetch_fallback_count = 2;
- setMergerLocs(Seq<BlockManagerId>) - Method in class org.apache.spark.ShuffleDependency
- setMetricLabel(double) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setMetricLabel(double) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- setMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- setMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- setMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- setMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- setMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- setMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- setMetricsProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- setMetricsProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- setMetricType(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- setMetricTypeBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string metric_type = 3;
- setMetricValuesIsNull(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
bool metric_values_is_null = 13;
- setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
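A short sketch combining setMin with setMax(double) (listed above), assuming a DataFrame `df` with a "features" vector column:

    import org.apache.spark.ml.feature.MinMaxScaler;
    import org.apache.spark.ml.feature.MinMaxScalerModel;

    MinMaxScaler scaler = new MinMaxScaler()
        .setInputCol("features")
        .setOutputCol("scaledFeatures")
        .setMin(0.0)    // lower bound of the rescaled range
        .setMax(1.0);   // upper bound of the rescaled range
    MinMaxScalerModel model = scaler.fit(df);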
- setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
- setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- setMinConfidence(double) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Sets the minimal confidence (default: 0.8).
- setMinCount(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setMinCount(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets minCount, the minimum number of times a token must appear to be included in the word2vec model's vocabulary (default: 5).
- setMinDF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setMinDivisibleClusterSize(double) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setMinDivisibleClusterSize(double) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1).
- setMinDocFreq(int) - Method in class org.apache.spark.ml.feature.IDF
- setMiniBatchFraction(double) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the mini-batch fraction parameter.
- setMiniBatchFraction(double) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the mini-batch fraction parameter.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the fraction of each batch to use for updates.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Mini-batch fraction in (0, 1], which sets the fraction of documents sampled and used in each iteration.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set fraction of data to be used for each SGD iteration.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the fraction of each batch to use for updates.
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMinInfoGain(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMinInstancesPerNode(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMinSupport(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
- setMinSupport(double) - Method in class org.apache.spark.ml.fpm.PrefixSpan
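A hedged sketch of the spark.ml FP-growth setters, combining setMinSupport with setMinConfidence(double) (listed above); the Dataset `transactions` with an array-of-strings "items" column is an assumption:

    import org.apache.spark.ml.fpm.FPGrowth;
    import org.apache.spark.ml.fpm.FPGrowthModel;

    FPGrowth fp = new FPGrowth()
        .setItemsCol("items")
        .setMinSupport(0.3)
        .setMinConfidence(0.8);
    FPGrowthModel model = fp.fit(transactions);
    model.freqItemsets().show();       // frequent itemsets with their counts
    model.associationRules().show();   // rules meeting the confidence threshold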
- setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Sets the minimal support level (default: 0.3).
- setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets the minimal support level (default: 0.1).
- setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- setMinTokenLength(int) - Method in class org.apache.spark.ml.feature.RegexTokenizer
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setMinWeightFractionPerNode(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setMissingValue(double) - Method in class org.apache.spark.ml.feature.Imputer
- setModelType(String) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Set the model type using a string (case-sensitive).
- setModelType(String) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Set the model type using a string (case-sensitive).
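For example, on the spark.ml estimator (supported types include "multinomial" and "bernoulli"; the smoothing value is illustrative):

    import org.apache.spark.ml.classification.NaiveBayes;

    NaiveBayes nb = new NaiveBayes()
        .setModelType("multinomial")  // case-sensitive
        .setSmoothing(1.0);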
- setN(int) - Method in class org.apache.spark.ml.feature.NGram
- setName(String) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Assign a name to this RDD
- setName(String) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Assign a name to this RDD
- setName(String) - Method in class org.apache.spark.api.java.JavaRDD
-
Assign a name to this RDD
- setName(String) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- setName(String) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- setName(String) - Method in class org.apache.spark.rdd.RDD
-
Assign a name to this RDD
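A minimal sketch, assuming a JavaSparkContext `sc` and a local file "data.txt":

    import org.apache.spark.api.java.JavaRDD;

    JavaRDD<String> lines = sc.textFile("data.txt").setName("input-lines");
    lines.cache();  // the assigned name appears on the web UI's Storage tab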
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- setName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
optional string name = 1;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
optional string name = 1;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
optional string name = 2;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
optional string name = 1;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string name = 39;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string name = 1;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string name = 1;
- setNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string name = 3;
- setNames(String[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
- setNode(StoreTypes.SparkPlanGraphNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- setNode(StoreTypes.SparkPlanGraphNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
.org.apache.spark.status.protobuf.SparkPlanGraphNode node = 1;
- setNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- setNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- setNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- setNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- setNonnegative(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
- setNonnegative(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set whether the least-squares problems solved at each iteration should have nonnegativity constraints.
- setNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- setNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- setNullAt(int) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- setNumActiveStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_stages = 16;
- setNumActiveTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_active_tasks = 10;
- setNumActiveTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_active_tasks = 2;
- setNumActiveTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_active_tasks = 5;
- setNumBins(int) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- setNumBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
Sets both numUserBlocks and numItemBlocks to the specified value.
- setNumBuckets(int) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setNumBucketsArray(int[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setNumCachedPartitions(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_cached_partitions = 4;
- setNumClasses(int) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Set the number of possible outcomes for a k-class classification problem in Multinomial Logistic Regression.
- setNumClasses(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setNumCompletedIndices(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_indices = 15;
- setNumCompletedIndices(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_completed_indices = 9;
- setNumCompletedJobs(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_jobs = 1;
- setNumCompletedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
int32 num_completed_stages = 2;
- setNumCompletedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_stages = 17;
- setNumCompletedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_completed_tasks = 11;
- setNumCompletedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_completed_tasks = 3;
- setNumCompleteTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_complete_tasks = 6;
- setNumCorrections(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the number of corrections used in the LBFGS update.
- setNumFailedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_stages = 19;
- setNumFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_failed_tasks = 13;
- setNumFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_failed_tasks = 4;
- setNumFailedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_failed_tasks = 7;
- setNumFeatures(int) - Method in class org.apache.spark.ml.feature.FeatureHasher
- setNumFeatures(int) - Method in class org.apache.spark.ml.feature.HashingTF
- setNumFolds(int) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setNumHashTables(int) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- setNumHashTables(int) - Method in class org.apache.spark.ml.feature.MinHashLSH
- setNumInputRows(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
int64 num_input_rows = 5;
- setNumItemBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
- setNumIterations(int) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the number of iterations of gradient descent to run per update.
- setNumIterations(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the number of iterations (default: 1), which should be at most the number of partitions.
- setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the number of iterations for SGD.
- setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the maximal number of iterations for L-BFGS.
- setNumIterations(int) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the number of iterations of gradient descent to run per update.
- setNumIterations(int) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- setNumKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_killed_tasks = 14;
- setNumKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_killed_tasks = 5;
- setNumKilledTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_killed_tasks = 8;
- setNumOutputRows(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
int64 num_output_rows = 2;
- setNumPartitions(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setNumPartitions(int) - Method in class org.apache.spark.ml.fpm.FPGrowth
- setNumPartitions(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the number of partitions (default: 1).
- setNumPartitions(int) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Sets the number of partitions used by parallel FP-growth (default: same as input data).
- setNumPartitions(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
int32 num_partitions = 3;
- setNumRows(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
-
Sets the number of rows in this batch.
- setNumRowsDroppedByWatermark(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_dropped_by_watermark = 9;
- setNumRowsRemoved(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_removed = 5;
- setNumRowsTotal(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_total = 2;
- setNumRowsUpdated(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_rows_updated = 3;
- setNumShufflePartitions(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_shuffle_partitions = 10;
- setNumSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_stages = 18;
- setNumSkippedTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_skipped_tasks = 12;
- setNumStateStoreInstances(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
int64 num_state_store_instances = 11;
- setNumTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
int32 num_tasks = 9;
- setNumTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
int32 num_tasks = 1;
- setNumTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 num_tasks = 4;
- setNumTopFeatures(int) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setNumTopFeatures(int) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setNumTrees(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setNumTrees(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setNumUserBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
- setOffHeapMemoryRemaining(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_remaining = 8;
- setOffHeapMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 off_heap_memory_used = 6;
- setOffsetCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.offsetCol().
- setOnHeapMemoryRemaining(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_remaining = 7;
- setOnHeapMemoryUsed(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
optional int64 on_heap_memory_used = 5;
- setOperatorName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- setOperatorNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
optional string operator_name = 1;
- setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.ml.clustering.LDA
- setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Sets whether to optimize docConcentration parameter during training.
- setOptimizer(String) - Method in class org.apache.spark.ml.clustering.LDA
- setOptimizer(String) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the LDAOptimizer used to perform the actual calculation by algorithm name.
- setOptimizer(LDAOptimizer) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the LDAOptimizer used to perform the actual calculation (default: EMLDAOptimizer).
- setOrNull(long, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given unscaled Long, with a given precision and scale, and return it, or return null if it cannot be set due to overflow.
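For illustration (values chosen to show both outcomes):

    import org.apache.spark.sql.types.Decimal;

    Decimal ok = new Decimal().setOrNull(12345L, 5, 2);        // 123.45 fits precision 5
    Decimal overflow = new Decimal().setOrNull(123456L, 5, 2); // null: 6 digits exceed precision 5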
- setOutgoingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- setOutgoingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- setOutputBytes(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- setOutputBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 output_bytes = 7;
- setOutputBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_bytes = 26;
- setOutputBytesWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_bytes_written = 28;
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.FeatureHasher
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDF
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Imputer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.ImputerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Interaction
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCA
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.RobustScaler
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.RobustScalerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorAssembler
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
- setOutputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Binarizer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Bucketizer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexer
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- setOutputDeterministicLevel(StoreTypes.DeterministicLevel) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- setOutputDeterministicLevelValue(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
.org.apache.spark.status.protobuf.DeterministicLevel output_deterministic_level = 6;
- setOutputMetrics(StoreTypes.OutputMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- setOutputMetrics(StoreTypes.OutputMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.OutputMetricDistributions output_metrics = 16;
- setOutputMetrics(StoreTypes.OutputMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- setOutputMetrics(StoreTypes.OutputMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.OutputMetrics output_metrics = 12;
- setOutputRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- setOutputRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 output_records = 8;
- setOutputRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 output_records = 27;
- setOutputRecordsWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 output_records_written = 29;
- setP(double) - Method in class org.apache.spark.ml.feature.Normalizer
- setParallelism(int) - Method in class org.apache.spark.ml.classification.OneVsRest
-
The implementation of parallel one vs. rest runs the classification for each class in a separate thread.
- setParallelism(int) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
Set the maximum level of parallelism to evaluate models in parallel.
- setParallelism(int) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
Set the maximum level of parallelism to evaluate models in parallel.
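A hedged sketch on CrossValidator; `pipeline`, `evaluator`, and `grid` (a ParamMap[]) are assumed to be defined elsewhere:

    import org.apache.spark.ml.tuning.CrossValidator;

    CrossValidator cv = new CrossValidator()
        .setEstimator(pipeline)
        .setEvaluator(evaluator)
        .setEstimatorParamMaps(grid)
        .setNumFolds(3)
        .setParallelism(4);  // evaluate up to 4 models concurrently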
- setParent(Estimator<M>) - Method in class org.apache.spark.ml.Model
-
Sets the parent of this model (Java API).
- setPartitionId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int32 partition_id = 4;
- setPartitionId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 partition_id = 4;
- setPartitionId(Integer) - Method in class org.apache.spark.sql.util.MapperRowCounter
- setPartitions(int, StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- setPartitions(int, StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- setPathOptionAndCallWithPathParameterError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- setPattern(String) - Method in class org.apache.spark.ml.feature.RegexTokenizer
- setPeacePeriod(int) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the number of initial batches to ignore.
- setPeakExecutionMemory(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double peak_execution_memory = 15;
- setPeakExecutionMemory(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- setPeakExecutionMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 peak_execution_memory = 23;
- setPeakExecutionMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 peak_execution_memory = 25;
- setPeakExecutionMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 peak_execution_memory = 10;
- setPeakExecutorMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- setPeakExecutorMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_executor_metrics = 50;
- setPeakMemoryMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- setPeakMemoryMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- setPeakMemoryMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 16;
- setPeakMemoryMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional .org.apache.spark.status.protobuf.ExecutorMetrics peak_memory_metrics = 26;
- setPeakMemoryMetrics(StoreTypes.ExecutorPeakMetricsDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- setPeakMemoryMetrics(StoreTypes.ExecutorPeakMetricsDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
.org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions peak_memory_metrics = 16;
- setPercentile(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setPercentile(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setPhysicalPlanDescription(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- setPhysicalPlanDescriptionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
optional string physical_plan_description = 5;
- setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
- setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
- setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.PredictionModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.Predictor
- setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
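The setPredictionCol setters above all name the output column an estimator or model writes its predictions to. A minimal Java sketch with ml KMeans (assumes a Dataset<Row> training with a vector "features" column; names are hypothetical):
    import org.apache.spark.ml.clustering.KMeans;
    import org.apache.spark.ml.clustering.KMeansModel;

    KMeans kmeans = new KMeans()
        .setK(3)
        .setSeed(42L)
        .setFeaturesCol("features")
        .setPredictionCol("cluster");  // predictions land in a "cluster" column
    KMeansModel model = kmeans.fit(training);
    model.transform(training).show();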
- setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
- setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- setProbabilityCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setProcessedRowsPerSecond(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
double processed_rows_per_second = 7;
- setProductBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of product blocks to parallelize the computation.
- setProgress(StoreTypes.StreamingQueryProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- setProgress(StoreTypes.StreamingQueryProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
.org.apache.spark.status.protobuf.StreamingQueryProgress progress = 1;
- setPropertiesFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set a custom properties file with Spark configuration for the application.
- setPropertiesFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
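For illustration, a minimal launcher sketch from Java (paths and class names are hypothetical; startApplication() throws IOException):
    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    SparkAppHandle handle = new SparkLauncher()
        .setAppResource("/jobs/my-app.jar")           // hypothetical application jar
        .setMainClass("com.example.MyApp")            // hypothetical main class
        .setMaster("local[2]")
        .setPropertiesFile("/etc/spark/my-app.conf")  // custom Spark configuration file
        .startApplication();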
- setProperty(String, String) - Static method in interface org.apache.spark.sql.connector.catalog.NamespaceChange
-
Create a NamespaceChange for setting a namespace property.
- setProperty(String, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for setting a table property.
- setProperty(String, String) - Static method in interface org.apache.spark.sql.connector.catalog.ViewChange
-
Create a ViewChange for setting a view property.
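These static factories build change objects that a catalog implementation later applies (for example via TableCatalog.alterTable). A minimal Java sketch with hypothetical property names:
    import org.apache.spark.sql.connector.catalog.NamespaceChange;
    import org.apache.spark.sql.connector.catalog.TableChange;

    TableChange tableChange = TableChange.setProperty("comment", "curated table");
    NamespaceChange nsChange = NamespaceChange.setProperty("owner", "data-eng");
    // A TableCatalog / SupportsNamespaces implementation applies these
    // in alterTable / alterNamespace.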
- setQuantile(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- setQuantileBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
optional string quantile = 3;
- setQuantileCalculationStrategy(Enumeration.Value) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- setQuantiles(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- setQuantiles(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- setQuantiles(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
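A minimal Java sketch using the two AFT quantile setters together (the probabilities are hypothetical):
    import org.apache.spark.ml.regression.AFTSurvivalRegression;

    AFTSurvivalRegression aft = new AFTSurvivalRegression()
        .setQuantileProbabilities(new double[]{0.3, 0.6})
        .setQuantilesCol("quantiles");  // the fitted model emits a "quantiles" column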
- setRandomCenters(int, double, long) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Initialize random centers, requiring only the number of dimensions.
- setRank(int) - Method in class org.apache.spark.ml.recommendation.ALS
- setRank(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the rank of the feature matrices computed (number of features).
- setRatingCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.ClassificationModel
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.Classifier
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- setRddBlocks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 rdd_blocks = 4;
- setRddIds(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- setReadBytes(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- setReadRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- setRecordsRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_read = 19;
- setRecordsRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- setRecordsRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
int64 records_read = 2;
- setRecordsRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 records_read = 7;
- setRecordsWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double records_written = 21;
- setRecordsWritten(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- setRecordsWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
int64 records_written = 2;
- setRecordsWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 records_written = 3;
- setRegParam(double) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the L2 regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.recommendation.ALS
- setRegParam(double) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the L2 regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the regularization parameter for L2 regularization.
- setRegParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the regularization parameter.
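All of these regParam setters scale the strength of the regularization penalty. A minimal Java sketch with LogisticRegression:
    import org.apache.spark.ml.classification.LogisticRegression;

    LogisticRegression lr = new LogisticRegression()
        .setMaxIter(100)
        .setRegParam(0.1)          // regularization strength
        .setElasticNetParam(0.0);  // 0.0 = pure L2, 1.0 = pure L1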
- setRelativeError(double) - Method in class org.apache.spark.ml.feature.Imputer
- setRelativeError(double) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- setRelativeError(double) - Method in class org.apache.spark.ml.feature.RobustScaler
- setRemote(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Set the Spark remote for the application.
- setRemoteBlocksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- setRemoteBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_blocks_fetched = 1;
- setRemoteBytesRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- setRemoteBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read = 4;
- setRemoteBytesReadToDisk(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- setRemoteBytesReadToDisk(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_bytes_read_to_disk = 5;
- setRemoteMergedBlocksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- setRemoteMergedBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_blocks_fetched = 3;
- setRemoteMergedBytesRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- setRemoteMergedBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_bytes_read = 7;
- setRemoteMergedChunksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- setRemoteMergedChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_chunks_fetched = 5;
- setRemoteMergedReqsDuration(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- setRemoteMergedReqsDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
int64 remote_merged_reqs_duration = 9;
- setRemoteReqsDuration(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- setRemoteReqsDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
int64 remote_reqs_duration = 8;
- setRemoveReason(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- setRemoveReasonBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional string remove_reason = 22;
- setRemoveTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
optional int64 remove_time = 21;
- setRemoveTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
optional int64 remove_time = 6;
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- setRepeatedField(Descriptors.FieldDescriptor, int, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- setResourceName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- setResourceName(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- setResourceNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string resource_name = 1;
- setResourceNameBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
optional string resource_name = 1;
- setResourceProfileId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 resource_profile_id = 29;
- setResourceProfileId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 resource_profile_id = 49;
- setResourceProfiles(int, StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- setResourceProfiles(int, StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- setResultFetchStart(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional int64 result_fetch_start = 6;
- setResultFetchStart(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_fetch_start = 6;
- setResultSerializationTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_serialization_time = 12;
- setResultSerializationTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- setResultSerializationTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_serialization_time = 20;
- setResultSerializationTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_serialization_time = 22;
- setResultSerializationTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_serialization_time = 7;
- setResultSize(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double result_size = 10;
- setResultSize(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- setResultSize(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 result_size = 18;
- setResultSize(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 result_size = 20;
- setResultSize(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
int64 result_size = 5;
- setRootCluster(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- setRootCluster(StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
.org.apache.spark.status.protobuf.RDDOperationClusterWrapper root_cluster = 5;
- setRootExecutionId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 root_execution_id = 2;
- setRpInfo(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- setRpInfo(StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
.org.apache.spark.status.protobuf.ResourceProfileInfo rp_info = 1;
- setRunId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- setRunId(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- setRunIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
optional string run_id = 3;
- setRunIdBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string run_id = 2;
- setRuntime(StoreTypes.RuntimeInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- setRuntime(StoreTypes.RuntimeInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
.org.apache.spark.status.protobuf.RuntimeInfo runtime = 1;
- setSample(JavaRDD<Double>) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the sample to use for density estimation (for Java users).
- setSample(RDD<Object>) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the sample to use for density estimation.
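A minimal Java sketch of kernel density estimation (assumes a JavaSparkContext jsc; sample values are hypothetical):
    import java.util.Arrays;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.mllib.stat.KernelDensity;

    JavaRDD<Double> sample = jsc.parallelize(Arrays.asList(1.0, 1.5, 3.0, 4.2));
    double[] densities = new KernelDensity()
        .setSample(sample)
        .setBandwidth(0.5)                        // kernel bandwidth
        .estimate(new double[]{-1.0, 2.0, 5.0});  // evaluation points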
- setScalaVersion(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- setScalaVersionBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
optional string scala_version = 3;
- setScalingVec(Vector) - Method in class org.apache.spark.ml.feature.ElementwiseProduct
- setSchedulerDelay(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double scheduler_delay = 14;
- setSchedulerDelay(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- setSchedulerDelay(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 scheduler_delay = 17;
- setSchedulingPool(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- setSchedulingPoolBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional string scheduling_pool = 42;
- setSeed(long) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- setSeed(long) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the random seed for weight initialization.
- setSeed(long) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setSeed(long) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the seed for weight initialization if weights are not set.
- setSeed(long) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setSeed(long) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- setSeed(long) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setSeed(long) - Method in class org.apache.spark.ml.clustering.KMeans
- setSeed(long) - Method in class org.apache.spark.ml.clustering.LDA
- setSeed(long) - Method in class org.apache.spark.ml.clustering.LDAModel
- setSeed(long) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- setSeed(long) - Method in class org.apache.spark.ml.feature.MinHashLSH
- setSeed(long) - Method in class org.apache.spark.ml.feature.Word2Vec
- setSeed(long) - Method in class org.apache.spark.ml.recommendation.ALS
- setSeed(long) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setSeed(long) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the random seed for weight initialization.
- setSeed(long) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setSeed(long) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setSeed(long) - Method in class org.apache.spark.ml.tuning.CrossValidator
- setSeed(long) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the random seed (default: hash value of the class name).
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the random seed.
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the random seed for cluster initialization.
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the random seed for cluster initialization.
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Set the random seed for cluster initialization.
- setSeed(long) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets random seed (default: a random long integer).
- setSeed(long) - Method in class org.apache.spark.mllib.random.ExponentialGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.GammaGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.LogNormalGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.PoissonGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.UniformGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.random.WeibullGenerator
- setSeed(long) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets a random seed to have deterministic results.
- setSeed(long) - Method in class org.apache.spark.util.random.BernoulliCellSampler
- setSeed(long) - Method in class org.apache.spark.util.random.BernoulliSampler
- setSeed(long) - Method in class org.apache.spark.util.random.PoissonSampler
- setSeed(long) - Method in interface org.apache.spark.util.random.Pseudorandom
-
Set random seed.
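Every setSeed above pins the pseudorandom state so a run is reproducible. A minimal Java sketch with ml Word2Vec (column names are hypothetical):
    import org.apache.spark.ml.feature.Word2Vec;

    Word2Vec w2v = new Word2Vec()
        .setInputCol("tokens")
        .setOutputCol("embedding")
        .setVectorSize(64)
        .setSeed(42L);  // fixed seed for reproducible weight initialization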
- setSelectionMode(String) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setSelectionThreshold(double) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- setSelectorType(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- setSelectorType(String) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
- setSequenceCol(String) - Method in class org.apache.spark.ml.fpm.PrefixSpan
- setSerializer(Serializer) - Method in class org.apache.spark.rdd.CoGroupedRDD
-
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
- setSerializer(Serializer) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
- setShuffleBytesWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_bytes_written = 37;
- setShuffleCorruptMergedBlockChunks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_corrupt_merged_block_chunks = 33;
- setShuffleCorruptMergedBlockChunks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 53;
- setShuffleCorruptMergedBlockChunks(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_corrupt_merged_block_chunks = 42;
- setShuffleFetchWaitTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_fetch_wait_time = 26;
- setShuffleFetchWaitTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_fetch_wait_time = 30;
- setShuffleFetchWaitTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_fetch_wait_time = 32;
- setShuffleLocalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_local_blocks_fetched = 25;
- setShuffleLocalBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_blocks_fetched = 29;
- setShuffleLocalBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_blocks_fetched = 31;
- setShuffleLocalBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_local_bytes_read = 33;
- setShuffleLocalBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_local_bytes_read = 35;
- setShuffleMergedFetchFallbackCount(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_fetch_fallback_count = 34;
- setShuffleMergedFetchFallbackCount(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_fetch_fallback_count = 54;
- setShuffleMergedFetchFallbackCount(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_fetch_fallback_count = 43;
- setShuffleMergedLocalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_blocks_fetched = 36;
- setShuffleMergedLocalBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_blocks_fetched = 56;
- setShuffleMergedLocalBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_blocks_fetched = 45;
- setShuffleMergedLocalBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_bytes_read = 40;
- setShuffleMergedLocalBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_bytes_read = 60;
- setShuffleMergedLocalBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_bytes_read = 49;
- setShuffleMergedLocalChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_local_chunks_fetched = 38;
- setShuffleMergedLocalChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_local_chunks_fetched = 58;
- setShuffleMergedLocalChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_local_chunks_fetched = 47;
- setShuffleMergedRemoteBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_blocks_fetched = 35;
- setShuffleMergedRemoteBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 55;
- setShuffleMergedRemoteBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_blocks_fetched = 44;
- setShuffleMergedRemoteBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_bytes_read = 39;
- setShuffleMergedRemoteBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_bytes_read = 59;
- setShuffleMergedRemoteBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_bytes_read = 48;
- setShuffleMergedRemoteChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_chunks_fetched = 37;
- setShuffleMergedRemoteChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 57;
- setShuffleMergedRemoteChunksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_chunks_fetched = 46;
- setShuffleMergedRemoteReqDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_merged_remote_req_duration = 51;
- setShuffleMergedRemoteReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_merged_remote_reqs_duration = 42;
- setShuffleMergedRemoteReqsDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_merged_remote_reqs_duration = 62;
- setShuffleMergersCount(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int32 shuffle_mergers_count = 64;
- setShufflePushReadMetrics(StoreTypes.ShufflePushReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- setShufflePushReadMetrics(StoreTypes.ShufflePushReadMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetrics shuffle_push_read_metrics = 9;
- setShufflePushReadMetricsDist(StoreTypes.ShufflePushReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- setShufflePushReadMetricsDist(StoreTypes.ShufflePushReadMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions shuffle_push_read_metrics_dist = 10;
- setShuffleRead(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- setShuffleRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read = 9;
- setShuffleReadBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_read_bytes = 22;
- setShuffleReadBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_bytes = 34;
- setShuffleReadMetrics(StoreTypes.ShuffleReadMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- setShuffleReadMetrics(StoreTypes.ShuffleReadMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetricDistributions shuffle_read_metrics = 17;
- setShuffleReadMetrics(StoreTypes.ShuffleReadMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- setShuffleReadMetrics(StoreTypes.ShuffleReadMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleReadMetrics shuffle_read_metrics = 13;
- setShuffleReadRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- setShuffleReadRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_read_records = 10;
- setShuffleReadRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_read_records = 35;
- setShuffleRecordsRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_records_read = 23;
- setShuffleRecordsRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_read = 36;
- setShuffleRecordsWritten(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_records_written = 39;
- setShuffleRemoteBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_blocks_fetched = 24;
- setShuffleRemoteBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_blocks_fetched = 28;
- setShuffleRemoteBlocksFetched(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_blocks_fetched = 30;
- setShuffleRemoteBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read = 27;
- setShuffleRemoteBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read = 31;
- setShuffleRemoteBytesRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read = 33;
- setShuffleRemoteBytesReadToDisk(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_bytes_read_to_disk = 28;
- setShuffleRemoteBytesReadToDisk(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 32;
- setShuffleRemoteBytesReadToDisk(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_bytes_read_to_disk = 34;
- setShuffleRemoteReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_remote_reqs_duration = 41;
- setShuffleRemoteReqsDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_remote_reqs_duration = 61;
- setShuffleRemoteReqsDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_remote_reqs_duration = 50;
- setShuffleTotalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_total_blocks_fetched = 29;
- setShuffleWrite(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- setShuffleWrite(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write = 11;
- setShuffleWriteBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_bytes = 30;
- setShuffleWriteBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_bytes = 36;
- setShuffleWriteMetrics(StoreTypes.ShuffleWriteMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- setShuffleWriteMetrics(StoreTypes.ShuffleWriteMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions shuffle_write_metrics = 18;
- setShuffleWriteMetrics(StoreTypes.ShuffleWriteMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- setShuffleWriteMetrics(StoreTypes.ShuffleWriteMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
.org.apache.spark.status.protobuf.ShuffleWriteMetrics shuffle_write_metrics = 14;
- setShuffleWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_records = 31;
- setShuffleWriteRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- setShuffleWriteRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 shuffle_write_records = 12;
- setShuffleWriteRecords(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_records = 38;
- setShuffleWriteTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
double shuffle_write_time = 32;
- setShuffleWriteTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 shuffle_write_time = 37;
- setShuffleWriteTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 shuffle_write_time = 38;
- setSink(StoreTypes.SinkProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- setSink(StoreTypes.SinkProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
.org.apache.spark.status.protobuf.SinkProgress sink = 11;
- setSize(int) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- setSkippedStages(int, int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- setSmoothing(double) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Set the smoothing parameter.
- setSolver(String) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the solver algorithm used for optimization.
- setSolver(String) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param
MultilayerPerceptronClassifier.solver().
- setSolver(String) - Method in class org.apache.spark.ml.clustering.KMeans
-
Sets the value of param
KMeans.solver().
- setSolver(String) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the solver algorithm used for optimization.
- setSolver(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the solver algorithm used for optimization.
- setSolver(String) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the solver algorithm used for optimization.
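A minimal Java sketch of choosing a solver for LinearRegression (valid values include "l-bfgs", "normal", and "auto"):
    import org.apache.spark.ml.regression.LinearRegression;

    LinearRegression lir = new LinearRegression()
        .setSolver("l-bfgs")  // iterative quasi-Newton; "normal" uses the normal equation
        .setRegParam(0.3);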
- setSources(int, StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- setSources(int, StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- setSparkContextSessionConf(SparkSession, Map<Object, Object>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- setSparkHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a custom Spark installation location for the application.
- setSparkHome(String) - Method in class org.apache.spark.SparkConf
-
Set the location where Spark is installed on worker nodes.
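A minimal Java sketch of the SparkConf variant above, which tells worker nodes where Spark is installed (the path is hypothetical):
    import org.apache.spark.SparkConf;

    SparkConf conf = new SparkConf()
        .setAppName("example-app")
        .setMaster("local[2]")
        .setSparkHome("/opt/spark");  // hypothetical install location on workers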
- setSparkProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- setSparkProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- setSparkUser(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- setSparkUserBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
optional string spark_user = 6;
- setSpeculationSummary(StoreTypes.SpeculationStageSummary) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- setSpeculationSummary(StoreTypes.SpeculationStageSummary.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.SpeculationStageSummary speculation_summary = 47;
- setSpeculative(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
bool speculative = 12;
- setSpeculative(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
bool speculative = 12;
- setSplits(double[]) - Method in class org.apache.spark.ml.feature.Bucketizer
- setSplitsArray(double[][]) - Method in class org.apache.spark.ml.feature.Bucketizer
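A minimal Java sketch of setSplits, where n+1 split points define n buckets (the split values are hypothetical):
    import org.apache.spark.ml.feature.Bucketizer;

    double[] splits = {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY};
    Bucketizer bucketizer = new Bucketizer()
        .setInputCol("value")
        .setOutputCol("bucket")
        .setSplits(splits);  // 5 split points -> 4 buckets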
- setSqlExecutionId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
optional int64 sql_execution_id = 3;
- setSQLReadObject(Function2<DataInputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
- setSQLWriteObject(Function2<DataOutputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
- setSrcCol(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setSrcOnly(long, int, VD) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- setStageAttemptId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int32 stage_attempt_id = 2;
- setStageAttemptId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- setStageAttemptId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int32 stage_attempt_id = 2;
- setStageAttemptId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int32 stage_attempt_id = 41;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 stage_id = 1;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
int64 stage_id = 1;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
int64 stage_id = 1;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
int64 stage_id = 2;
- setStageId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 stage_id = 40;
- setStageIds(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- setStageIds(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- setStages(int, long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- setStages(PipelineStage[]) - Method in class org.apache.spark.ml.Pipeline
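A minimal Java sketch of setStages, wiring feature transformers and an estimator into one Pipeline (column names are hypothetical):
    import org.apache.spark.ml.Pipeline;
    import org.apache.spark.ml.PipelineStage;
    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.feature.HashingTF;
    import org.apache.spark.ml.feature.Tokenizer;

    Tokenizer tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words");
    HashingTF hashingTF = new HashingTF().setInputCol("words").setOutputCol("features");
    LogisticRegression lr = new LogisticRegression().setMaxIter(10);
    Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[]{tokenizer, hashingTF, lr});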
- setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Whether to standardize the training features before fitting the model.
- setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Whether to standardize the training features before fitting the model.
- setStandardization(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Whether to standardize the training features before fitting the model.
- setStartOffset(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- setStartOffsetBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
optional string start_offset = 2;
- setStartTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
int64 start_time = 2;
- setStartTimestamp(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
int64 start_timestamp = 6;
- setStatement(String) - Method in class org.apache.spark.ml.feature.SQLTransformer
- setStateOperators(int, StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- setStateOperators(int, StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- setStatus(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- setStatus(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- setStatus(StoreTypes.JobExecutionStatus) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- setStatus(StoreTypes.StageStatus) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- setStatusBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string status = 10;
- setStatusBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string status = 10;
- setStatusValue(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
.org.apache.spark.status.protobuf.JobExecutionStatus status = 8;
- setStatusValue(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
.org.apache.spark.status.protobuf.StageStatus status = 1;
- setStepSize(double) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the initial step size for the first step (like learning rate).
- setStepSize(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setStepSize(double) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param MultilayerPerceptronClassifier.stepSize() (applicable only for solver "gd").
- setStepSize(double) - Method in class org.apache.spark.ml.feature.Word2Vec
- setStepSize(double) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the initial step size for the first step (like learning rate).
- setStepSize(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setStepSize(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the step size for gradient descent.
- setStepSize(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the initial step size of SGD for the first step.
- setStepSize(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the step size for gradient descent.
- setStopWords(String[]) - Method in class org.apache.spark.ml.feature.StopWordsRemover
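A small sketch of setStopWords, replacing the default English stop-word list with a custom array (column names are illustrative):

    import org.apache.spark.ml.feature.StopWordsRemover;

    StopWordsRemover remover = new StopWordsRemover()
        .setInputCol("raw")         // array-of-strings column to filter
        .setOutputCol("filtered")
        .setStopWords(new String[] {"a", "an", "the"});  // overrides the default list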
- setStorageLevel(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- setStorageLevel(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- setStorageLevel(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- setStorageLevelBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
optional string storage_level = 2;
- setStorageLevelBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
optional string storage_level = 5;
- setStorageLevelBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
optional string storage_level = 4;
- setStrategy(String) - Method in class org.apache.spark.ml.feature.Imputer
-
Imputation strategy.
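The strategy picks the statistic that replaces missing values; "mean" is the default, with "median" (and, in recent releases, "mode") as alternatives. A minimal sketch, assuming a training Dataset<Row> named df with numeric columns a and b:

    import org.apache.spark.ml.feature.Imputer;
    import org.apache.spark.ml.feature.ImputerModel;

    Imputer imputer = new Imputer()
        .setInputCols(new String[] {"a", "b"})
        .setOutputCols(new String[] {"a_imputed", "b_imputed"})
        .setStrategy("median");          // replace NaN/null with the per-column median
    ImputerModel model = imputer.fit(df);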
- setStringField(String, Function1<String, Object>) - Static method in class org.apache.spark.status.protobuf.Utils
- setStringIndexerOrderType(String) - Method in class org.apache.spark.ml.feature.RFormula
- setStringOrderType(String) - Method in class org.apache.spark.ml.feature.StringIndexer
- setSubmissionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
optional int64 submission_time = 4;
- setSubmissionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
int64 submission_time = 8;
- setSubmissionTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional int64 submission_time = 10;
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.clustering.LDA
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- setSubsamplingRate(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setSucceededTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int32 succeeded_tasks = 3;
- setSucceededTasks(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- setSummary(Option<T>) - Method in interface org.apache.spark.ml.util.HasTrainingSummary
- setSystemProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- setSystemProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- setTaskCount(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
int64 task_count = 4;
- setTaskId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
int64 task_id = 1;
- setTaskId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
int64 task_id = 1;
- setTaskLocality(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- setTaskLocality(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- setTaskLocalityBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional string task_locality = 11;
- setTaskLocalityBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
optional string task_locality = 11;
- setTaskMetrics(StoreTypes.TaskMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- setTaskMetrics(StoreTypes.TaskMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetrics task_metrics = 15;
- setTaskMetricsDistributions(StoreTypes.TaskMetricDistributions) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- setTaskMetricsDistributions(StoreTypes.TaskMetricDistributions.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
optional .org.apache.spark.status.protobuf.TaskMetricDistributions task_metrics_distributions = 51;
- setTaskTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- setTaskTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
int64 task_time = 1;
- setTau0(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
A (positive) learning parameter that downweights early iterations.
- setTestMethod(String) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the statistical method used for significance testing.
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set threshold in binary classification.
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVCModel
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- setThreshold(double) - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
Set threshold in binary classification, in range [0, 1].
- setThreshold(double) - Method in class org.apache.spark.ml.feature.Binarizer
- setThreshold(double) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Sets the threshold that separates positive predictions from negative predictions in Binary Logistic Regression.
- setThreshold(double) - Method in class org.apache.spark.mllib.classification.SVMModel
-
Sets the threshold that separates positive predictions from negative predictions.
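The threshold trades precision for recall in binary classifiers: a row is labeled positive only when the model's score for the positive class exceeds it. A one-line sketch on the ml LogisticRegression estimator:

    import org.apache.spark.ml.classification.LogisticRegression;

    // Label positive only when P(label = 1) exceeds 0.7 instead of the 0.5 default.
    LogisticRegression lr = new LogisticRegression().setThreshold(0.7);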
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.LogisticRegression
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- setThresholds(double[]) - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
Set thresholds in multiclass (or binary) classification to adjust the probability of predicting each class.
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
- setThresholds(double[]) - Method in class org.apache.spark.ml.feature.Binarizer
- setThroughOrigin(boolean) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- setTimeoutDuration(long) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout duration in ms for this key.
- setTimeoutDuration(String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout duration for this key as a string.
- setTimeoutTimestamp(long) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as milliseconds in epoch time.
- setTimeoutTimestamp(long, String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as milliseconds in epoch time and an additional duration as a string (e.g.
- setTimeoutTimestamp(Date) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as a java.sql.Date.
- setTimeoutTimestamp(Date, String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as a java.sql.Date and an additional duration as a string (e.g.
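The setTimeout* variants split by timeout mode: the duration forms require GroupStateTimeout.ProcessingTimeTimeout(), while the timestamp forms require EventTimeTimeout. A hedged Java sketch of a per-key counter that expires after 30 seconds of silence (the function, types, and names are illustrative):

    import org.apache.spark.api.java.function.MapGroupsWithStateFunction;
    import org.apache.spark.sql.streaming.GroupState;

    MapGroupsWithStateFunction<String, Long, Long, String> countWithTimeout =
        (key, values, state) -> {
          if (state.hasTimedOut()) {               // no data arrived within the duration
            String summary = key + " expired at count " + state.get();
            state.remove();
            return summary;
          }
          long count = state.exists() ? state.get() : 0L;
          while (values.hasNext()) { values.next(); count++; }
          state.update(count);
          state.setTimeoutDuration("30 seconds");  // valid only with ProcessingTimeTimeout
          return key + " count " + count;
        };
    // Passed to KeyValueGroupedDataset.mapGroupsWithState(...,
    //     GroupStateTimeout.ProcessingTimeTimeout()).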
- setTimestamp(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- setTimestampBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
optional string timestamp = 4;
- setToId(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
int32 to_id = 2;
- setToId(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
int64 to_id = 2;
- setTol(double) - Method in class org.apache.spark.ml.classification.FMClassifier
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setTol(double) - Method in class org.apache.spark.ml.clustering.KMeans
- setTol(double) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.regression.FMRegressor
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the convergence tolerance of iterations.
- setToLowercase(boolean) - Method in class org.apache.spark.ml.feature.RegexTokenizer
- setTopicConcentration(double) - Method in class org.apache.spark.ml.clustering.LDA
- setTopicConcentration(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
- setTopicDistributionCol(String) - Method in class org.apache.spark.ml.clustering.LDA
- setTopicDistributionCol(String) - Method in class org.apache.spark.ml.clustering.LDAModel
- setTotalBlocksFetched(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- setTotalCores(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_cores = 7;
- setTotalCores(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
int32 total_cores = 4;
- setTotalDuration(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_duration = 13;
- setTotalGcTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_gc_time = 14;
- setTotalInputBytes(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_input_bytes = 15;
- setTotalOffHeapStorageMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_off_heap_storage_memory = 4;
- setTotalOnHeapStorageMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 total_on_heap_storage_memory = 3;
- setTotalShuffleRead(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_read = 16;
- setTotalShuffleWrite(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int64 total_shuffle_write = 17;
- setTotalTasks(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
int32 total_tasks = 12;
- setTrainRatio(double) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- setTreeStrategy(Strategy) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- setUiRoot(ContextHandler, UIRoot) - Static method in class org.apache.spark.status.api.v1.UIRootFromServletContext
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
- setUnknownFields(UnknownFieldSet) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
- setUpdate(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- setUpdateBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string update = 3;
- setUpdater(Updater) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the updater function to actually perform a gradient step in a given direction.
- setUpdater(Updater) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the updater function to actually perform a gradient step in a given direction.
- SetupDriver(RpcEndpointRef) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
- SetupDriver$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
- setupGroups(int, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Initializes targetLen partition groups.
- setUpper(double) - Method in class org.apache.spark.ml.feature.RobustScaler
- setUpperBoundsOnCoefficients(Matrix) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the upper bounds on coefficients if fitting under bound constrained optimization.
- setUpperBoundsOnIntercepts(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the upper bounds on intercepts if fitting under bound constrained optimization.
- setupUI(org.apache.spark.ui.SparkUI) - Method in interface org.apache.spark.status.AppHistoryServerPlugin
-
Sets up UI of this plugin to rebuild the history UI.
- setUsedBins(int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Set the number of bins currently being used by the histogram.
- setUseDisk(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_disk = 6;
- setUsedOffHeapStorageMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_off_heap_storage_memory = 2;
- setUsedOnHeapStorageMemory(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
int64 used_on_heap_storage_memory = 1;
- setUseMemory(boolean) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
bool use_memory = 5;
- setUseNodeIdCache(boolean) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- setUserBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of user blocks to parallelize the computation.
- setUserCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
- setUserCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
- setValidateData(boolean) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Set if the algorithm should validate data before training.
- setValidationIndicatorCol(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
- setValidationIndicatorCol(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
- setValidationTol(double) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- setValue(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- setValue1(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- setValue1Bytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value1 = 1;
- setValue2(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- setValue2Bytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
optional string value2 = 2;
- setValueBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
optional string value = 4;
- setVarianceCol(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- setVarianceCol(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- setVariancePower(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.variancePower().
- setVarianceThreshold(double) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- setVectorSize(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setVectorSize(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets vector size (default: 100).
- setVendor(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- setVendorBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
optional string vendor = 4;
- setVerbose(boolean) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Enables verbose reporting for SparkSubmit.
- setVerbose(boolean) - Method in class org.apache.spark.launcher.SparkLauncher
- setVocabSize(int) - Method in class org.apache.spark.ml.feature.CountVectorizer
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
Sets the value of param DecisionTreeClassifier.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
Sets the value of param GBTClassifier.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the value of param LinearSVC.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Sets the value of param LogisticRegression.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Sets the value of param NaiveBayes.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
-
Sets the value of param OneVsRest.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Sets the value of param RandomForestClassifier.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
Sets the value of param BisectingKMeans.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- setWeightCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
-
Sets the value of param KMeans.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- setWeightCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- setWeightCol(String) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- setWeightCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- setWeightCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
Sets the value of param DecisionTreeRegressor.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
Sets the value of param GBTRegressor.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param GeneralizedLinearRegression.weightCol().
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Whether to over-/under-sample training instances according to the given weights in weightCol.
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Sets the value of param RandomForestRegressor.weightCol().
- setWindowSize(int) - Method in class org.apache.spark.ml.feature.Word2Vec
- setWindowSize(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the window of words (default: 5).
- setWindowSize(int) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the number of batches to compute significance tests over.
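A small sketch tying the vectorSize and windowSize knobs together on the ml Word2Vec estimator (column names are illustrative):

    import org.apache.spark.ml.feature.Word2Vec;

    Word2Vec word2Vec = new Word2Vec()
        .setInputCol("words")     // array-of-strings column
        .setOutputCol("embedding")
        .setVectorSize(100)       // embedding dimension (default 100)
        .setWindowSize(5);        // context window in words (default 5)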
- setWithCentering(boolean) - Method in class org.apache.spark.ml.feature.RobustScaler
- setWithMean(boolean) - Method in class org.apache.spark.ml.feature.StandardScaler
- setWithMean(boolean) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- setWithScaling(boolean) - Method in class org.apache.spark.ml.feature.RobustScaler
- setWithStd(boolean) - Method in class org.apache.spark.ml.feature.StandardScaler
- setWithStd(boolean) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- setWriteBytes(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- setWriteRecords(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- setWriteTime(int, double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- setWriteTime(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
int64 write_time = 2;
- sha(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a sha1 hash value as a hex string of the col.
- sha1(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the SHA-1 digest of a binary column and returns the value as a 40 character hex string.
- sha2(Column, int) - Static method in class org.apache.spark.sql.functions
-
Calculates the SHA-2 family of hash functions of a binary column and returns the value as a hex string.
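A short sketch combining the digest functions (df and its payload column are assumptions; string columns are implicitly cast to binary):

    import static org.apache.spark.sql.functions.*;

    df.select(
        sha1(col("payload")).alias("sha1_hex"),        // 40-character hex string
        sha2(col("payload"), 256).alias("sha256_hex")  // numBits must be 224, 256, 384, or 512
    ).show(false);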
- shape() - Method in class org.apache.spark.mllib.random.GammaGenerator
- SharedParamsCodeGen - Class in org.apache.spark.ml.param.shared
-
Code generator for shared params (sharedParams.scala).
- SharedParamsCodeGen() - Constructor for class org.apache.spark.ml.param.shared.SharedParamsCodeGen
- SharedReadWrite$() - Constructor for class org.apache.spark.ml.Pipeline.SharedReadWrite$
- sharedState() - Method in class org.apache.spark.sql.SparkSession
- shiftleft(Column, int) - Static method in class org.apache.spark.sql.functions
-
Shift the given value numBits left.
- shiftLeft(Column, int) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use shiftleft. Since 3.2.0.
- shiftright(Column, int) - Static method in class org.apache.spark.sql.functions
-
(Signed) shift the given value numBits right.
- shiftRight(Column, int) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use shiftright. Since 3.2.0.
- shiftrightunsigned(Column, int) - Static method in class org.apache.spark.sql.functions
-
Unsigned shift the given value numBits right.
- shiftRightUnsigned(Column, int) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use shiftrightunsigned. Since 3.2.0.
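The lowercase forms supersede the camel-cased ones as of 3.2.0; behavior is unchanged. A sketch over an assumed integer column flags:

    import static org.apache.spark.sql.functions.*;

    df.select(
        shiftleft(col("flags"), 2),            // flags * 4
        shiftright(col("flags"), 1),           // signed: preserves the sign bit
        shiftrightunsigned(col("flags"), 1));  // zero-fill: treats flags as unsigned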
- SHORT() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable short type.
- SHORT_STR - Static variable in class org.apache.spark.types.variant.VariantUtil
- ShortestPaths - Class in org.apache.spark.graphx.lib
-
Computes shortest paths to the given set of landmark vertices, returning a graph where each vertex attribute is a map containing the shortest-path distance to each reachable landmark.
- ShortestPaths() - Constructor for class org.apache.spark.graphx.lib.ShortestPaths
- ShortExactNumeric - Class in org.apache.spark.sql.types
- ShortExactNumeric() - Constructor for class org.apache.spark.sql.types.ShortExactNumeric
- shortName() - Method in interface org.apache.spark.ml.util.MLFormatRegister
- shortName() - Method in interface org.apache.spark.sql.sources.DataSourceRegister
-
The string that represents the format that this data source provider uses.
- shortStrHeader(int) - Static method in class org.apache.spark.types.variant.VariantUtil
- shortTimeUnitString(TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
Return the short string for a TimeUnit.
- ShortType - Class in org.apache.spark.sql.types
-
The data type representing Short values.
- ShortType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the ShortType object.
- ShortType() - Constructor for class org.apache.spark.sql.types.ShortType
- ShortTypeExpression - Class in org.apache.spark.sql.types
- ShortTypeExpression() - Constructor for class org.apache.spark.sql.types.ShortTypeExpression
- shortVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the short version string.
- shouldCloseFileAfterWrite(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
- shouldDistributeGaussians(int, int) - Static method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Heuristic to distribute the computation of the MultivariateGaussians, approximately when d is greater than 25 except for when k is very small.
- shouldFilterOutPath(String) - Static method in class org.apache.spark.util.HadoopFSUtils
-
Checks if we should filter out this path.
- shouldFilterOutPathName(String) - Static method in class org.apache.spark.util.HadoopFSUtils
-
Checks if we should filter out this path name.
- shouldGoLeft(int, Split[]) - Method in interface org.apache.spark.ml.tree.Split
-
Return true (split to left) or false (split to right).
- shouldGoLeft(Vector) - Method in interface org.apache.spark.ml.tree.Split
-
Return true (split to left) or false (split to right).
- shouldOwn(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Validates that the input param belongs to this instance.
- shouldRollover(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Whether rollover should be initiated at this moment.
- show() - Method in class org.apache.spark.sql.api.Dataset
-
Displays the top 20 rows of Dataset in a tabular form.
- show(boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Displays the top 20 rows of Dataset in a tabular form.
- show(int) - Method in class org.apache.spark.sql.api.Dataset
-
Displays the Dataset in a tabular form.
- show(int, boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Displays the Dataset in a tabular form.
- show(int, boolean) - Method in class org.apache.spark.sql.Dataset
- show(int, int) - Method in class org.apache.spark.sql.api.Dataset
-
Displays the Dataset in a tabular form.
- show(int, int, boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Displays the Dataset in a tabular form.
- show(int, int, boolean) - Method in class org.apache.spark.sql.Dataset
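The show overloads layer three knobs: row count, per-cell truncation width, and vertical layout. A quick tour (df is any Dataset):

    df.show();            // top 20 rows, cells truncated at 20 characters
    df.show(5);           // top 5 rows
    df.show(5, false);    // top 5 rows, no truncation
    df.show(5, 40, true); // truncate cells at 40 characters, print rows vertically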
- showBytesDistribution(String, org.apache.spark.util.Distribution) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showBytesDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showBytesDistribution(String, Option<org.apache.spark.util.Distribution>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showColumnsWithConflictNamespacesError(Seq<String>, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableAsSerdeNotAllowedOnSparkDataSourceTableError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableAsSerdeNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableFailToExecuteUnsupportedConfError(TableIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableNotSupportedOnTempView(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableNotSupportTransactionalHiveTableError(CatalogTable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showCreateTableOrViewFailToExecuteUnsupportedFeatureError(CatalogTable, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- showDagVizForJob(int, Seq<org.apache.spark.ui.scope.RDDOperationGraph>) - Static method in class org.apache.spark.ui.UIUtils
-
Return a "DAG visualization" DOM element that expands into a visualization for a job.
- showDagVizForStage(int, Option<org.apache.spark.ui.scope.RDDOperationGraph>) - Static method in class org.apache.spark.ui.UIUtils
-
Return a "DAG visualization" DOM element that expands into a visualization for a stage.
- showDistribution(String, String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showDistribution(String, org.apache.spark.util.Distribution, Function1<Object, String>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showDistribution(String, Option<org.apache.spark.util.Distribution>, String) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showDistribution(String, Option<org.apache.spark.util.Distribution>, Function1<Object, String>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showFunctionsInvalidPatternError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- showFunctionsUnsupportedError(String, SqlBaseParser.IdentifierContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- showMillisDistribution(String, Function1<BatchInfo, Option<Object>>) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
- showMillisDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showMillisDistribution(String, Option<org.apache.spark.util.Distribution>) - Static method in class org.apache.spark.scheduler.StatsReportListener
- showPartitionNotAllowedOnTableNotPartitionedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- shuffle(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a random permutation of the given array.
- shuffle(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a random permutation of the given array.
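shuffle is nondeterministic by design, so repeated runs reorder differently. A one-line sketch over an assumed array column arr:

    import static org.apache.spark.sql.functions.*;

    // e.g. [1, 2, 3, 4] may come back as [3, 1, 4, 2]; order varies per run.
    df.select(shuffle(col("arr")).alias("shuffled"));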
- SHUFFLE() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_BATCH() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_CHUNK() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_CORRUPT_MERGED_BLOCK_CHUNKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_CORRUPT_MERGED_BLOCK_CHUNKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_CORRUPT_MERGED_BLOCK_CHUNKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_DATA() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_FETCH_WAIT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_FETCH_WAIT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_FETCH_WAIT_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_INDEX() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_LOCAL_BLOCKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_MERGED_DATA() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_MERGED_FETCH_FALLBACK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_FETCH_FALLBACK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_FETCH_FALLBACK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_INDEX() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_MERGED_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_LOCAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_LOCAL_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_LOCAL_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_LOCAL_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_LOCAL_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_META() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_MERGED_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_REMOTE_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_REMOTE_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGED_REMOTE_CHUNKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_REMOTE_REQ_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_MERGED_REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_MERGED_REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_MERGERS_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_PUSH() - Static method in class org.apache.spark.storage.BlockId
- SHUFFLE_PUSH_CORRUPT_MERGED_BLOCK_CHUNKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_FETCH_FALLBACK_COUNT() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_LOCAL_BLOCKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_LOCAL_CHUNKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_LOCAL_READS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_REMOTE_BLOCKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_REMOTE_CHUNKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_REMOTE_READS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_MERGED_REMOTE_REQS_DURATION() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_PUSH_READ_METRICS_DIST_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- SHUFFLE_PUSH_READ_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- SHUFFLE_PUSH_READ_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- SHUFFLE_READ() - Static method in class org.apache.spark.ui.ToolTips
- SHUFFLE_READ_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_READ_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_READ_FETCH_WAIT_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_READ_FETCH_WAIT_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- SHUFFLE_READ_FETCH_WAIT_TIME() - Static method in class org.apache.spark.ui.ToolTips
- SHUFFLE_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- SHUFFLE_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- SHUFFLE_READ_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- SHUFFLE_READ_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- SHUFFLE_READ_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- SHUFFLE_READ_RECORDS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_READ_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- SHUFFLE_READ_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- SHUFFLE_READ_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_READ_REMOTE_SIZE() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- SHUFFLE_READ_REMOTE_SIZE() - Static method in class org.apache.spark.ui.ToolTips
- SHUFFLE_RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_RECORDS_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_RECORDS_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_REMOTE_BLOCKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_REMOTE_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_REMOTE_BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_REMOTE_BYTES_READ_TO_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_REMOTE_BYTES_READ_TO_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_REMOTE_BYTES_READ_TO_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_REMOTE_READS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_REMOTE_READS_TO_DISK() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_REMOTE_REQS_DURATION() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_REMOTE_REQS_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- SHUFFLE_SERVICE() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- SHUFFLE_TOTAL_BLOCKS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_TOTAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_TOTAL_READS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_WRITE() - Static method in class org.apache.spark.ui.ToolTips
- SHUFFLE_WRITE_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_WRITE_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_WRITE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- SHUFFLE_WRITE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- SHUFFLE_WRITE_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- SHUFFLE_WRITE_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- SHUFFLE_WRITE_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
- SHUFFLE_WRITE_RECORDS() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_WRITE_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_WRITE_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- SHUFFLE_WRITE_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- SHUFFLE_WRITE_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_WRITE_SIZE() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_WRITE_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
- SHUFFLE_WRITE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- SHUFFLE_WRITE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- SHUFFLE_WRITE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- ShuffleBlockBatchId - Class in org.apache.spark.storage
- ShuffleBlockBatchId(int, long, int, int) - Constructor for class org.apache.spark.storage.ShuffleBlockBatchId
- ShuffleBlockChunkId - Class in org.apache.spark.storage
- ShuffleBlockChunkId(int, int, int, int) - Constructor for class org.apache.spark.storage.ShuffleBlockChunkId
- ShuffleBlockId - Class in org.apache.spark.storage
- ShuffleBlockId(int, long, int) - Constructor for class org.apache.spark.storage.ShuffleBlockId
- ShuffleChecksumBlockId - Class in org.apache.spark.storage
- ShuffleChecksumBlockId(int, long, int) - Constructor for class org.apache.spark.storage.ShuffleChecksumBlockId
- shuffleCleaned(int) - Method in interface org.apache.spark.CleanerListener
- shuffleCorruptMergedBlockChunks() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleDataBlockId - Class in org.apache.spark.storage
- ShuffleDataBlockId(int, long, int) - Constructor for class org.apache.spark.storage.ShuffleDataBlockId
- ShuffleDataIO - Interface in org.apache.spark.shuffle.api
-
:: Private :: An interface for plugging in modules for storing and reading temporary shuffle data.
- ShuffleDependency<K, V, C> - Class in org.apache.spark
-
:: DeveloperApi :: Represents a dependency on the output of a shuffle stage.
- ShuffleDependency(RDD<? extends Product2<K, V>>, Partitioner, Serializer, Option<Ordering<K>>, Option<Aggregator<K, V, C>>, boolean, ShuffleWriteProcessor, ClassTag<K>, ClassTag<V>, ClassTag<C>) - Constructor for class org.apache.spark.ShuffleDependency
- ShuffledRDD<K, V, C> - Class in org.apache.spark.rdd
-
:: DeveloperApi :: The resulting RDD from a shuffle (e.g. repartitioning of data).
- ShuffledRDD(RDD<? extends Product2<K, V>>, Partitioner, ClassTag<K>, ClassTag<V>, ClassTag<C>) - Constructor for class org.apache.spark.rdd.ShuffledRDD
- ShuffleDriverComponents - Interface in org.apache.spark.shuffle.api
-
:: Private :: An interface for building shuffle support modules for the Driver.
- ShuffleExecutorComponents - Interface in org.apache.spark.shuffle.api
-
:: Private :: An interface for building shuffle support for Executors.
- ShuffleFetchCompletionListener - Class in org.apache.spark.storage
-
A listener to be called at the completion of the ShuffleBlockFetcherIterator.
- ShuffleFetchCompletionListener(ShuffleBlockFetcherIterator) - Constructor for class org.apache.spark.storage.ShuffleFetchCompletionListener
- shuffleFetchWaitTime() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleHandle() - Method in class org.apache.spark.ShuffleDependency
- shuffleId() - Method in class org.apache.spark.CleanShuffle
- shuffleId() - Method in class org.apache.spark.FetchFailed
- shuffleId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion
- shuffleId() - Method in class org.apache.spark.ShuffleDependency
- shuffleId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
- shuffleId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleChecksumBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleMergedBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- shuffleId() - Method in class org.apache.spark.storage.ShufflePushBlockId
- ShuffleIndexBlockId - Class in org.apache.spark.storage
- ShuffleIndexBlockId(int, long, int) - Constructor for class org.apache.spark.storage.ShuffleIndexBlockId
- shuffleLocalBlocksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleLocalBytesRead() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleManager() - Method in class org.apache.spark.SparkEnv
- ShuffleMapOutputWriter - Interface in org.apache.spark.shuffle.api
-
:: Private :: A top-level writer that returns child writers for persisting the output of a map task, and then commits all of the writes as one atomic operation.
- shuffleMergeAllowed() - Method in class org.apache.spark.ShuffleDependency
- ShuffleMergedBlockId - Class in org.apache.spark.storage
- ShuffleMergedBlockId(int, int, int) - Constructor for class org.apache.spark.storage.ShuffleMergedBlockId
- ShuffleMergedDataBlockId - Class in org.apache.spark.storage
- ShuffleMergedDataBlockId(String, int, int, int) - Constructor for class org.apache.spark.storage.ShuffleMergedDataBlockId
- shuffleMergedFetchFallbackCount() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleMergedIndexBlockId - Class in org.apache.spark.storage
- ShuffleMergedIndexBlockId(String, int, int, int) - Constructor for class org.apache.spark.storage.ShuffleMergedIndexBlockId
- shuffleMergedLocalBlocksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergedLocalBytesRead() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergedLocalChunksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleMergedMetaBlockId - Class in org.apache.spark.storage
- ShuffleMergedMetaBlockId(String, int, int, int) - Constructor for class org.apache.spark.storage.ShuffleMergedMetaBlockId
- shuffleMergedRemoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergedRemoteBytesRead() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergedRemoteChunksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergedRemoteReqsDuration() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleMergeEnabled() - Method in class org.apache.spark.ShuffleDependency
- shuffleMergeFinalized() - Method in class org.apache.spark.ShuffleDependency
-
Returns true if push-based shuffle is disabled or if the shuffle merge for this shuffle is finalized.
- shuffleMergeId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion
- shuffleMergeId() - Method in class org.apache.spark.ShuffleDependency
-
shuffleMergeId is used to uniquely identify the merging process of a shuffle by an indeterminate stage attempt.
- shuffleMergeId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
- shuffleMergeId() - Method in class org.apache.spark.storage.ShuffleMergedBlockId
- shuffleMergeId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- shuffleMergeId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- shuffleMergeId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- shuffleMergeId() - Method in class org.apache.spark.storage.ShufflePushBlockId
- shuffleMergersCount() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleOutputStatus - Interface in org.apache.spark.scheduler
-
A common trait between MapStatus and MergeStatus.
- ShufflePartitionWriter - Interface in org.apache.spark.shuffle.api
-
:: Private :: An interface for opening streams to persist partition bytes to a backing data store.
- ShufflePushBlockId - Class in org.apache.spark.storage
- ShufflePushBlockId(int, int, int, int) - Constructor for class org.apache.spark.storage.ShufflePushBlockId
- ShufflePushCompletion(int, int, int) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion
- ShufflePushCompletion$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ShufflePushCompletion$
- ShufflePushReadMetricDistributions - Class in org.apache.spark.status.api.v1
- shufflePushReadMetrics() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
- ShufflePushReadMetrics - Class in org.apache.spark.status.api.v1
- shufflePushReadMetricsDist() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- shuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- shuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- shuffleRead$() - Constructor for class org.apache.spark.InternalAccumulator.shuffleRead$
- shuffleReadBytes() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleReadMetricDistributions - Class in org.apache.spark.status.api.v1
- shuffleReadMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- shuffleReadMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- ShuffleReadMetrics - Class in org.apache.spark.status.api.v1
- shuffleReadRecords() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- shuffleReadRecords() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- shuffleReadRecords() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleRemoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleRemoteBytesRead() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleRemoteBytesReadToDisk() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleRemoteReqsDuration() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleStatus - Class in org.apache.spark
-
Helper class used by the MapOutputTrackerMaster to perform bookkeeping for a single ShuffleMapStage.
- ShuffleStatus(int, int) - Constructor for class org.apache.spark.ShuffleStatus
- shuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- shuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- shuffleWrite$() - Constructor for class org.apache.spark.InternalAccumulator.shuffleWrite$
- shuffleWriteBytes() - Method in class org.apache.spark.status.api.v1.StageData
- ShuffleWriteMetricDistributions - Class in org.apache.spark.status.api.v1
- shuffleWriteMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
- shuffleWriteMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
- ShuffleWriteMetrics - Class in org.apache.spark.status.api.v1
- shuffleWriteRecords() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- shuffleWriteRecords() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- shuffleWriteRecords() - Method in class org.apache.spark.status.api.v1.StageData
- shuffleWriterProcessor() - Method in class org.apache.spark.ShuffleDependency
- shuffleWriteTime() - Method in class org.apache.spark.status.api.v1.StageData
- shutdown() - Method in interface org.apache.spark.api.plugin.DriverPlugin
-
Informs the plugin that the Spark application is shutting down.
- shutdown() - Method in interface org.apache.spark.api.plugin.ExecutorPlugin
-
Clean up and terminate this plugin.
- shutdown(ExecutorService, Duration) - Static method in class org.apache.spark.util.ThreadUtils
- Shutdown(int) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
- Shutdown$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
- ShutdownHookManager - Class in org.apache.spark.util
-
Various utility methods used by Spark.
- ShutdownHookManager() - Constructor for class org.apache.spark.util.ShutdownHookManager
- sigma() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- sigma() - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
- sigmas() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
- sign(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- sign(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- sign(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- sign(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- sign(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- sign(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- sign(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the signum of the given value.
- sign(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- SignalUtils - Class in org.apache.spark.util
-
Contains utilities for working with POSIX signals.
- SignalUtils() - Constructor for class org.apache.spark.util.SignalUtils
- signum(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- signum(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- signum(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- signum(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- signum(String) - Static method in class org.apache.spark.sql.functions
-
Computes the signum of the given column.
- signum(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the signum of the given value.
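For illustration, a minimal Scala sketch of the signum column function; the DataFrame df and its column delta are assumed:

    import org.apache.spark.sql.functions.{col, signum}
    // Yields -1.0, 0.0, or 1.0 depending on the sign of each value in `delta`.
    val signs = df.select(signum(col("delta")))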
- signum(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- signum(T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- signum(T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- silhouette() - Method in class org.apache.spark.ml.evaluation.ClusteringMetrics
-
Returns the silhouette score.
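A ClusteringMetrics instance is typically obtained through ClusteringEvaluator.getMetrics; a hedged sketch, assuming a predictions DataFrame produced by a fitted clustering model:

    import org.apache.spark.ml.evaluation.ClusteringEvaluator
    // getMetrics returns a ClusteringMetrics; silhouette() is the score it exposes.
    val metrics = new ClusteringEvaluator().getMetrics(predictions)
    val score = metrics.silhouette()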
- SimpleFutureAction<T> - Class in org.apache.spark
-
A FutureAction holding the result of an action that triggers a single job.
- SimpleMetricsCachedBatch - Interface in org.apache.spark.sql.columnar
-
A CachedBatch that stores some simple metrics that can be used for filtering of batches with the SimpleMetricsCachedBatchSerializer.
- SimpleMetricsCachedBatchSerializer - Class in org.apache.spark.sql.columnar
-
Provides basic filtering for CachedBatchSerializer implementations.
- SimpleMetricsCachedBatchSerializer() - Constructor for class org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer
- simpleString() - Method in class org.apache.spark.sql.types.ArrayType
- simpleString() - Static method in class org.apache.spark.sql.types.BinaryType
- simpleString() - Static method in class org.apache.spark.sql.types.BooleanType
- simpleString() - Method in class org.apache.spark.sql.types.ByteType
- simpleString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- simpleString() - Method in class org.apache.spark.sql.types.DataType
-
Readable string representation for the type.
- simpleString() - Static method in class org.apache.spark.sql.types.DateType
- simpleString() - Method in class org.apache.spark.sql.types.DecimalType
- simpleString() - Static method in class org.apache.spark.sql.types.DoubleType
- simpleString() - Static method in class org.apache.spark.sql.types.FloatType
- simpleString() - Method in class org.apache.spark.sql.types.IntegerType
- simpleString() - Method in class org.apache.spark.sql.types.LongType
- simpleString() - Method in class org.apache.spark.sql.types.MapType
- simpleString() - Static method in class org.apache.spark.sql.types.NullType
- simpleString() - Method in class org.apache.spark.sql.types.ObjectType
- simpleString() - Method in class org.apache.spark.sql.types.ShortType
- simpleString() - Static method in class org.apache.spark.sql.types.StringType
- simpleString() - Method in class org.apache.spark.sql.types.StructType
- simpleString() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- simpleString() - Static method in class org.apache.spark.sql.types.TimestampType
- simpleString() - Static method in class org.apache.spark.sql.types.VariantType
- SimpleUpdater - Class in org.apache.spark.mllib.optimization
-
A simple updater for gradient descent without any regularization.
- SimpleUpdater() - Constructor for class org.apache.spark.mllib.optimization.SimpleUpdater
- sin(String) - Static method in class org.apache.spark.sql.functions
- sin(Column) - Static method in class org.apache.spark.sql.functions
- SingleSpillShuffleMapOutputWriter - Interface in org.apache.spark.shuffle.api
-
Optional extension for partition writing that is optimized for transferring a single file to the backing store.
- SingleStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for SingleStatement.
- SingleStatementExec(LogicalPlan, Origin, boolean) - Constructor for class org.apache.spark.sql.scripting.SingleStatementExec
- singleTableStarInCountNotAllowedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- SingleValueExecutorMetricType - Interface in org.apache.spark.metrics
- SingularValueDecomposition<UType, VType> - Class in org.apache.spark.mllib.linalg
-
Represents singular value decomposition (SVD) factors.
- SingularValueDecomposition(UType, Vector, VType) - Constructor for class org.apache.spark.mllib.linalg.SingularValueDecomposition
- sinh(String) - Static method in class org.apache.spark.sql.functions
- sinh(Column) - Static method in class org.apache.spark.sql.functions
- sink() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- Sink - Interface in org.apache.spark.metrics.sink
- SINK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- SinkProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made for a sink in the execution of a StreamingQuery during a trigger.
- SinkProgressSerializer - Class in org.apache.spark.status.protobuf.sql
- SinkProgressSerializer() - Constructor for class org.apache.spark.status.protobuf.sql.SinkProgressSerializer
- size() - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
- size() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Size of the attribute group.
- size() - Method in class org.apache.spark.ml.feature.VectorSizeHint
-
The size of Vectors in inputCol.
- size() - Method in class org.apache.spark.ml.linalg.DenseVector
- size() - Method in class org.apache.spark.ml.linalg.SparseVector
- size() - Method in interface org.apache.spark.ml.linalg.Vector
-
Size of the vector.
- size() - Method in class org.apache.spark.ml.param.ParamMap
-
Number of param pairs in this map.
- size() - Method in class org.apache.spark.mllib.linalg.DenseVector
- size() - Method in class org.apache.spark.mllib.linalg.SparseVector
- size() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Size of the vector.
- size() - Method in interface org.apache.spark.sql.Row
-
Number of elements in the Row.
- size() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- size() - Method in interface org.apache.spark.storage.BlockData
- size() - Method in class org.apache.spark.storage.DiskBlockData
- size() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
- size() - Method in interface org.apache.spark.storage.memory.MemoryEntry
- size() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
- size(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the length of an array or map.
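A minimal sketch of the size column function, assuming a DataFrame df with an array column xs:

    import org.apache.spark.sql.functions.{col, size}
    // One value per row: the number of elements in the `xs` array.
    df.select(size(col("xs")))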
- size(KVStoreView<T>) - Static method in class org.apache.spark.status.KVUtils
- SIZE_IN_MEMORY() - Static method in class org.apache.spark.ui.storage.ToolTips
- SIZE_LIMIT - Static variable in class org.apache.spark.types.variant.VariantUtil
- SIZE_ON_DISK() - Static method in class org.apache.spark.ui.storage.ToolTips
- SizeEstimator - Class in org.apache.spark.util
-
:: DeveloperApi :: Estimates the sizes of Java objects (number of bytes of memory they occupy), for use in memory-aware caches.
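A short sketch of SizeEstimator in use; the object being measured is arbitrary:

    import org.apache.spark.util.SizeEstimator
    // Approximate number of bytes the array occupies on the JVM heap.
    val bytes: Long = SizeEstimator.estimate(Array.fill(1000)("spark"))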
- SizeEstimator() - Constructor for class org.apache.spark.util.SizeEstimator
- sizeInBytes() - Method in interface org.apache.spark.sql.columnar.CachedBatch
- sizeInBytes() - Method in interface org.apache.spark.sql.columnar.SimpleMetricsCachedBatch
- sizeInBytes() - Method in interface org.apache.spark.sql.connector.read.HasPartitionStatistics
-
Returns the size in bytes of the partition statistics associated with this partition.
- sizeInBytes() - Method in interface org.apache.spark.sql.connector.read.Statistics
- sizeInBytes() - Method in class org.apache.spark.sql.sources.BaseRelation
-
Returns an estimated size of this relation in bytes.
- sketch(RDD<K>, int, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
-
Sketches the input RDD via reservoir sampling on each partition.
- skewness(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the skewness of the values in a group.
- skewness(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the skewness of the values in a group.
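A minimal sketch, assuming a DataFrame df with a numeric column x:

    import org.apache.spark.sql.functions.{col, skewness}
    // Single-row result holding the skewness of `x` across all rows.
    df.agg(skewness(col("x")))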
- skip(long) - Method in class org.apache.spark.io.NioBufferedFileInputStream
- skip(long) - Method in class org.apache.spark.io.ReadAheadInputStream
- skip(long) - Method in class org.apache.spark.storage.BufferReleasingInputStream
- SKIPPED - Enum constant in enum class org.apache.spark.status.api.v1.StageStatus
- SKIPPED_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- skippedStages() - Method in class org.apache.spark.status.LiveJob
- skippedTasks() - Method in class org.apache.spark.status.LiveJob
- skipWhitespace() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- slice(Column, int, int) - Static method in class org.apache.spark.sql.functions
-
Returns an array containing all the elements in x from index start (or starting from the end if start is negative) with the specified length.
- slice(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array containing all the elements in x from index start (or starting from the end if start is negative) with the specified length.
- slice(org.apache.spark.streaming.Interval) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return all the RDDs defined by the Interval object (both end times included).
- slice(Time, Time) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return all the RDDs between 'fromTime' and 'toTime' (both included).
- slice(Time, Time) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return all the RDDs between 'fromTime' and 'toTime' (both included).
- slideDuration() - Method in class org.apache.spark.streaming.dstream.DStream
-
Time interval after which the DStream generates an RDD.
- slideDuration() - Method in class org.apache.spark.streaming.dstream.InputDStream
- sliding(int) - Method in class org.apache.spark.mllib.rdd.RDDFunctions
-
sliding(Int, Int) with step = 1.
- sliding(int, int) - Method in class org.apache.spark.mllib.rdd.RDDFunctions
-
Returns an RDD from grouping items of its parent RDD in fixed size blocks by passing a sliding window over them.
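A sketch of the sliding helper; the import brings the implicit conversion to RDDFunctions into scope, and sc is an assumed SparkContext:

    import org.apache.spark.mllib.rdd.RDDFunctions._
    // Windows of 3 consecutive elements: [1,2,3], [2,3,4], [3,4,5]
    val windows = sc.parallelize(1 to 5).sliding(3)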
- smoothing() - Method in class org.apache.spark.ml.classification.NaiveBayes
- smoothing() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- smoothing() - Method in interface org.apache.spark.ml.classification.NaiveBayesParams
-
The smoothing parameter.
- SNAPPY - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- SnappyCompressionCodec - Class in org.apache.spark.io
-
:: DeveloperApi :: Snappy implementation of CompressionCodec.
- SnappyCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.SnappyCompressionCodec
- SnowflakeDialect - Class in org.apache.spark.sql.jdbc
- SnowflakeDialect() - Constructor for class org.apache.spark.sql.jdbc.SnowflakeDialect
- socketStream(String, int, Function<InputStream, Iterable<T>>, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from network source hostname:port.
- socketStream(String, int, Function1<InputStream, Iterator<T>>, StorageLevel, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Creates an input stream from TCP source hostname:port.
- socketTextStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from network source hostname:port.
- socketTextStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream from network source hostname:port.
- socketTextStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Creates an input stream from TCP source hostname:port.
- softmax(double[]) - Static method in class org.apache.spark.ml.impl.Utils
-
Perform in-place softmax conversion.
- softmax(double[], int, int, int, double[]) - Static method in class org.apache.spark.ml.impl.Utils
-
Perform softmax conversion.
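Utils is internal to Spark ML, so the computation is sketched standalone below; this is a generic numerically stabilized softmax, not Spark's exact implementation:

    // Subtracting the max before exponentiating avoids overflow for large inputs.
    def softmax(xs: Array[Double]): Array[Double] = {
      val m = xs.max
      val exps = xs.map(x => math.exp(x - m))
      val sum = exps.sum
      exps.map(_ / sum)
    }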
- solve(double[], double[]) - Static method in class org.apache.spark.mllib.linalg.CholeskyDecomposition
-
Solves a symmetric positive definite linear system via Cholesky factorization.
- solve(double[], double[], NNLS.Workspace) - Static method in class org.apache.spark.mllib.optimization.NNLS
-
Solve a least squares problem, possibly with nonnegativity constraints, by a modified projected gradient method.
- solve(double, double, DenseVector, DenseVector, DenseVector) - Method in interface org.apache.spark.ml.optim.NormalEquationSolver
-
Solve the normal equations from summary statistics.
- solve(ALS.NormalEquation, double) - Method in interface org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
-
Solves a least squares problem with regularization (possibly with other constraints).
- solver() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- solver() - Method in class org.apache.spark.ml.classification.FMClassifier
- solver() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- solver() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- solver() - Method in interface org.apache.spark.ml.classification.MultilayerPerceptronParams
-
The solver algorithm for optimization.
- solver() - Method in class org.apache.spark.ml.clustering.KMeans
- solver() - Method in class org.apache.spark.ml.clustering.KMeansModel
- solver() - Method in interface org.apache.spark.ml.clustering.KMeansParams
-
Param for the name of the optimization method used in KMeans.
- solver() - Method in interface org.apache.spark.ml.param.shared.HasSolver
-
Param for the solver algorithm for optimization.
- solver() - Method in interface org.apache.spark.ml.regression.FactorizationMachinesParams
-
The solver algorithm for optimization.
- solver() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- solver() - Method in class org.apache.spark.ml.regression.FMRegressor
- solver() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- solver() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
The solver algorithm for optimization.
- solver() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- solver() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- solver() - Method in class org.apache.spark.ml.regression.LinearRegression
- solver() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- solver() - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
-
The solver algorithm for optimization.
- some(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if at least one value of e is true.
- sort(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the specified column, all in ascending order.
- sort(String, String...) - Method in class org.apache.spark.sql.Dataset
- sort(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the specified column, all in ascending order.
- sort(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- sort(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- sort(Column...) - Method in class org.apache.spark.sql.Dataset
- sort(Expression, SortDirection) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a sort expression.
- sort(Expression, SortDirection, NullOrdering) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a sort expression.
- sort(Expression, SortDirection, NullOrdering) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
- sort(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset sorted by the given expressions.
- sort(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
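A minimal sketch of a global sort; df and its columns age and name are assumed:

    import org.apache.spark.sql.functions.col
    // Ascending by age, ties broken by descending name.
    val sorted = df.sort(col("age").asc, col("name").desc)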
- Sort() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- sort_array(Column) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array for the given column in ascending order, according to the natural ordering of the array elements.
- sort_array(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array for the given column in ascending or descending order, according to the natural ordering of the array elements.
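A short sketch, assuming a DataFrame df with an array column xs:

    import org.apache.spark.sql.functions.{col, sort_array}
    df.select(sort_array(col("xs")))         // ascending (default)
    df.select(sort_array(col("xs"), false))  // descending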
- sortBy(String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Sorts the output in each bucket by the given columns.
- sortBy(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Sorts the output in each bucket by the given columns.
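sortBy is only valid together with bucketBy (otherwise writing fails with sortByWithoutBucketingError); a sketch with assumed table and column names:

    // Buckets output by `id` into 8 buckets, sorting rows within each bucket by `ts`.
    df.write.bucketBy(8, "id").sortBy("ts").saveAsTable("events_bucketed")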
- sortBy(Function<T, S>, boolean, int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return this RDD sorted by the given key function.
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return this RDD sorted by the given key function.
- sortByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements in ascending order.
- sortByKey(boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(boolean, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(boolean, int) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>, boolean, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
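A minimal sketch of sorting a pair RDD by key; sc is an assumed SparkContext:

    val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (2, "b")))
    pairs.sortByKey().collect()       // Array((1,a), (2,b), (3,c))
    pairs.sortByKey(false).collect()  // descending order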
- sortByWithoutBucketingError() - Method in interface org.apache.spark.sql.errors.CompilationErrors
- sortByWithoutBucketingError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- SortDirection - Enum Class in org.apache.spark.sql.connector.expressions
-
A sort direction used in sorting expressions.
- SortOrder - Interface in org.apache.spark.sql.connector.expressions
-
Represents a sort order in the public expression API.
- sortWithinPartitions(String, String...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(String, String...) - Method in class org.apache.spark.sql.Dataset
- sortWithinPartitions(String, Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
- sortWithinPartitions(Column...) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(Column...) - Method in class org.apache.spark.sql.Dataset
- sortWithinPartitions(Seq<Column>) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
- soundex(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the soundex code for the specified expression.
- source() - Method in class org.apache.spark.sql.connector.expressions.Extract
- Source - Interface in org.apache.spark.metrics.source
- SOURCE_NAME_CONSOLE() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCE_NAME_FOREACH() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCE_NAME_FOREACH_BATCH() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCE_NAME_MEMORY() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCE_NAME_NOOP() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCE_NAME_TABLE() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- sourceName() - Method in class org.apache.spark.metrics.source.DoubleAccumulatorSource
- sourceName() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
- sourceName() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
- sourceName() - Method in interface org.apache.spark.metrics.source.Source
- sourceNotSupportedWithContinuousTriggerError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- SourceProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made for a source in the execution of a StreamingQuery during a trigger.
- SourceProgressSerializer - Class in org.apache.spark.status.protobuf.sql
- SourceProgressSerializer() - Constructor for class org.apache.spark.status.protobuf.sql.SourceProgressSerializer
- sources() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- SOURCES_ALLOW_ONE_TIME_QUERY() - Static method in class org.apache.spark.sql.streaming.DataStreamWriter
- SOURCES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- sourceSchema(SQLContext, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider
-
Returns the name and schema of the source that can be used to continually read data.
- spark() - Method in class org.apache.spark.status.api.v1.VersionInfo
- spark_branch() - Static method in class org.apache.spark.SparkBuildInfo
- spark_build_date() - Static method in class org.apache.spark.SparkBuildInfo
- spark_build_user() - Static method in class org.apache.spark.SparkBuildInfo
- SPARK_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils
- SPARK_CONTEXT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
-
The shutdown priority of the SparkContext instance.
- spark_doc_root() - Static method in class org.apache.spark.SparkBuildInfo
- SPARK_IO_ENCRYPTION_COMMONS_CONFIG_PREFIX() - Static method in class org.apache.spark.security.CryptoStreamUtils
- SPARK_LOCAL_REMOTE - Static variable in class org.apache.spark.launcher.SparkLauncher
- SPARK_MASTER - Static variable in class org.apache.spark.launcher.SparkLauncher
-
The Spark master.
- spark_partition_id() - Static method in class org.apache.spark.sql.functions
-
Partition ID.
- SPARK_PROPERTIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- SPARK_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
- SPARK_REMOTE - Static variable in class org.apache.spark.launcher.SparkLauncher
-
The Spark remote.
- spark_repo_url() - Static method in class org.apache.spark.SparkBuildInfo
- spark_revision() - Static method in class org.apache.spark.SparkBuildInfo
- SPARK_USER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- spark_version() - Static method in class org.apache.spark.SparkBuildInfo
- SparkAppConfig(Seq<Tuple2<String, String>>, Option<byte[]>, Option<byte[]>, ResourceProfile, Option<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- SparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
- SparkAppHandle - Interface in org.apache.spark.launcher
-
A handle to a running Spark application.
- SparkAppHandle.Listener - Interface in org.apache.spark.launcher
-
Listener for updates to a handle's state.
- SparkAppHandle.State - Enum Class in org.apache.spark.launcher
-
Represents the application's state.
- SparkArrayOps(Object) - Constructor for class org.apache.spark.util.ArrayImplicits.SparkArrayOps
- SparkAWSCredentials - Interface in org.apache.spark.streaming.kinesis
-
Serializable interface providing a method executors can call to obtain an AWSCredentialsProvider instance for authenticating to AWS services.
- SparkAWSCredentials.Builder - Class in org.apache.spark.streaming.kinesis
-
Builder for SparkAWSCredentials instances.
- SparkBuildInfo - Class in org.apache.spark
- SparkBuildInfo() - Constructor for class org.apache.spark.SparkBuildInfo
- SparkClassUtils - Interface in org.apache.spark.util
- SparkClosureCleaner - Class in org.apache.spark.util
- SparkClosureCleaner() - Constructor for class org.apache.spark.util.SparkClosureCleaner
- SparkCollectionUtils - Interface in org.apache.spark.util
- SparkConf - Class in org.apache.spark
-
Configuration for a Spark application.
- SparkConf() - Constructor for class org.apache.spark.SparkConf
-
Create a SparkConf that loads defaults from system properties and the classpath.
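A minimal sketch; explicit setters override anything loaded from system properties:

    import org.apache.spark.SparkConf
    // App name and master are placeholders.
    val conf = new SparkConf().setAppName("demo").setMaster("local[*]")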
- SparkConf(boolean) - Constructor for class org.apache.spark.SparkConf
- sparkContext() - Method in class org.apache.spark.rdd.RDD
-
The SparkContext that created this RDD.
- sparkContext() - Method in class org.apache.spark.sql.SparkSession
- sparkContext() - Method in class org.apache.spark.sql.SQLContext
- sparkContext() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. The underlying SparkContext.
- sparkContext() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Return the associated Spark context.
- SparkContext - Class in org.apache.spark
-
Main entry point for Spark functionality.
- SparkContext() - Constructor for class org.apache.spark.SparkContext
-
Create a SparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
- SparkContext(String, String, String, Seq<String>, Map<String, String>) - Constructor for class org.apache.spark.SparkContext
-
Alternative constructor that allows setting common Spark properties directly.
- SparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.SparkContext
-
Alternative constructor that allows setting common Spark properties directly.
- SparkContext(SparkConf) - Constructor for class org.apache.spark.SparkContext
- SparkCoreErrors - Class in org.apache.spark.errors
-
Object for grouping error messages from (most) exceptions thrown during query execution.
- SparkCoreErrors() - Constructor for class org.apache.spark.errors.SparkCoreErrors
- SparkDataStream - Interface in org.apache.spark.sql.connector.read.streaming
-
The base interface representing a readable data stream in a Spark streaming query.
- SparkEnv - Class in org.apache.spark
-
:: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc.
- SparkEnv(String, RpcEnv, Serializer, Serializer, org.apache.spark.serializer.SerializerManager, MapOutputTracker, org.apache.spark.broadcast.BroadcastManager, org.apache.spark.storage.BlockManager, SecurityManager, org.apache.spark.metrics.MetricsSystem, org.apache.spark.scheduler.OutputCommitCoordinator, SparkConf) - Constructor for class org.apache.spark.SparkEnv
- SparkEnvUtils - Interface in org.apache.spark.util
- sparkError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- SparkErrorUtils - Interface in org.apache.spark.util
- sparkEventFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- sparkEventFromJson(String) - Static method in class org.apache.spark.util.JsonProtocol
- sparkEventToJsonString(SparkListenerEvent) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON serialization methods for SparkListenerEvents.
- sparkEventToJsonString(SparkListenerEvent, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- SparkException - Exception in org.apache.spark
- SparkException(String) - Constructor for exception org.apache.spark.SparkException
- SparkException(String, Throwable) - Constructor for exception org.apache.spark.SparkException
- SparkException(String, Throwable, Option<String>, Map<String, String>, QueryContext[]) - Constructor for exception org.apache.spark.SparkException
- SparkException(String, Map<String, String>, Throwable) - Constructor for exception org.apache.spark.SparkException
- SparkException(String, Map<String, String>, Throwable, QueryContext[]) - Constructor for exception org.apache.spark.SparkException
- SparkException(String, Map<String, String>, Throwable, QueryContext[], String) - Constructor for exception org.apache.spark.SparkException
- SparkExecutorInfo - Interface in org.apache.spark
-
Exposes information about Spark Executors.
- SparkExecutorInfoImpl - Class in org.apache.spark
- SparkExecutorInfoImpl(String, int, long, int, long, long, long, long) - Constructor for class org.apache.spark.SparkExecutorInfoImpl
- SparkExitCode - Class in org.apache.spark.util
- SparkExitCode() - Constructor for class org.apache.spark.util.SparkExitCode
- SparkFiles - Class in org.apache.spark
-
Resolves paths to files added through SparkContext.addFile().
- SparkFiles() - Constructor for class org.apache.spark.SparkFiles
- SparkFileUtils - Interface in org.apache.spark.util
- SparkFilterApi - Class in org.apache.parquet.filter2.predicate
-
TODO (PARQUET-1809): This is a temporary workaround; it is intended to be moved to Parquet.
- SparkFilterApi() - Constructor for class org.apache.parquet.filter2.predicate.SparkFilterApi
- SparkFirehoseListener - Class in org.apache.spark
-
Class that allows users to receive all SparkListener events.
- SparkFirehoseListener() - Constructor for class org.apache.spark.SparkFirehoseListener
- SparkHadoopMapRedUtil - Class in org.apache.spark.mapred
- SparkHadoopMapRedUtil() - Constructor for class org.apache.spark.mapred.SparkHadoopMapRedUtil
- sparkJavaOpts(SparkConf, Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
-
Convert all Spark properties set in the given SparkConf to a sequence of Java options.
- sparkJobCancelled(int, String, Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
- sparkJobCancelledAsPartOfJobGroupError(int, String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- SparkJobInfo - Interface in org.apache.spark
-
Exposes information about Spark Jobs.
- SparkJobInfoImpl - Class in org.apache.spark
- SparkJobInfoImpl(int, int[], JobExecutionStatus) - Constructor for class org.apache.spark.SparkJobInfoImpl
- SparkLauncher - Class in org.apache.spark.launcher
-
Launcher for Spark applications.
- SparkLauncher() - Constructor for class org.apache.spark.launcher.SparkLauncher
- SparkLauncher(Map<String, String>) - Constructor for class org.apache.spark.launcher.SparkLauncher
-
Creates a launcher that will set the given environment variables in the child.
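A hedged sketch of launching an application as a child process; the jar path and main class are hypothetical:

    import org.apache.spark.launcher.SparkLauncher
    // startApplication() returns a SparkAppHandle for monitoring the child app.
    val handle = new SparkLauncher()
      .setAppResource("/path/to/app.jar")  // hypothetical jar
      .setMainClass("com.example.Main")    // hypothetical class
      .setMaster("local[*]")
      .startApplication()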
- SparkListener - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: A default implementation for SparkListenerInterface that has no-op implementations for all callbacks.
- SparkListener() - Constructor for class org.apache.spark.scheduler.SparkListener
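A minimal sketch of a custom listener; sc is an assumed SparkContext, and only the overridden callback does anything:

    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
    class TaskEndLogger extends SparkListener {
      // All other callbacks keep their no-op defaults.
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
        println(s"Task ended in stage ${taskEnd.stageId}")
    }
    sc.addSparkListener(new TaskEndLogger())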
- SparkListenerApplicationEnd - Class in org.apache.spark.scheduler
- SparkListenerApplicationEnd(long, Option<Object>) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationEnd
- SparkListenerApplicationStart - Class in org.apache.spark.scheduler
- SparkListenerApplicationStart(String, Option<String>, long, String, Option<String>, Option<Map<String, String>>, Option<Map<String, String>>) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationStart
- SparkListenerBlockManagerAdded - Class in org.apache.spark.scheduler
- SparkListenerBlockManagerAdded(long, BlockManagerId, long, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- SparkListenerBlockManagerRemoved - Class in org.apache.spark.scheduler
- SparkListenerBlockManagerRemoved(long, BlockManagerId) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
- SparkListenerBlockUpdated - Class in org.apache.spark.scheduler
- SparkListenerBlockUpdated(BlockUpdatedInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockUpdated
- SparkListenerBus - Interface in org.apache.spark.scheduler
-
A SparkListenerEvent bus that relays SparkListenerEvents to its listeners.
- SparkListenerEnvironmentUpdate - Class in org.apache.spark.scheduler
- SparkListenerEnvironmentUpdate(Map<String, Seq<Tuple2<String, String>>>) - Constructor for class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
- SparkListenerEvent - Interface in org.apache.spark.scheduler
- SparkListenerExecutorAdded - Class in org.apache.spark.scheduler
- SparkListenerExecutorAdded(long, String, ExecutorInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorAdded
- SparkListenerExecutorBlacklisted - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerExecutorExcluded instead. Since 3.1.0.
- SparkListenerExecutorBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- SparkListenerExecutorBlacklistedForStage - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerExecutorExcludedForStage instead. Since 3.1.0.
- SparkListenerExecutorBlacklistedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- SparkListenerExecutorExcluded - Class in org.apache.spark.scheduler
- SparkListenerExecutorExcluded(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- SparkListenerExecutorExcludedForStage - Class in org.apache.spark.scheduler
- SparkListenerExecutorExcludedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- SparkListenerExecutorMetricsUpdate - Class in org.apache.spark.scheduler
-
Periodic updates from executors.
- SparkListenerExecutorMetricsUpdate(String, Seq<Tuple4<Object, Object, Object, Seq<AccumulableInfo>>>, Map<Tuple2<Object, Object>, ExecutorMetrics>) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- SparkListenerExecutorRemoved - Class in org.apache.spark.scheduler
- SparkListenerExecutorRemoved(long, String, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- SparkListenerExecutorUnblacklisted - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerExecutorUnexcluded instead. Since 3.1.0.
- SparkListenerExecutorUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- SparkListenerExecutorUnexcluded - Class in org.apache.spark.scheduler
- SparkListenerExecutorUnexcluded(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
- SparkListenerInterface - Interface in org.apache.spark.scheduler
-
Interface for listening to events from the Spark scheduler.
- SparkListenerJobEnd - Class in org.apache.spark.scheduler
- SparkListenerJobEnd(int, long, JobResult) - Constructor for class org.apache.spark.scheduler.SparkListenerJobEnd
- SparkListenerJobStart - Class in org.apache.spark.scheduler
- SparkListenerJobStart(int, long, Seq<StageInfo>, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerJobStart
- SparkListenerLogStart - Class in org.apache.spark.scheduler
-
An internal class that describes the metadata of an event log.
- SparkListenerLogStart(String) - Constructor for class org.apache.spark.scheduler.SparkListenerLogStart
- SparkListenerMiscellaneousProcessAdded - Class in org.apache.spark.scheduler
- SparkListenerMiscellaneousProcessAdded(long, String, MiscellaneousProcessDetails) - Constructor for class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- SparkListenerNodeBlacklisted - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerNodeExcluded instead. Since 3.1.0.
- SparkListenerNodeBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- SparkListenerNodeBlacklistedForStage - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerNodeExcludedForStage instead. Since 3.1.0.
- SparkListenerNodeBlacklistedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- SparkListenerNodeExcluded - Class in org.apache.spark.scheduler
- SparkListenerNodeExcluded(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeExcluded
- SparkListenerNodeExcludedForStage - Class in org.apache.spark.scheduler
- SparkListenerNodeExcludedForStage(long, String, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- SparkListenerNodeUnblacklisted - Class in org.apache.spark.scheduler
-
Deprecated. Use SparkListenerNodeUnexcluded instead. Since 3.1.0.
- SparkListenerNodeUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- SparkListenerNodeUnexcluded - Class in org.apache.spark.scheduler
- SparkListenerNodeUnexcluded(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
- SparkListenerResourceProfileAdded - Class in org.apache.spark.scheduler
- SparkListenerResourceProfileAdded(ResourceProfile) - Constructor for class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
- SparkListenerSpeculativeTaskSubmitted - Class in org.apache.spark.scheduler
- SparkListenerSpeculativeTaskSubmitted(int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- SparkListenerSpeculativeTaskSubmitted(int, int, int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- SparkListenerStageCompleted - Class in org.apache.spark.scheduler
- SparkListenerStageCompleted(StageInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerStageCompleted
- SparkListenerStageExecutorMetrics - Class in org.apache.spark.scheduler
-
Peak metric values for the executor for the stage, written to the history log at stage completion.
- SparkListenerStageExecutorMetrics(String, int, int, ExecutorMetrics) - Constructor for class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- SparkListenerStageSubmitted - Class in org.apache.spark.scheduler
- SparkListenerStageSubmitted(StageInfo, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerStageSubmitted
- SparkListenerTaskEnd - Class in org.apache.spark.scheduler
- SparkListenerTaskEnd(int, int, String, TaskEndReason, TaskInfo, ExecutorMetrics, TaskMetrics) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskEnd
- SparkListenerTaskGettingResult - Class in org.apache.spark.scheduler
- SparkListenerTaskGettingResult(TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskGettingResult
- SparkListenerTaskStart - Class in org.apache.spark.scheduler
- SparkListenerTaskStart(int, int, TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskStart
- SparkListenerUnpersistRDD - Class in org.apache.spark.scheduler
- SparkListenerUnpersistRDD(int) - Constructor for class org.apache.spark.scheduler.SparkListenerUnpersistRDD
- SparkListenerUnschedulableTaskSetAdded - Class in org.apache.spark.scheduler
- SparkListenerUnschedulableTaskSetAdded(int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
- SparkListenerUnschedulableTaskSetRemoved - Class in org.apache.spark.scheduler
- SparkListenerUnschedulableTaskSetRemoved(int, int) - Constructor for class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
- SparkMasterRegex - Class in org.apache.spark
-
A collection of regexes for extracting information from the master string.
- SparkMasterRegex() - Constructor for class org.apache.spark.SparkMasterRegex
- SparkPath - Class in org.apache.spark.paths
-
A canonical representation of a file path.
- SparkPath() - Constructor for class org.apache.spark.paths.SparkPath
- SparkPlugin - Interface in org.apache.spark.api.plugin
-
:: DeveloperApi :: A plugin that can be dynamically loaded into a Spark application.
- sparkProperties() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
- sparkProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
- sparkRPackagePath(boolean) - Static method in class org.apache.spark.api.r.RUtils
-
Get the list of paths for R packages in various deployment modes, of which the first path is for the SparkR package itself.
- SparkSchemaUtils - Class in org.apache.spark.util
-
Utils for handling schemas.
- SparkSchemaUtils() - Constructor for class org.apache.spark.util.SparkSchemaUtils
- SparkSerDeUtils - Interface in org.apache.spark.util
- sparkSession() - Method in interface org.apache.spark.ml.util.BaseReadWrite
-
Returns the user-specified Spark Session or the default.
- sparkSession() - Method in class org.apache.spark.sql.api.Dataset
- sparkSession() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the SparkSession associated with this.
- sparkSession() - Method in class org.apache.spark.sql.Dataset
- sparkSession() - Method in class org.apache.spark.sql.SQLContext
- sparkSession() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
- SparkSession - Class in org.apache.spark.sql.api
-
The entry point to programming Spark with the Dataset and DataFrame API.
- SparkSession - Class in org.apache.spark.sql
-
The entry point to programming Spark with the Dataset and DataFrame API.
- SparkSession() - Constructor for class org.apache.spark.sql.api.SparkSession
- SparkSession.Builder - Class in org.apache.spark.sql
-
Builder for SparkSession.
- SparkSession.Converter$ - Class in org.apache.spark.sql
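A minimal sketch of the builder; app name and master are placeholders:

    import org.apache.spark.sql.SparkSession
    // getOrCreate() reuses an existing session if one is already active.
    val spark = SparkSession.builder()
      .appName("demo")
      .master("local[*]")
      .getOrCreate()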
- SparkSession.implicits$ - Class in org.apache.spark.sql
-
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
- SparkSessionExtensions - Class in org.apache.spark.sql
-
:: Experimental :: Holder for injection points to the SparkSession.
- SparkSessionExtensions() - Constructor for class org.apache.spark.sql.SparkSessionExtensions
- SparkSessionExtensionsProvider - Interface in org.apache.spark.sql
-
:: Unstable ::
- SparkShutdownHook - Class in org.apache.spark.util
- SparkShutdownHook(int, Function0<BoxedUnit>) - Constructor for class org.apache.spark.util.SparkShutdownHook
- SparkStageInfo - Interface in org.apache.spark
-
Exposes information about Spark Stages.
- SparkStageInfoImpl - Class in org.apache.spark
- SparkStageInfoImpl(int, int, long, String, int, int, int, int) - Constructor for class org.apache.spark.SparkStageInfoImpl
- SparkStatusTracker - Class in org.apache.spark
-
Low-level status reporting APIs for monitoring job and stage progress.
- SparkStreamUtils - Interface in org.apache.spark.util
- SparkTestUtils - Interface in org.apache.spark.util
- SparkTestUtils.JavaSourceFromString - Class in org.apache.spark.util
- SparkThreadUtils - Class in org.apache.spark.util
- SparkThreadUtils() - Constructor for class org.apache.spark.util.SparkThreadUtils
- SparkThrowable - Interface in org.apache.spark
-
Interface mixed into Throwables thrown from Spark.
- SparkThrowableHelper - Class in org.apache.spark
-
Companion object used by instances of SparkThrowable to access error class information and construct error messages.
- SparkThrowableHelper() - Constructor for class org.apache.spark.SparkThrowableHelper
- sparkUpgradeInReadingDatesError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- sparkUpgradeInWritingDatesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- sparkUser() - Method in class org.apache.spark.api.java.JavaSparkContext
- sparkUser() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- sparkUser() - Method in class org.apache.spark.SparkContext
- sparkUser() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- sparkVersion() - Method in class org.apache.spark.scheduler.SparkListenerLogStart
- sparse(int, int[], double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector providing its index array and value array.
- sparse(int, int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector providing its index array and value array.
- sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
- sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
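As a worked sketch of the CSC layout these factories expect (the concrete matrix is illustrative): a 3 x 2 matrix with nonzeros (0,0)=1.0, (2,0)=2.0 and (1,1)=3.0 stores its values column by column, with colPtrs marking where each column begins in the value array.

    import org.apache.spark.ml.linalg.Matrices

    // colPtrs    = [0, 2, 3] -> column 0 owns values(0..1), column 1 owns values(2)
    // rowIndices = [0, 2, 1] -> row of each stored value
    // values     = [1.0, 2.0, 3.0]
    val m = Matrices.sparse(3, 2, Array(0, 2, 3), Array(0, 2, 1), Array(1.0, 2.0, 3.0))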
- sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
- sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
- sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs.
- sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs.
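A minimal Scala sketch of both sparse-vector factories (the values are illustrative):

    import org.apache.spark.ml.linalg.Vectors

    // Size 4, nonzeros at indices 0 and 2.
    val v1 = Vectors.sparse(4, Array(0, 2), Array(1.0, 3.0))
    // The same vector built from unordered (index, value) pairs.
    val v2 = Vectors.sparse(4, Seq((2, 3.0), (0, 1.0)))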
- SparseMatrix - Class in org.apache.spark.ml.linalg
-
Column-major sparse matrix.
- SparseMatrix - Class in org.apache.spark.mllib.linalg
-
Column-major sparse matrix.
- SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
-
Column-major sparse matrix.
- SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
-
Column-major sparse matrix.
- SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
- SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
- SparseVector - Class in org.apache.spark.ml.linalg
-
A sparse vector represented by an index array and a value array.
- SparseVector - Class in org.apache.spark.mllib.linalg
-
A sparse vector represented by an index array and a value array.
- SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseVector
- SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseVector
- sparsity() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- sparsity() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns the ratio of the number of zeros to the total number of values.
- SPARSITY() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- spdiag(Vector) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a diagonal matrix in SparseMatrix format from the supplied values.
- spdiag(Vector) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a diagonal matrix in SparseMatrix format from the supplied values.
- SpearmanCorrelation - Class in org.apache.spark.mllib.stat.correlation
-
Compute Spearman's correlation for two RDDs of the type RDD[Double] or the correlation matrix for an RDD of the type RDD[Vector].
- SpearmanCorrelation() - Constructor for class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
- SpecialLengths - Class in org.apache.spark.api.r
- SpecialLengths() - Constructor for class org.apache.spark.api.r.SpecialLengths
- specifyingDBInCreateTempFuncError(String, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- specifyPartitionNotAllowedWhenTableSchemaNotDefinedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- SPECULATION_SUMMARY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- speculationStageSummary() - Method in class org.apache.spark.status.LiveStage
- SpeculationStageSummary - Class in org.apache.spark.status.api.v1
- speculationSummary() - Method in class org.apache.spark.status.api.v1.StageData
- speculative() - Method in class org.apache.spark.scheduler.TaskInfo
- speculative() - Method in class org.apache.spark.status.api.v1.TaskData
- SPECULATIVE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- SPECULATIVE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- speye(int) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a sparse Identity Matrix in Matrix format.
- speye(int) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate an Identity Matrix in SparseMatrix format.
- speye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a sparse Identity Matrix in Matrix format.
- speye(int) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate an Identity Matrix in SparseMatrix format.
- SpillListener - Class in org.apache.spark
-
A SparkListener that detects whether spills have occurred in Spark jobs.
- SpillListener() - Constructor for class org.apache.spark.SpillListener
- split() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
- split() - Method in class org.apache.spark.ml.tree.InternalNode
- split() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- split() - Method in class org.apache.spark.mllib.tree.model.Node
- split(Column, String) - Static method in class org.apache.spark.sql.functions
-
Splits str around matches of the given pattern.
- split(Column, String, int) - Static method in class org.apache.spark.sql.functions
-
Splits str around matches of the given pattern.
- split(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Splits str around matches of the given pattern.
- split(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Splits str around matches of the given pattern.
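For illustration, a hedged Scala sketch of functions.split (the DataFrame df and its csv column are hypothetical):

    import org.apache.spark.sql.functions.{col, split}

    // Split a comma-separated string column into an array column.
    val parts = df.select(split(col("csv"), ",").as("fields"))
    // The optional limit caps the number of resulting parts.
    val twoParts = df.select(split(col("csv"), ",", 2).as("fields"))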
- Split - Class in org.apache.spark.mllib.tree.model
-
Split applied to a feature. param: feature Feature index. param: threshold Threshold for continuous feature.
- Split - Interface in org.apache.spark.ml.tree
-
Interface for a "Split," which specifies a test made at a decision tree node to choose the left or right path.
- Split(int, double, Enumeration.Value, List<Object>) - Constructor for class org.apache.spark.mllib.tree.model.Split
- split_part(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Splits str by delimiter and returns the requested part of the split (1-based).
- splitAndCountPartitions(Iterator<String>) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
Splits lines and counts the words.
- splitCommandString(String) - Static method in class org.apache.spark.util.Utils
-
Split a string of potentially quoted arguments from the command line the way that a shell would do it to determine arguments to a command.
- SplitData(int, double[], int) - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
- SplitData(int, double, int, Seq<Object>) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- SplitData$() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
- SplitData$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
- splitIndex() - Method in class org.apache.spark.storage.RDDBlockId
- SplitInfo - Class in org.apache.spark.scheduler
- SplitInfo(Class<?>, String, String, long, Object) - Constructor for class org.apache.spark.scheduler.SplitInfo
- splits() - Method in class org.apache.spark.ml.feature.Bucketizer
-
Parameter for mapping continuous features into buckets.
- splitsArray() - Method in class org.apache.spark.ml.feature.Bucketizer
-
Parameter for specifying multiple splits parameters.
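A minimal sketch of configuring these params on Bucketizer (the column names and split points are illustrative):

    import org.apache.spark.ml.feature.Bucketizer

    // Map a continuous column into three buckets: (-inf, 0), [0, 10), [10, +inf).
    val bucketizer = new Bucketizer()
      .setInputCol("raw")
      .setOutputCol("bucket")
      .setSplits(Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity))
    // val bucketed = bucketizer.transform(df)  // df is a hypothetical DataFrame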
- spr(double, Vector, double[]) - Static method in class org.apache.spark.ml.linalg.BLAS
-
Adds alpha * v * v.t to a matrix in-place.
- spr(double, Vector, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
Adds alpha * x * x.t to a matrix in-place.
- spr(double, Vector, double[]) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
Adds alpha * v * v.t to a matrix in-place.
- spr(double, Vector, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
Adds alpha * v * v.t to a matrix in-place.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sqdist(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Returns the squared distance between two Vectors.
- sqdist(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Returns the squared distance between two Vectors.
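For example, a small sketch (the vectors are illustrative):

    import org.apache.spark.ml.linalg.Vectors

    val a = Vectors.dense(1.0, 0.0, 2.0)
    val b = Vectors.sparse(3, Array(0), Array(3.0))
    // (1-3)^2 + 0^2 + 2^2 = 8.0
    val d2 = Vectors.sqdist(a, b)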
- sql() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- sql() - Method in class org.apache.spark.sql.types.ArrayType
- sql() - Static method in class org.apache.spark.sql.types.BinaryType
- sql() - Static method in class org.apache.spark.sql.types.BooleanType
- sql() - Static method in class org.apache.spark.sql.types.ByteType
- sql() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
- sql() - Method in class org.apache.spark.sql.types.DataType
- sql() - Static method in class org.apache.spark.sql.types.DateType
- sql() - Method in class org.apache.spark.sql.types.DecimalType
- sql() - Static method in class org.apache.spark.sql.types.DoubleType
- sql() - Static method in class org.apache.spark.sql.types.FloatType
- sql() - Static method in class org.apache.spark.sql.types.IntegerType
- sql() - Static method in class org.apache.spark.sql.types.LongType
- sql() - Method in class org.apache.spark.sql.types.MapType
- sql() - Static method in class org.apache.spark.sql.types.NullType
- sql() - Static method in class org.apache.spark.sql.types.ShortType
- sql() - Static method in class org.apache.spark.sql.types.StringType
- sql() - Method in class org.apache.spark.sql.types.StructType
- sql() - Static method in class org.apache.spark.sql.types.TimestampNTZType
- sql() - Static method in class org.apache.spark.sql.types.TimestampType
- sql() - Method in class org.apache.spark.sql.types.UserDefinedType
- sql() - Static method in class org.apache.spark.sql.types.VariantType
- sql(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Executes a SQL query using Spark, returning the result as a DataFrame.
- sql(String) - Method in class org.apache.spark.sql.SparkSession
- sql(String) - Method in class org.apache.spark.sql.SQLContext
- sql(String, Object) - Method in class org.apache.spark.sql.api.SparkSession
-
Executes a SQL query substituting positional parameters by the given arguments, returning the result as a DataFrame.
- sql(String, Object) - Method in class org.apache.spark.sql.SparkSession
- sql(String, Map<String, Object>) - Method in class org.apache.spark.sql.api.SparkSession
-
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame.
- sql(String, Map<String, Object>) - Method in class org.apache.spark.sql.SparkSession
- sql(String, Map<String, Object>) - Method in class org.apache.spark.sql.api.SparkSession
-
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame.
- sql(String, Map<String, Object>) - Method in class org.apache.spark.sql.SparkSession
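A hedged Scala sketch of the parameterized variants (spark is a session as above; the table and parameter names are hypothetical):

    // Positional parameters bind in order to ? markers.
    val byPos = spark.sql("SELECT * FROM events WHERE year = ?", Array(2024))
    // Named parameters bind by key to :name markers.
    val byName = spark.sql(
      "SELECT * FROM events WHERE year = :y LIMIT :n",
      Map("y" -> 2024, "n" -> 10))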
- SQL - Enum constant in enum class org.apache.spark.QueryContextType
- SQL_EXECUTION_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- sqlConfigNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- sqlContext() - Method in interface org.apache.spark.ml.util.BaseReadWrite
-
Returns the user-specified SQL context or the default.
- sqlContext() - Method in class org.apache.spark.sql.Dataset
- sqlContext() - Method in class org.apache.spark.sql.sources.BaseRelation
- sqlContext() - Method in class org.apache.spark.sql.SparkSession
-
A wrapped version of this session in the form of a SQLContext, for backward compatibility.
- SQLContext - Class in org.apache.spark.sql
-
The entry point for working with structured data (rows and columns) in Spark 1.x.
- SQLContext(JavaSparkContext) - Constructor for class org.apache.spark.sql.SQLContext
-
Deprecated. Use SparkSession.builder instead. Since 2.0.0.
- SQLContext(SparkContext) - Constructor for class org.apache.spark.sql.SQLContext
-
Deprecated. Use SparkSession.builder instead. Since 2.0.0.
- SQLContext.implicits$ - Class in org.apache.spark.sql
-
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
- SQLDataTypes - Class in org.apache.spark.ml.linalg
-
SQL data types for vectors and matrices.
- SQLDataTypes() - Constructor for class org.apache.spark.ml.linalg.SQLDataTypes
- SQLImplicits - Class in org.apache.spark.sql
-
A collection of implicit methods for converting common Scala objects into Datasets.
- SQLImplicits() - Constructor for class org.apache.spark.sql.SQLImplicits
- SQLImplicits.StringToColumn - Class in org.apache.spark.sql
-
Converts $"col name" into a Column.
- SQLOpenHashSet<T> - Class in org.apache.spark.sql.util
- SQLOpenHashSet(int, double, ClassTag<T>) - Constructor for class org.apache.spark.sql.util.SQLOpenHashSet
- SQLOpenHashSet(int, ClassTag<T>) - Constructor for class org.apache.spark.sql.util.SQLOpenHashSet
- SQLOpenHashSet(ClassTag<T>) - Constructor for class org.apache.spark.sql.util.SQLOpenHashSet
- SQLPlanMetricSerializer - Class in org.apache.spark.status.protobuf.sql
- SQLPlanMetricSerializer() - Constructor for class org.apache.spark.status.protobuf.sql.SQLPlanMetricSerializer
- SqlScriptingErrors - Class in org.apache.spark.sql.errors
-
Object for grouping error messages thrown during parsing/interpreting phase of the SQL Scripting Language interpreter.
- SqlScriptingErrors() - Constructor for class org.apache.spark.sql.errors.SqlScriptingErrors
- SqlScriptingException - Exception in org.apache.spark.sql.exceptions
- SqlScriptingException(String, Throwable, Origin, Map<String, String>) - Constructor for exception org.apache.spark.sql.exceptions.SqlScriptingException
- SqlScriptingInterpreter - Class in org.apache.spark.sql.scripting
-
SQL scripting interpreter - builds SQL script execution plan.
- SqlScriptingInterpreter() - Constructor for class org.apache.spark.sql.scripting.SqlScriptingInterpreter
- sqlState() - Method in class org.apache.spark.ErrorInfo
- sqlStatementUnsupportedError(String, Origin) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- SQLTransformer - Class in org.apache.spark.ml.feature
-
Implements the transformations which are defined by SQL statement.
- SQLTransformer() - Constructor for class org.apache.spark.ml.feature.SQLTransformer
- SQLTransformer(String) - Constructor for class org.apache.spark.ml.feature.SQLTransformer
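A minimal sketch of the statement-based transformer (the statement is illustrative; __THIS__ stands for the input dataset):

    import org.apache.spark.ml.feature.SQLTransformer

    val sqlTrans = new SQLTransformer()
      .setStatement("SELECT *, (v1 + v2) AS v3 FROM __THIS__")
    // val out = sqlTrans.transform(df)  // df is a hypothetical DataFrame with v1, v2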
- sqlType() - Method in class org.apache.spark.mllib.linalg.VectorUDT
- sqlType() - Method in class org.apache.spark.sql.types.UserDefinedType
-
Underlying storage type for this UDT.
- SQLUserDefinedType - Annotation Interface in org.apache.spark.sql.types
-
::DeveloperApi:: A user-defined type which can be automatically recognized by a SQLContext and registered.
- SQLUtils - Class in org.apache.spark.sql.api.r
- SQLUtils() - Constructor for class org.apache.spark.sql.api.r.SQLUtils
- sqrt(String) - Static method in class org.apache.spark.sql.functions
-
Computes the square root of the specified float value.
- sqrt(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the square root of the specified float value.
- Sqrt$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- SquaredError - Class in org.apache.spark.mllib.tree.loss
-
Class for squared error loss calculation.
- SquaredError() - Constructor for class org.apache.spark.mllib.tree.loss.SquaredError
- SquaredEuclideanSilhouette - Class in org.apache.spark.ml.evaluation
-
SquaredEuclideanSilhouette computes the average of the Silhouette over all the data of the dataset, which is a measure of how appropriately the data have been clustered.
- SquaredEuclideanSilhouette() - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
- SquaredEuclideanSilhouette.ClusterStats - Class in org.apache.spark.ml.evaluation
- SquaredEuclideanSilhouette.ClusterStats$ - Class in org.apache.spark.ml.evaluation
- SquaredL2Updater - Class in org.apache.spark.mllib.optimization
-
Updater for L2 regularized problems.
- SquaredL2Updater() - Constructor for class org.apache.spark.mllib.optimization.SquaredL2Updater
- squaredNormSum() - Method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
- Src - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose the source and edge fields but not the destination field.
- srcAttr() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex attribute of the edge's source vertex.
- srcAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
-
The source vertex attribute.
- srcAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- srcCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- srcCol() - Method in interface org.apache.spark.ml.clustering.PowerIterationClusteringParams
-
Param for the name of the input column for source vertex IDs.
- srcId() - Method in class org.apache.spark.graphx.Edge
- srcId() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex id of the edge's source vertex.
- srcId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
- SrcOnly - Enum constant in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
The source vertex must be active.
- srdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD
- ssc() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
- stack(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Separates col1, ..., colk into n rows.
- stackTrace() - Method in class org.apache.spark.ExceptionFailure
- stackTrace() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- StackTrace - Class in org.apache.spark.status.api.v1
- StackTrace(Seq<String>) - Constructor for class org.apache.spark.status.api.v1.StackTrace
- stackTraceFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- stackTraceToJson(StackTraceElement[], JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- stackTraceToString(Throwable) - Method in interface org.apache.spark.util.SparkErrorUtils
- stackTraceToString(Throwable) - Static method in class org.apache.spark.util.Utils
- stage() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- STAGE() - Static method in class org.apache.spark.status.TaskIndexNames
- STAGE_ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- STAGE_ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- STAGE_ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- STAGE_ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- STAGE_DAG() - Static method in class org.apache.spark.ui.ToolTips
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- STAGE_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- STAGE_IDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- STAGE_IDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- STAGE_STATUS_ACTIVE - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_ACTIVE = 1;
- STAGE_STATUS_ACTIVE_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_ACTIVE = 1;
- STAGE_STATUS_COMPLETE - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_COMPLETE = 2;
- STAGE_STATUS_COMPLETE_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_COMPLETE = 2;
- STAGE_STATUS_FAILED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_FAILED = 3;
- STAGE_STATUS_FAILED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_FAILED = 3;
- STAGE_STATUS_PENDING - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_PENDING = 4;
- STAGE_STATUS_PENDING_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_PENDING = 4;
- STAGE_STATUS_SKIPPED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_SKIPPED = 5;
- STAGE_STATUS_SKIPPED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_SKIPPED = 5;
- STAGE_STATUS_UNSPECIFIED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_UNSPECIFIED = 0;
- STAGE_STATUS_UNSPECIFIED_VALUE - Static variable in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
STAGE_STATUS_UNSPECIFIED = 0;
- STAGE_TIMELINE() - Static method in class org.apache.spark.ui.ToolTips
- stageAttempt() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- stageAttemptId() - Method in class org.apache.spark.ContextBarrierId
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
- stageAttemptNumber() - Method in class org.apache.spark.BarrierTaskContext
- stageAttemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times the stage that this task belongs to has been attempted.
- stageCompletedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- stageCompletedToJson(SparkListenerStageCompleted, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- stageCreate(Identifier, Column[], Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Stage the creation of a table, preparing it to be committed into the metastore.
- stageCreate(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Deprecated. Please override StagingTableCatalog.stageCreate(Identifier, Column[], Transform[], Map) instead.
- stageCreateOrReplace(Identifier, Column[], Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Stage the creation or replacement of a table, preparing it to be committed into the metastore when the returned table's StagedTable.commitStagedChanges() is called.
- stageCreateOrReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Deprecated.
- StageData - Class in org.apache.spark.status.api.v1
- StagedTable - Interface in org.apache.spark.sql.connector.catalog
-
Represents a table which is staged for being committed to the metastore.
- stageExecutorMetricsFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- stageExecutorMetricsToJson(SparkListenerStageExecutorMetrics, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- stageFailed(String) - Method in class org.apache.spark.scheduler.StageInfo
- stageId() - Method in class org.apache.spark.BarrierTaskContext
- stageId() - Method in class org.apache.spark.ContextBarrierId
- stageId() - Method in interface org.apache.spark.scheduler.Schedulable
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
- stageId() - Method in class org.apache.spark.scheduler.StageInfo
- stageId() - Method in interface org.apache.spark.SparkStageInfo
- stageId() - Method in class org.apache.spark.SparkStageInfoImpl
- stageId() - Method in class org.apache.spark.status.api.v1.StageData
- stageId() - Method in class org.apache.spark.TaskContext
-
The ID of the stage that this task belongs to.
- stageIds() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
- stageIds() - Method in interface org.apache.spark.SparkJobInfo
- stageIds() - Method in class org.apache.spark.SparkJobInfoImpl
- stageIds() - Method in class org.apache.spark.status.api.v1.JobData
- stageIds() - Method in class org.apache.spark.status.LiveJob
- stageIds() - Method in class org.apache.spark.status.SchedulerPool
- stageInfo() - Method in class org.apache.spark.scheduler.SparkListenerStageCompleted
- stageInfo() - Method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
- StageInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Stores information about a stage to pass from the scheduler to SparkListeners.
- StageInfo(int, int, String, int, Seq<RDDInfo>, Seq<Object>, String, TaskMetrics, Seq<Seq<TaskLocation>>, Option<Object>, int, boolean, int) - Constructor for class org.apache.spark.scheduler.StageInfo
- stageInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON deserialization methods for classes that SparkListenerEvents depend on.
- stageInfos() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
- stageInfoToJson(StageInfo, JsonGenerator, JsonProtocolOptions, boolean) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON serialization methods for classes that SparkListenerEvents depend on.
- stageName() - Method in class org.apache.spark.ml.clustering.InternalKMeansModelWriter
- stageName() - Method in class org.apache.spark.ml.clustering.PMMLKMeansModelWriter
- stageName() - Method in class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
- stageName() - Method in class org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
- stageName() - Method in interface org.apache.spark.ml.util.MLFormatRegister
-
The string that represents the stage type that this writer supports.
- stageReplace(Identifier, Column[], Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Stage the replacement of a table, preparing it to be committed into the metastore when the returned table's StagedTable.commitStagedChanges() is called.
- stageReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in interface org.apache.spark.sql.connector.catalog.StagingTableCatalog
-
Deprecated.
- stages() - Method in class org.apache.spark.ml.Pipeline
-
Param for pipeline stages.
- stages() - Method in class org.apache.spark.ml.PipelineModel
- STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- StageStatus - Enum Class in org.apache.spark.status.api.v1
- StageStatusSerializer - Class in org.apache.spark.status.protobuf
- StageStatusSerializer() - Constructor for class org.apache.spark.status.protobuf.StageStatusSerializer
- stageSubmittedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- stageSubmittedToJson(SparkListenerStageSubmitted, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- StagingTableCatalog - Interface in org.apache.spark.sql.connector.catalog
-
An optional mix-in for implementations of TableCatalog that support staging the creation of a table before committing the table's metadata along with its contents in CREATE TABLE AS SELECT or REPLACE TABLE AS SELECT operations.
- standard() - Method in class org.apache.spark.ErrorStateInfo
- STANDARD() - Static method in class org.apache.spark.ErrorMessageFormat
- standardization() - Method in class org.apache.spark.ml.classification.LinearSVC
- standardization() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- standardization() - Method in class org.apache.spark.ml.classification.LogisticRegression
- standardization() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- standardization() - Method in interface org.apache.spark.ml.param.shared.HasStandardization
-
Param for whether to standardize the training features before fitting the model.
- standardization() - Method in class org.apache.spark.ml.regression.LinearRegression
- standardization() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- StandardNormalGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d. samples from the standard normal distribution.
- StandardNormalGenerator() - Constructor for class org.apache.spark.mllib.random.StandardNormalGenerator
- StandardScaler - Class in org.apache.spark.ml.feature
-
Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.
- StandardScaler - Class in org.apache.spark.mllib.feature
-
Standardizes features by removing the mean and scaling to unit std using column summary statistics on the samples in the training set.
- StandardScaler() - Constructor for class org.apache.spark.ml.feature.StandardScaler
- StandardScaler() - Constructor for class org.apache.spark.mllib.feature.StandardScaler
- StandardScaler(boolean, boolean) - Constructor for class org.apache.spark.mllib.feature.StandardScaler
- StandardScaler(String) - Constructor for class org.apache.spark.ml.feature.StandardScaler
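A hedged sketch of the ml StandardScaler used as an Estimator (the column names are hypothetical):

    import org.apache.spark.ml.feature.StandardScaler

    val scaler = new StandardScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
      .setWithMean(false) // the default; centering would densify sparse vectors
      .setWithStd(true)
    // val model  = scaler.fit(df)        // yields a StandardScalerModel
    // val scaled = model.transform(df)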
- StandardScalerModel - Class in org.apache.spark.ml.feature
-
Model fitted by StandardScaler.
- StandardScalerModel - Class in org.apache.spark.mllib.feature
-
Represents a StandardScaler model that can transform vectors.
- StandardScalerModel(Vector) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
- StandardScalerModel(Vector, Vector) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
- StandardScalerModel(Vector, Vector, boolean, boolean) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
- StandardScalerParams - Interface in org.apache.spark.ml.feature
-
Params for StandardScaler and StandardScalerModel.
- starExpandDataTypeNotSupportedError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- starGraph(SparkContext, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Create a star graph with vertex 0 being the center.
- starNotAllowedWhenGroupByOrdinalPositionUsedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- start() - Method in interface org.apache.spark.metrics.sink.Sink
- start() - Method in interface org.apache.spark.scheduler.SchedulerBackend
- start() - Method in interface org.apache.spark.scheduler.TaskScheduler
- start() - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives.
- start() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Start the execution of the streams.
- start() - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
- start() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
Method called to start receiving data.
- start() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
- start() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Start the execution of the streams.
- start(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives.
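For illustration, a hedged sketch of starting a streaming query (df is a hypothetical streaming DataFrame; the paths are illustrative):

    import org.apache.spark.sql.streaming.StreamingQuery

    val query: StreamingQuery = df.writeStream
      .format("parquet")
      .option("checkpointLocation", "/tmp/chk") // illustrative path
      .start("/tmp/out")                        // equivalent to .option("path", "/tmp/out").start()
    // query.awaitTermination()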
- START_OFFSET_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- START_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- START_TIMESTAMP_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- startApplication(SparkAppHandle.Listener...) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Starts a Spark application.
- startApplication(SparkAppHandle.Listener...) - Method in class org.apache.spark.launcher.InProcessLauncher
-
Starts a Spark application.
- startApplication(SparkAppHandle.Listener...) - Method in class org.apache.spark.launcher.SparkLauncher
-
Starts a Spark application.
- startField - Variable in class org.apache.spark.types.variant.VariantUtil.IntervalFields
- startField() - Method in class org.apache.spark.sql.types.DayTimeIntervalType
- startField() - Method in class org.apache.spark.sql.types.YearMonthIntervalType
- startIndex() - Method in interface org.apache.spark.QueryContext
- startIndexInLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the first node in the given level.
- startJettyServer(String, int, SSLOptions, SparkConf, String, int) - Static method in class org.apache.spark.ui.JettyUtils
-
Attempt to start a Jetty server bound to the supplied hostName:port using the given context handlers.
- startOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress
- startOffset() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- startPosition() - Method in exception org.apache.spark.sql.AnalysisException
- startReduceId() - Method in class org.apache.spark.storage.ShuffleBlockBatchId
- startServiceOnPort(int, Function1<Object, Tuple2<T, Object>>, int, String) - Static method in class org.apache.spark.util.Utils
-
Attempt to start a service on the given port, or fail after a number of attempts.
- startServiceOnPort(int, Function1<Object, Tuple2<T, Object>>, SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
Attempt to start a service on the given port, or fail after a number of attempts.
- startswith(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a boolean that is true if str starts with prefix.
- startsWith(String) - Method in class org.apache.spark.sql.Column
-
String starts with another string literal.
- startsWith(Column) - Method in class org.apache.spark.sql.Column
-
String starts with.
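For example, a small sketch (the DataFrame and column names are hypothetical):

    import org.apache.spark.sql.functions.col

    // Keep rows whose name column starts with the literal "Sp".
    val spNames = df.filter(col("name").startsWith("Sp"))
    // A Column-valued prefix works too.
    val samePrefix = df.filter(col("name").startsWith(col("prefix")))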
- startTime() - Method in class org.apache.spark.api.java.JavaSparkContext
- startTime() - Method in class org.apache.spark.SparkContext
- startTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- startTime() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
- startTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
- startTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- stat() - Method in class org.apache.spark.sql.api.Dataset
-
Returns a DataFrameStatFunctions for working with statistic functions.
- stat() - Method in class org.apache.spark.sql.Dataset
- StatCounter - Class in org.apache.spark.util
-
A class for tracking the statistics of a set of numbers (count, mean and variance) in a numerically robust way.
- StatCounter() - Constructor for class org.apache.spark.util.StatCounter
-
Initialize the StatCounter with no values.
- StatCounter(IterableOnce<Object>) - Constructor for class org.apache.spark.util.StatCounter
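A minimal sketch of obtaining a StatCounter from a numeric RDD (sc is an assumed SparkContext; the data is illustrative):

    val nums = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))
    // One pass over the data yields count, mean, variance, stdev, min and max.
    val stats = nums.stats()
    println(s"mean=${stats.mean}, stdev=${stats.stdev}, count=${stats.count}")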
- state() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- state() - Method in class org.apache.spark.scheduler.local.StatusUpdate
- State<S> - Class in org.apache.spark.streaming
-
:: Experimental :: Abstract class for getting and updating the state in the mapping function used in the mapWithState operation of a pair DStream (Scala) or a JavaPairDStream (Java).
- State() - Constructor for class org.apache.spark.streaming.State
- STATE_OPERATORS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- stateChanged(SparkAppHandle) - Method in interface org.apache.spark.launcher.SparkAppHandle.Listener
-
Callback for changes in the handle's state.
- statefulOperatorNotMatchInStateMetadataError(Map<Object, String>, Map<Object, String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- StatefulProcessorHandle - Interface in org.apache.spark.sql.streaming
-
Represents the operation handle provided to the stateful processor used in the arbitrary state API v2.
- statement() - Method in class org.apache.spark.ml.feature.SQLTransformer
-
SQL statement parameter.
- stateNotDefinedOrAlreadyRemovedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- StateOperatorProgress - Class in org.apache.spark.sql.streaming
-
Information about updates made to stateful operators in a StreamingQuery during a trigger.
- StateOperatorProgressSerializer - Class in org.apache.spark.status.protobuf.sql
- StateOperatorProgressSerializer() - Constructor for class org.apache.spark.status.protobuf.sql.StateOperatorProgressSerializer
- stateOperators() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- stateSnapshots() - Method in class org.apache.spark.streaming.api.java.JavaMapWithStateDStream
- stateSnapshots() - Method in class org.apache.spark.streaming.dstream.MapWithStateDStream
-
Return a pair DStream where each RDD is the snapshot of the state of all the keys.
- StateSpec<KeyType, ValueType, StateType, MappedType> - Class in org.apache.spark.streaming
-
:: Experimental :: Abstract class representing all the specifications of the DStream transformation mapWithState operation of a pair DStream (Scala) or a JavaPairDStream (Java).
- StateSpec() - Constructor for class org.apache.spark.streaming.StateSpec
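A hedged Scala sketch of wiring a mapping function through StateSpec for mapWithState (the key/value types and running-sum logic are illustrative; pairDStream is an assumed DStream[(String, Int)]):

    import org.apache.spark.streaming.{State, StateSpec}

    // Maintain a running sum per key; emit (key, newSum) each batch.
    def mappingFunc(key: String, value: Option[Int], state: State[Int]): (String, Int) = {
      val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
      state.update(sum)
      (key, sum)
    }

    val spec = StateSpec.function(mappingFunc _).numPartitions(10)
    // val stateDStream = pairDStream.mapWithState(spec)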
- stateStoreHandleNotInitialized() - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- stateStoreHandleNotInitialized() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- staticPageRank(int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
- staticPageRank(int, double, Graph<Object, Object>) - Method in class org.apache.spark.graphx.GraphOps
-
Run PageRank for a fixed number of iterations, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight, optionally including a previous PageRank computation to be used as a starting point for the new iterations.
- staticParallelPersonalizedPageRank(long[], int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run parallel personalized PageRank for a given array of source vertices, such that all random walks are started relative to the source vertices.
- staticPartitionInUserSpecifiedColumnsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- staticPersonalizedPageRank(long, int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run Personalized PageRank for a fixed number of iterations, with all iterations originating at the source node, returning a graph with vertex attributes containing the PageRank and edge attributes containing the normalized edge weight.
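A hedged GraphX sketch combining the star-graph generator above with a fixed-iteration PageRank (sc is an assumed SparkContext; the sizes are illustrative):

    import org.apache.spark.graphx.util.GraphGenerators

    // A 100-vertex star centered at vertex 0.
    val graph = GraphGenerators.starGraph(sc, 100)
    // Ten PageRank iterations with reset probability 0.15.
    val ranks = graph.staticPageRank(10, 0.15).vertices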
- StaticSources - Class in org.apache.spark.metrics.source
- StaticSources() - Constructor for class org.apache.spark.metrics.source.StaticSources
- statistic() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- statistic() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
- statistic() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
Test statistic.
- statisticNotRecognizedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Statistics - Class in org.apache.spark.mllib.stat
-
API for statistical functions in MLlib.
- Statistics - Interface in org.apache.spark.sql.connector.read
-
An interface to represent statistics for a data source, which is returned by SupportsReportStatistics.estimateStatistics().
- Statistics() - Constructor for class org.apache.spark.mllib.stat.Statistics
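For the mllib Statistics class, a small hedged sketch (sc is an assumed SparkContext; the series are illustrative):

    import org.apache.spark.mllib.stat.Statistics

    val x = sc.parallelize(Seq(1.0, 2.0, 3.0))
    val y = sc.parallelize(Seq(2.0, 4.0, 6.0))
    // Pearson by default; "spearman" is also supported.
    val r = Statistics.corr(x, y, "pearson") // 1.0 for perfectly linear series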
- stats() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation.
- stats() - Method in class org.apache.spark.mllib.tree.model.Node
- stats() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation.
- stats() - Method in interface org.apache.spark.sql.columnar.SimpleMetricsCachedBatch
-
Holds stats for each cached column.
- StatsdMetricType - Class in org.apache.spark.metrics.sink
- StatsdMetricType() - Constructor for class org.apache.spark.metrics.sink.StatsdMetricType
- StatsReportListener - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Simple SparkListener that logs a few summary statistics when each stage completes.
- StatsReportListener - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: A simple StreamingListener that logs summary statistics across Spark Streaming batches. param: numBatchInfos Number of last batches to consider for generating statistics (default: 10)
- StatsReportListener() - Constructor for class org.apache.spark.scheduler.StatsReportListener
- StatsReportListener(int) - Constructor for class org.apache.spark.streaming.scheduler.StatsReportListener
- status() - Method in class org.apache.spark.scheduler.TaskInfo
- status() - Method in interface org.apache.spark.SparkJobInfo
- status() - Method in class org.apache.spark.SparkJobInfoImpl
- status() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Returns the current status of the query.
- status() - Method in class org.apache.spark.status.api.v1.JobData
- status() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- status() - Method in class org.apache.spark.status.api.v1.StageData
- status() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- status() - Method in class org.apache.spark.status.api.v1.TaskData
- status() - Method in class org.apache.spark.status.LiveJob
- status() - Method in class org.apache.spark.status.LiveStage
- status() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
- STATUS() - Static method in class org.apache.spark.status.TaskIndexNames
- STATUS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- STATUS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- STATUS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- STATUS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- statusTracker() - Method in class org.apache.spark.api.java.JavaSparkContext
- statusTracker() - Method in class org.apache.spark.SparkContext
- StatusUpdate - Class in org.apache.spark.scheduler.local
- StatusUpdate(long, Enumeration.Value, ByteBuffer) - Constructor for class org.apache.spark.scheduler.local.StatusUpdate
- StatusUpdate(String, long, Enumeration.Value, org.apache.spark.util.SerializableBuffer, int, Map<String, Map<String, Object>>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- StatusUpdate$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
- std() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- std() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- std() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- std() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
- std(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- std(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for stddev_samp.
- std(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- STD() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- stddev(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for stddev_samp.
- stddev(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for stddev_samp.
- stddev_pop(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population standard deviation of the expression in a group.
- stddev_pop(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population standard deviation of the expression in a group.
- stddev_samp(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample standard deviation of the expression in a group.
- stddev_samp(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample standard deviation of the expression in a group.
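A small sketch contrasting the sample and population variants (the DataFrame and column are hypothetical):

    import org.apache.spark.sql.functions.{stddev_pop, stddev_samp}

    // stddev and std are aliases for stddev_samp.
    val agg = df.agg(
      stddev_samp("value").as("s_sample"),
      stddev_pop("value").as("s_pop"))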
- stdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population standard deviation of this RDD's elements.
- stdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population standard deviation of this RDD's elements.
- stdev() - Method in class org.apache.spark.util.StatCounter
-
Return the population standard deviation of the values.
- stepSize() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- stepSize() - Method in class org.apache.spark.ml.classification.FMClassifier
- stepSize() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- stepSize() - Method in class org.apache.spark.ml.classification.GBTClassifier
- stepSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- stepSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- stepSize() - Method in class org.apache.spark.ml.feature.Word2Vec
- stepSize() - Method in class org.apache.spark.ml.feature.Word2VecModel
- stepSize() - Method in interface org.apache.spark.ml.param.shared.HasStepSize
-
Param for Step size to be used for each iteration of optimization (> 0).
- stepSize() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- stepSize() - Method in class org.apache.spark.ml.regression.FMRegressor
- stepSize() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- stepSize() - Method in class org.apache.spark.ml.regression.GBTRegressor
- stepSize() - Method in interface org.apache.spark.ml.tree.GBTParams
-
Param for Step size (a.k.a. learning rate) in interval (0, 1] for shrinking the contribution of each estimator.
- stop() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Shut down the SparkContext.
- stop() - Method in interface org.apache.spark.broadcast.BroadcastFactory
- stop() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Asks the application to stop.
- stop() - Method in interface org.apache.spark.metrics.sink.Sink
- stop() - Method in interface org.apache.spark.scheduler.SchedulerBackend
- stop() - Method in class org.apache.spark.SparkContext
-
Shut down the SparkContext.
- stop() - Method in class org.apache.spark.sql.api.SparkSession
-
Synonym for close().
- stop() - Method in interface org.apache.spark.sql.api.StreamingQuery
-
Stops the execution of this query if it is running.
- stop() - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
-
Stop this source and free any resources it has allocated.
- stop() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Stop the execution of the streams.
- stop() - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
- stop() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
Method called to stop receiving data.
- stop() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
- stop(boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Stop the execution of the streams.
- stop(boolean) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Stop the execution of the streams immediately (does not wait for all received data to be processed).
- stop(boolean, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Stop the execution of the streams.
- stop(boolean, boolean) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Stop the execution of the streams, with the option of ensuring all received data has been processed.
- stop(int) - Method in interface org.apache.spark.scheduler.SchedulerBackend
- stop(int) - Method in interface org.apache.spark.scheduler.TaskScheduler
- stop(int) - Method in class org.apache.spark.SparkContext
-
Shut down the SparkContext with an exit code that will be passed to the scheduler backend.
- stop(String) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Stop the receiver completely.
- stop(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Stop the receiver completely due to an exception.
- StopAllReceivers - Class in org.apache.spark.streaming.scheduler
-
This message will trigger ReceiverTrackerEndpoint to send stop signals to all registered receivers.
- StopAllReceivers() - Constructor for class org.apache.spark.streaming.scheduler.StopAllReceivers
- StopBlockManagerMaster$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
- StopCoordinator - Class in org.apache.spark.scheduler
- StopCoordinator() - Constructor for class org.apache.spark.scheduler.StopCoordinator
- StopDriver$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
- StopExecutor - Class in org.apache.spark.scheduler.local
- StopExecutor() - Constructor for class org.apache.spark.scheduler.local.StopExecutor
- StopExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
- StopExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
- stopIndex() - Method in interface org.apache.spark.QueryContext
- StopMapOutputTracker - Class in org.apache.spark
- StopMapOutputTracker() - Constructor for class org.apache.spark.StopMapOutputTracker
- STOPPED - Enum constant in enum class org.apache.spark.streaming.StreamingContextState
-
The context has been stopped and cannot be used any more.
- stopPosition() - Method in interface org.apache.spark.sql.avro.AvroUtils.RowReader
- StopReceiver - Class in org.apache.spark.streaming.receiver
- StopReceiver() - Constructor for class org.apache.spark.streaming.receiver.StopReceiver
- stopStandaloneSchedulerDriverEndpointError(Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
- stopWords() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
The words to be filtered out.
- StopWordsRemover - Class in org.apache.spark.ml.feature
-
A feature transformer that filters out stop words from input.
- StopWordsRemover() - Constructor for class org.apache.spark.ml.feature.StopWordsRemover
- StopWordsRemover(String) - Constructor for class org.apache.spark.ml.feature.StopWordsRemover
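For orientation, a minimal Scala sketch of StopWordsRemover usage; spark is an assumed existing SparkSession, and the sample data and column names are illustrative:

    import org.apache.spark.ml.feature.StopWordsRemover

    // Illustrative input: one array-of-tokens column per row.
    val df = spark.createDataFrame(Seq(
      (0, Seq("I", "saw", "the", "red", "balloon")),
      (1, Seq("Mary", "had", "a", "little", "lamb"))
    )).toDF("id", "raw")

    val remover = new StopWordsRemover()
      .setInputCol("raw")        // column holding raw token arrays
      .setOutputCol("filtered")  // column receiving filtered arrays

    remover.transform(df).show(false)  // stop words such as "the" and "a" are dropped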
- STORAGE_LEVEL() - Static method in class org.apache.spark.ui.storage.ToolTips
- STORAGE_LEVEL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- STORAGE_LEVEL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- STORAGE_LEVEL_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- storageLevel() - Method in class org.apache.spark.sql.api.Dataset
-
Get the Dataset's current storage level, or StorageLevel.NONE if not persisted.
- storageLevel() - Method in class org.apache.spark.sql.Dataset
- storageLevel() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
- storageLevel() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
- storageLevel() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- storageLevel() - Method in class org.apache.spark.storage.BlockStatus
- storageLevel() - Method in class org.apache.spark.storage.BlockUpdatedInfo
- storageLevel() - Method in class org.apache.spark.storage.RDDInfo
- storageLevel() - Method in class org.apache.spark.streaming.receiver.Receiver
- StorageLevel - Class in org.apache.spark.storage
-
:: DeveloperApi :: Flags for controlling the storage of an RDD.
- StorageLevel() - Constructor for class org.apache.spark.storage.StorageLevel
- storageLevelFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- StorageLevelMapper - Enum Class in org.apache.spark.storage
-
A mapper class that makes it easy to obtain storage levels based on their names.
- StorageLevels - Class in org.apache.spark.api.java
-
Expose some commonly useful storage level constants.
- StorageLevels() - Constructor for class org.apache.spark.api.java.StorageLevels
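As a quick illustration of the storage-level entries above, a hedged Scala sketch (spark is an assumed existing SparkSession); Java callers would use the equivalent constants on StorageLevels:

    import org.apache.spark.storage.StorageLevel

    val ds = spark.range(0L, 1000000L)
    ds.persist(StorageLevel.MEMORY_AND_DISK)  // keep in memory, spill to disk if needed
    println(ds.storageLevel)                  // reports the level set above, or NONE if not persisted
    ds.unpersist()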
- storageLevelToJson(StorageLevel, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- StorageUtils - Class in org.apache.spark.storage
-
Helper methods for storage-related objects.
- StorageUtils() - Constructor for class org.apache.spark.storage.StorageUtils
- store(ByteBuffer) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store the bytes of received data as a data block into Spark's memory.
- store(ByteBuffer, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store the bytes of received data as a data block into Spark's memory.
- store(Iterator<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(Iterator<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(Iterator<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(Iterator<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(ArrayBuffer<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an ArrayBuffer of received data as a data block into Spark's memory.
- store(ArrayBuffer<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an ArrayBuffer of received data as a data block into Spark's memory.
- store(T) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store a single item of received data to Spark's memory.
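The store(...) overloads above are called from inside a custom receiver; a hedged Scala sketch (the class name and constant data source are illustrative):

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    class ConstantReceiver(value: String)
        extends Receiver[String](StorageLevel.MEMORY_ONLY) {

      def onStart(): Unit = {
        // Receive on a separate thread so that onStart() returns promptly.
        new Thread("constant-receiver") {
          override def run(): Unit = {
            while (!isStopped()) {
              store(value)       // store a single item into Spark's memory
              Thread.sleep(1000)
            }
          }
        }.start()
      }

      def onStop(): Unit = ()    // the loop above exits once isStopped() is true
    }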
- storeBlock(StreamBlockId, ReceivedBlock) - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockHandler
-
Store a received block with the given block id and return related metadata.
- storedAsAndStoredByBothSpecifiedError(SqlBaseParser.CreateFileFormatContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- StoreTypes - Class in org.apache.spark.status.protobuf
- StoreTypes.AccumulableInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.AccumulableInfo
- StoreTypes.AccumulableInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.AccumulableInfo
- StoreTypes.AccumulableInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ApplicationAttemptInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationAttemptInfo
- StoreTypes.ApplicationAttemptInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationAttemptInfo
- StoreTypes.ApplicationAttemptInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ApplicationEnvironmentInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationEnvironmentInfo
- StoreTypes.ApplicationEnvironmentInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationEnvironmentInfo
- StoreTypes.ApplicationEnvironmentInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ApplicationEnvironmentInfoWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationEnvironmentInfoWrapper
- StoreTypes.ApplicationEnvironmentInfoWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationEnvironmentInfoWrapper
- StoreTypes.ApplicationEnvironmentInfoWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ApplicationInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationInfo
- StoreTypes.ApplicationInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationInfo
- StoreTypes.ApplicationInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ApplicationInfoWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationInfoWrapper
- StoreTypes.ApplicationInfoWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ApplicationInfoWrapper
- StoreTypes.ApplicationInfoWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.AppSummary - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.AppSummary
- StoreTypes.AppSummary.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.AppSummary
- StoreTypes.AppSummaryOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.CachedQuantile - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.CachedQuantile
- StoreTypes.CachedQuantile.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.CachedQuantile
- StoreTypes.CachedQuantileOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.DeterministicLevel - Enum Class in org.apache.spark.status.protobuf
-
Protobuf enum
org.apache.spark.status.protobuf.DeterministicLevel
- StoreTypes.ExecutorMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorMetrics
- StoreTypes.ExecutorMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorMetrics
- StoreTypes.ExecutorMetricsDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorMetricsDistributions
- StoreTypes.ExecutorMetricsDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorMetricsDistributions
- StoreTypes.ExecutorMetricsDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorPeakMetricsDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions
- StoreTypes.ExecutorPeakMetricsDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorPeakMetricsDistributions
- StoreTypes.ExecutorPeakMetricsDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorResourceRequest - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorResourceRequest
- StoreTypes.ExecutorResourceRequest.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorResourceRequest
- StoreTypes.ExecutorResourceRequestOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorStageSummary - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorStageSummary
- StoreTypes.ExecutorStageSummary.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorStageSummary
- StoreTypes.ExecutorStageSummaryOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorStageSummaryWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorStageSummaryWrapper
- StoreTypes.ExecutorStageSummaryWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorStageSummaryWrapper
- StoreTypes.ExecutorStageSummaryWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorSummary - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorSummary
- StoreTypes.ExecutorSummary.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorSummary
- StoreTypes.ExecutorSummaryOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ExecutorSummaryWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorSummaryWrapper
- StoreTypes.ExecutorSummaryWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ExecutorSummaryWrapper
- StoreTypes.ExecutorSummaryWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.InputMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.InputMetricDistributions
- StoreTypes.InputMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.InputMetricDistributions
- StoreTypes.InputMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.InputMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.InputMetrics
- StoreTypes.InputMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.InputMetrics
- StoreTypes.InputMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.JobData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.JobData
- StoreTypes.JobData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.JobData
- StoreTypes.JobDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.JobDataWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.JobDataWrapper
- StoreTypes.JobDataWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.JobDataWrapper
- StoreTypes.JobDataWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.JobExecutionStatus - Enum Class in org.apache.spark.status.protobuf
-
Protobuf enum
org.apache.spark.status.protobuf.JobExecutionStatus
- StoreTypes.MemoryMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.MemoryMetrics
- StoreTypes.MemoryMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.MemoryMetrics
- StoreTypes.MemoryMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.OutputMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.OutputMetricDistributions
- StoreTypes.OutputMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.OutputMetricDistributions
- StoreTypes.OutputMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.OutputMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.OutputMetrics
- StoreTypes.OutputMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.OutputMetrics
- StoreTypes.OutputMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.PairStrings - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.PairStrings
- StoreTypes.PairStrings.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.PairStrings
- StoreTypes.PairStringsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.PoolData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.PoolData
- StoreTypes.PoolData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.PoolData
- StoreTypes.PoolDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ProcessSummary - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ProcessSummary
- StoreTypes.ProcessSummary.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ProcessSummary
- StoreTypes.ProcessSummaryOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ProcessSummaryWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ProcessSummaryWrapper
- StoreTypes.ProcessSummaryWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ProcessSummaryWrapper
- StoreTypes.ProcessSummaryWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDDataDistribution - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDDataDistribution
- StoreTypes.RDDDataDistribution.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDDataDistribution
- StoreTypes.RDDDataDistributionOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDOperationClusterWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationClusterWrapper
- StoreTypes.RDDOperationClusterWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationClusterWrapper
- StoreTypes.RDDOperationClusterWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDOperationEdge - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationEdge
- StoreTypes.RDDOperationEdge.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationEdge
- StoreTypes.RDDOperationEdgeOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDOperationGraphWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationGraphWrapper
- StoreTypes.RDDOperationGraphWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationGraphWrapper
- StoreTypes.RDDOperationGraphWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDOperationNode - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationNode
- StoreTypes.RDDOperationNode.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDOperationNode
- StoreTypes.RDDOperationNodeOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDPartitionInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDPartitionInfo
- StoreTypes.RDDPartitionInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDPartitionInfo
- StoreTypes.RDDPartitionInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDStorageInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDStorageInfo
- StoreTypes.RDDStorageInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDStorageInfo
- StoreTypes.RDDStorageInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RDDStorageInfoWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDStorageInfoWrapper
- StoreTypes.RDDStorageInfoWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RDDStorageInfoWrapper
- StoreTypes.RDDStorageInfoWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ResourceInformation - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceInformation
- StoreTypes.ResourceInformation.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceInformation
- StoreTypes.ResourceInformationOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ResourceProfileInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceProfileInfo
- StoreTypes.ResourceProfileInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceProfileInfo
- StoreTypes.ResourceProfileInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ResourceProfileWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceProfileWrapper
- StoreTypes.ResourceProfileWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ResourceProfileWrapper
- StoreTypes.ResourceProfileWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.RuntimeInfo - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RuntimeInfo
- StoreTypes.RuntimeInfo.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.RuntimeInfo
- StoreTypes.RuntimeInfoOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShufflePushReadMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions
- StoreTypes.ShufflePushReadMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShufflePushReadMetricDistributions
- StoreTypes.ShufflePushReadMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShufflePushReadMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShufflePushReadMetrics
- StoreTypes.ShufflePushReadMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShufflePushReadMetrics
- StoreTypes.ShufflePushReadMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShuffleReadMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleReadMetricDistributions
- StoreTypes.ShuffleReadMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleReadMetricDistributions
- StoreTypes.ShuffleReadMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShuffleReadMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleReadMetrics
- StoreTypes.ShuffleReadMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleReadMetrics
- StoreTypes.ShuffleReadMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShuffleWriteMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions
- StoreTypes.ShuffleWriteMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleWriteMetricDistributions
- StoreTypes.ShuffleWriteMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.ShuffleWriteMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleWriteMetrics
- StoreTypes.ShuffleWriteMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.ShuffleWriteMetrics
- StoreTypes.ShuffleWriteMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SinkProgress - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SinkProgress
- StoreTypes.SinkProgress.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SinkProgress
- StoreTypes.SinkProgressOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SourceProgress - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SourceProgress
- StoreTypes.SourceProgress.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SourceProgress
- StoreTypes.SourceProgressOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphClusterWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper
- StoreTypes.SparkPlanGraphClusterWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphClusterWrapper
- StoreTypes.SparkPlanGraphClusterWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphEdge - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphEdge
- StoreTypes.SparkPlanGraphEdge.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphEdge
- StoreTypes.SparkPlanGraphEdgeOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphNode - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphNode
- StoreTypes.SparkPlanGraphNode.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphNode
- StoreTypes.SparkPlanGraphNodeOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphNodeWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper
- StoreTypes.SparkPlanGraphNodeWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper
- StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase - Enum Class in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphNodeWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SparkPlanGraphWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphWrapper
- StoreTypes.SparkPlanGraphWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SparkPlanGraphWrapper
- StoreTypes.SparkPlanGraphWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SpeculationStageSummary - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SpeculationStageSummary
- StoreTypes.SpeculationStageSummary.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SpeculationStageSummary
- StoreTypes.SpeculationStageSummaryOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SpeculationStageSummaryWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SpeculationStageSummaryWrapper
- StoreTypes.SpeculationStageSummaryWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SpeculationStageSummaryWrapper
- StoreTypes.SpeculationStageSummaryWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SQLExecutionUIData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SQLExecutionUIData
- StoreTypes.SQLExecutionUIData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SQLExecutionUIData
- StoreTypes.SQLExecutionUIDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.SQLPlanMetric - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SQLPlanMetric
- StoreTypes.SQLPlanMetric.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.SQLPlanMetric
- StoreTypes.SQLPlanMetricOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StageData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StageData
- StoreTypes.StageData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StageData
- StoreTypes.StageDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StageDataWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StageDataWrapper
- StoreTypes.StageDataWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StageDataWrapper
- StoreTypes.StageDataWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StageStatus - Enum Class in org.apache.spark.status.protobuf
-
Protobuf enum
org.apache.spark.status.protobuf.StageStatus
- StoreTypes.StateOperatorProgress - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StateOperatorProgress
- StoreTypes.StateOperatorProgress.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StateOperatorProgress
- StoreTypes.StateOperatorProgressOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StreamBlockData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamBlockData
- StoreTypes.StreamBlockData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamBlockData
- StoreTypes.StreamBlockDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StreamingQueryData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryData
- StoreTypes.StreamingQueryData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryData
- StoreTypes.StreamingQueryDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StreamingQueryProgress - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryProgress
- StoreTypes.StreamingQueryProgress.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryProgress
- StoreTypes.StreamingQueryProgressOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.StreamingQueryProgressWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryProgressWrapper
- StoreTypes.StreamingQueryProgressWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.StreamingQueryProgressWrapper
- StoreTypes.StreamingQueryProgressWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.TaskData - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskData
- StoreTypes.TaskData.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskData
- StoreTypes.TaskDataOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.TaskDataWrapper - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskDataWrapper
- StoreTypes.TaskDataWrapper.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskDataWrapper
- StoreTypes.TaskDataWrapperOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.TaskMetricDistributions - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskMetricDistributions
- StoreTypes.TaskMetricDistributions.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskMetricDistributions
- StoreTypes.TaskMetricDistributionsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.TaskMetrics - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskMetrics
- StoreTypes.TaskMetrics.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskMetrics
- StoreTypes.TaskMetricsOrBuilder - Interface in org.apache.spark.status.protobuf
- StoreTypes.TaskResourceRequest - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskResourceRequest
- StoreTypes.TaskResourceRequest.Builder - Class in org.apache.spark.status.protobuf
-
Protobuf type
org.apache.spark.status.protobuf.TaskResourceRequest
- StoreTypes.TaskResourceRequestOrBuilder - Interface in org.apache.spark.status.protobuf
- storeValue(T) - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
- storeValue(T) - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
- storeValue(T) - Method in interface org.apache.spark.storage.memory.ValuesHolder
- str_to_map(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a map after splitting the text into key/value pairs using delimiters.
- str_to_map(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates a map after splitting the text into key/value pairs using delimiters.
- str_to_map(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates a map after splitting the text into key/value pairs using delimiters.
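A minimal Scala sketch of the three-argument str_to_map overload; spark is an assumed existing SparkSession (the one- and two-argument overloads fall back to the default "," pair delimiter and ":" key/value delimiter):

    import org.apache.spark.sql.functions.{col, lit, str_to_map}
    import spark.implicits._

    val df = Seq("a:1,b:2,c:3").toDF("s")

    // Split on "," into pairs, then on ":" into key and value.
    df.select(str_to_map(col("s"), lit(","), lit(":")).as("m")).show(false)
    // yields the map {a -> 1, b -> 2, c -> 3}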
- strategy() - Method in class org.apache.spark.ml.feature.Imputer
- strategy() - Method in class org.apache.spark.ml.feature.ImputerModel
- strategy() - Method in interface org.apache.spark.ml.feature.ImputerParams
-
The imputation strategy.
- Strategy - Class in org.apache.spark.mllib.tree.configuration
-
Stores all the configuration options for tree construction; the algo param specifies the learning goal.
- Strategy(Enumeration.Value, Impurity, int, int, int, Map<Integer, Integer>) - Constructor for class org.apache.spark.mllib.tree.configuration.Strategy
-
Java-friendly constructor for
Strategy
- Strategy(Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>, int, double, int, double, boolean, int) - Constructor for class org.apache.spark.mllib.tree.configuration.Strategy
-
Backwards compatible constructor for
Strategy
- Strategy(Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>, int, double, int, double, boolean, int, double, boolean) - Constructor for class org.apache.spark.mllib.tree.configuration.Strategy
- StratifiedSamplingUtils - Class in org.apache.spark.util.random
-
Auxiliary functions and data structures for the sampleByKey method in PairRDDFunctions.
- StratifiedSamplingUtils() - Constructor for class org.apache.spark.util.random.StratifiedSamplingUtils
- STREAM() - Static method in class org.apache.spark.storage.BlockId
- StreamBlockId - Class in org.apache.spark.storage
- StreamBlockId(int, long) - Constructor for class org.apache.spark.storage.StreamBlockId
- streamedOperatorUnsupportedByDataSourceError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- streamId() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- streamId() - Method in class org.apache.spark.storage.PythonStreamBlockId
- streamId() - Method in class org.apache.spark.storage.StreamBlockId
- streamId() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Get the unique identifier of the receiver input stream that this receiver is associated with.
- streamId() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- streamIdToInputInfo() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- STREAMING_WRITE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports append writes in streaming execution mode.
- StreamingConf - Class in org.apache.spark.streaming
- StreamingConf() - Constructor for class org.apache.spark.streaming.StreamingConf
- StreamingContext - Class in org.apache.spark.streaming
-
Deprecated. This is deprecated as of Spark 3.4.0. There are no longer updates to DStream and it's a legacy project. There is a newer and easier-to-use streaming engine in Spark called Structured Streaming. You should use Spark Structured Streaming for your streaming applications.
- StreamingContext(String) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Recreate a StreamingContext from a checkpoint file.
- StreamingContext(String, String, Duration, String, Seq<String>, Map<String, String>) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create a StreamingContext by providing the details necessary for creating a new SparkContext.
- StreamingContext(String, Configuration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Recreate a StreamingContext from a checkpoint file.
- StreamingContext(String, SparkContext) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Recreate a StreamingContext from a checkpoint file using an existing SparkContext.
- StreamingContext(SparkConf, Duration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create a StreamingContext by providing the configuration necessary for a new SparkContext.
- StreamingContext(SparkContext, Duration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create a StreamingContext using an existing SparkContext.
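A legacy-API sketch of the SparkConf-based constructor above, shown only for orientation since the DStream API is deprecated in favor of Structured Streaming; the app name and master URL are illustrative:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("legacy-dstreams").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))  // 1-second batch interval
    // ... define DStream sources and transformations here ...
    ssc.start()
    ssc.awaitTermination()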
- StreamingContextPythonHelper - Class in org.apache.spark.streaming
- StreamingContextPythonHelper() - Constructor for class org.apache.spark.streaming.StreamingContextPythonHelper
- StreamingContextState - Enum Class in org.apache.spark.streaming
-
:: DeveloperApi :: Represents the state of a StreamingContext.
- StreamingDataWriterFactory - Interface in org.apache.spark.sql.connector.write.streaming
-
A factory of DataWriter returned by StreamingWrite.createStreamingWriterFactory(PhysicalWriteInfo), which is responsible for creating and initializing the actual data writer at the executor side.
- streamingIntoViewNotSupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- StreamingKMeans - Class in org.apache.spark.mllib.clustering
-
StreamingKMeans provides methods for configuring a streaming k-means analysis, training the model on streaming data, and using the model to make predictions on streaming data.
- StreamingKMeans() - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeans
- StreamingKMeans(int, double, String) - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeans
- StreamingKMeansModel - Class in org.apache.spark.mllib.clustering
-
StreamingKMeansModel extends MLlib's KMeansModel for streaming algorithms, so it can keep track of a continuously updated weight associated with each cluster, and also update the model by doing a single iteration of the standard k-means algorithm.
- StreamingKMeansModel(Vector[], double[]) - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeansModel
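A hedged Scala sketch of configuring StreamingKMeans; the training DStream is assumed to be built elsewhere:

    import org.apache.spark.mllib.clustering.StreamingKMeans
    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.streaming.dstream.DStream

    def fitOnStream(training: DStream[Vector]): Unit = {
      val model = new StreamingKMeans()
        .setK(3)                    // number of clusters
        .setDecayFactor(1.0)        // 1.0 = never forget past batches
        .setRandomCenters(2, 0.0)   // dim = 2, initial center weight = 0.0
      model.trainOn(training)       // update cluster centers on each batch
    }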
- StreamingLinearAlgorithm<M extends GeneralizedLinearModel, A extends GeneralizedLinearAlgorithm<M>> - Class in org.apache.spark.mllib.regression
-
StreamingLinearAlgorithm implements methods for continuously training a generalized linear model on streaming data, and using it for prediction on (possibly different) streaming data.
- StreamingLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
- StreamingLinearRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train or predict a linear regression model on streaming data.
- StreamingLinearRegressionWithSGD() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Construct a StreamingLinearRegression object with default parameters: {stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0}.
- StreamingListener - Interface in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: A listener interface for receiving information about an ongoing streaming computation.
- StreamingListenerBatchCompleted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerBatchCompleted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
- StreamingListenerBatchStarted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerBatchStarted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
- StreamingListenerBatchSubmitted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerBatchSubmitted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
- StreamingListenerEvent - Interface in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: Base trait for events related to StreamingListener.
- StreamingListenerOutputOperationCompleted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerOutputOperationCompleted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
- StreamingListenerOutputOperationStarted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerOutputOperationStarted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
- StreamingListenerReceiverError - Class in org.apache.spark.streaming.scheduler
- StreamingListenerReceiverError(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
- StreamingListenerReceiverStarted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerReceiverStarted(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
- StreamingListenerReceiverStopped - Class in org.apache.spark.streaming.scheduler
- StreamingListenerReceiverStopped(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
- StreamingListenerStreamingStarted - Class in org.apache.spark.streaming.scheduler
- StreamingListenerStreamingStarted(long) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
- StreamingLogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
-
Train or predict a logistic regression model on streaming data.
- StreamingLogisticRegressionWithSGD() - Constructor for class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Construct a StreamingLogisticRegression object with default parameters: {stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0, regParam: 0.0}.
- StreamingQuery - Interface in org.apache.spark.sql.api
-
A handle to a query that is executing continuously in the background as new data arrives.
- StreamingQuery - Interface in org.apache.spark.sql.streaming
- StreamingQueryException - Exception in org.apache.spark.sql.streaming
-
Exception that stopped a StreamingQuery.
- StreamingQueryException(String, Throwable, String, String, String, Map<String, String>) - Constructor for exception org.apache.spark.sql.streaming.StreamingQueryException
- StreamingQueryListener - Class in org.apache.spark.sql.streaming
-
Interface for listening to events related to StreamingQueries.
- StreamingQueryListener() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener
- StreamingQueryListener.Event - Interface in org.apache.spark.sql.streaming
-
Base type of StreamingQueryListener events.
- StreamingQueryListener.QueryIdleEvent - Class in org.apache.spark.sql.streaming
-
Event representing that query is idle and waiting for new data to process.
- StreamingQueryListener.QueryIdleEvent$ - Class in org.apache.spark.sql.streaming
- StreamingQueryListener.QueryProgressEvent - Class in org.apache.spark.sql.streaming
-
Event representing any progress updates in a query.
- StreamingQueryListener.QueryProgressEvent$ - Class in org.apache.spark.sql.streaming
- StreamingQueryListener.QueryStartedEvent - Class in org.apache.spark.sql.streaming
-
Event representing the start of a query; the id param is a unique query id that persists across restarts.
- StreamingQueryListener.QueryStartedEvent$ - Class in org.apache.spark.sql.streaming
- StreamingQueryListener.QueryTerminatedEvent - Class in org.apache.spark.sql.streaming
-
Event representing the termination of a query.
- StreamingQueryListener.QueryTerminatedEvent$ - Class in org.apache.spark.sql.streaming
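A minimal Scala sketch of wiring the listener events above into a session; spark is an assumed existing SparkSession:

    import org.apache.spark.sql.streaming.StreamingQueryListener
    import org.apache.spark.sql.streaming.StreamingQueryListener._

    val listener = new StreamingQueryListener {
      override def onQueryStarted(e: QueryStartedEvent): Unit =
        println(s"started: ${e.id}")
      override def onQueryProgress(e: QueryProgressEvent): Unit =
        println(s"rows/sec: ${e.progress.processedRowsPerSecond}")
      override def onQueryTerminated(e: QueryTerminatedEvent): Unit =
        println(s"terminated: ${e.id}")
    }
    spark.streams.addListener(listener)  // streams() returns the StreamingQueryManager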
- StreamingQueryManager - Class in org.apache.spark.sql.streaming
-
A class to manage all the StreamingQuery instances active in a SparkSession.
- StreamingQueryProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made in the execution of a StreamingQuery during a trigger.
- StreamingQueryProgressSerializer - Class in org.apache.spark.status.protobuf.sql
- StreamingQueryProgressSerializer() - Constructor for class org.apache.spark.status.protobuf.sql.StreamingQueryProgressSerializer
- StreamingQueryStatus - Class in org.apache.spark.sql.streaming
-
Reports information about the instantaneous status of a streaming query.
- streamingSourcesDoNotSupportCommonExecutionModeError(Seq<String>, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- StreamingStatistics - Class in org.apache.spark.status.api.v1.streaming
- StreamingTest - Class in org.apache.spark.mllib.stat.test
-
Performs online 2-sample significance testing for a stream of (Boolean, Double) pairs.
- StreamingTest() - Constructor for class org.apache.spark.mllib.stat.test.StreamingTest
- StreamingTestMethod - Interface in org.apache.spark.mllib.stat.test
-
Significance testing methods for StreamingTest.
- StreamingWrite - Interface in org.apache.spark.sql.connector.write.streaming
-
An interface that defines how to write data to a data source in streaming queries.
- StreamInputInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi :: Track the information of an input stream at the specified batch time.
- StreamInputInfo(int, long, Map<String, Object>) - Constructor for class org.apache.spark.streaming.scheduler.StreamInputInfo
- streamJoinStreamWithoutEqualityPredicateUnsupportedError(LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- streamName() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
- streams() - Method in class org.apache.spark.sql.SparkSession
-
Returns a StreamingQueryManager that allows managing all the StreamingQuery instances active on this.
- streams() - Method in class org.apache.spark.sql.SQLContext
- StreamSinkProvider - Interface in org.apache.spark.sql.sources
-
::Experimental:: Implemented by objects that can produce a streaming Sink for a specific format or system.
- StreamSourceProvider - Interface in org.apache.spark.sql.sources
-
::Experimental:: Implemented by objects that can produce a streaming Source for a specific format or system.
- string() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type string.
- STRING - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- STRING() - Static method in class org.apache.spark.api.r.SerializationFormats
- STRING() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable string type.
- StringArrayParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Array[String]] for Java.
- StringArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.StringArrayParam
- StringArrayParam(Params, String, String, Function1<String[], Object>) - Constructor for class org.apache.spark.ml.param.StringArrayParam
- StringContains - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a string that contains the string value.
- StringContains(String, String) - Constructor for class org.apache.spark.sql.sources.StringContains
- StringEndsWith - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a string that ends with value.
- StringEndsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringEndsWith
- stringHalfWidth(String) - Static method in class org.apache.spark.util.Utils
-
Return the number of half widths in a given string.
- StringIndexer - Class in org.apache.spark.ml.feature
-
A label indexer that maps string column(s) of labels to ML column(s) of label indices.
- StringIndexer() - Constructor for class org.apache.spark.ml.feature.StringIndexer
- StringIndexer(String) - Constructor for class org.apache.spark.ml.feature.StringIndexer
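A minimal Scala sketch of StringIndexer; spark is an assumed existing SparkSession and the data is illustrative:

    import org.apache.spark.ml.feature.StringIndexer

    val df = spark.createDataFrame(
      Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"))
    ).toDF("id", "category")

    val indexer = new StringIndexer()
      .setInputCol("category")
      .setOutputCol("categoryIndex")

    // By default the most frequent label receives index 0.0 ("a" here).
    indexer.fit(df).transform(df).show()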
- StringIndexerBase - Interface in org.apache.spark.ml.feature
-
Base trait for StringIndexer and StringIndexerModel.
- StringIndexerModel - Class in org.apache.spark.ml.feature
-
Model fitted by StringIndexer.
- StringIndexerModel(String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
- StringIndexerModel(String[][]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
- StringIndexerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
- StringIndexerModel(String, String[][]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
- stringIndexerOrderType() - Method in class org.apache.spark.ml.feature.RFormula
- stringIndexerOrderType() - Method in interface org.apache.spark.ml.feature.RFormulaBase
-
Param for how to order categories of a string FEATURE column used by StringIndexer.
- stringIndexerOrderType() - Method in class org.apache.spark.ml.feature.RFormulaModel
- stringOrderType() - Method in class org.apache.spark.ml.feature.StringIndexer
- stringOrderType() - Method in interface org.apache.spark.ml.feature.StringIndexerBase
-
Param for how to order labels of a string column.
- stringOrderType() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- StringRRDD<T> - Class in org.apache.spark.api.r
-
An RDD that stores R objects as Array[String].
- StringRRDD(RDD<T>, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.StringRRDD
- StringStartsWith - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a string that starts with value.
- StringStartsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringStartsWith
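The three string filters above are what a data source implementation receives during predicate pushdown; a hedged sketch of constructing them directly (the attribute name is illustrative):

    import org.apache.spark.sql.sources.{Filter, StringContains, StringEndsWith, StringStartsWith}

    // Roughly what WHERE name LIKE 'Sp%', '%rk', and '%par%' arrive as:
    val pushed: Array[Filter] = Array(
      StringStartsWith("name", "Sp"),
      StringEndsWith("name", "rk"),
      StringContains("name", "par")
    )
    pushed.foreach(f => println(s"${f.references.mkString(",")} -> $f"))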
- StringToColumn(StringContext) - Constructor for class org.apache.spark.sql.SQLImplicits.StringToColumn
- stringToField() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
- stringToField() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- stringToSeq(String) - Static method in class org.apache.spark.util.Utils
- StringType - Class in org.apache.spark.sql.types
-
The data type representing String values.
- StringType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the StringType object.
- StringType() - Constructor for class org.apache.spark.sql.types.StringType
- StringTypeExpression - Class in org.apache.spark.sql.types
- StringTypeExpression() - Constructor for class org.apache.spark.sql.types.StringTypeExpression
- stripDollars(String) - Method in interface org.apache.spark.util.SparkClassUtils
-
Remove trailing dollar signs from a qualified class name, and return the trailing part after the last dollar sign in the middle.
- stripDollars(String) - Static method in class org.apache.spark.util.Utils
- stripPackages(String) - Method in interface org.apache.spark.util.SparkClassUtils
-
Remove the packages from a fully qualified class name.
- stronglyConnectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
-
Compute the strongly connected component (SCC) of each vertex and return a graph with the vertex value containing the lowest vertex id in the SCC containing that vertex.
- StronglyConnectedComponents - Class in org.apache.spark.graphx.lib
-
Strongly connected components algorithm implementation.
- StronglyConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.StronglyConnectedComponents
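A hedged GraphX sketch of stronglyConnectedComponents; sc is an assumed existing SparkContext and the edge-list path is illustrative:

    import org.apache.spark.graphx.GraphLoader

    val graph = GraphLoader.edgeListFile(sc, "data/graphx/followers.txt")
    val scc = graph.stronglyConnectedComponents(numIter = 5)
    // Each vertex is labelled with the lowest vertex id in its SCC.
    scc.vertices.take(10).foreach(println)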
- struct(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column that composes multiple input columns.
- struct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column that composes multiple input columns.
- struct(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column.
- struct(StructType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type struct.
- struct(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column.
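A one-line Scala sketch of the struct(...) column functions above; df is an assumed DataFrame with columns a and b:

    import org.apache.spark.sql.functions.{col, struct}

    val nested = df.select(struct(col("a"), col("b")).as("ab"))
    nested.select(col("ab.a")).show()  // struct fields keep their input column names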
- struct(Seq<StructField>) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type struct.
- StructField - Class in org.apache.spark.sql.types
-
A field inside a StructType.
- StructField(String, DataType, boolean, Metadata) - Constructor for class org.apache.spark.sql.types.StructField
- StructType - Class in org.apache.spark.sql.types
-
A StructType object can be constructed by StructType(fields: Seq[StructField]).
-
No-arg constructor for kryo.
- StructType(StructField[]) - Constructor for class org.apache.spark.sql.types.StructType
- structTypeToV2Columns(StructType) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Converts a StructType to DS v2 columns, which decodes the StructField metadata to v2 column comment and default value or generation expression.
- stsCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Use STS to assume an IAM role for temporary session-based authentication.
- stsCredentials(String, String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Use STS to assume an IAM role for temporary session-based authentication.
- StudentTTest - Class in org.apache.spark.mllib.stat.test
-
Performs Student's 2-sample t-test.
- StudentTTest() - Constructor for class org.apache.spark.mllib.stat.test.StudentTTest
- subClass() - Method in class org.apache.spark.ErrorInfo
- subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.Graph
-
Restricts the graph to only the vertices and edges satisfying the predicates.
- subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.GraphImpl
- SUBMISSION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.JobData
- SUBMISSION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- SUBMISSION_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- submissionTime() - Method in class org.apache.spark.scheduler.StageInfo
-
When this stage was submitted from the DAGScheduler to a TaskScheduler.
- submissionTime() - Method in interface org.apache.spark.SparkStageInfo
- submissionTime() - Method in class org.apache.spark.SparkStageInfoImpl
- submissionTime() - Method in class org.apache.spark.status.api.v1.JobData
- submissionTime() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- submissionTime() - Method in class org.apache.spark.status.api.v1.StageData
- submissionTime() - Method in class org.apache.spark.status.LiveJob
- submissionTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
- submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in interface org.apache.spark.JobSubmitter
-
Submit a job for execution and return a FutureAction holding the result.
- submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in class org.apache.spark.SparkContext
-
Submit a job for execution and return a FutureAction holding the result.
- submitTasks(TaskSet) - Method in interface org.apache.spark.scheduler.TaskScheduler
- SUBMITTED - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application has been submitted to the cluster.
- subModels() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- subModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- subprocessExitedError(int, org.apache.spark.util.CircularBuffer, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- subqueryExpressionInLambdaOrHigherOrderFunctionNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- subqueryNotAllowedInMergeCondition(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- subqueryReturnMoreThanOneColumn(int, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- subsamplingRate() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- subsamplingRate() - Method in class org.apache.spark.ml.classification.GBTClassifier
- subsamplingRate() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- subsamplingRate() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- subsamplingRate() - Method in class org.apache.spark.ml.clustering.LDA
- subsamplingRate() - Method in class org.apache.spark.ml.clustering.LDAModel
- subsamplingRate() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
For Online optimizer only: LDAParams.optimizer() = "online".
- subsamplingRate() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- subsamplingRate() - Method in class org.apache.spark.ml.regression.GBTRegressor
- subsamplingRate() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- subsamplingRate() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- subsamplingRate() - Method in interface org.apache.spark.ml.tree.TreeEnsembleParams
-
Fraction of the training data used for learning each decision tree, in range (0, 1].
- subsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- subsetAccuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns subset accuracy (for equal sets of labels)
- substituteAppId(String, String) - Static method in class org.apache.spark.util.Utils
-
Replaces all the {{APP_ID}} occurrences with the App Id.
- substituteAppNExecIds(String, String, String) - Static method in class org.apache.spark.util.Utils
-
Replaces all the {{EXECUTOR_ID}} occurrences with the Executor Id and {{APP_ID}} occurrences with the App Id.
- substr(int, int) - Method in class org.apache.spark.sql.Column
-
An expression that returns a substring.
- substr(Column, Column) - Method in class org.apache.spark.sql.Column
-
An expression that returns a substring.
- substr(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the substring of str that starts at pos, or the slice of byte array that starts at pos.
- substr(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the substring of str that starts at pos and is of length len, or the slice of byte array that starts at pos and is of length len.
- substring(Column, int, int) - Static method in class org.apache.spark.sql.functions
-
Substring starts at pos and is of length len when str is String type, or returns the slice of byte array that starts at pos in byte and is of length len when str is Binary type.
- substring(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Substring starts at pos and is of length len when str is String type, or returns the slice of byte array that starts at pos in byte and is of length len when str is Binary type.
- substring_index(Column, String, int) - Static method in class org.apache.spark.sql.functions
-
Returns the substring from string str before count occurrences of the delimiter delim.
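A minimal usage sketch for the substring variants indexed above; the DataFrame, its s column, and the sample value are hypothetical, not part of the API:
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{col, substring, substring_index}

  val spark = SparkSession.builder().master("local[*]").appName("substr-sketch").getOrCreate()
  import spark.implicits._

  // Hypothetical single-column DataFrame of strings.
  val df = Seq("spark-sql-index").toDF("s")
  df.select(
    substring(col("s"), 1, 5),         // "spark": starts at pos 1, length 5
    col("s").substr(7, 3),             // "sql": the Column.substr(int, int) variant
    substring_index(col("s"), "-", 2)  // "spark-sql": text before the 2nd "-"
  ).show()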
- subtract(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
- subtract(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaDoubleRDD, int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaDoubleRDD, Partitioner) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaPairRDD<K, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaPairRDD<K, V>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaRDD<T>, int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(JavaRDD<T>, Partitioner) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this that are not in other.
- subtract(Term) - Static method in class org.apache.spark.ml.feature.Dot
- subtract(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
- subtract(Term) - Method in interface org.apache.spark.ml.feature.Term
-
Fold by adding deletion terms to the left.
- subtract(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Subtracts the given block matrix other from this block matrix: this - other.
- subtract(RDD<T>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this that are not in other.
- subtract(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this that are not in other.
- subtract(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this that are not in other.
- subtractByKey(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractByKey(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractByKey(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractByKey(RDD<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractByKey(RDD<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractByKey(RDD<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this whose keys are not in other.
- subtractMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
-
Subtract m2 values from m1.
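A minimal sketch of subtract and subtractByKey as indexed above; the sample data is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("subtract-sketch").getOrCreate()
  val sc = spark.sparkContext

  val a = sc.parallelize(Seq(1, 2, 3, 4))
  val b = sc.parallelize(Seq(3, 4))
  a.subtract(b).collect()              // elements of a not in b: 1 and 2 (order not guaranteed)

  val pairs = sc.parallelize(Seq("x" -> 1, "y" -> 2, "z" -> 3))
  val drop  = sc.parallelize(Seq("y" -> 99))
  pairs.subtractByKey(drop).collect()  // keeps pairs whose keys are not in drop: ("x",1), ("z",3)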
- SUCCEEDED - Enum constant in enum class org.apache.spark.JobExecutionStatus
- SUCCEEDED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- SUCCEEDED_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- succeededTasks() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- succeededTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- succeededTasks() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- success(T) - Static method in class org.apache.spark.ml.feature.RFormulaParser
- Success - Class in org.apache.spark
-
:: DeveloperApi :: Task succeeded.
- Success() - Constructor for class org.apache.spark.Success
- Success() - Static method in class org.apache.spark.ml.feature.RFormulaParser
- SUCCESS - Enum constant in enum class org.apache.spark.status.api.v1.TaskStatus
- successful() - Method in class org.apache.spark.scheduler.TaskInfo
- successJobIds() - Method in class org.apache.spark.status.api.v1.sql.ExecutionData
- sum() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Add up the elements in this RDD.
- sum() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Add up the elements in this RDD.
- sum() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the sum of elements added to the accumulator.
- sum() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the sum of elements added to the accumulator.
- sum() - Method in class org.apache.spark.util.StatCounter
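A minimal sketch of LongAccumulator.sum() as indexed above; the accumulator name is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("acc-sketch").getOrCreate()
  val sc = spark.sparkContext

  val acc = sc.longAccumulator("rowCount")          // hypothetical accumulator name
  sc.parallelize(1 to 100).foreach(_ => acc.add(1)) // adds happen on the executors
  println(acc.sum)                                  // 100: sum of all values added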
- sum(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of all values in the given column.
- sum(String...) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the sum for each numeric column for each group.
- sum(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- sum(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated. Sum aggregate function for floating point (double) type.
- sum(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- sum(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of all values in the expression.
- sum(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- sum(Seq<String>) - Method in class org.apache.spark.sql.api.RelationalGroupedDataset
-
Compute the sum for each numeric column for each group.
- sum(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
- sum(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated. Sum aggregate function for floating point (double) type.
- Sum - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the summation of all the values in a group.
- Sum(Expression, boolean) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Sum
- Sum() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- sum_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of distinct values in the expression.
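A minimal sketch contrasting sum and sum_distinct as indexed above; the data is hypothetical:
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{col, sum, sum_distinct}

  val spark = SparkSession.builder().master("local[*]").appName("sum-sketch").getOrCreate()
  import spark.implicits._

  val df = Seq(("a", 1), ("a", 1), ("a", 2)).toDF("k", "v")
  df.groupBy("k")
    .agg(sum(col("v")), sum_distinct(col("v")))  // 4 vs. 3 for key "a"
    .show()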
- sumApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the sum within a timeout.
- sumApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Approximate operation to return the sum within a timeout.
- sumApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the sum within a timeout.
- sumDistinct(String) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use sum_distinct. Since 3.2.0.
- sumDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use sum_distinct. Since 3.2.0.
- sumLong(MapFunction<T, Long>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated. Sum aggregate function for integral (long, i.e.
- sumLong(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated. Sum aggregate function for integral (long, i.e.
- Summarizer - Class in org.apache.spark.ml.stat
-
Tools for vectorized statistics on MLlib Vectors.
- Summarizer() - Constructor for class org.apache.spark.ml.stat.Summarizer
- summary() - Method in class org.apache.spark.ml.classification.FMClassificationModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.KMeansModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Gets R-like summary of model on training set.
- summary() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
Gets summary (e.g.
- summary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary
-
Gets summary of model on training set.
- summary() - Method in interface org.apache.spark.QueryContext
- summary(String...) - Method in class org.apache.spark.sql.api.Dataset
-
Computes specified statistics for numeric and string columns.
- summary(String...) - Method in class org.apache.spark.sql.Dataset
- summary(Column) - Method in class org.apache.spark.ml.stat.SummaryBuilder
- summary(Column, Column) - Method in class org.apache.spark.ml.stat.SummaryBuilder
-
Returns an aggregate object that contains the summary of the column with the requested metrics.
- summary(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Computes specified statistics for numeric and string columns.
- summary(Seq<String>) - Method in class org.apache.spark.sql.Dataset
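A minimal sketch of Dataset.summary as indexed above; the data is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("summary-sketch").getOrCreate()
  import spark.implicits._

  val df = Seq(1.0, 2.0, 3.0, 4.0).toDF("v")
  df.summary().show()                        // count, mean, stddev, min, quartiles, max
  df.summary("count", "mean", "50%").show()  // only the requested statistics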
- SummaryBuilder - Class in org.apache.spark.ml.stat
-
A builder object that provides summary statistics about a given column.
- SummaryBuilder() - Constructor for class org.apache.spark.ml.stat.SummaryBuilder
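A minimal sketch of the Summarizer/SummaryBuilder entries above; the Vector data is hypothetical:
  import org.apache.spark.ml.linalg.Vectors
  import org.apache.spark.ml.stat.Summarizer
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("summarizer-sketch").getOrCreate()
  import spark.implicits._

  val df = Seq(Tuple1(Vectors.dense(1.0, 2.0)), Tuple1(Vectors.dense(3.0, 4.0))).toDF("features")
  df.select(Summarizer.metrics("mean", "variance").summary($"features").as("stats"))
    .select("stats.mean", "stats.variance")
    .show(truncate = false)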
- sumMat() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- SUPPORT_COLUMN_DEFAULT_VALUE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCatalogCapability
-
Signals that the TableCatalog supports defining a column default value as an expression in CREATE/REPLACE/ALTER TABLE.
- supportColumnarReads(InputPartition) - Method in interface org.apache.spark.sql.connector.read.PartitionReaderFactory
-
Returns true if the given InputPartition should be read by Spark in a columnar way.
- supportCompletePushDown(Aggregation) - Method in interface org.apache.spark.sql.connector.read.SupportsPushDownAggregates
-
Whether the data source supports complete aggregation push-down.
- SUPPORTED - Enum constant in enum class org.apache.spark.sql.connector.read.Scan.ColumnarSupportMode
- supportedCustomMetrics() - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns an array of supported custom metrics with name and description.
- supportedCustomMetrics() - Method in interface org.apache.spark.sql.connector.write.Write
-
Returns an array of supported custom metrics with name and description.
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.mllib.tree.RandomForest
-
List of supported feature subset sampling strategies.
- supportedImpurities() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
Accessor for supported impurities: entropy, gini
- supportedImpurities() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Accessor for supported impurity settings: entropy, gini
- supportedImpurities() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
Accessor for supported impurities: variance
- supportedImpurities() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Accessor for supported impurity settings: variance
- supportedLossTypes() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
Accessor for supported loss settings: logistic
- supportedLossTypes() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
Accessor for supported loss settings: squared (L2), absolute (L1)
- supportedOptimizers() - Method in class org.apache.spark.ml.clustering.LDA
- supportedOptimizers() - Method in class org.apache.spark.ml.clustering.LDAModel
- supportedOptimizers() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Supported values for Param LDAParams.optimizer().
- supportedSelectorTypes() - Static method in class org.apache.spark.mllib.feature.ChiSqSelector
-
Set of selector types that ChiSqSelector supports.
- SUPPORTS_CREATE_TABLE_WITH_GENERATED_COLUMNS - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCatalogCapability
-
Signals that the TableCatalog supports defining generated columns upon table creation in SQL.
- SUPPORTS_CREATE_TABLE_WITH_IDENTITY_COLUMNS - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCatalogCapability
-
Signals that the TableCatalog supports defining identity columns upon table creation in SQL.
- SupportsAdmissionControl - Interface in org.apache.spark.sql.connector.read.streaming
-
A mix-in interface for SparkDataStream streaming sources to signal that they can control the rate of data ingested into the system.
- SupportsAtomicPartitionManagement - Interface in org.apache.spark.sql.connector.catalog
-
An atomic partition interface of Table to operate on multiple partitions atomically.
- SupportsCatalogOptions - Interface in org.apache.spark.sql.connector.catalog
-
An interface, which TableProviders can implement, to support table existence checks and creation through a catalog, without having to use table identifiers.
- supportsColumnarInput(Seq<Attribute>) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Can convertColumnarBatchToCachedBatch() be called instead of convertInternalRowToCachedBatch() for this given schema? True if it can and false if it cannot.
- supportsColumnarOutput(StructType) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Can convertCachedBatchToColumnarBatch() be called instead of convertCachedBatchToInternalRow() for this given schema? True if it can and false if it cannot.
- supportsDataType(DataType) - Static method in class org.apache.spark.sql.avro.AvroUtils
- supportsDataType(DataType) - Method in interface org.apache.spark.sql.sources.CreatableRelationProvider
-
Check if the relation supports the given data type.
- SupportsDelete - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface for Table delete support.
- SupportsDeleteV2 - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface for Table delete support.
- SupportsDelta - Interface in org.apache.spark.sql.connector.write
-
A mix-in interface for RowLevelOperation.
- SupportsDynamicOverwrite - Interface in org.apache.spark.sql.connector.write
-
Write builder trait for tables that support dynamic partition overwrite.
- supportsExternalMetadata() - Method in interface org.apache.spark.sql.connector.catalog.TableProvider
-
Returns true if the source has the ability to accept external table metadata when getting tables.
- SupportsIndex - Interface in org.apache.spark.sql.connector.catalog.index
-
Table methods for working with indexes.
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns true if the dialect supports the LIMIT clause.
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- supportsLimit() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.OracleDialect
- supportsLimit() - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- SupportsMetadataColumns - Interface in org.apache.spark.sql.connector.catalog
-
An interface for exposing data columns for a table that are not in the table schema.
- SupportsNamespaces - Interface in org.apache.spark.sql.connector.catalog
-
Catalog methods for working with namespaces.
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.DB2Dialect
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Returns true if the dialect supports the OFFSET clause.
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.MySQLDialect
- supportsOffset() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.OracleDialect
- supportsOffset() - Method in class org.apache.spark.sql.jdbc.PostgresDialect
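A hypothetical sketch of a custom dialect declaring LIMIT/OFFSET support so Spark can push paging into the generated SQL; the "jdbc:mydb" URL prefix and dialect name are made up:
  import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

  object MyDialect extends JdbcDialect {
    override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb") // made-up prefix
    override def supportsLimit(): Boolean = true   // Spark may push LIMIT into the query
    override def supportsOffset(): Boolean = true  // Spark may push OFFSET into the query
  }

  JdbcDialects.registerDialect(MyDialect)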
- SupportsOverwrite - Interface in org.apache.spark.sql.connector.write
-
Write builder trait for tables that support overwrite by filter.
- SupportsOverwriteV2 - Interface in org.apache.spark.sql.connector.write
-
Write builder trait for tables that support overwrite by filter.
- SupportsPartitionManagement - Interface in org.apache.spark.sql.connector.catalog
-
A partition interface of Table.
- SupportsPushDownAggregates - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownFilters - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownLimit - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownOffset - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownRequiredColumns - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownTableSample - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsPushDownTopN - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsPushDownV2Filters - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for ScanBuilder.
- SupportsRead - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface of Table, to indicate that it's readable.
- supportsReliableStorage() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Does this shuffle component support reliable storage, external to the lifecycle of the executor host? For example, writing shuffle data to a distributed filesystem or persisting it in a remote shuffle service.
- SupportsReportOrdering - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsReportPartitioning - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsReportStatistics - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsRowLevelOperations - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface for Table row-level operations support.
- SupportsRuntimeFiltering - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsRuntimeV2Filtering - Interface in org.apache.spark.sql.connector.read
-
A mix-in interface for Scan.
- SupportsStreamSourceMetadataColumns - Interface in org.apache.spark.sql.sources
-
Implemented by StreamSourceProvider objects that can generate file metadata columns.
- supportsTableSample() - Method in class org.apache.spark.sql.jdbc.DatabricksDialect
- supportsTableSample() - Method in class org.apache.spark.sql.jdbc.JdbcDialect
- supportsTableSample() - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- supportsTableSample() - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- SupportsTriggerAvailableNow - Interface in org.apache.spark.sql.connector.read.streaming
-
An interface for streaming sources that supports running in Trigger.AvailableNow mode, which will process all the available data at the beginning of the query in (possibly) multiple batches.
- SupportsTruncate - Interface in org.apache.spark.sql.connector.write
-
Write builder trait for tables that support truncation.
- SupportsWrite - Interface in org.apache.spark.sql.connector.catalog
-
A mix-in interface of Table, to indicate that it's writable.
- surrogateDF() - Method in class org.apache.spark.ml.feature.ImputerModel
- suspended() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- SVDPlusPlus - Class in org.apache.spark.graphx.lib
-
Implementation of SVD++ algorithm.
- SVDPlusPlus() - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus
- SVDPlusPlus.Conf - Class in org.apache.spark.graphx.lib
-
Configuration parameters for SVDPlusPlus.
- SVMDataGenerator - Class in org.apache.spark.mllib.util
-
Generate sample data used for SVM.
- SVMDataGenerator() - Constructor for class org.apache.spark.mllib.util.SVMDataGenerator
- SVMModel - Class in org.apache.spark.mllib.classification
-
Model for Support Vector Machines (SVMs).
- SVMModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.SVMModel
- SVMWithSGD - Class in org.apache.spark.mllib.classification
-
Train a Support Vector Machine (SVM) using Stochastic Gradient Descent.
- SVMWithSGD() - Constructor for class org.apache.spark.mllib.classification.SVMWithSGD
-
Construct a SVM object with default parameters: {stepSize: 1.0, numIterations: 100, regParam: 0.01, miniBatchFraction: 1.0}.
- symbolToColumn(Symbol) - Method in class org.apache.spark.sql.SQLImplicits
-
An implicit conversion that turns a Scala Symbol into a Column.
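A minimal sketch of this implicit in use; the DataFrame and column names are hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("symbol-sketch").getOrCreate()
  import spark.implicits._  // brings symbolToColumn into scope

  val df = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")
  df.select('name).show()      // the Symbol 'name is converted to a Column
  df.filter('age > 26).show()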
- symlink(File, File) - Static method in class org.apache.spark.util.Utils
-
Creates a symlink.
- symmetricEigs(Function1<DenseVector<Object>, DenseVector<Object>>, int, int, double, int) - Static method in class org.apache.spark.mllib.linalg.EigenValueDecomposition
-
Compute the leading k eigenvalues and eigenvectors on a symmetric square matrix using ARPACK.
- synchronizers() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
-
A := alpha * x * x^T + A
- syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
A := alpha * x * x^T + A
- SYSTEM_DEFAULT() - Static method in class org.apache.spark.sql.types.DecimalType
- SYSTEM_PROPERTIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- systemProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
T
- t() - Method in class org.apache.spark.SerializableWritable
- table(int) - Method in interface org.apache.spark.ui.PagedTable
- table(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Returns the specified table/view as a DataFrame.
- table(String) - Method in class org.apache.spark.sql.api.SparkSession
-
Returns the specified table/view as a DataFrame.
- table(String) - Method in class org.apache.spark.sql.DataFrameReader
- table(String) - Method in class org.apache.spark.sql.SparkSession
- table(String) - Method in class org.apache.spark.sql.SQLContext
- table(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Define a Streaming DataFrame on a Table.
- Table - Class in org.apache.spark.sql.catalog
-
A table in Spark, as returned by the listTables method in Catalog.
- Table - Interface in org.apache.spark.sql.connector.catalog
-
An interface representing a logical structured data set of a data source.
- Table(String, String, String[], String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Table
- Table(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Table
- TABLE_CLASS_NOT_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
- TABLE_CLASS_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
- TABLE_CLASS_STRIPED_SORTABLE() - Static method in class org.apache.spark.ui.UIUtils
- TABLE_RESERVED_PROPERTIES() - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
The list of reserved table properties, which cannot be removed or changed directly by the syntax: {{ ALTER TABLE ...
- tableAlreadyExistsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableAlreadyExistsError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableAlreadyExistsError(Identifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- TableCapability - Enum Class in org.apache.spark.sql.connector.catalog
-
Capabilities that can be provided by a Table implementation.
- TableCatalog - Interface in org.apache.spark.sql.connector.catalog
-
Catalog methods for working with Tables.
- TableCatalogCapability - Enum Class in org.apache.spark.sql.connector.catalog
-
Capabilities that can be provided by a TableCatalog implementation.
- TableChange - Interface in org.apache.spark.sql.connector.catalog
-
TableChange subclasses represent requested changes to a table.
- TableChange.AddColumn - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to add a field.
- TableChange.After - Class in org.apache.spark.sql.connector.catalog
-
Column position AFTER means the specified column should be put after the given `column`.
- TableChange.ClusterBy - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to alter clustering columns for a table.
- TableChange.ColumnChange - Interface in org.apache.spark.sql.connector.catalog
- TableChange.ColumnPosition - Interface in org.apache.spark.sql.connector.catalog
- TableChange.DeleteColumn - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to delete a field.
- TableChange.First - Class in org.apache.spark.sql.connector.catalog
-
Column position FIRST means the specified column should be the first column.
- TableChange.RemoveProperty - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to remove a table property.
- TableChange.RenameColumn - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to rename a field.
- TableChange.SetProperty - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to set a table property.
- TableChange.UpdateColumnComment - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to update the comment of a field.
- TableChange.UpdateColumnDefaultValue - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to update the default value of a field.
- TableChange.UpdateColumnNullability - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to update the nullability of a field.
- TableChange.UpdateColumnPosition - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to update the position of a field.
- TableChange.UpdateColumnType - Class in org.apache.spark.sql.connector.catalog
-
A TableChange to update the type of a field.
- tableCssClass() - Method in interface org.apache.spark.ui.PagedTable
- tableDoesNotSupportAtomicPartitionManagementError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableDoesNotSupportDeletesError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableDoesNotSupportPartitionManagementError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableDoesNotSupportReadsError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableDoesNotSupportTruncatesError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableDoesNotSupportWritesError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableExists(String) - Method in class org.apache.spark.sql.api.Catalog
-
Check if the table or view with the specified name exists.
- tableExists(String, String) - Method in class org.apache.spark.sql.api.Catalog
-
Check if the table or view with the specified name exists in the specified database under the Hive Metastore.
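A minimal sketch of Catalog.tableExists as indexed above; the table name is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("catalog-sketch").getOrCreate()

  spark.range(5).write.saveAsTable("demo_table")            // hypothetical table name
  println(spark.catalog.tableExists("demo_table"))          // true
  println(spark.catalog.tableExists("default", "missing"))  // false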
- tableExists(Identifier) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
- tableExists(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Test whether a table exists using an identifier from the catalog.
- tableId() - Method in interface org.apache.spark.ui.PagedTable
- tableIdentifierExistsError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- TableIdentifierHelper(TableIdentifier) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TableIdentifierHelper
- tableIdentifierNotConvertedToHadoopFsRelationError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- TableIndex - Class in org.apache.spark.sql.connector.catalog.index
-
Index in a table
- TableIndex(String, String, NamedReference[], Map<NamedReference, Properties>, Properties) - Constructor for class org.apache.spark.sql.connector.catalog.index.TableIndex
- tableIndexNotSupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableIsNotRowLevelOperationTableError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableNames() - Method in class org.apache.spark.sql.SQLContext
- tableNames(String) - Method in class org.apache.spark.sql.SQLContext
- tableNotSpecifyDatabaseError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableNotSpecifyLocationUriError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableNotSupportStreamingWriteError(String, Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableOrViewAlreadyExistsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableOrViewNotFound(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableProperty(String, String) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Add a table property.
- tableProperty(String, String) - Method in class org.apache.spark.sql.DataFrameWriterV2
- TableProvider - Interface in org.apache.spark.sql.connector.catalog
-
The base interface for v2 data sources which don't have a real catalog.
- tables() - Method in class org.apache.spark.sql.SQLContext
- tables(String) - Method in class org.apache.spark.sql.SQLContext
- tableSampleByBytesUnsupportedError(String, SqlBaseParser.SampleMethodContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- TableScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can produce all of its tuples as an RDD of Row objects.
- tableType() - Method in class org.apache.spark.sql.catalog.Table
- tableValuedFunctionFailedToAnalyseInPythonError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableValuedFunctionRequiredMetadataIncompatibleWithCall(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableValuedFunctionRequiredMetadataInvalid(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tableValuedFunctionTooManyTableArgumentsError(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- TableWritePrivilege - Enum Class in org.apache.spark.sql.connector.catalog
-
The table write privileges that will be provided when loading a table.
- tail(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns the last n rows in the Dataset.
- tail(int) - Method in class org.apache.spark.sql.Dataset
- take(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Take the first num elements of the RDD.
- take(int) - Method in class org.apache.spark.rdd.RDD
-
Take the first num elements of the RDD.
- take(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns the first n rows in the Dataset.
- takeAsList(int) - Method in class org.apache.spark.sql.api.Dataset
-
Returns the first n rows in the Dataset as a list.
- takeAsync(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the take action, which returns a future for retrieving the first num elements of this RDD.
- takeAsync(int) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for retrieving the first num elements of the RDD.
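A minimal sketch of the take/tail/takeAsList entries above; spark.range is used as hypothetical data:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("take-sketch").getOrCreate()

  val ds = spark.range(10)
  ds.take(3)        // first 3 rows, collected to the driver as an Array
  ds.tail(3)        // last 3 rows; requires scanning to the end of the Dataset
  ds.takeAsList(3)  // first 3 rows as a java.util.List (Java-friendly)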
- takeOrdered(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the first k (smallest) elements from this RDD using the natural ordering for T while maintaining the order.
- takeOrdered(int, Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the first k (smallest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
- takeOrdered(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
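A minimal sketch of takeOrdered with and without a custom Ordering; the data is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("ordered-sketch").getOrCreate()
  val sc = spark.sparkContext

  val rdd = sc.parallelize(Seq(5, 1, 4, 2, 3))
  rdd.takeOrdered(3)                         // Array(1, 2, 3): smallest by natural ordering
  rdd.takeOrdered(3)(Ordering[Int].reverse)  // Array(5, 4, 3): largest via a reversed Ordering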
- takeSample(boolean, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
- takeSample(boolean, int, long) - Method in interface org.apache.spark.api.java.JavaRDDLike
- takeSample(boolean, int, long) - Method in class org.apache.spark.rdd.RDD
-
Return a fixed-size sampled subset of this RDD in an array
- tallSkinnyQR(boolean) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Compute QR decomposition for RowMatrix.
- tan(String) - Static method in class org.apache.spark.sql.functions
- tan(Column) - Static method in class org.apache.spark.sql.functions
- tanh(String) - Static method in class org.apache.spark.sql.functions
- tanh(Column) - Static method in class org.apache.spark.sql.functions
- targetStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- targetStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- task() - Method in class org.apache.spark.CleanupTaskWeakReference
- TASK_COUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- TASK_DESERIALIZATION_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
- TASK_DESERIALIZATION_TIME() - Static method in class org.apache.spark.ui.ToolTips
- TASK_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- TASK_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- TASK_INDEX() - Static method in class org.apache.spark.status.TaskIndexNames
- TASK_LOCALITY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- TASK_LOCALITY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- TASK_METRICS_DISTRIBUTIONS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- TASK_METRICS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- TASK_PARTITION_ID() - Static method in class org.apache.spark.status.TaskIndexNames
- TASK_RESOURCES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- TASK_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- TASK_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- taskAttemptId() - Method in class org.apache.spark.BarrierTaskContext
- taskAttemptId() - Method in class org.apache.spark.TaskContext
-
An ID that is unique to this task attempt (within the same SparkContext, no two task attempts will share the same attempt ID).
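A minimal sketch of reading TaskContext fields inside a running task; printed values vary per run:
  import org.apache.spark.TaskContext
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("taskctx-sketch").getOrCreate()
  val sc = spark.sparkContext

  sc.parallelize(1 to 4, numSlices = 2).foreachPartition { _ =>
    val ctx = TaskContext.get()  // the context of the currently running task
    println(s"partition=${ctx.partitionId()} attempt=${ctx.taskAttemptId()}")
  }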
- TaskCommitDenied - Class in org.apache.spark
-
:: DeveloperApi :: Task requested the driver to commit, but was denied.
- TaskCommitDenied(int, int, int) - Constructor for class org.apache.spark.TaskCommitDenied
- TaskCompletionListener - Interface in org.apache.spark.util
-
:: DeveloperApi ::
- TaskContext - Class in org.apache.spark
-
Contextual information about a task which can be read or mutated during execution.
- TaskContext() - Constructor for class org.apache.spark.TaskContext
- taskCpus() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- TaskData - Class in org.apache.spark.status.api.v1
- TaskDetailsClassNames - Class in org.apache.spark.ui.jobs
-
Names of the CSS classes corresponding to each type of task detail.
- TaskDetailsClassNames() - Constructor for class org.apache.spark.ui.jobs.TaskDetailsClassNames
- taskEndFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- TaskEndReason - Interface in org.apache.spark
-
:: DeveloperApi :: Various possible reasons why a task ended.
- taskEndReasonFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskEndReasonToJson(TaskEndReason, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- taskEndToJson(SparkListenerTaskEnd, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- taskExecutorMetrics() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- TaskFailedReason - Interface in org.apache.spark
-
:: DeveloperApi :: Various possible reasons why a task failed.
- taskFailedWhileWritingRowsError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- TaskFailureListener - Interface in org.apache.spark.util
-
:: DeveloperApi ::
- taskFailures() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- taskFailures() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- taskFailures() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- taskFailures() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- taskGettingResultFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskGettingResultToJson(SparkListenerTaskGettingResult, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- taskHasNotLockedBlockError(long, BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- taskId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
- taskId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
- taskId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.TaskThreadDump
- taskId() - Method in class org.apache.spark.scheduler.local.KillTask
- taskId() - Method in class org.apache.spark.scheduler.local.StatusUpdate
- taskId() - Method in class org.apache.spark.scheduler.TaskInfo
- taskId() - Method in class org.apache.spark.status.api.v1.TaskData
- taskId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo
- taskId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockVisibility
- taskId() - Method in class org.apache.spark.storage.TaskResultBlockId
- taskIndex() - Method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- TaskIndexNames - Class in org.apache.spark.status
-
Tasks have a lot of indices that are used in a few different places.
- TaskIndexNames() - Constructor for class org.apache.spark.status.TaskIndexNames
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
- TaskInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi :: Information about a running task attempt inside a TaskSet.
- TaskInfo(long, int, int, int, long, String, String, Enumeration.Value, boolean) - Constructor for class org.apache.spark.scheduler.TaskInfo
- TaskInfo(long, int, int, long, String, String, Enumeration.Value, boolean) - Constructor for class org.apache.spark.scheduler.TaskInfo
-
This API doesn't contain partitionId; please use the new API.
- taskInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskInfoToJson(TaskInfo, JsonGenerator, JsonProtocolOptions, boolean) - Static method in class org.apache.spark.util.JsonProtocol
- TaskKilled - Class in org.apache.spark
-
:: DeveloperApi :: Task was killed intentionally and needs to be rescheduled.
- TaskKilled(String, Seq<AccumulableInfo>, Seq<AccumulatorV2<?, ?>>, Seq<Object>) - Constructor for class org.apache.spark.TaskKilled
- TaskKilledException - Exception in org.apache.spark
-
:: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected).
- TaskKilledException() - Constructor for exception org.apache.spark.TaskKilledException
- TaskKilledException(String) - Constructor for exception org.apache.spark.TaskKilledException
- taskLocality() - Method in class org.apache.spark.scheduler.TaskInfo
- taskLocality() - Method in class org.apache.spark.status.api.v1.TaskData
- TaskLocality - Class in org.apache.spark.scheduler
- TaskLocality() - Constructor for class org.apache.spark.scheduler.TaskLocality
- TaskLocation - Interface in org.apache.spark.scheduler
-
A location where a task should run.
- TaskMetricDistributions - Class in org.apache.spark.status.api.v1
- taskMetrics() - Method in class org.apache.spark.BarrierTaskContext
- taskMetrics() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- taskMetrics() - Method in class org.apache.spark.scheduler.StageInfo
- taskMetrics() - Method in class org.apache.spark.status.api.v1.TaskData
- taskMetrics() - Method in class org.apache.spark.TaskContext
- TaskMetrics - Class in org.apache.spark.status.api.v1
- taskMetricsDistributions() - Method in class org.apache.spark.status.api.v1.StageData
- taskMetricsFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskMetricsToJson(TaskMetrics, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- TaskResourceRequest - Class in org.apache.spark.resource
-
A task resource request.
- TaskResourceRequest(String, double) - Constructor for class org.apache.spark.resource.TaskResourceRequest
- taskResourceRequestFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskResourceRequestMapFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskResourceRequestMapToJson(Map<String, TaskResourceRequest>, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- TaskResourceRequests - Class in org.apache.spark.resource
-
A set of task resource requests.
- TaskResourceRequests() - Constructor for class org.apache.spark.resource.TaskResourceRequests
- taskResourceRequestToJson(TaskResourceRequest, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- taskResources() - Method in class org.apache.spark.resource.ResourceProfile
- taskResources() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- taskResources() - Method in class org.apache.spark.status.api.v1.ResourceProfileInfo
- taskResources() - Method in class org.apache.spark.status.LiveResourceProfile
- taskResourcesJMap() - Method in class org.apache.spark.resource.ResourceProfile
-
(Java-specific) gets a Java Map of resources to TaskResourceRequest
- taskResourcesJMap() - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
(Java-specific) gets a Java Map of resources to TaskResourceRequest
- TaskResult<T> - Interface in org.apache.spark.scheduler
- TASKRESULT() - Static method in class org.apache.spark.storage.BlockId
- TaskResultBlockId - Class in org.apache.spark.storage
- TaskResultBlockId(long) - Constructor for class org.apache.spark.storage.TaskResultBlockId
- TaskResultLost - Class in org.apache.spark
-
:: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched.
- TaskResultLost() - Constructor for class org.apache.spark.TaskResultLost
- tasks() - Method in class org.apache.spark.status.api.v1.StageData
- TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
- TaskScheduler - Interface in org.apache.spark.scheduler
-
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl.
- TaskSchedulerIsSet - Class in org.apache.spark
-
An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is created.
- TaskSchedulerIsSet() - Constructor for class org.apache.spark.TaskSchedulerIsSet
- TaskSorting - Enum Class in org.apache.spark.status.api.v1
- taskStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- taskStartToJson(SparkListenerTaskStart, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- TaskState - Class in org.apache.spark
- TaskState() - Constructor for class org.apache.spark.TaskState
- TaskStatus - Enum Class in org.apache.spark.status.api.v1
- taskSucceeded(int, Object) - Method in interface org.apache.spark.scheduler.JobListener
- TaskThreadDump(long) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.TaskThreadDump
- TaskThreadDump$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.TaskThreadDump$
- taskTime() - Method in class org.apache.spark.status.api.v1.ExecutorMetricsDistributions
- taskTime() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
- taskTime() - Method in class org.apache.spark.status.LiveExecutorStageSummary
- taskType() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- TEMP_DIR_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
-
The shutdown priority of temp directory must be lower than the SparkContext shutdown priority.
- TEMP_LOCAL() - Static method in class org.apache.spark.storage.BlockId
- TEMP_SHUFFLE() - Static method in class org.apache.spark.storage.BlockId
- tempFileWith(File) - Static method in class org.apache.spark.util.Utils
-
Returns the path of a temporary file in the same directory as path.
- temporaryViewWithSchemaBindingMode(SqlBaseParser.StatementContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- tempViewNotCachedForAnalyzingColumnsError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- tempViewNotSupportStreamingWriteError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- TeradataDialect - Class in org.apache.spark.sql.jdbc
- TeradataDialect() - Constructor for class org.apache.spark.sql.jdbc.TeradataDialect
- Term - Interface in org.apache.spark.ml.feature
-
R formula terms.
- terminateProcess(Process, long) - Static method in class org.apache.spark.util.Utils
-
Terminates a process waiting for at most the specified duration.
- test(Dataset<?>, String, String, double...) - Static method in class org.apache.spark.ml.stat.KolmogorovSmirnovTest
-
Convenience function to conduct a one-sample, two-sided Kolmogorov-Smirnov test for probability distribution equality.
- test(Dataset<?>, String, String, Seq<Object>) - Static method in class org.apache.spark.ml.stat.KolmogorovSmirnovTest
- test(Dataset<?>, String, Function<Double, Double>) - Static method in class org.apache.spark.ml.stat.KolmogorovSmirnovTest
- test(Dataset<?>, String, Function1<Object, Object>) - Static method in class org.apache.spark.ml.stat.KolmogorovSmirnovTest
- test(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.stat.ANOVATest
- test(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.stat.ChiSquareTest
-
Conduct Pearson's independence test for every feature against the label.
- test(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.stat.FValueTest
- test(Dataset<Row>, String, String, boolean) - Static method in class org.apache.spark.ml.stat.ANOVATest
- test(Dataset<Row>, String, String, boolean) - Static method in class org.apache.spark.ml.stat.ChiSquareTest
- test(Dataset<Row>, String, String, boolean) - Static method in class org.apache.spark.ml.stat.FValueTest
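A minimal sketch of ChiSquareTest.test as indexed above; the toy data is hypothetical:
  import org.apache.spark.ml.linalg.Vectors
  import org.apache.spark.ml.stat.ChiSquareTest
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("chisq-sketch").getOrCreate()
  import spark.implicits._

  val df = Seq(
    (0.0, Vectors.dense(0.5, 10.0)),
    (0.0, Vectors.dense(1.5, 20.0)),
    (1.0, Vectors.dense(1.5, 30.0)),
    (1.0, Vectors.dense(3.5, 40.0))
  ).toDF("label", "features")

  ChiSquareTest.test(df, "features", "label")
    .select("pValues", "degreesOfFreedom", "statistics")
    .show(truncate = false)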
- TEST() - Static method in class org.apache.spark.storage.BlockId
- TEST_ACCUM() - Static method in class org.apache.spark.InternalAccumulator
- testCommandAvailable(String) - Static method in class org.apache.spark.TestUtils
-
Test if a command is available.
- TestGroupState<S> - Interface in org.apache.spark.sql.streaming
-
:: Experimental ::
- testOneSample(RDD<Object>, String, double...) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
-
A convenience function that allows running the KS test for one set of sample data against a named distribution.
- testOneSample(RDD<Object>, String, Seq<Object>) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- testOneSample(RDD<Object>, RealDistribution) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- testOneSample(RDD<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
- TestResult<DF> - Interface in org.apache.spark.mllib.stat.test
-
Trait for hypothesis test results.
- TestUtils - Class in org.apache.spark
-
Utilities for tests.
- TestUtils() - Constructor for class org.apache.spark.TestUtils
- text(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(String) - Method in class org.apache.spark.sql.DataFrameReader
- text(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in a text file at the specified path.
- text(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(String...) - Method in class org.apache.spark.sql.DataFrameReader
- text(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
- textDataSourceWithMultiColumnsError(StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- textFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
- textFile(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(String) - Method in class org.apache.spark.sql.DataFrameReader
- textFile(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads text file(s) and returns a Dataset of String.
- textFile(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(String...) - Method in class org.apache.spark.sql.DataFrameReader
- textFile(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
- textFile(String, int) - Method in class org.apache.spark.SparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
- textFile(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
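A minimal sketch contrasting read.text and read.textFile as indexed above; the path is hypothetical:
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").appName("text-sketch").getOrCreate()

  // "/tmp/logs" is a hypothetical path.
  val df = spark.read.text("/tmp/logs")      // DataFrame with a single "value" string column
  val ds = spark.read.textFile("/tmp/logs")  // Dataset[String], one element per line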
- textFileStream(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat).
- textFileStream(String) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat).
- textResponderToServlet(Function1<HttpServletRequest, String>) - Static method in class org.apache.spark.ui.JettyUtils
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparing(Comparator<? super T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparing(Function<? super T, ? extends U>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparing(Function<? super T, ? extends U>, Comparator<? super U>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparingDouble(ToDoubleFunction<? super T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparingInt(ToIntFunction<? super T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- thenComparingLong(ToLongFunction<? super T>) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- theta() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
- thisClassName() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
Hard-coded class name string, in case it changes in the future.
- thisClassName() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
Hard-coded class name string, in case it changes in the future.
- thisClassName() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
- thisFormatVersion() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
- thisFormatVersion() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
- threadId() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- threadName() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- ThreadStackTrace - Class in org.apache.spark.status.api.v1
- ThreadStackTrace(long, String, Thread.State, StackTrace, Option<Object>, String, Seq<String>, Seq<String>, Seq<String>, Option<String>, Option<String>, boolean, boolean, boolean, int) - Constructor for class org.apache.spark.status.api.v1.ThreadStackTrace
- threadState() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
- ThreadUtils - Class in org.apache.spark.util
- ThreadUtils() - Constructor for class org.apache.spark.util.ThreadUtils
- threshold() - Method in class org.apache.spark.ml.classification.LinearSVC
- threshold() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- threshold() - Method in interface org.apache.spark.ml.classification.LinearSVCParams
-
Param for threshold in binary classification prediction.
- threshold() - Method in class org.apache.spark.ml.classification.LogisticRegression
- threshold() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- threshold() - Method in class org.apache.spark.ml.feature.Binarizer
-
Param for threshold used to binarize continuous features.
- threshold() - Method in interface org.apache.spark.ml.param.shared.HasThreshold
-
Param for threshold in binary classification prediction, in range [0, 1].
- threshold() - Method in class org.apache.spark.ml.tree.ContinuousSplit
- threshold() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- threshold() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- threshold() - Method in class org.apache.spark.mllib.tree.model.Split
- thresholds() - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- thresholds() - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
- thresholds() - Method in class org.apache.spark.ml.feature.Binarizer
-
Array of threshold used to binarize continuous features.
- thresholds() - Method in interface org.apache.spark.ml.param.shared.HasThresholds
-
Param for thresholds in multi-class classification, used to adjust the probability of predicting each class.
- thresholds() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns thresholds in descending order.
- throughOrigin() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
Param for whether the regression is through the origin.
- throwBalls(int, RDD<?>, double, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
- time() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.MiscellaneousProcessAdded
- time() - Method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
- time() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- time() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- time() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
- time() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
- time() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
- time() - Method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
- time() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
Time when the exception occurred
- time() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
- time(Function0<T>) - Method in class org.apache.spark.sql.api.SparkSession
-
Executes some code block and prints to stdout the time taken to execute the block.
- Time - Class in org.apache.spark.streaming
-
This is a simple class that represents an absolute instant of time.
- Time(long) - Constructor for class org.apache.spark.streaming.Time
- timeIt(int, Function0<BoxedUnit>, Option<Function0<BoxedUnit>>) - Static method in class org.apache.spark.util.Utils
-
Timing method based on iterations that permit JVM JIT optimization.
- TimeMode - Class in org.apache.spark.sql.streaming
-
Represents the time modes (used for specifying timers and TTL) possible for the Dataset operations
transformWithState
. - TimeMode() - Constructor for class org.apache.spark.sql.streaming.TimeMode
- timeout(Duration) - Method in class org.apache.spark.streaming.StateSpec
-
Set the duration after which the state of an idle key will be removed.
- TIMER() - Static method in class org.apache.spark.metrics.sink.StatsdMetricType
- TimerValues - Interface in org.apache.spark.sql.streaming
-
Class used to provide access to timer values for processing time and event time, populated before method invocations in the arbitrary state API v2.
- times(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- times(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- times(double, double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- times(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- times(float, float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- times(int) - Method in class org.apache.spark.streaming.Duration
- times(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- times(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Method executed for repeating a task for side effects.
- times(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- times(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- times(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- times(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- timestamp() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new
StructField
of type timestamp. - timestamp() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryIdleEvent
- timestamp() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
- timestamp() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- TIMESTAMP - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- TIMESTAMP - Static variable in class org.apache.spark.types.variant.VariantUtil
- TIMESTAMP() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable timestamp type.
- timestamp_add(String, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Adds the specified number of units to the given timestamp.
- timestamp_diff(String, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Gets the difference between the timestamps in the specified units by truncating the fraction part.
- TIMESTAMP_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- TIMESTAMP_LTZ() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- timestamp_micros(Column) - Static method in class org.apache.spark.sql.functions
-
Creates timestamp from the number of microseconds since UTC epoch.
- timestamp_millis(Column) - Static method in class org.apache.spark.sql.functions
-
Creates timestamp from the number of milliseconds since UTC epoch.
- TIMESTAMP_NTZ - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- TIMESTAMP_NTZ - Static variable in class org.apache.spark.types.variant.VariantUtil
- timestamp_seconds(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the number of seconds from the Unix epoch (1970-01-01T00:00:00Z) to a timestamp.
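A minimal sketch of the timestamp-construction functions above; the DataFrame df and its long epoch column epoch_s are assumptions:

```scala
import org.apache.spark.sql.functions._

// Sketch: `df` is an assumed DataFrame with a long column `epoch_s` (seconds since epoch).
val withTs = df.select(
  timestamp_seconds(col("epoch_s")).as("ts_from_seconds"),
  timestamp_millis(col("epoch_s") * 1000L).as("ts_from_millis"),
  timestamp_micros(col("epoch_s") * 1000000L).as("ts_from_micros")
)
```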
- TIMESTAMP_TZ() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
- timestampAddOverflowError(long, int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- TimestampNTZType - Class in org.apache.spark.sql.types
-
The timestamp without time zone type represents a local time in microsecond precision, which is independent of time zone.
- TimestampNTZType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the TimestampNTZType object.
- TimestampNTZType() - Constructor for class org.apache.spark.sql.types.TimestampNTZType
- TimestampType - Class in org.apache.spark.sql.types
-
The timestamp type represents a time instant in microsecond precision.
- TimestampType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the TimestampType object.
- TimestampType() - Constructor for class org.apache.spark.sql.types.TimestampType
- TimestampTypeExpression - Class in org.apache.spark.sql.types
- TimestampTypeExpression() - Constructor for class org.apache.spark.sql.types.TimestampTypeExpression
- timeStringAsMs(String) - Static method in class org.apache.spark.util.Utils
-
Convert a time parameter such as (50s, 100ms, or 250us) to milliseconds for internal use.
- timeStringAsSeconds(String) - Static method in class org.apache.spark.util.Utils
-
Convert a time parameter such as (50s, 100ms, or 250us) to seconds for internal use.
- timeTakenMs(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Records the duration of running `body`.
- TimeTrackingOutputStream - Class in org.apache.spark.storage
-
Intercepts write calls and tracks total time spent writing in order to update shuffle write metrics.
- TimeTrackingOutputStream(ShuffleWriteMetricsReporter, OutputStream) - Constructor for class org.apache.spark.storage.TimeTrackingOutputStream
- timeTravelUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- timeUnit() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
- TIMING_DATA() - Static method in class org.apache.spark.api.r.SpecialLengths
- title() - Method in interface org.apache.spark.sql.ExtendedExplainGenerator
- to(StructType) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new DataFrame where each row is reconciled to match the specified schema.
- to(StructType) - Method in class org.apache.spark.sql.Dataset
- to(Time, Duration) - Method in class org.apache.spark.streaming.Time
- to_avro(Column) - Static method in class org.apache.spark.sql.avro.functions
-
Converts a column into binary of avro format.
- to_avro(Column, String) - Static method in class org.apache.spark.sql.avro.functions
-
Converts a column into binary of avro format.
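A sketch of to_avro; the DataFrame df and its columns are assumptions, and the spark-avro module must be on the classpath:

```scala
import org.apache.spark.sql.avro.functions.to_avro
import org.apache.spark.sql.functions.{col, struct}

// Sketch: `df` is an assumed DataFrame with columns `key` and `value`.
val payload = df.select(to_avro(struct(col("key"), col("value"))).as("avro_bytes"))
```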
- to_binary(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the input
e
to a binary value based on the default format "hex". - to_binary(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Converts the input
e
to a binary value based on the suppliedformat
. - to_char(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Convert
e
to a string based on theformat
. - to_csv(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a column containing a
StructType
into a CSV string with the specified schema. - to_csv(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Converts a column containing a
StructType
into a CSV string with the specified schema. - to_date(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the column into
DateType
by casting rules toDateType
. - to_date(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts the column into a
DateType
with a specified format. - TO_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- TO_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- to_json(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a column containing a
StructType
,ArrayType
or aMapType
into a JSON string with the specified schema. - to_json(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Converts a column containing a
StructType
,ArrayType
or aMapType
into a JSON string with the specified schema. - to_json(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Converts a column containing a
StructType
,ArrayType
or aMapType
into a JSON string with the specified schema. - to_number(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Convert string 'e' to a number based on the string format 'format'.
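A combined sketch of the struct-to-string converters and to_number above; the DataFrame df, its columns, and the number format are assumptions:

```scala
import org.apache.spark.sql.functions._

// Sketch: `df` is an assumed DataFrame with columns `id`, `name`, and a string `amount`.
val converted = df.select(
  to_json(struct(col("id"), col("name"))).as("json"),
  to_csv(struct(col("id"), col("name"))).as("csv"),
  to_number(col("amount"), lit("999.99")).as("amount_num") // e.g. parses "123.45"
)
```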
- to_timestamp(Column) - Static method in class org.apache.spark.sql.functions
-
Converts to a timestamp by casting rules to
TimestampType
. - to_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts time string with the given pattern to timestamp.
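A minimal sketch of to_date and to_timestamp with an explicit pattern; the DataFrame logs and its column are assumptions:

```scala
import org.apache.spark.sql.functions._

// Sketch: `logs` is an assumed DataFrame with a string column `event_time`,
// e.g. "2024-05-01 13:45:00".
val parsed = logs.select(
  to_date(col("event_time"), "yyyy-MM-dd HH:mm:ss").as("event_date"),
  to_timestamp(col("event_time"), "yyyy-MM-dd HH:mm:ss").as("event_ts")
)
```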
- to_timestamp_ltz(Column) - Static method in class org.apache.spark.sql.functions
-
Parses the
timestamp
expression with the default format to a timestamp with local time zone. - to_timestamp_ltz(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Parses the
timestamp
expression with theformat
expression to a timestamp with local time zone. - to_timestamp_ntz(Column) - Static method in class org.apache.spark.sql.functions
-
Parses the
timestamp
expression with the default format to a timestamp without time zone. - to_timestamp_ntz(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Parses the
timestamp_str
expression with theformat
expression to a timestamp without time zone. - to_unix_timestamp(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the UNIX timestamp of the given time.
- to_unix_timestamp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the UNIX timestamp of the given time.
- to_utc_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a time in the given time zone, and renders that time as a timestamp in UTC.
- to_utc_timestamp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a time in the given time zone, and renders that time as a timestamp in UTC.
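A minimal sketch of to_utc_timestamp; the DataFrame df, its column ts, and the Asia/Seoul zone are assumptions:

```scala
import org.apache.spark.sql.functions._

// Sketch: render a naive timestamp column `ts`, recorded in Seoul local time, as UTC.
val utc = df.select(to_utc_timestamp(col("ts"), "Asia/Seoul").as("ts_utc"))
```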
- to_varchar(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Convert
e
to a string based on theformat
. - to_variant_object(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a column containing nested inputs (array/map/struct) into a variant, where maps and structs become variant objects, which are unordered unlike SQL structs. - to_xml(Column) - Static method in class org.apache.spark.sql.functions
- to_xml(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a column containing a
StructType
into an XML string with the specified schema. - to_xml(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Converts a column containing a
StructType
into an XML string with the specified schema. - toApacheCommonsStats(StatCounter) - Method in interface org.apache.spark.mllib.stat.test.StreamingTestMethod
-
Implicit adapter to convert between streaming summary statistics type and the type required by the t-testing libraries.
- toApi() - Method in class org.apache.spark.status.LiveRDDDistribution
- toApi() - Method in class org.apache.spark.status.LiveResourceProfile
- toApi() - Method in class org.apache.spark.status.LiveStage
- toArray() - Method in class org.apache.spark.input.PortableDataStream
-
Read the file as a byte array.
- toArray() - Method in class org.apache.spark.ml.linalg.DenseVector
- toArray() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a dense array in column major.
- toArray() - Method in class org.apache.spark.ml.linalg.SparseVector
- toArray() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a double array.
- toArray() - Method in class org.apache.spark.mllib.linalg.DenseVector
- toArray() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a dense array in column major.
- toArray() - Method in class org.apache.spark.mllib.linalg.SparseVector
- toArray() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a double array.
- toArrowField(String, DataType, boolean, String, boolean) - Static method in class org.apache.spark.sql.util.ArrowUtils
-
Maps field from Spark to Arrow.
- toArrowSchema(StructType, String, boolean, boolean) - Static method in class org.apache.spark.sql.util.ArrowUtils
-
Maps schema from Spark to Arrow.
- toArrowType(DataType, String, boolean) - Static method in class org.apache.spark.sql.util.ArrowUtils
-
Maps data type from Spark to Arrow.
- toAttributes() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ColumnsHelper
- toAvroType(DataType, boolean, String, String) - Static method in class org.apache.spark.sql.avro.SchemaConverters
-
Converts a Spark SQL schema to a corresponding Avro schema.
- toBatch() - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns the physical representation of this scan for batch query.
- toBatch() - Method in interface org.apache.spark.sql.connector.write.DeltaWrite
- toBatch() - Method in interface org.apache.spark.sql.connector.write.Write
-
Returns a
BatchWrite
to write data to batch source. - toBigDecimal() - Method in class org.apache.spark.sql.types.Decimal
- toBlockMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to BlockMatrix.
- toBlockMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Converts to BlockMatrix.
- toBlockMatrix(int, int) - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to BlockMatrix.
- toBlockMatrix(int, int) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Converts to BlockMatrix.
- toBooleanArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toBreeze() - Method in interface org.apache.spark.mllib.linalg.distributed.DistributedMatrix
-
Collects data and assembles a local dense breeze matrix (for test only).
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- toBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- toByte() - Method in class org.apache.spark.sql.types.Decimal
- toByteArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toByteArray() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Serializes this
CountMinSketch
and returns the serialized form. - toByteBuffer() - Method in interface org.apache.spark.storage.BlockData
- toByteBuffer() - Method in class org.apache.spark.storage.DiskBlockData
- toChunkedByteBuffer(Function1<Object, ByteBuffer>) - Method in interface org.apache.spark.storage.BlockData
- toChunkedByteBuffer(Function1<Object, ByteBuffer>) - Method in class org.apache.spark.storage.DiskBlockData
- toColumn() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Returns this
Aggregator
as aTypedColumn
that can be used inDataset
operations. - toConf(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- toConfVal(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- toContinuousStream(String) - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns the physical representation of this scan for streaming query with continuous mode.
- toCoordinateMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Converts to CoordinateMatrix.
- toCoordinateMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Converts this matrix to a
CoordinateMatrix
. - toCryptoConf(SparkConf) - Static method in class org.apache.spark.security.CryptoStreamUtils
- toDataFrame(JavaRDD<byte[]>, StructType, SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
R callable function to create a
DataFrame
from aJavaRDD
of serialized ArrowRecordBatches. - toDayTimeIntervalANSIString(long, byte, byte) - Static method in class org.apache.spark.util.DayTimeIntervalUtils
- toDDL() - Method in class org.apache.spark.sql.types.StructField
-
Returns a string containing a schema in DDL format.
- toDDL() - Method in class org.apache.spark.sql.types.StructType
-
Returns a string containing a schema in DDL format.
- toDebugString() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
A description of this RDD and its recursive dependencies for debugging.
- toDebugString() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Full description of the model.
- toDebugString() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Full description of the model.
- toDebugString() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Print the full model to a string.
- toDebugString() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Print the full model to a string.
- toDebugString() - Method in class org.apache.spark.rdd.RDD
-
A description of this RDD and its recursive dependencies for debugging.
- toDebugString() - Method in class org.apache.spark.SparkConf
-
Return a string listing all keys and values, one per line.
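A sketch of the two debugging helpers above on an RDD lineage and a SparkConf; the SparkContext named sc and the input path are assumptions:

```scala
// Sketch: `sc` is an assumed active SparkContext; the path is hypothetical.
val counts = sc.textFile("/tmp/words.txt")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

println(counts.toDebugString)     // RDD lineage, one indented line per dependency
println(sc.getConf.toDebugString) // every configured key=value, one per line
```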
- toDebugString() - Method in class org.apache.spark.sql.types.Decimal
- toDegrees(String) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use degrees. Since 2.1.0.
- toDegrees(Column) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use degrees. Since 2.1.0.
- toDense() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix while maintaining the layout of the current matrix.
- toDense() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts this vector to a dense vector.
- toDense() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a
DenseMatrix
from the givenSparseMatrix
. - toDense() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts this vector to a dense vector.
- toDenseColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix in column major order.
- toDenseMatrix(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix.
- toDenseRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix in row major order.
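A minimal sketch of the dense/sparse conversions above on an ML vector; the values are illustrative:

```scala
import org.apache.spark.ml.linalg.{DenseVector, Vectors}

// Sketch: converting between sparse and dense vector representations.
val sv = Vectors.sparse(4, Array(0, 3), Array(1.0, -2.0))
val dv: DenseVector = sv.toDense // [1.0, 0.0, 0.0, -2.0]
val back = dv.toSparse           // explicit zeros removed again
```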
- toDF() - Method in class org.apache.spark.sql.api.Dataset
-
Converts this strongly typed collection of data to a generic DataFrame.
- toDF() - Method in class org.apache.spark.sql.Dataset
- toDF() - Method in class org.apache.spark.sql.DatasetHolder
- toDF(String...) - Method in class org.apache.spark.sql.api.Dataset
-
Converts this strongly typed collection of data to generic
DataFrame
with columns renamed. - toDF(String...) - Method in class org.apache.spark.sql.Dataset
- toDF(Seq<String>) - Method in class org.apache.spark.sql.api.Dataset
-
Converts this strongly typed collection of data to generic
DataFrame
with columns renamed. - toDF(Seq<String>) - Method in class org.apache.spark.sql.Dataset
- toDF(Seq<String>) - Method in class org.apache.spark.sql.DatasetHolder
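A minimal sketch of toDF with column renaming; the SparkSession named spark is an assumption:

```scala
// Sketch: `spark` is an assumed active SparkSession.
import spark.implicits._

val ds = Seq((1, "a"), (2, "b")).toDS()
val df = ds.toDF("id", "letter") // same data as a generic DataFrame, columns renamed
```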
- toDouble() - Method in class org.apache.spark.sql.types.Decimal
- toDouble(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- toDouble(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- toDouble(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- toDouble(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- toDouble(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- toDouble(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- toDouble(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- toDouble(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- toDouble(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- toDouble(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- toDoubleArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toDS() - Method in class org.apache.spark.sql.DatasetHolder
- toDSOption(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toDSOption(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toDSOption(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toDSOption(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toDSOption(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toEdgeTriplet() - Method in class org.apache.spark.graphx.EdgeContext
-
Converts the edge and vertex properties into an
EdgeTriplet
for convenience. - toErrorString() - Method in class org.apache.spark.ExceptionFailure
- toErrorString() - Method in class org.apache.spark.ExecutorLostFailure
- toErrorString() - Method in class org.apache.spark.FetchFailed
- toErrorString() - Static method in class org.apache.spark.Resubmitted
- toErrorString() - Method in class org.apache.spark.TaskCommitDenied
- toErrorString() - Method in interface org.apache.spark.TaskFailedReason
-
Error message displayed in the web UI.
- toErrorString() - Method in class org.apache.spark.TaskKilled
- toErrorString() - Static method in class org.apache.spark.TaskResultLost
- toErrorString() - Static method in class org.apache.spark.UnknownReason
- toFloat() - Method in class org.apache.spark.sql.types.Decimal
- toFloat(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- toFloat(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- toFloat(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- toFloat(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- toFloat(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- toFloat(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- toFloat(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- toFloat(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- toFloat(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- toFloat(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- toFloatArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toFormattedString() - Method in class org.apache.spark.streaming.Duration
- toFractionalResource(long) - Static method in class org.apache.spark.resource.ResourceAmountUtils
- toImmutableArraySeq() - Method in class org.apache.spark.util.ArrayImplicits.SparkArrayOps
-
Wraps an Array[T] as an immutable.ArraySeq[T] without copying.
- toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Converts to IndexedRowMatrix.
- toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to IndexedRowMatrix.
- toInputStream() - Method in interface org.apache.spark.storage.BlockData
- toInputStream() - Method in class org.apache.spark.storage.DiskBlockData
- toInsertableRelation() - Method in interface org.apache.spark.sql.connector.write.V1Write
- toInt() - Method in class org.apache.spark.sql.types.Decimal
- toInt() - Method in class org.apache.spark.storage.StorageLevel
- toInt(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- toInt(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- toInt(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- toInt(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- toInt(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- toInt(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- toInt(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- toInt(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- toInt(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- toInt(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- toIntArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toInternalResource(double) - Static method in class org.apache.spark.resource.ResourceAmountUtils
- toJavaBigDecimal() - Method in class org.apache.spark.sql.types.Decimal
- toJavaBigInteger() - Method in class org.apache.spark.sql.types.Decimal
- toJavaDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Convert to a JavaDStream
- toJavaRDD() - Method in class org.apache.spark.rdd.RDD
- toJavaRDD() - Method in class org.apache.spark.sql.Dataset
-
Returns the content of the Dataset as a
JavaRDD
ofT
s. - toJson() - Method in class org.apache.spark.mllib.linalg.DenseVector
- toJson() - Method in class org.apache.spark.mllib.linalg.SparseVector
- toJson() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the vector to a JSON string.
- toJson() - Method in class org.apache.spark.resource.ResourceInformation
- toJson(ZoneId) - Method in class org.apache.spark.types.variant.Variant
- toJson(Matrix) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Converts the Matrix to a JSON string.
- toJson(Vector) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
-
Converts the vector to a JSON string.
- toJSON() - Method in class org.apache.spark.sql.api.Dataset
-
Returns the content of the Dataset as a Dataset of JSON strings.
- toJSON() - Method in class org.apache.spark.sql.Dataset
- toJsonString() - Method in class org.apache.spark.ui.flamegraph.FlamegraphNode
- toJsonString(Function1<JsonGenerator, BoxedUnit>) - Static method in class org.apache.spark.util.JsonProtocol
- toJsonString(Function1<JsonGenerator, BoxedUnit>) - Method in interface org.apache.spark.util.JsonUtils
- toJValue() - Method in class org.apache.spark.resource.ResourceInformationJson
- TOKEN_KIND() - Static method in class org.apache.spark.kafka010.KafkaTokenUtil
- Tokenizer - Class in org.apache.spark.ml.feature
-
A tokenizer that converts the input string to lowercase and then splits it by white spaces.
- Tokenizer() - Constructor for class org.apache.spark.ml.feature.Tokenizer
- Tokenizer(String) - Constructor for class org.apache.spark.ml.feature.Tokenizer
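A minimal sketch of Tokenizer; the DataFrame sentences and its text column are assumptions:

```scala
import org.apache.spark.ml.feature.Tokenizer

// Sketch: `sentences` is an assumed DataFrame with a string column `text`.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val tokenized = tokenizer.transform(sentences) // lowercased, whitespace-split tokens
```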
- tokens() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
- tol() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- tol() - Method in class org.apache.spark.ml.classification.FMClassifier
- tol() - Method in class org.apache.spark.ml.classification.LinearSVC
- tol() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- tol() - Method in class org.apache.spark.ml.classification.LogisticRegression
- tol() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- tol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- tol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- tol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- tol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- tol() - Method in class org.apache.spark.ml.clustering.KMeans
- tol() - Method in class org.apache.spark.ml.clustering.KMeansModel
- tol() - Method in interface org.apache.spark.ml.param.shared.HasTol
-
Param for the convergence tolerance for iterative algorithms (>= 0).
- tol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- tol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- tol() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- tol() - Method in class org.apache.spark.ml.regression.FMRegressor
- tol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- tol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- tol() - Method in class org.apache.spark.ml.regression.LinearRegression
- tol() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- toLocal() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
Convert this distributed model to a local representation.
- toLocal() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Convert model to a local model.
- toLocalIterator() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an iterator that contains all of the elements in this RDD.
- toLocalIterator() - Method in class org.apache.spark.rdd.RDD
-
Return an iterator that contains all of the elements in this RDD.
- toLocalIterator() - Method in class org.apache.spark.sql.api.Dataset
-
Returns an iterator that contains all rows in this Dataset.
- toLocalIterator() - Method in class org.apache.spark.sql.Dataset
- toLocalMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Collect the distributed matrix on the driver as a
DenseMatrix
. - toLong() - Method in class org.apache.spark.sql.types.Decimal
- toLong(byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- toLong(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- toLong(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
- toLong(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- toLong(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
- toLong(int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- toLong(long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- toLong(short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- toLong(Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
- toLong(Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- toLongArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Indicates whether to convert all characters to lowercase before tokenizing.
- toMapWithIndex(Iterable<K>) - Method in interface org.apache.spark.util.SparkCollectionUtils
-
Same function as
keys.zipWithIndex.toMap
, but with better performance. - toMetadata() - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to ML metadata
- toMetadata() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to ML metadata
- toMetadata() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to ML metadata with some existing metadata.
- toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to ML metadata with some existing metadata.
- toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- toMicroBatchStream(String) - Method in interface org.apache.spark.sql.connector.read.Scan
-
Returns the physical representation of this scan for streaming query with micro-batch mode.
- toNetty() - Method in interface org.apache.spark.storage.BlockData
-
Returns a Netty-friendly wrapper for the block's data.
- toNetty() - Method in class org.apache.spark.storage.DiskBlockData
-
Returns a Netty-friendly wrapper for the block's data.
- toNettyForSsl() - Method in interface org.apache.spark.storage.BlockData
-
Returns a Netty-friendly wrapper for the block's data.
- toNettyForSsl() - Method in class org.apache.spark.storage.DiskBlockData
-
Returns a Netty-friendly wrapper for the block's data.
- toNullable() - Method in class org.apache.spark.sql.types.ArrayType
-
Returns the same data type but with all nullability fields set to true (
StructField.nullable
,ArrayType.containsNull
, andMapType.valueContainsNull
). - toNullable() - Method in class org.apache.spark.sql.types.MapType
-
Returns the same data type but with all nullability fields set to true (
StructField.nullable
,ArrayType.containsNull
, andMapType.valueContainsNull
). - toNullable() - Method in class org.apache.spark.sql.types.StructType
-
Returns the same data type but with all nullability fields set to true (
StructField.nullable
,ArrayType.containsNull
, andMapType.valueContainsNull
). - toOld() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Convert to spark.mllib DecisionTreeModel (losing some information)
- toOld() - Method in interface org.apache.spark.ml.tree.Split
-
Convert to old Split format
- tooltip(String, String) - Static method in class org.apache.spark.ui.UIUtils
- ToolTips - Class in org.apache.spark.ui.storage
- ToolTips - Class in org.apache.spark.ui
- ToolTips() - Constructor for class org.apache.spark.ui.storage.ToolTips
- ToolTips() - Constructor for class org.apache.spark.ui.ToolTips
- tooManyArrayElementsError(long, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
- tooManyArrayElementsError(long, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toOps(T, ClassTag<VD>) - Method in interface org.apache.spark.graphx.impl.VertexPartitionBaseOpsConstructor
- top(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the top k (largest) elements from this RDD using the natural ordering for T and maintains the order.
- top(int, Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the top k (largest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
- top(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
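A minimal sketch of RDD.top; the SparkContext named sc is an assumption:

```scala
// Sketch: `sc` is an assumed active SparkContext.
val nums = sc.parallelize(Seq(3, 9, 2, 7, 5))
val top3 = nums.top(3) // Array(9, 7, 5): the largest elements, in descending order
```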
- toPairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.streaming.dstream.DStream
- toPath() - Method in class org.apache.spark.paths.SparkPath
- topByKey(int, Ordering<V>) - Method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
-
Returns the top k (largest) elements for each key from this RDD as defined by the specified implicit Ordering[T].
- topDocumentsPerTopic(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Return the top documents for each topic
- topicAssignments() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- topicConcentration() - Method in class org.apache.spark.ml.clustering.LDA
- topicConcentration() - Method in class org.apache.spark.ml.clustering.LDAModel
- topicConcentration() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- topicDistribution(Vector) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Predicts the topic mixture distribution for a document (often called "theta" in the literature).
- topicDistributionCol() - Method in class org.apache.spark.ml.clustering.LDA
- topicDistributionCol() - Method in class org.apache.spark.ml.clustering.LDAModel
- topicDistributionCol() - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Output column with estimates of the topic mixture distribution for each document (often called "theta" in the literature).
- topicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
For each document in the training set, return the distribution over topics for that document ("theta_doc").
- topicDistributions(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of
topicDistributions
- topicDistributions(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Predicts the topic mixture distribution for each document (often called "theta" in the literature).
- topics() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- topicsMatrix() - Method in class org.apache.spark.ml.clustering.LDAModel
-
Inferred topics, where each topic is represented by a distribution over terms.
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Inferred topics, where each topic is represented by a distribution over terms.
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- topK(Iterator<Tuple2<String, Object>>, int) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
Gets the top k words in terms of word counts.
- toPlainString() - Method in class org.apache.spark.sql.types.Decimal
- toPMML() - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a String in PMML format
- toPMML(OutputStream) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to the OutputStream in PMML format
- toPMML(String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a local file in PMML format
- toPMML(StreamResult) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to the stream result in PMML format
- toPMML(SparkContext, String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a directory on a distributed file system in PMML format
- topNode() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
- Topology - Interface in org.apache.spark.ml.ann
-
Trait for the artificial neural network (ANN) topology properties
- topologyFile() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
- topologyInfo() - Method in class org.apache.spark.storage.BlockManagerId
- topologyMap() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
- TopologyMapper - Class in org.apache.spark.storage
-
::DeveloperApi:: TopologyMapper provides topology information for a given host. Param: conf - SparkConf used to get required properties, if needed.
- TopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.TopologyMapper
- TopologyModel - Interface in org.apache.spark.ml.ann
-
Trait for an ANN topology model.
- toPredict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
- topTopicsPerDocument(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
For each document, return the top k weighted topics for that document and their weights.
- toQualifiedNameParts(CatalogPlugin) - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
- toRadians(String) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use radians. Since 2.1.0.
- toRadians(Column) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Use radians. Since 2.1.0.
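A minimal Scala sketch of the non-deprecated replacement; `df` is an assumed DataFrame with a numeric "degrees" column:

```scala
import org.apache.spark.sql.functions.{col, radians}

// radians is the drop-in replacement for the deprecated toRadians
df.select(radians(col("degrees")).as("rads"))
```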
- toRDD(JavaDoubleRDD) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
- toRDD(JavaPairRDD<K, V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
- toRDD(JavaRDD<T>) - Static method in class org.apache.spark.api.java.JavaRDD
- toResourceInformation() - Method in class org.apache.spark.resource.ResourceInformationJson
- toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to RowMatrix, dropping row indices after grouping by row index.
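A minimal Scala sketch of this conversion; `entries` is an assumed RDD[MatrixEntry] of (i, j, value) triples:

```scala
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

// `entries` is an assumed RDD[MatrixEntry]
val coord  = new CoordinateMatrix(entries)
val rowMat = coord.toRowMatrix()  // row indices are dropped after grouping by row
```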
- toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Drops row indices and converts this matrix to a RowMatrix.
- toScalaBigInt() - Method in class org.apache.spark.sql.types.Decimal
- toSeq() - Method in class org.apache.spark.ml.param.ParamMap
-
Converts this param map to a sequence of param pairs.
- toSeq() - Method in interface org.apache.spark.sql.Row
-
Return a Scala Seq representing the row.
- toShort() - Method in class org.apache.spark.sql.types.Decimal
- toShortArray() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- toSparkContext(JavaSparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
- toSparse() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix while maintaining the layout of the current matrix.
- toSparse() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed.
- toSparse() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a SparseMatrix from the given DenseMatrix.
- toSparse() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed.
- toSparseColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix in column major order.
- toSparseMatrix(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix.
- toSparseRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix in row major order.
- toSparseWithSize(int) - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed when the size is known.
- toSparseWithSize(int) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed when the size is known.
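A minimal Scala sketch of dense-to-sparse conversion with the ml.linalg API:

```scala
import org.apache.spark.ml.linalg.Vectors

val dense  = Vectors.dense(0.0, 3.0, 0.0, 7.0)
val sparse = dense.toSparse  // (4,[1,3],[3.0,7.0]): explicit zeros removed
```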
- toSplit() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
- toSplitInfo(Class<?>, String, org.apache.hadoop.mapred.InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
- toSplitInfo(Class<?>, String, org.apache.hadoop.mapreduce.InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
- toSQLConf(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLConf(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLConf(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLConf(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLConf(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLConfVal(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLConfVal(String) - Method in interface org.apache.spark.sql.errors.QueryErrorsBase
- toSQLConfVal(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLExpr(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLExpr(Expression) - Method in interface org.apache.spark.sql.errors.QueryErrorsBase
- toSQLExpr(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLId(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLId(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLId(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLId(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLId(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLId(Seq<String>) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLId(Seq<String>) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLId(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLId(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLId(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLStmt(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLStmt(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLStmt(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLStmt(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLStmt(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSqlType(Schema) - Static method in class org.apache.spark.sql.avro.SchemaConverters
-
Converts an Avro schema to a corresponding Spark SQL schema.
- toSqlType(Schema, boolean, String, int) - Static method in class org.apache.spark.sql.avro.SchemaConverters
-
Converts an Avro schema to a corresponding Spark SQL schema.
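A minimal Scala sketch of the Avro-to-SQL schema conversion, assuming the spark-avro module is on the classpath:

```scala
import org.apache.avro.SchemaBuilder
import org.apache.spark.sql.avro.SchemaConverters

// Build a small Avro record schema, then convert it
val avroSchema = SchemaBuilder.record("User").fields()
  .requiredString("name")
  .requiredInt("age")
  .endRecord()
val sqlType = SchemaConverters.toSqlType(avroSchema).dataType  // StructType(name, age)
```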
- toSqlType(Schema, Map<String, String>) - Static method in class org.apache.spark.sql.avro.SchemaConverters
-
Deprecated. Use toSqlType(..., useStableIdForUnionType: Boolean) instead. Since 4.0.0.
- toSQLType(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLType(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLType(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLType(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLType(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLType(org.apache.spark.sql.types.AbstractDataType) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLType(org.apache.spark.sql.types.AbstractDataType) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLType(org.apache.spark.sql.types.AbstractDataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLType(org.apache.spark.sql.types.AbstractDataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLType(org.apache.spark.sql.types.AbstractDataType) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(double) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(double) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(double) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(double) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(double) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(float) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(float) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(float) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(float) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(float) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(int) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(int) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(long) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(long) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(long) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(long) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(short) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(short) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(short) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(short) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(short) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(Object, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(Object, DataType) - Method in interface org.apache.spark.sql.errors.QueryErrorsBase
- toSQLValue(Object, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toSQLValue(UTF8String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- toSQLValue(UTF8String) - Method in interface org.apache.spark.sql.errors.DataTypeErrorsBase
- toSQLValue(UTF8String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- toSQLValue(UTF8String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- toSQLValue(UTF8String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- toStreaming() - Method in interface org.apache.spark.sql.connector.write.Write
-
Returns a StreamingWrite to write data to a streaming source.
- toString() - Method in class org.apache.spark.api.java.JavaRDD
- toString() - Method in class org.apache.spark.api.java.Optional
- toString() - Method in class org.apache.spark.broadcast.Broadcast
- toString() - Static method in class org.apache.spark.CleanAccum
- toString() - Static method in class org.apache.spark.CleanBroadcast
- toString() - Static method in class org.apache.spark.CleanCheckpoint
- toString() - Static method in class org.apache.spark.CleanRDD
- toString() - Static method in class org.apache.spark.CleanShuffle
- toString() - Static method in class org.apache.spark.CleanSparkListener
- toString() - Method in class org.apache.spark.ContextBarrierId
- toString() - Static method in class org.apache.spark.ErrorInfo
- toString() - Static method in class org.apache.spark.ErrorMessageFormat
- toString() - Static method in class org.apache.spark.ErrorStateInfo
- toString() - Static method in class org.apache.spark.ErrorSubInfo
- toString() - Static method in class org.apache.spark.ExceptionFailure
- toString() - Static method in class org.apache.spark.ExecutorLostFailure
- toString() - Static method in class org.apache.spark.ExecutorRegistered
- toString() - Static method in class org.apache.spark.ExecutorRemoved
- toString() - Static method in class org.apache.spark.FetchFailed
- toString() - Method in class org.apache.spark.graphx.EdgeDirection
- toString() - Method in class org.apache.spark.graphx.EdgeTriplet
- toString() - Method in class org.apache.spark.ml.attribute.Attribute
- toString() - Method in class org.apache.spark.ml.attribute.AttributeGroup
- toString() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- toString() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- toString() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- toString() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- toString() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- toString() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- toString() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- toString() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- toString() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- toString() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- toString() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- toString() - Static method in class org.apache.spark.ml.clustering.ClusterData
- toString() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- toString() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- toString() - Method in class org.apache.spark.ml.clustering.KMeansModel
- toString() - Method in class org.apache.spark.ml.clustering.LocalLDAModel
- toString() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- toString() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- toString() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- toString() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- toString() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- toString() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- toString() - Method in class org.apache.spark.ml.feature.Binarizer
- toString() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- toString() - Method in class org.apache.spark.ml.feature.Bucketizer
- toString() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- toString() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- toString() - Method in class org.apache.spark.ml.feature.DCT
- toString() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
- toString() - Method in class org.apache.spark.ml.feature.FeatureHasher
- toString() - Method in class org.apache.spark.ml.feature.HashingTF
- toString() - Method in class org.apache.spark.ml.feature.IDFModel
- toString() - Method in class org.apache.spark.ml.feature.ImputerModel
- toString() - Method in class org.apache.spark.ml.feature.Interaction
- toString() - Method in class org.apache.spark.ml.feature.LabeledPoint
- toString() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- toString() - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- toString() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- toString() - Method in class org.apache.spark.ml.feature.NGram
- toString() - Method in class org.apache.spark.ml.feature.Normalizer
- toString() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- toString() - Method in class org.apache.spark.ml.feature.PCAModel
- toString() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
- toString() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- toString() - Method in class org.apache.spark.ml.feature.RFormula
- toString() - Method in class org.apache.spark.ml.feature.RFormulaModel
- toString() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- toString() - Method in class org.apache.spark.ml.feature.SQLTransformer
- toString() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- toString() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- toString() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- toString() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- toString() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- toString() - Method in class org.apache.spark.ml.feature.VectorAssembler
- toString() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- toString() - Method in class org.apache.spark.ml.feature.VectorSizeHint
- toString() - Method in class org.apache.spark.ml.feature.VectorSlicer
- toString() - Method in class org.apache.spark.ml.feature.Word2VecModel
- toString() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- toString() - Method in class org.apache.spark.ml.linalg.DenseVector
- toString() - Method in interface org.apache.spark.ml.linalg.Matrix
-
A human-readable representation of the matrix.
- toString() - Method in class org.apache.spark.ml.linalg.SparseVector
- toString() - Method in class org.apache.spark.ml.param.Param
- toString() - Method in class org.apache.spark.ml.param.ParamMap
- toString() - Method in class org.apache.spark.ml.recommendation.ALSModel
- toString() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- toString() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- toString() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- toString() - Static method in class org.apache.spark.ml.SaveInstanceEnd
- toString() - Static method in class org.apache.spark.ml.SaveInstanceStart
- toString() - Static method in class org.apache.spark.ml.TransformEnd
- toString() - Static method in class org.apache.spark.ml.TransformStart
- toString() - Method in interface org.apache.spark.ml.tree.DecisionTreeModel
-
Summary of the model
- toString() - Method in class org.apache.spark.ml.tree.InternalNode
- toString() - Method in class org.apache.spark.ml.tree.LeafNode
- toString() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Summary of the model
- toString() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- toString() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- toString() - Method in interface org.apache.spark.ml.util.Identifiable
- toString() - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- toString() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- toString() - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
- toString() - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
- toString() - Method in class org.apache.spark.mllib.classification.SVMModel
- toString() - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
- toString() - Static method in class org.apache.spark.mllib.feature.VocabWord
- toString() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
- toString() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
- toString() - Method in class org.apache.spark.mllib.linalg.DenseVector
- toString() - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
- toString() - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- toString() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
A human-readable representation of the matrix.
- toString() - Method in class org.apache.spark.mllib.linalg.SparseVector
- toString() - Static method in class org.apache.spark.mllib.recommendation.Rating
- toString() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Print a summary of the model.
- toString() - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
- toString() - Method in class org.apache.spark.mllib.regression.LabeledPoint
- toString() - Method in class org.apache.spark.mllib.stat.test.BinarySample
- toString() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
- toString() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
- toString() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
String explaining the hypothesis test result.
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- toString() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Print a summary of the model.
- toString() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
- toString() - Method in class org.apache.spark.mllib.tree.model.Node
- toString() - Method in class org.apache.spark.mllib.tree.model.Predict
- toString() - Method in class org.apache.spark.mllib.tree.model.Split
- toString() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Print a summary of the model.
- toString() - Method in class org.apache.spark.partial.BoundedDouble
- toString() - Method in class org.apache.spark.partial.PartialResult
- toString() - Method in class org.apache.spark.paths.SparkPath
- toString() - Static method in class org.apache.spark.rdd.CheckpointState
- toString() - Static method in class org.apache.spark.rdd.DeterministicLevel
- toString() - Method in class org.apache.spark.rdd.RDD
- toString() - Static method in class org.apache.spark.RequestMethod
- toString() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- toString() - Method in class org.apache.spark.resource.ExecutorResourceRequests
- toString() - Method in class org.apache.spark.resource.ResourceInformation
- toString() - Static method in class org.apache.spark.resource.ResourceInformationJson
- toString() - Method in class org.apache.spark.resource.ResourceProfile
- toString() - Method in class org.apache.spark.resource.ResourceProfileBuilder
- toString() - Method in class org.apache.spark.resource.TaskResourceRequest
- toString() - Method in class org.apache.spark.resource.TaskResourceRequests
- toString() - Static method in class org.apache.spark.scheduler.AccumulableInfo
- toString() - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
- toString() - Static method in class org.apache.spark.scheduler.ExcludedExecutor
- toString() - Static method in class org.apache.spark.scheduler.ExecutorKilled
- toString() - Method in class org.apache.spark.scheduler.InputFormatInfo
- toString() - Static method in class org.apache.spark.scheduler.local.KillTask
- toString() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
- toString() - Static method in class org.apache.spark.scheduler.local.StatusUpdate
- toString() - Static method in class org.apache.spark.scheduler.local.StopExecutor
- toString() - Static method in class org.apache.spark.scheduler.LossReasonPending
- toString() - Static method in class org.apache.spark.scheduler.SchedulingMode
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
- toString() - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
- toString() - Method in class org.apache.spark.scheduler.SplitInfo
- toString() - Static method in class org.apache.spark.scheduler.TaskLocality
- toString() - Method in class org.apache.spark.SerializableWritable
- toString() - Method in class org.apache.spark.sql.catalog.CatalogMetadata
- toString() - Method in class org.apache.spark.sql.catalog.Column
- toString() - Method in class org.apache.spark.sql.catalog.Database
- toString() - Method in class org.apache.spark.sql.catalog.Function
- toString() - Method in class org.apache.spark.sql.catalog.Table
- toString() - Method in class org.apache.spark.sql.Column
- toString() - Method in class org.apache.spark.sql.connector.catalog.ColumnDefaultValue
- toString() - Method in class org.apache.spark.sql.connector.catalog.IdentityColumnSpec
- toString() - Method in class org.apache.spark.sql.connector.catalog.TableChange.After
- toString() - Method in class org.apache.spark.sql.connector.catalog.TableChange.First
- toString() - Method in class org.apache.spark.sql.connector.catalog.ViewInfo
- toString() - Static method in class org.apache.spark.sql.connector.distributions.UnspecifiedDistributionImpl
- toString() - Method in record class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
Returns a string representation of this record class.
- toString() - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- toString() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
- toString() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
- toString() - Method in enum class org.apache.spark.sql.connector.expressions.NullOrdering
- toString() - Method in enum class org.apache.spark.sql.connector.expressions.SortDirection
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.CompositeReadLimit
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.Offset
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.ReadAllAvailable
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxBytes
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxFiles
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMaxRows
- toString() - Method in class org.apache.spark.sql.connector.read.streaming.ReadMinRows
- toString() - Method in class org.apache.spark.sql.Dataset
- toString() - Static method in class org.apache.spark.sql.jdbc.DatabricksDialect
- toString() - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
- toString() - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
- toString() - Static method in class org.apache.spark.sql.jdbc.JdbcType
- toString() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
- toString() - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
- toString() - Static method in class org.apache.spark.sql.jdbc.SnowflakeDialect
- toString() - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
- toString() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
- toString() - Method in interface org.apache.spark.sql.RelationalGroupedDataset.GroupType
- toString() - Method in class org.apache.spark.sql.RelationalGroupedDataset
- toString() - Method in interface org.apache.spark.sql.Row
- toString() - Static method in class org.apache.spark.sql.scripting.SqlScriptingInterpreter
- toString() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- toString() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- toString() - Static method in class org.apache.spark.sql.sources.And
- toString() - Static method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- toString() - Static method in class org.apache.spark.sql.sources.CollatedEqualTo
- toString() - Static method in class org.apache.spark.sql.sources.CollatedGreaterThan
- toString() - Static method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- toString() - Static method in class org.apache.spark.sql.sources.CollatedIn
- toString() - Static method in class org.apache.spark.sql.sources.CollatedLessThan
- toString() - Static method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- toString() - Static method in class org.apache.spark.sql.sources.CollatedStringContains
- toString() - Static method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- toString() - Static method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- toString() - Static method in class org.apache.spark.sql.sources.EqualNullSafe
- toString() - Static method in class org.apache.spark.sql.sources.EqualTo
- toString() - Static method in class org.apache.spark.sql.sources.GreaterThan
- toString() - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- toString() - Method in class org.apache.spark.sql.sources.In
- toString() - Static method in class org.apache.spark.sql.sources.IsNotNull
- toString() - Static method in class org.apache.spark.sql.sources.IsNull
- toString() - Static method in class org.apache.spark.sql.sources.LessThan
- toString() - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
- toString() - Static method in class org.apache.spark.sql.sources.Not
- toString() - Static method in class org.apache.spark.sql.sources.Or
- toString() - Static method in class org.apache.spark.sql.sources.StringContains
- toString() - Static method in class org.apache.spark.sql.sources.StringEndsWith
- toString() - Static method in class org.apache.spark.sql.sources.StringStartsWith
- toString() - Method in interface org.apache.spark.sql.streaming.QueryInfo
-
Returns the string representation of the QueryInfo object.
- toString() - Method in class org.apache.spark.sql.streaming.SinkProgress
- toString() - Method in class org.apache.spark.sql.streaming.SourceProgress
- toString() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
- toString() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
- toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
- toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
- toString() - Static method in class org.apache.spark.sql.streaming.TTLConfig
- toString() - Method in class org.apache.spark.sql.types.CharType
- toString() - Method in class org.apache.spark.sql.types.Decimal
- toString() - Method in class org.apache.spark.sql.types.DecimalType
- toString() - Method in class org.apache.spark.sql.types.Metadata
- toString() - Method in class org.apache.spark.sql.types.StructField
- toString() - Method in class org.apache.spark.sql.types.StructType
- toString() - Method in class org.apache.spark.sql.types.VarcharType
- toString() - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
- toString() - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
- toString() - Static method in class org.apache.spark.status.api.v1.sql.Metric
- toString() - Static method in class org.apache.spark.status.api.v1.sql.Node
- toString() - Method in class org.apache.spark.status.api.v1.StackTrace
- toString() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
Returns a string representation of this thread stack trace, in the style of java.lang.management.ThreadInfo (JDK 8)'s toString.
- toString() - Method in class org.apache.spark.storage.BlockId
- toString() - Method in class org.apache.spark.storage.BlockManagerId
- toString() - Static method in class org.apache.spark.storage.BroadcastBlockId
- toString() - Static method in class org.apache.spark.storage.CacheId
- toString() - Static method in class org.apache.spark.storage.PythonStreamBlockId
- toString() - Static method in class org.apache.spark.storage.RDDBlockId
- toString() - Method in class org.apache.spark.storage.RDDInfo
- toString() - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
- toString() - Static method in class org.apache.spark.storage.ShuffleBlockChunkId
- toString() - Static method in class org.apache.spark.storage.ShuffleBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleChecksumBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleMergedBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleMergedDataBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
- toString() - Static method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
- toString() - Static method in class org.apache.spark.storage.ShufflePushBlockId
- toString() - Method in class org.apache.spark.storage.StorageLevel
- toString() - Static method in class org.apache.spark.storage.StreamBlockId
- toString() - Static method in class org.apache.spark.storage.TaskResultBlockId
- toString() - Method in class org.apache.spark.streaming.Duration
- toString() - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
- toString() - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
- toString() - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
- toString() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
- toString() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
- toString() - Method in class org.apache.spark.streaming.State
- toString() - Method in class org.apache.spark.streaming.Time
- toString() - Static method in class org.apache.spark.TaskCommitDenied
- toString() - Static method in class org.apache.spark.TaskKilled
- toString() - Static method in class org.apache.spark.TaskState
- toString() - Method in class org.apache.spark.unsafe.types.CalendarInterval
- toString() - Method in class org.apache.spark.util.AccumulatorV2
- toString() - Method in class org.apache.spark.util.MutablePair
- toString() - Method in class org.apache.spark.util.StatCounter
- toString(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
A human-readable representation of the matrix with maximum lines and width.
- toString(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
A human-readable representation of the matrix with maximum lines and width.
- toStructField() - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to a StructField.
- toStructField() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to a StructField.
- toStructField() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to a StructField with some existing metadata.
- toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to a StructField with some existing metadata.
- toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
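A minimal Scala sketch of turning an ML attribute into a StructField:

```scala
import org.apache.spark.ml.attribute.NumericAttribute

val attr  = NumericAttribute.defaultAttr.withName("age").withMin(0.0)
val field = attr.toStructField()  // DoubleType field carrying the ML attribute metadata
```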
- toTable(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually output results to the given table as new data arrives.
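A minimal Scala sketch of this streaming sink; `streamDf` is an assumed streaming DataFrame (e.g. from spark.readStream):

```scala
// `streamDf` is an assumed streaming DataFrame
val query = streamDf.writeStream
  .option("checkpointLocation", "/tmp/checkpoints/events")  // required by most sinks
  .toTable("events")  // returns a StreamingQuery; rows accrue as data arrives
```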
- TOTAL_BLOCKS_FETCHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- TOTAL_CORES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_CORES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- TOTAL_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_GC_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_INPUT_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_OFF_HEAP_STORAGE_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- TOTAL_ON_HEAP_STORAGE_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- TOTAL_SHUFFLE_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_SHUFFLE_WRITE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- TOTAL_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- totalBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
- totalBytesRead(ShuffleReadMetrics) - Static method in class org.apache.spark.ui.jobs.ApiHelper
- totalCores() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
- totalCores() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalCores() - Method in class org.apache.spark.status.api.v1.ProcessSummary
- totalCount() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Total count of items added to this CountMinSketch so far.
- totalDelay() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
- totalDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for all the jobs of this batch to finish processing from the time they were submitted.
- totalDiskSize() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
- totalDuration() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalGCTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalInputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalIterations() - Method in interface org.apache.spark.ml.classification.TrainingSummary
-
Number of training iterations.
- totalIterations() - Method in class org.apache.spark.ml.regression.LinearRegressionTrainingSummary
-
Number of training iterations until termination.
- totalMemSize() - Method in class org.apache.spark.ui.storage.ExecutorStreamSummary
- totalNumNodes() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- totalNumNodes() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- totalNumNodes() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- totalNumNodes() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- totalNumNodes() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Total number of nodes, summed over all trees in the ensemble.
- totalNumNodes() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Get total number of nodes, summed over all trees in the ensemble.
- totalOffHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo
- totalOffHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl
- totalOffHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
- totalOnHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo
- totalOnHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl
- totalOnHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
- totalShuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalShuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- totalTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
- toTuple() - Method in class org.apache.spark.graphx.EdgeTriplet
- toUnscaledLong() - Method in class org.apache.spark.sql.types.Decimal
- toUri() - Method in class org.apache.spark.paths.SparkPath
- toV1TableScan(SQLContext) - Method in interface org.apache.spark.sql.connector.read.V1Scan
-
Create a `BaseRelation` with `TableScan` that can scan data from DataSource v1 to RDD[Row].
- toV2() - Method in class org.apache.spark.sql.sources.AlwaysFalse
- toV2() - Method in class org.apache.spark.sql.sources.AlwaysTrue
- toV2() - Method in class org.apache.spark.sql.sources.And
- toV2() - Method in class org.apache.spark.sql.sources.CollatedFilter
- toV2() - Method in class org.apache.spark.sql.sources.EqualNullSafe
- toV2() - Method in class org.apache.spark.sql.sources.EqualTo
- toV2() - Method in class org.apache.spark.sql.sources.GreaterThan
- toV2() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- toV2() - Method in class org.apache.spark.sql.sources.In
- toV2() - Method in class org.apache.spark.sql.sources.IsNotNull
- toV2() - Method in class org.apache.spark.sql.sources.IsNull
- toV2() - Method in class org.apache.spark.sql.sources.LessThan
- toV2() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
- toV2() - Method in class org.apache.spark.sql.sources.Not
- toV2() - Method in class org.apache.spark.sql.sources.Or
- toV2() - Method in class org.apache.spark.sql.sources.StringContains
- toV2() - Method in class org.apache.spark.sql.sources.StringEndsWith
- toV2() - Method in class org.apache.spark.sql.sources.StringStartsWith
- toVirtualHosts(Seq<String>) - Static method in class org.apache.spark.ui.JettyUtils
- toYearMonthIntervalANSIString(int, byte, byte) - Static method in class org.apache.spark.util.YearMonthIntervalUtils
- train(JavaRDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.train.
- train(RDD<ALS.Rating<ID>>, int, int, int, int, double, boolean, double, boolean, StorageLevel, StorageLevel, int, long, ClassTag<ID>, Ordering<ID>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
Implementation of the ALS algorithm.
- train(RDD<Vector>, int, int) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using the specified parameters, with default values for those left unspecified.
- train(RDD<Vector>, int, int, String) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using the given set of parameters.
- train(RDD<Vector>, int, int, String, long) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using the given set of parameters.
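A minimal Scala sketch of the fully parameterized overload above, on hypothetical toy data; `sc` is an assumed SparkContext:

```scala
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical toy data with two obvious clusters
val data = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)))
// (data, k, maxIterations, initializationMode, seed)
val model = KMeans.train(data, 2, 20, "k-means||", 1L)
```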
- train(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int, double, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int, double, int, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
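A minimal Scala sketch of explicit-feedback ALS; `ratings` is an assumed RDD[Rating] of (user, product, rating) triples:

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}

// `ratings` is an assumed RDD[Rating]
val model = ALS.train(ratings, rank = 10, iterations = 10, lambda = 0.01)
val preds = model.predict(ratings.map(r => (r.user, r.product)))
```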
- train(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, double) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, double, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double, Vector) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
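A minimal Scala sketch of the simplest SVM overload; `training` is an assumed RDD[LabeledPoint] (e.g. loaded via MLUtils.loadLibSVMFile):

```scala
import org.apache.spark.mllib.classification.SVMWithSGD

// `training` is an assumed RDD[LabeledPoint]
val model = SVMWithSGD.train(training, numIterations = 100)
```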
- train(RDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to train a gradient boosting model.
- train(RDD<LabeledPoint>, Strategy) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainClassifier.
- trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainClassifier.
- trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for binary or multiclass classification.
- trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model for binary or multiclass classification.
- trainClassifier(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for binary or multiclass classification.
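A minimal Scala sketch of the DecisionTree.trainClassifier variant listed above; `training` is an assumed RDD[LabeledPoint], with all features treated as continuous:

```scala
import org.apache.spark.mllib.tree.DecisionTree

// `training` is an assumed RDD[LabeledPoint]
val model = DecisionTree.trainClassifier(
  training,
  numClasses = 2,
  categoricalFeaturesInfo = Map.empty[Int, Int],  // no categorical features
  impurity = "gini",
  maxDepth = 5,
  maxBins = 32)
```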
- trainImpl(RDD<Tuple2<Object, Vector>>, int, String) - Method in interface org.apache.spark.ml.regression.FactorizationMachines
- trainImplicit(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
- trainImplicit(RDD<Rating>, int, int, double, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
- trainImplicit(RDD<Rating>, int, int, double, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a subset of products.
- trainImplicit(RDD<Rating>, int, int, double, int, double, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' given by users to some products, in the form of (userID, productID, preference) pairs.
- trainingCost() - Method in class org.apache.spark.ml.clustering.BisectingKMeansSummary
- trainingCost() - Method in class org.apache.spark.ml.clustering.KMeansSummary
- trainingCost() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
- trainingCost() - Method in class org.apache.spark.mllib.clustering.KMeansModel
- trainingLogLikelihood() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- trainingSummary() - Method in interface org.apache.spark.ml.util.HasTrainingSummary
- TrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for training results.
- trainOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of trainOn.
- trainOn(JavaDStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of trainOn.
- trainOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Update the clustering model by training on batches of data from a DStream.
- trainOn(DStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Update the model by training on batches of data from a DStream.
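A minimal Scala sketch of streaming training with StreamingKMeans; `trainingStream` is an assumed DStream[Vector] of 2-dimensional points:

```scala
import org.apache.spark.mllib.clustering.StreamingKMeans

// `trainingStream` is an assumed DStream[Vector]
val skm = new StreamingKMeans()
  .setK(3)
  .setDecayFactor(1.0)
  .setRandomCenters(dim = 2, weight = 0.0)
skm.trainOn(trainingStream)  // the model updates as each batch arrives
```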
- trainRatio() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- trainRatio() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- trainRatio() - Method in interface org.apache.spark.ml.tuning.TrainValidationSplitParams
-
Param for ratio between train and validation data.
- trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainRegressor.
- trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainRegressor.
- trainRegressor(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for regression.
- trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for regression.
- trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model for regression.
- TrainValidationSplit - Class in org.apache.spark.ml.tuning
-
Validation for hyper-parameter tuning.
- TrainValidationSplit() - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit
- TrainValidationSplit(String) - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit
- TrainValidationSplitModel - Class in org.apache.spark.ml.tuning
-
Model from train validation split.
- TrainValidationSplitModel.TrainValidationSplitModelWriter - Class in org.apache.spark.ml.tuning
-
Writer for TrainValidationSplitModel.
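A minimal Scala sketch of train/validation-split tuning; `trainingDf` is an assumed DataFrame with "features" and "label" columns:

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

val lr   = new LinearRegression()
val grid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.01, 0.1)).build()
val tvs  = new TrainValidationSplit()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator())
  .setEstimatorParamMaps(grid)
  .setTrainRatio(0.8)               // 80% train / 20% validation
// val model = tvs.fit(trainingDf)  // `trainingDf` is an assumed DataFrame
```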
- TrainValidationSplitParams - Interface in org.apache.spark.ml.tuning
-
Params for TrainValidationSplit and TrainValidationSplitModel.
- transferMapSpillFile(File, long[], long[]) - Method in interface org.apache.spark.shuffle.api.SingleSpillShuffleMapOutputWriter
-
Transfer a file that contains the bytes of all the partitions written by this map task.
- transferred() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
- transferTo(WritableByteChannel, long) - Method in class org.apache.spark.storage.ReadableChannelFileRegion
- transform() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
-
The Transform used to produce this metadata column from data rows, or null.
- transform(Iterable<?>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document into a sparse term frequency vector (Java version).
- transform(String) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Transforms a word to its vector representation
- transform(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaRDD<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
- transform(Function<R, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- transform(Function2<R, Time, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- transform(JavaRDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document to term frequency vectors (Java version).
- transform(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms term frequency (TF) vectors to TF-IDF vectors (Java version).
- transform(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on a JavaRDD[Vector].
- transform(Vector) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
Applies transformation on a vector.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.ElementwiseProduct
-
Performs the Hadamard product transformation.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms a term frequency (TF) vector to a TF-IDF vector
- transform(Vector) - Method in class org.apache.spark.mllib.feature.Normalizer
-
Applies unit length normalization on a vector.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.PCAModel
-
Transform a vector by computed Principal Components.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
Applies standardization transformation on a vector.
- transform(Vector) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on a vector.
- transform(RDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document to term frequency vectors.
- transform(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms term frequency (TF) vectors to TF-IDF vectors.
- transform(RDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on an RDD[Vector].
- transform(Column, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns an array of elements after applying a transformation to each element in the input array.
- transform(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Returns an array of elements after applying a transformation to each element in the input array.
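A minimal Scala sketch of the two transform overloads above (element-only, and element plus 0-based index); the DataFrame and its array column "xs" are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, transform}

// `df` is assumed to have a numeric array column "xs".
def shift(df: DataFrame): DataFrame =
  df.select(
    transform(col("xs"), x => x + 1),      // add 1 to each element
    transform(col("xs"), (x, i) => x + i)) // add the 0-based index
```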
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ClassificationModel
-
Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters: predicted labels as PredictionModel.predictionCol() of type Double, and raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters: predicted labels as PredictionModel.predictionCol() of type Double, raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector, and probability of each class as ProbabilisticClassificationModel.probabilityCol() of type Vector.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeansModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Transforms the input dataset.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Binarizer
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Bucketizer
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ColumnPruner
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.FeatureHasher
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.HashingTF
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDFModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ImputerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IndexToString
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Interaction
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCAModel
-
Transform a vector by computed Principal Components.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormulaModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.RobustScalerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.SQLTransformer
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScalerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAssembler
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorSlicer
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Transform a sentence column to a vector column to represent the whole sentence.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
The transform method first generates the association rules according to the frequent itemsets.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.PipelineModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.PredictionModel
-
Transforms dataset by reading from PredictionModel.featuresCol(), calling predict, and storing the predictions as a new column PredictionModel.predictionCol().
- transform(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALSModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.Transformer
-
Transforms the input dataset.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- transform(Dataset<?>) - Method in class org.apache.spark.ml.UnaryTransformer
- transform(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with provided parameter map as additional parameters.
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with optional parameters
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with optional parameters
- transform(Seq<DStream<?>>, Function2<Seq<RDD<?>>, Time, RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
- transform(Iterable<Object>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document into a sparse term frequency vector.
- transform(Function1<RDD<T>, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- transform(Function1<Dataset<T>, Dataset<U>>) - Method in class org.apache.spark.sql.api.Dataset
-
Concise syntax for chaining custom transformations.
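A minimal sketch of this chaining style; both helper transformations and the column names are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Hypothetical reusable transformations to be chained.
def dropNulls(df: DataFrame): DataFrame = df.na.drop()
def addDoubled(df: DataFrame): DataFrame =
  df.withColumn("doubled", col("value") * 2) // assumes a "value" column

// Chaining reads left to right instead of nesting:
// df.transform(dropNulls).transform(addDoubled)
```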
- transform(Function1<Try<T>, Try<S>>, ExecutionContext) - Method in class org.apache.spark.ComplexFutureAction
- transform(Function1<Try<T>, Try<S>>, ExecutionContext) - Method in class org.apache.spark.SimpleFutureAction
- transform(Function2<RDD<T>, Time, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- Transform - Interface in org.apache.spark.sql.connector.expressions
-
Represents a transform function in the public logical expression API.
- transform_keys(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a function to every key-value pair in a map and returns a map with the results of those applications as the new keys for the pairs.
- transform_values(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a function to every key-value pair in a map and returns a map with the results of those applications as the new values for the pairs.
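A minimal Scala sketch of transform_keys and transform_values together; the DataFrame and its map column "m" (string keys, numeric values) are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, transform_keys, transform_values, upper}

def rewriteMap(df: DataFrame): DataFrame =
  df.select(
    transform_keys(col("m"), (k, _) => upper(k)), // upper-case every key
    transform_values(col("m"), (_, v) => v * 2))  // double every value
```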
- transformationsAndActionsNotInvokedByDriverError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- TransformEnd - Class in org.apache.spark.ml
-
Event fired after Transformer.transform.
- TransformEnd() - Constructor for class org.apache.spark.ml.TransformEnd
- transformer() - Method in class org.apache.spark.ml.TransformEnd
- transformer() - Method in class org.apache.spark.ml.TransformStart
- Transformer - Class in org.apache.spark.ml
-
Abstract class for transformers that transform one dataset into another.
- Transformer() - Constructor for class org.apache.spark.ml.Transformer
- TransformHelper(Seq<Transform>) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper
- transformImpl(Dataset<?>) - Method in class org.apache.spark.ml.classification.ClassificationModel
- transformNotSupportQuantifierError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- transformOutputColumnSchema(StructField, String, boolean, boolean) - Static method in class org.apache.spark.ml.feature.OneHotEncoderCommon
-
Prepares the StructField with proper metadata for OneHotEncoder's output column.
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.ClassificationModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.OneVsRest
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.OneVsRestModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.GaussianMixture
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.KMeans
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.KMeansModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.LDA
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.LDAModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Binarizer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Bucketizer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ColumnPruner
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.CountVectorizer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.DCT
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ElementwiseProduct
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.FeatureHasher
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.HashingTF
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IDF
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IDFModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Imputer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ImputerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IndexToString
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Interaction
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinHashLSH
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinMaxScaler
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Normalizer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.OneHotEncoder
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.PCA
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.PCAModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RFormula
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RFormulaModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RobustScaler
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RobustScalerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.SQLTransformer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StandardScaler
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StandardScalerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StopWordsRemover
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StringIndexer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StringIndexerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorAssembler
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorIndexer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorSizeHint
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorSlicer
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Word2Vec
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Word2VecModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.fpm.FPGrowth
- transformSchema(StructType) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.Pipeline
- transformSchema(StructType) - Method in class org.apache.spark.ml.PipelineModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.PipelineStage
-
Check transform validity and derive the output schema from the input schema.
- transformSchema(StructType) - Method in class org.apache.spark.ml.PredictionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.Predictor
- transformSchema(StructType) - Method in class org.apache.spark.ml.recommendation.ALS
- transformSchema(StructType) - Method in class org.apache.spark.ml.recommendation.ALSModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.IsotonicRegression
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.CrossValidator
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- transformSchema(StructType) - Method in class org.apache.spark.ml.UnaryTransformer
- transformSchemaImpl(StructType) - Method in interface org.apache.spark.ml.tuning.ValidatorParams
- TransformStart - Class in org.apache.spark.ml
-
Event fired before Transformer.transform.
- TransformStart() - Constructor for class org.apache.spark.ml.TransformStart
- transformToPair(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaPairRDD<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream.
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- transformWith(DStream<U>, Function2<RDD<T>, RDD<U>, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- transformWith(DStream<U>, Function3<RDD<T>, RDD<U>, Time, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- transformWith(Function1<Try<T>, Future<S>>, ExecutionContext) - Method in class org.apache.spark.ComplexFutureAction
- transformWith(Function1<Try<T>, Future<S>>, ExecutionContext) - Method in class org.apache.spark.SimpleFutureAction
- transformWithSerdeUnsupportedError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function on each RDD of 'this' DStream and 'other' DStream.
- translate(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Translate any character in the src by a character in replaceString.
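A minimal sketch of translate's character-by-character mapping; the DataFrame and its string column "s" are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, translate}

// Maps 'a' -> 'x', 'b' -> 'y', 'c' -> 'z'; other characters pass through.
def remap(df: DataFrame): DataFrame =
  df.select(translate(col("s"), "abc", "xyz"))
```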
- transpose() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- transpose() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Transpose the Matrix.
- transpose() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- transpose() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- transpose() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Transpose this BlockMatrix.
- transpose() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Transposes this CoordinateMatrix.
- transpose() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Transpose the Matrix.
- transpose() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- transpose() - Method in class org.apache.spark.sql.api.Dataset
-
Transposes a DataFrame, switching rows to columns.
- transpose() - Method in class org.apache.spark.sql.Dataset
- transpose(Column) - Method in class org.apache.spark.sql.api.Dataset
-
Transposes a DataFrame such that the values in the specified index column become the new columns of the DataFrame.
- transpose(Column) - Method in class org.apache.spark.sql.Dataset
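A usage sketch of the two Dataset.transpose overloads above; `df` and its "id" column are hypothetical, and the comment about the default index column is an assumption to verify against the full method docs:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

def flip(df: DataFrame): (DataFrame, DataFrame) = {
  val byFirstColumn = df.transpose()          // assumed: first column as index
  val byId          = df.transpose(col("id")) // values of "id" become columns
  (byFirstColumn, byId)
}
```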
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
org.apache.spark.api.java.JavaRDDLike.treeAggregate with suggested depth 2.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregates the elements of this RDD in a multi-level tree pattern.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
org.apache.spark.api.java.JavaRDDLike.treeAggregate with a parameter to do the final aggregation on the executor.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
RDD.treeAggregate(U, scala.Function2<U, T, U>, scala.Function2<U, U, U>, int, scala.reflect.ClassTag<U>) with a parameter to do the final aggregation on the executor.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregates the elements of this RDD in a multi-level tree pattern.
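A minimal Scala sketch of the tree-shaped aggregation described above; the partition count and depth are illustrative:
```scala
import org.apache.spark.SparkContext

// Same contract as aggregate, but partial results are combined in a
// multi-level tree rather than all at once on the driver.
def treeSum(sc: SparkContext): Int = {
  val rdd = sc.parallelize(1 to 1000, numSlices = 100)
  rdd.treeAggregate(0)(
    seqOp = (acc, x) => acc + x, // fold values within each partition
    combOp = (a, b) => a + b,    // merge partial sums up the tree
    depth = 2)                   // suggested depth of the aggregation tree
}
```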
- TreeClassifierParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based classification algorithms.
- TreeEnsembleClassifierParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based ensemble classification algorithms.
- TreeEnsembleModel<M extends DecisionTreeModel> - Interface in org.apache.spark.ml.tree
-
Abstraction for models which are ensembles of decision trees
- TreeEnsembleParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based ensemble algorithms.
- TreeEnsembleRegressorParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based ensemble regression algorithms.
- treeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
- treeID() - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
- treeReduce(Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
org.apache.spark.api.java.JavaRDDLike.treeReduce with suggested depth 2.
- treeReduce(Function2<T, T, T>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Reduces the elements of this RDD in a multi-level tree pattern.
- treeReduce(Function2<T, T, T>, int) - Method in class org.apache.spark.rdd.RDD
-
Reduces the elements of this RDD in a multi-level tree pattern.
- TreeRegressorParams - Interface in org.apache.spark.ml.tree
-
Parameters for Decision Tree-based regression algorithms.
- trees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- trees() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- trees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- trees() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- trees() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Trees in this ensemble.
- trees() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- trees() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
- treeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- treeString() - Method in class org.apache.spark.sql.types.StructType
- treeString(int) - Method in class org.apache.spark.sql.types.StructType
- treeWeights() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- treeWeights() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- treeWeights() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- treeWeights() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- treeWeights() - Method in interface org.apache.spark.ml.tree.TreeEnsembleModel
-
Weights for each tree, zippable with TreeEnsembleModel.trees().
- treeWeights() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
- triangleCount() - Method in class org.apache.spark.graphx.GraphOps
-
Compute the number of triangles passing through each vertex.
- TriangleCount - Class in org.apache.spark.graphx.lib
-
Compute the number of triangles passing through each vertex.
- TriangleCount() - Constructor for class org.apache.spark.graphx.lib.TriangleCount
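A minimal Scala sketch of triangle counting via GraphOps; the edge-list path is a hypothetical placeholder:
```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx.{GraphLoader, VertexRDD}

// Count, for each vertex, the triangles it participates in.
def triangles(sc: SparkContext): VertexRDD[Int] = {
  val graph = GraphLoader.edgeListFile(sc, "data/edges.txt")
  graph.triangleCount().vertices // (vertexId, number of triangles)
}
```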
- trigger(Trigger) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Set the trigger for the stream query.
- Trigger - Class in org.apache.spark.sql.streaming
-
Policy used to indicate how often results should be produced by a StreamingQuery.
- Trigger() - Constructor for class org.apache.spark.sql.streaming.Trigger
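A minimal sketch of setting a trigger on a streaming write; the streaming DataFrame and the 10-second interval are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.streaming.{StreamingQuery, Trigger}

def startQuery(streamingDF: DataFrame): StreamingQuery =
  streamingDF.writeStream
    .format("console")
    .trigger(Trigger.ProcessingTime("10 seconds")) // micro-batch every 10s
    .start()
```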
- TriggerHeapHistogram$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.TriggerHeapHistogram$
- TriggerThreadDump$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
- trim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from both ends for the specified string column.
- trim(Column, String) - Static method in class org.apache.spark.sql.functions
-
Trim the specified character from both ends for the specified string column.
- TrimHorizon() - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.TrimHorizon
- trimOptionUnsupportedError(int, SqlBaseParser.TrimContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- TripletFields - Class in org.apache.spark.graphx
-
Represents a subset of the fields of an EdgeTriplet or EdgeContext.
- TripletFields() - Constructor for class org.apache.spark.graphx.TripletFields
-
Constructs a default TripletFields in which all fields are included.
- TripletFields(boolean, boolean, boolean) - Constructor for class org.apache.spark.graphx.TripletFields
- triplets() - Method in class org.apache.spark.graphx.Graph
-
An RDD containing the edge triplets, which are edges along with the vertex data associated with the adjacent vertices.
- triplets() - Method in class org.apache.spark.graphx.impl.GraphImpl
- TRUE - Static variable in class org.apache.spark.types.variant.VariantUtil
- truePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns true positive rate for a given label (category)
- truePositiveRateByLabel() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns true positive rate for each label (category).
- trunc(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns date truncated to the unit specified by the format.
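A minimal sketch of trunc; the DataFrame and its date column "d" are hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, trunc}

// Truncate dates to the first day of their month; formats such as
// "year", "month", and "mm" are accepted.
def toMonthStart(df: DataFrame): DataFrame =
  df.select(trunc(col("d"), "month"))
```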
- TruncatableTable - Interface in org.apache.spark.sql.connector.catalog
-
Represents a table which can be atomically truncated.
- truncate() - Method in interface org.apache.spark.sql.connector.write.SupportsOverwrite
- truncate() - Method in interface org.apache.spark.sql.connector.write.SupportsOverwriteV2
- truncate() - Method in interface org.apache.spark.sql.connector.write.SupportsTruncate
-
Configures a write to replace all existing data with data committed in the write.
- TRUNCATE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table can be truncated in a write operation.
- truncateMultiPartitionUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- truncatePartition(InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement
-
Truncate a partition in the table by completely removing partition data.
- truncatePartitions(InternalRow[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement
-
Truncate an array of partitions atomically from table, and completely remove partitions data.
- truncateTable() - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
- truncateTable() - Method in interface org.apache.spark.sql.connector.catalog.SupportsDeleteV2
- truncateTable() - Method in interface org.apache.spark.sql.connector.catalog.TruncatableTable
-
Truncate a table by removing all rows from the table atomically.
- truncateTableOnExternalTablesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- truncateTablePartitionNotSupportedForNotPartitionedTablesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- try_add(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the sum of left and right and the result is null on overflow.
- try_aes_decrypt(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- try_aes_decrypt(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- try_aes_decrypt(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- try_aes_decrypt(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
This is a special version of aes_decrypt that performs the same operation, but returns a NULL value instead of raising an error if the decryption cannot be performed.
- try_avg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the mean calculated from values of a group and the result is null on overflow.
- try_cast(String) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type and the result is null on failure.
- try_cast(DataType) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type and the result is null on failure.
- try_divide(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns dividend / divisor.
- try_element_at(Column, Column) - Static method in class org.apache.spark.sql.functions
-
(array, index) - Returns element of array at given (1-based) index.
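A minimal Scala sketch of the try_* family's null-instead-of-error behavior; the DataFrame with numeric columns "a", "b" and array column "xs" is hypothetical:
```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit, try_add, try_divide, try_element_at}

def safeArithmetic(df: DataFrame): DataFrame =
  df.select(
    try_add(col("a"), col("b")),        // null on overflow
    try_divide(col("a"), col("b")),     // null when "b" is 0
    try_element_at(col("xs"), lit(10))) // null when index 10 is out of range
```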
- try_mod(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the remainder of dividend / divisor.
- try_multiply(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns left * right and the result is null on overflow.
- try_parse_json(Column) - Static method in class org.apache.spark.sql.functions
-
Parses a JSON string and constructs a Variant value.
- try_reflect(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
This is a special version of reflect that performs the same operation, but returns a NULL value instead of raising an error if the invoked method throws an exception.
- try_subtract(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns left - right and the result is null on overflow.
- try_sum(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the sum calculated from values of a group and the result is null on overflow.
- try_to_binary(Column) - Static method in class org.apache.spark.sql.functions
-
This is a special version of to_binary that performs the same operation, but returns a NULL value instead of raising an error if the conversion cannot be performed.
- try_to_binary(Column, Column) - Static method in class org.apache.spark.sql.functions
-
This is a special version of to_binary that performs the same operation, but returns a NULL value instead of raising an error if the conversion cannot be performed.
- try_to_number(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Convert string e to a number based on the string format format.
- try_to_timestamp(Column) - Static method in class org.apache.spark.sql.functions
-
Parses the s to a timestamp.
- try_to_timestamp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Parses the s with the format to a timestamp.
- try_url_decode(Column) - Static method in class org.apache.spark.sql.functions
-
This is a special version of url_decode that performs the same operation, but returns a NULL value instead of raising an error if the decoding cannot be performed.
- try_variant_get(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Extracts a sub-variant from v according to path, and then cast the sub-variant to targetType.
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
- tryCompare(T, T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- tryLock(Function2<BlockInfo, Condition, BoxedUnit>) - Method in class org.apache.spark.storage.BlockInfoWrapper
- tryLog(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Executes the given block in a Try, logging any uncaught exceptions.
- tryLogNonFatalError(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Executes the given block.
- tryOrExit(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code that evaluates to Unit, forwarding any uncaught exceptions to the default UncaughtExceptionHandler
- tryOrIOException(Function0<T>) - Method in interface org.apache.spark.util.SparkErrorUtils
-
Execute a block of code that returns a value, re-throwing any non-fatal uncaught exceptions as IOException.
- tryOrIOException(Function0<T>) - Static method in class org.apache.spark.util.Utils
- tryOrStopSparkContext(SparkContext, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code that evaluates to Unit, stop SparkContext if there is any uncaught exception
- tryRecoverFromCheckpoint(String) - Method in class org.apache.spark.streaming.StreamingContextPythonHelper
-
This is a private method only for Python to implement getOrCreate.
- tryWithResource(Function0<R>, Function1<R, T>) - Method in interface org.apache.spark.util.SparkErrorUtils
- tryWithResource(Function0<R>, Function1<R, T>) - Static method in class org.apache.spark.util.Utils
- tryWithSafeFinally(Function0<T>, Function0<BoxedUnit>) - Method in interface org.apache.spark.util.SparkErrorUtils
-
Execute a block of code, then a finally block, but if exceptions happen in the finally block, do not suppress the original exception.
- tryWithSafeFinally(Function0<T>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
- tryWithSafeFinallyAndFailureCallbacks(Function0<T>, Function0<BoxedUnit>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code and call the failure callbacks in the catch block.
- TTLConfig - Class in org.apache.spark.sql.streaming
-
TTL Configuration for state variable.
- TTLConfig(Duration) - Constructor for class org.apache.spark.sql.streaming.TTLConfig
- ttlDuration() - Method in class org.apache.spark.sql.streaming.TTLConfig
- tuple(Encoder<T1>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 1-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 2-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 3-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 4-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>, Encoder<T5>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 5-ary tuples.
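A minimal sketch of composing encoders with Encoders.tuple; the element encoders chosen here are illustrative:
```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Build an explicit encoder for a 2-ary tuple, e.g. for Java-friendly
// Dataset APIs that require an Encoder argument.
val pairEncoder: Encoder[(String, java.lang.Integer)] =
  Encoders.tuple(Encoders.STRING, Encoders.INT)
```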
- tValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
- tValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
- Tweedie$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
- TYPE() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- TYPE_INFO_MASK - Static variable in class org.apache.spark.types.variant.VariantUtil
- typed - Class in org.apache.spark.sql.expressions.javalang
-
Deprecated. As of release 3.0.0, please use the untyped builtin aggregate functions.
- typed - Class in org.apache.spark.sql.expressions.scalalang
-
Deprecated. Please use untyped builtin aggregate functions. Since 3.0.0.
- typed() - Constructor for class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated.
- typed() - Constructor for class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated.
- TypedColumn<T, U> - Class in org.apache.spark.sql
- typedlit(T, TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.functions
-
Creates a Column of literal value.
- typedLit(T, TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.functions
-
Creates a Column of literal value.
- typeName() - Method in class org.apache.spark.mllib.linalg.VectorUDT
- typeName() - Static method in class org.apache.spark.sql.types.BinaryType
- typeName() - Static method in class org.apache.spark.sql.types.BooleanType
- typeName() - Static method in class org.apache.spark.sql.types.ByteType
- typeName() - Method in class org.apache.spark.sql.types.CalendarIntervalType
- typeName() - Method in class org.apache.spark.sql.types.CharType
- typeName() - Method in class org.apache.spark.sql.types.DataType
-
Name of the type used in JSON serialization.
- typeName() - Static method in class org.apache.spark.sql.types.DateType
- typeName() - Method in class org.apache.spark.sql.types.DayTimeIntervalType
- typeName() - Method in class org.apache.spark.sql.types.DecimalType
- typeName() - Static method in class org.apache.spark.sql.types.DoubleType
- typeName() - Static method in class org.apache.spark.sql.types.FloatType
- typeName() - Static method in class org.apache.spark.sql.types.IntegerType
- typeName() - Static method in class org.apache.spark.sql.types.LongType
- typeName() - Method in class org.apache.spark.sql.types.NullType
- typeName() - Static method in class org.apache.spark.sql.types.ShortType
- typeName() - Method in class org.apache.spark.sql.types.StringType
-
Type name that is shown to the customer.
- typeName() - Method in class org.apache.spark.sql.types.TimestampNTZType
- typeName() - Static method in class org.apache.spark.sql.types.TimestampType
- typeName() - Method in class org.apache.spark.sql.types.VarcharType
- typeName() - Static method in class org.apache.spark.sql.types.VariantType
- typeName() - Method in class org.apache.spark.sql.types.YearMonthIntervalType
- typeof(Column) - Static method in class org.apache.spark.sql.functions
-
Return DDL-formatted type string for the data type of the input.
U
- U() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
- U16_MAX - Static variable in class org.apache.spark.types.variant.VariantUtil
- U24_MAX - Static variable in class org.apache.spark.types.variant.VariantUtil
- U24_SIZE - Static variable in class org.apache.spark.types.variant.VariantUtil
- U32_SIZE - Static variable in class org.apache.spark.types.variant.VariantUtil
- U8_MAX - Static variable in class org.apache.spark.types.variant.VariantUtil
- ucase(Column) - Static method in class org.apache.spark.sql.functions
-
Returns str with all characters changed to uppercase.
- udaf(Aggregator<IN, BUF, OUT>, Encoder<IN>) - Static method in class org.apache.spark.sql.functions
-
Obtains a UserDefinedFunction that wraps the given Aggregator so that it may be used with untyped Data Frames.
- udaf(Aggregator<IN, BUF, OUT>, TypeTags.TypeTag<IN>) - Static method in class org.apache.spark.sql.functions
-
Obtains a UserDefinedFunction that wraps the given Aggregator so that it may be used with untyped Data Frames.
- udf() - Method in class org.apache.spark.sql.api.SparkSession
-
A collection of methods for registering user-defined functions (UDF).
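Tying the udaf entries above to registration, a minimal Scala sketch; the Aggregator and the registered name are illustrative:
```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.functions.udaf

// A typed Aggregator wrapped with udaf becomes a UserDefinedFunction
// usable on untyped DataFrames.
object LongSum extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L
  def reduce(buffer: Long, value: Long): Long = buffer + value
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(reduction: Long): Long = reduction
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}

// `spark` is a hypothetical SparkSession:
// spark.udf.register("long_sum", udaf(LongSum))
```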
- udf() - Method in class org.apache.spark.sql.SparkSession
- udf() - Method in class org.apache.spark.sql.SQLContext
-
A collection of methods for registering user-defined functions (UDF).
- udf(Object, DataType) - Static method in class org.apache.spark.sql.functions
-
Deprecated. Scala `udf` method with return type parameter is deprecated. Please use Scala `udf` method without return type parameter. Since 3.0.0.
- udf(UDF0<?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF0 instance as user-defined function (UDF).
- udf(UDF1<?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF1 instance as user-defined function (UDF).
- udf(UDF10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF10 instance as user-defined function (UDF).
- udf(UDF2<?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF2 instance as user-defined function (UDF).
- udf(UDF3<?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF3 instance as user-defined function (UDF).
- udf(UDF4<?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF4 instance as user-defined function (UDF).
- udf(UDF5<?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF5 instance as user-defined function (UDF).
- udf(UDF6<?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF6 instance as user-defined function (UDF).
- udf(UDF7<?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF7 instance as user-defined function (UDF).
- udf(UDF8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF8 instance as user-defined function (UDF).
- udf(UDF9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Static method in class org.apache.spark.sql.functions
-
Defines a Java UDF9 instance as user-defined function (UDF).
- udf(Function0<RT>, TypeTags.TypeTag<RT>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 0 arguments as user-defined function (UDF).
- udf(Function1<A1, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 1 argument as user-defined function (UDF).
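A minimal sketch of the one-argument closure overload; the DataFrame and its "value" column are hypothetical:
```scala
import org.apache.spark.sql.functions.{col, udf}

// The argument and return types are inferred from the closure.
val squared = udf((x: Int) => x * x)
// df.select(squared(col("value")))
```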
- udf(Function10<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 10 arguments as user-defined function (UDF).
- udf(Function2<A1, A2, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 2 arguments as user-defined function (UDF).
- udf(Function3<A1, A2, A3, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 3 arguments as user-defined function (UDF).
- udf(Function4<A1, A2, A3, A4, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 4 arguments as user-defined function (UDF).
- udf(Function5<A1, A2, A3, A4, A5, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 5 arguments as user-defined function (UDF).
- udf(Function6<A1, A2, A3, A4, A5, A6, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 6 arguments as user-defined function (UDF).
- udf(Function7<A1, A2, A3, A4, A5, A6, A7, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 7 arguments as user-defined function (UDF).
- udf(Function8<A1, A2, A3, A4, A5, A6, A7, A8, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 8 arguments as user-defined function (UDF).
- udf(Function9<A1, A2, A3, A4, A5, A6, A7, A8, A9, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>) - Static method in class org.apache.spark.sql.functions
-
Defines a Scala closure of 9 arguments as user-defined function (UDF).
- UDF0<R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 0 arguments.
- UDF1<T1, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 1 argument.
- UDF10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 10 arguments.
- UDF11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 11 arguments.
- UDF12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 12 arguments.
- UDF13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 13 arguments.
- UDF14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 14 arguments.
- UDF15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 15 arguments.
- UDF16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 16 arguments.
- UDF17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 17 arguments.
- UDF18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 18 arguments.
- UDF19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 19 arguments.
- UDF2<T1, T2, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 2 arguments.
- UDF20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 20 arguments.
- UDF21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 21 arguments.
- UDF22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 22 arguments.
- UDF3<T1, T2, T3, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 3 arguments.
- UDF4<T1, T2, T3, T4, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 4 arguments.
- UDF5<T1, T2, T3, T4, T5, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 5 arguments.
- UDF6<T1, T2, T3, T4, T5, T6, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 6 arguments.
- UDF7<T1, T2, T3, T4, T5, T6, T7, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 7 arguments.
- UDF8<T1, T2, T3, T4, T5, T6, T7, T8, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 8 arguments.
- UDF9<T1, T2, T3, T4, T5, T6, T7, T8, T9, R> - Interface in org.apache.spark.sql.api.java
-
A Spark SQL UDF that has 9 arguments.
- udfClassDoesNotImplementAnyUDFInterfaceError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- udfClassImplementMultiUDFInterfacesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- udfClassWithTooManyTypeArgumentsError(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UDFRegistration - Class in org.apache.spark.sql.api
-
Functions for registering user-defined functions.
- UDFRegistration - Class in org.apache.spark.sql
-
Functions for registering user-defined functions.
- UDFRegistration() - Constructor for class org.apache.spark.sql.api.UDFRegistration
- udt() - Element in annotation interface org.apache.spark.sql.types.SQLUserDefinedType
-
Returns an instance of the UserDefinedType which can serialize and deserialize the user class to and from Catalyst built-in types.
- UDTFRegistration - Class in org.apache.spark.sql
-
Functions for registering user-defined table functions.
- UDTRegistration - Class in org.apache.spark.sql.types
-
This object keeps the mappings between user classes and their User Defined Types (UDTs).
- UDTRegistration() - Constructor for class org.apache.spark.sql.types.UDTRegistration
- uid() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- uid() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- uid() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- uid() - Method in class org.apache.spark.ml.classification.FMClassifier
- uid() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- uid() - Method in class org.apache.spark.ml.classification.GBTClassifier
- uid() - Method in class org.apache.spark.ml.classification.LinearSVC
- uid() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- uid() - Method in class org.apache.spark.ml.classification.LogisticRegression
- uid() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- uid() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- uid() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
- uid() - Method in class org.apache.spark.ml.classification.NaiveBayes
- uid() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- uid() - Method in class org.apache.spark.ml.classification.OneVsRest
- uid() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- uid() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- uid() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- uid() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- uid() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- uid() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- uid() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- uid() - Method in class org.apache.spark.ml.clustering.KMeans
- uid() - Method in class org.apache.spark.ml.clustering.KMeansModel
- uid() - Method in class org.apache.spark.ml.clustering.LDA
- uid() - Method in class org.apache.spark.ml.clustering.LDAModel
- uid() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- uid() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- uid() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- uid() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- uid() - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
- uid() - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
- uid() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- uid() - Method in class org.apache.spark.ml.feature.Binarizer
- uid() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
- uid() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- uid() - Method in class org.apache.spark.ml.feature.Bucketizer
- uid() - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- uid() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- uid() - Method in class org.apache.spark.ml.feature.ColumnPruner
- uid() - Method in class org.apache.spark.ml.feature.CountVectorizer
- uid() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- uid() - Method in class org.apache.spark.ml.feature.DCT
- uid() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
- uid() - Method in class org.apache.spark.ml.feature.FeatureHasher
- uid() - Method in class org.apache.spark.ml.feature.HashingTF
- uid() - Method in class org.apache.spark.ml.feature.IDF
- uid() - Method in class org.apache.spark.ml.feature.IDFModel
- uid() - Method in class org.apache.spark.ml.feature.Imputer
- uid() - Method in class org.apache.spark.ml.feature.ImputerModel
- uid() - Method in class org.apache.spark.ml.feature.IndexToString
- uid() - Method in class org.apache.spark.ml.feature.Interaction
- uid() - Method in class org.apache.spark.ml.feature.MaxAbsScaler
- uid() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- uid() - Method in class org.apache.spark.ml.feature.MinHashLSH
- uid() - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- uid() - Method in class org.apache.spark.ml.feature.MinMaxScaler
- uid() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- uid() - Method in class org.apache.spark.ml.feature.NGram
- uid() - Method in class org.apache.spark.ml.feature.Normalizer
- uid() - Method in class org.apache.spark.ml.feature.OneHotEncoder
- uid() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- uid() - Method in class org.apache.spark.ml.feature.PCA
- uid() - Method in class org.apache.spark.ml.feature.PCAModel
- uid() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
- uid() - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
- uid() - Method in class org.apache.spark.ml.feature.RegexTokenizer
- uid() - Method in class org.apache.spark.ml.feature.RFormula
- uid() - Method in class org.apache.spark.ml.feature.RFormulaModel
- uid() - Method in class org.apache.spark.ml.feature.RobustScaler
- uid() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- uid() - Method in class org.apache.spark.ml.feature.SQLTransformer
- uid() - Method in class org.apache.spark.ml.feature.StandardScaler
- uid() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- uid() - Method in class org.apache.spark.ml.feature.StopWordsRemover
- uid() - Method in class org.apache.spark.ml.feature.StringIndexer
- uid() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- uid() - Method in class org.apache.spark.ml.feature.Tokenizer
- uid() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelector
- uid() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- uid() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- uid() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- uid() - Method in class org.apache.spark.ml.feature.VectorAssembler
- uid() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- uid() - Method in class org.apache.spark.ml.feature.VectorIndexer
- uid() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- uid() - Method in class org.apache.spark.ml.feature.VectorSizeHint
- uid() - Method in class org.apache.spark.ml.feature.VectorSlicer
- uid() - Method in class org.apache.spark.ml.feature.Word2Vec
- uid() - Method in class org.apache.spark.ml.feature.Word2VecModel
- uid() - Method in class org.apache.spark.ml.fpm.FPGrowth
- uid() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- uid() - Method in class org.apache.spark.ml.fpm.PrefixSpan
- uid() - Method in class org.apache.spark.ml.Pipeline
- uid() - Method in class org.apache.spark.ml.PipelineModel
- uid() - Method in class org.apache.spark.ml.recommendation.ALS
- uid() - Method in class org.apache.spark.ml.recommendation.ALSModel
- uid() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
- uid() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- uid() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.FMRegressor
- uid() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.GBTRegressor
- uid() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- uid() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- uid() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.LinearRegression
- uid() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- uid() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- uid() - Method in class org.apache.spark.ml.tuning.CrossValidator
- uid() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- uid() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- uid() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- uid() - Method in interface org.apache.spark.ml.util.Identifiable
-
An immutable unique ID for the object and its derivatives.
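All of the uid() entries above inherit from this interface; a tiny sketch (the printed value is illustrative, since each instance receives a fresh ID):

    import org.apache.spark.ml.classification.LogisticRegression;

    LogisticRegression lr = new LogisticRegression();
    // Prints something like "logreg_a1b2c3d4e5f6"; the suffix is unique per instance.
    System.out.println(lr.uid());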
- uiRoot() - Method in interface org.apache.spark.status.api.v1.ApiRequestContext
- uiRoot(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
- UIRoot - Interface in org.apache.spark.status.api.v1
-
This trait is shared by all the root containers for application UI information -- the HistoryServer and the application UI.
- UIRootFromServletContext - Class in org.apache.spark.status.api.v1
- UIRootFromServletContext() - Constructor for class org.apache.spark.status.api.v1.UIRootFromServletContext
- UIUtils - Class in org.apache.spark.sql.streaming.ui
- UIUtils - Class in org.apache.spark.streaming.ui
- UIUtils - Class in org.apache.spark.ui
-
Utility functions for generating XML pages with Spark content.
- UIUtils() - Constructor for class org.apache.spark.sql.streaming.ui.UIUtils
- UIUtils() - Constructor for class org.apache.spark.streaming.ui.UIUtils
- UIUtils() - Constructor for class org.apache.spark.ui.UIUtils
- uiWebUrl() - Method in class org.apache.spark.SparkContext
- UIWorkloadGenerator - Class in org.apache.spark.ui
-
Continuously generates jobs that expose various features of the WebUI (internal testing tool).
- UIWorkloadGenerator() - Constructor for class org.apache.spark.ui.UIWorkloadGenerator
- unableToCreateDatabaseAsFailedToCreateDirectoryError(CatalogDatabase, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToCreatePartitionPathError(Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToCreateTableAsFailedToCreateDirectoryError(String, Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToDeletePartitionPathError(Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToDropDatabaseAsFailedToDeleteDirectoryError(CatalogDatabase, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToDropTableAsFailedToDeleteDirectoryError(String, Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToLocateProtobufMessageError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unableToRegisterWithExternalShuffleServerError(Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unableToRenamePartitionPathError(Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unableToRenameTableAsFailedToRenameDirectoryError(String, String, Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unapply(String) - Static method in class org.apache.spark.util.IntParam
- unapply(String) - Static method in class org.apache.spark.util.MemoryParam
- unapply(Throwable) - Static method in class org.apache.spark.util.CausedBy
- unapply(EdgeContext<VD, ED, A>) - Static method in class org.apache.spark.graphx.EdgeContext
-
Extractor mainly used for Graph#aggregateMessages*.
- unapply(DenseVector) - Static method in class org.apache.spark.ml.linalg.DenseVector
-
Extracts the value array from a dense vector.
- unapply(SparseVector) - Static method in class org.apache.spark.ml.linalg.SparseVector
- unapply(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseVector
-
Extracts the value array from a dense vector.
- unapply(SparseVector) - Static method in class org.apache.spark.mllib.linalg.SparseVector
- unapply(Expression) - Static method in class org.apache.spark.sql.columnar.ExtractableLiteral
- unapply(Expression) - Static method in class org.apache.spark.sql.types.AnyTimestampTypeExpression
- unapply(Expression) - Static method in class org.apache.spark.sql.types.DecimalExpression
- unapply(Expression) - Static method in class org.apache.spark.sql.types.IntegralTypeExpression
-
Enables matching against IntegralType for expressions:
- unapply(Expression) - Static method in class org.apache.spark.sql.types.NumericTypeExpression
-
Enables matching against NumericType for expressions:
- unapply(Expression) - Static method in class org.apache.spark.sql.types.StringTypeExpression
-
Enables matching against StringType for expressions:
- unapply(Literal<T>) - Static method in class org.apache.spark.sql.connector.expressions.Lit
- unapply(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.Ref
- unapply(Transform) - Static method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- unapply(Transform) - Static method in class org.apache.spark.sql.connector.expressions.NamedTransform
- unapply(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
- unapply(DecimalType) - Method in class org.apache.spark.sql.types.DecimalType.Fixed$
- unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
- unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
- unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier
- unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier$
- unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace
- unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
- unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier
- unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier$
- unapply(Seq<String>) - Static method in class org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier
- unapply(Seq<String>) - Method in class org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier$
- UnaryTransformer<IN,
OUT, T extends UnaryTransformer<IN, OUT, T>> - Class in org.apache.spark.ml -
Abstract class for transformers that take one input column, apply transformation, and output the result as a new column.
- UnaryTransformer(TypeTags.TypeTag<IN>, TypeTags.TypeTag<OUT>) - Constructor for class org.apache.spark.ml.UnaryTransformer
- unbase64(Column) - Static method in class org.apache.spark.sql.functions
-
Decodes a BASE64 encoded string column and returns it as a binary column.
- unboundedFollowing() - Static method in class org.apache.spark.sql.expressions.Window
-
Value representing the last row in the partition, equivalent to "UNBOUNDED FOLLOWING" in SQL.
- unboundedPreceding() - Static method in class org.apache.spark.sql.expressions.Window
-
Value representing the first row in the partition, equivalent to "UNBOUNDED PRECEDING" in SQL.
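A hedged sketch of how these frame boundaries are typically combined (assumes an existing Dataset<Row> df with columns user, ts and amount, all invented for illustration):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.sum;

    // Running total from the first row of each partition up to the current row.
    WindowSpec running = Window.partitionBy("user").orderBy("ts")
        .rowsBetween(Window.unboundedPreceding(), Window.currentRow());
    Dataset<Row> withTotal = df.withColumn("running_total", sum(col("amount")).over(running));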
- UnboundFunction - Interface in org.apache.spark.sql.connector.catalog.functions
-
Represents a user-defined function that is not bound to input types.
- UnboundProcedure - Interface in org.apache.spark.sql.connector.catalog.procedures
-
A procedure that is not bound to input types.
- unbroadcast(long, boolean, boolean) - Method in interface org.apache.spark.broadcast.BroadcastFactory
- uncacheTable(String) - Method in class org.apache.spark.sql.api.Catalog
-
Removes the specified table from the in-memory cache.
- uncacheTable(String) - Method in class org.apache.spark.sql.SQLContext
-
Removes the specified table from the in-memory cache.
- UNCAUGHT_EXCEPTION() - Static method in class org.apache.spark.util.SparkExitCode
-
The default uncaught exception handler was reached.
- UNCAUGHT_EXCEPTION_TWICE() - Static method in class org.apache.spark.util.SparkExitCode
-
The default uncaught exception handler was called and an exception was encountered while logging the exception.
- unclosedBracketedCommentError(String, Origin, Origin) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- UNCOMPRESSED - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- undefinedImageType() - Static method in class org.apache.spark.ml.image.ImageSchema
- underlyingSplit() - Method in class org.apache.spark.scheduler.SplitInfo
- unexpectedAccumulableUpdateValueError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedBlockManagerMasterEndpointResultError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- unexpectedFileSize(Path, File, long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedFormatForResetConfigurationError(SqlBaseParser.ResetConfigurationContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unexpectedFormatForSetConfigurationError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unexpectedInputDataTypeError(String, int, DataType, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedNullError(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedOperatorInCorrelatedSubquery(LogicalPlan, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedPartitionColumnPrefixError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedPositionalArgument(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedPy4JServerError(Object) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unexpectedRequiredParameter(String, Seq<InputParameter>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedSchemaTypeError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unexpectedShuffleBlockWithUnsupportedResolverError(ShuffleManager, BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unexpectedStateStoreVersion(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedValueForLengthInFunctionError(String, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedValueForStartInFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unexpectedWindowFunctionFrameError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unhandledFilters(Filter[]) - Method in class org.apache.spark.sql.sources.BaseRelation
-
Returns the list of Filters that this datasource may not be able to handle.
- unhex(Column) - Static method in class org.apache.spark.sql.functions
-
Inverse of hex.
- UniformGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d.
- UniformGenerator() - Constructor for class org.apache.spark.mllib.random.UniformGenerator
- uniformJavaRDD(JavaSparkContext, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.uniformJavaRDD with the default number of partitions and the default seed.
- uniformJavaRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.uniformJavaRDD with the default seed.
- uniformJavaRDD(JavaSparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.uniformRDD.
- uniformJavaVectorRDD(JavaSparkContext, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.uniformJavaVectorRDD with the default number of partitions and the default seed.
- uniformJavaVectorRDD(JavaSparkContext, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.uniformJavaVectorRDD with the default seed.
- uniformJavaVectorRDD(JavaSparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.uniformVectorRDD.
- uniformRDD(SparkContext, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the uniform distribution U(0.0, 1.0).
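A minimal Java sketch of these generators (assumes an existing JavaSparkContext jsc; the size, partition count and seed are arbitrary):

    import org.apache.spark.api.java.JavaDoubleRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.random.RandomRDDs;

    // 1000 i.i.d. samples from U(0.0, 1.0), in 4 partitions, with seed 42.
    JavaDoubleRDD scalars = RandomRDDs.uniformJavaRDD(jsc, 1000L, 4, 42L);
    // 1000 vectors of 3 i.i.d. U(0.0, 1.0) samples each.
    JavaRDD<Vector> vectors = RandomRDDs.uniformJavaVectorRDD(jsc, 1000L, 3, 4, 42L);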
- uniformVectorRDD(SparkContext, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the uniform distribution on U(0.0, 1.0).
- union(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return the union of this RDD and another one.
- union(JavaDoubleRDD...) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaDoubleRDDs.
- union(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the union of this RDD and another one.
- union(JavaPairRDD<K, V>...) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaPairRDDs.
- union(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return the union of this RDD and another one.
- union(JavaRDD<T>...) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaRDDs.
- union(RDD<T>) - Method in class org.apache.spark.rdd.RDD
-
Return the union of this RDD and another one.
- union(RDD<T>, Seq<RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Build the union of a list of RDDs passed as variable-length arguments.
- union(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing union of rows in this Dataset and another Dataset.
- union(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- union(JavaDStream<T>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream by unifying data of another DStream with this DStream.
- union(JavaDStream<T>...) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a unified DStream from multiple DStreams of the same type and same slide duration.
- union(JavaPairDStream<K, V>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by unifying data of another DStream with this DStream.
- union(JavaPairDStream<K, V>...) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a unified DStream from multiple DStreams of the same type and same slide duration.
- union(DStream<T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream by unifying data of another DStream with this DStream.
- union(Seq<JavaDoubleRDD>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaDoubleRDDs.
- union(Seq<JavaPairRDD<K, V>>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaPairRDDs.
- union(Seq<JavaRDD<T>>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Build the union of JavaRDDs.
- union(Seq<RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Build the union of a list of RDDs.
- union(Seq<JavaDStream<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a unified DStream from multiple DStreams of the same type and same slide duration.
- union(Seq<JavaPairDStream<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated. Create a unified DStream from multiple DStreams of the same type and same slide duration.
- union(Seq<DStream<T>>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated. Create a unified DStream from multiple DStreams of the same type and same slide duration.
- unionAll(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing union of rows in this Dataset and another Dataset.
- unionAll(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- unionByName(Dataset) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing union of rows in this Dataset and another Dataset.
- unionByName(Dataset, boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset containing union of rows in this Dataset and another Dataset.
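A short sketch contrasting the union variants (assumes df1 and df2 are existing Dataset<Row> instances with the same columns, possibly in different order):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    Dataset<Row> positional = df1.union(df2);          // resolves columns by position, like SQL UNION ALL
    Dataset<Row> byName = df1.unionByName(df2);        // resolves columns by name
    Dataset<Row> lenient = df1.unionByName(df2, true); // missing columns are filled with nulls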
- unionByName(Dataset<T>) - Method in class org.apache.spark.sql.Dataset
- unionByName(Dataset<T>, boolean) - Method in class org.apache.spark.sql.Dataset
- UnionRDD<T> - Class in org.apache.spark.rdd
- UnionRDD(SparkContext, Seq<RDD<T>>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.UnionRDD
- uniqueId() - Method in class org.apache.spark.storage.PythonStreamBlockId
- uniqueId() - Method in class org.apache.spark.storage.StreamBlockId
- UnivariateFeatureSelector - Class in org.apache.spark.ml.feature
-
Feature selector based on univariate statistical tests against labels.
- UnivariateFeatureSelector() - Constructor for class org.apache.spark.ml.feature.UnivariateFeatureSelector
- UnivariateFeatureSelector(String) - Constructor for class org.apache.spark.ml.feature.UnivariateFeatureSelector
- UnivariateFeatureSelectorModel - Class in org.apache.spark.ml.feature
-
Model fitted by UnivariateFeatureSelector.
- UnivariateFeatureSelectorParams - Interface in org.apache.spark.ml.feature
-
Params for UnivariateFeatureSelector and UnivariateFeatureSelectorModel.
- unix_date(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of days since 1970-01-01.
- unix_micros(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of microseconds since 1970-01-01 00:00:00 UTC.
- unix_millis(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of milliseconds since 1970-01-01 00:00:00 UTC.
- unix_seconds(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of seconds since 1970-01-01 00:00:00 UTC.
- unix_timestamp() - Static method in class org.apache.spark.sql.functions
-
Returns the current Unix timestamp (in seconds) as a long.
- unix_timestamp(Column) - Static method in class org.apache.spark.sql.functions
-
Converts time string in format yyyy-MM-dd HH:mm:ss to Unix timestamp (in seconds), using the default timezone and the default locale.
- unix_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts time string with given pattern to Unix timestamp (in seconds).
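For example (a sketch assuming a Dataset<Row> df with an invented string column ts_string in the given pattern and a timestamp column ts):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.*;

    Dataset<Row> epochs = df.select(
        unix_timestamp(col("ts_string"), "yyyy-MM-dd HH:mm:ss").alias("epoch_seconds"),
        unix_millis(col("ts")).alias("epoch_millis"),
        unix_date(col("ts").cast("date")).alias("days_since_epoch"));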
- UNKNOWN - Enum constant in enum class org.apache.spark.JobExecutionStatus
- UNKNOWN - Enum constant in enum class org.apache.spark.launcher.SparkAppHandle.State
-
The application has not reported back yet.
- UNKNOWN - Enum constant in enum class org.apache.spark.status.api.v1.TaskStatus
- UNKNOWN_RESOURCE_PROFILE_ID() - Static method in class org.apache.spark.resource.ResourceProfile
- unknownColumnError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unknownHiveResourceTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UnknownPartitioning - Class in org.apache.spark.sql.connector.read.partitioning
-
Represents a partitioning where rows are split across partitions in an unknown pattern.
- UnknownPartitioning(int) - Constructor for class org.apache.spark.sql.connector.read.partitioning.UnknownPartitioning
- unknownProtobufMessageTypeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UnknownReason - Class in org.apache.spark
-
:: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result.
- UnknownReason() - Constructor for class org.apache.spark.UnknownReason
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
- unlink(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
- unorderablePivotColError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UNORDERED() - Static method in class org.apache.spark.rdd.DeterministicLevel
- unpack(File, File) - Static method in class org.apache.spark.util.Utils
-
Unpacks an archive file into the specified directory.
- unpackUpperTriangular(int, double[]) - Static method in class org.apache.spark.ml.impl.Utils
-
Convert an n * (n + 1) / 2 dimension array representing the upper triangular part of a matrix into an n * n array representing the full symmetric matrix (column major).
- unpersist() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist() - Method in class org.apache.spark.api.java.JavaRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist() - Method in class org.apache.spark.broadcast.Broadcast
-
Asynchronously delete cached copies of this broadcast on the executors.
- unpersist() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Unpersist intermediate RDDs used in the computation.
- unpersist() - Method in class org.apache.spark.sql.api.Dataset
-
Mark the Dataset as non-persistent, and remove all blocks for it from memory and disk.
- unpersist() - Method in class org.apache.spark.sql.Dataset
- unpersist(boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist(boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist(boolean) - Method in class org.apache.spark.api.java.JavaRDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- unpersist(boolean) - Method in class org.apache.spark.broadcast.Broadcast
-
Delete cached copies of this broadcast on the executors.
- unpersist(boolean) - Method in class org.apache.spark.graphx.Graph
-
Uncaches both vertices and edges of this graph.
- unpersist(boolean) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
- unpersist(boolean) - Method in class org.apache.spark.graphx.impl.GraphImpl
- unpersist(boolean) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- unpersist(boolean) - Method in class org.apache.spark.rdd.RDD
-
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
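A small sketch of the cache/uncache life cycle (assumes an existing JavaSparkContext jsc; the path is illustrative only):

    import org.apache.spark.api.java.JavaRDD;

    JavaRDD<String> lines = jsc.textFile("hdfs:///tmp/input.txt").cache();
    long count = lines.count(); // materializes the cached blocks
    lines.unpersist(true);      // blocking = true: wait until all blocks are removed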
- unpersist(boolean) - Method in class org.apache.spark.sql.api.Dataset
-
Mark the Dataset as non-persistent, and remove all blocks for it from memory and disk.
- unpersist(boolean) - Method in class org.apache.spark.sql.Dataset
- unpersistRDDFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- unpersistRDDToJson(SparkListenerUnpersistRDD, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
- unpersistVertices(boolean) - Method in class org.apache.spark.graphx.Graph
-
Uncaches only the vertices of this graph, leaving the edges alone.
- unpersistVertices(boolean) - Method in class org.apache.spark.graphx.impl.GraphImpl
- unpivot(Column[], String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Unpivot a DataFrame from wide format to long format, optionally leaving identifier columns set.
- unpivot(Column[], String, String) - Method in class org.apache.spark.sql.Dataset
- unpivot(Column[], Column[], String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Unpivot a DataFrame from wide format to long format, optionally leaving identifier columns set.
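A sketch of this four-argument overload (assumes a Dataset<Row> df with invented columns id, q1 and q2):

    import org.apache.spark.sql.Column;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.col;

    // Wide (id, q1, q2) -> long (id, quarter, sales).
    Dataset<Row> longDf = df.unpivot(
        new Column[] { col("id") },            // identifier columns to keep
        new Column[] { col("q1"), col("q2") }, // value columns to melt
        "quarter",                             // name of the variable column
        "sales");                              // name of the value column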
- unpivot(Column[], Column[], String, String) - Method in class org.apache.spark.sql.Dataset
- unpivotRequiresAttributes(String, String, Seq<NamedExpression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unpivotRequiresValueColumns() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unpivotValueDataTypeMismatchError(Seq<Seq<NamedExpression>>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unpivotValueSizeMismatchError(int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unpivotWithPivotInFromClauseNotAllowedError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unreachableError(String) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- unreachableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unreachableError$default$1() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- UNRECOGNIZED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
- UNRECOGNIZED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
- UNRECOGNIZED - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
- UnrecognizedBlockId - Exception in org.apache.spark.storage
- UnrecognizedBlockId(String) - Constructor for exception org.apache.spark.storage.UnrecognizedBlockId
- unrecognizedBlockIdError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unrecognizedCompressionSchemaTypeIDError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unrecognizedParameterName(String, String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unrecognizedParquetTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unrecognizedSchedulerModePropertyError(String, String) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unrecognizedSqlTypeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unregister(QueryExecutionListener) - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
Unregisters the specified QueryExecutionListener.
- unregisterDialect(JdbcDialect) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
-
Unregister a dialect.
- unreleasedThreadError(String, String, String, String, long, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- Unresolved() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Unresolved type.
- UnresolvedAttribute - Class in org.apache.spark.ml.attribute
-
An unresolved attribute.
- UnresolvedAttribute() - Constructor for class org.apache.spark.ml.attribute.UnresolvedAttribute
- unresolvedAttributeError(String, String, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedColumnError(String, String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedColumnError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedColumnError(Seq<String>, String[], Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedFieldError(String, Seq<String>, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedRoutineError(FunctionIdentifier, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedRoutineError(Seq<String>, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedUsingColForJoinError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedVariableError(Seq<String>, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unresolvedVariableError(Seq<String>, Seq<String>, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unscaledValueTooLargeForPrecisionError(Decimal, int, int, QueryContext) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- unset() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Clears the input file block to the default value.
- unset(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Resets the configuration property for the given key.
- unspecified() - Static method in class org.apache.spark.sql.connector.distributions.Distributions
-
Creates a distribution where no promises are made about co-location of data.
- unspecified() - Static method in class org.apache.spark.sql.connector.distributions.LogicalDistributions
- UnspecifiedDistribution - Interface in org.apache.spark.sql.connector.distributions
-
A distribution where no promises are made about co-location of data.
- UnspecifiedDistributionImpl - Class in org.apache.spark.sql.connector.distributions
- UnspecifiedDistributionImpl() - Constructor for class org.apache.spark.sql.connector.distributions.UnspecifiedDistributionImpl
- UNSUPPORTED - Enum constant in enum class org.apache.spark.sql.connector.read.Scan.ColumnarSupportMode
- unsupportedAppendInBatchModeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedArrayElementTypeBasedOnBinaryError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedArrayTypeError(Class<?>) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- unsupportedArrowTypeError(ArrowType) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- unsupportedArrowTypeError(ArrowType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedBatchReadError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedCommentNamespaceError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedCorrelatedReferenceDataTypeError(Expression, DataType, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedCorrelatedSubqueryInJoinConditionError(Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedCreateOrReplaceViewOnTableError(TableIdentifier, boolean) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedDataTypeError(Object) - Static method in class org.apache.spark.errors.SparkCoreErrors
- unsupportedDataTypeError(DataType) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- unsupportedDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedDeleteByConditionWithSubqueryError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedDropNamespaceError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedDynamicOverwriteInBatchModeError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedFieldNameError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedFunctionNameError(Seq<String>, ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unsupportedHiveMetastoreVersionError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedIfNotExistsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedJavaTypeError(Class<?>) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- unsupportedJDBCNamespaceChangeInCatalogError(Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedJdbcTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedLateralJoinTypeError(ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unsupportedLocalFileSchemeError(SqlBaseParser.InsertOverwriteDirContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- unsupportedMultipleBucketTransformsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedNaturalJoinTypeError(JoinType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedOperandTypeForSizeFunctionError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedOperationError() - Static method in class org.apache.spark.errors.SparkCoreErrors
- unsupportedOperationExceptionError() - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- unsupportedOperationForDataTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedOutputModeForStreamingOperationError(OutputMode, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedOutputModeForStreamingOperationError(OutputMode, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedOverwriteByFilterInBatchModeError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedParameterExpression(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedPartitionTransformError(Transform) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedPurgePartitionError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedPurgeTableError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedRemoveNamespaceCommentError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedRoundingMode(Enumeration.Value) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- unsupportedStreamingOperatorWithoutWatermark(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedStreamingScanError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedTableChangeError(IllegalArgumentException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedTableChangeInJDBCCatalogError(TableChange) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedTableOperationError(TableIdentifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedTableOperationError(CatalogPlugin, Identifier, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedTruncateInBatchModeError(Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedUDFOuptutType(Expression, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- unsupportedUpdateColumnNullabilityError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- unsupportedUserSpecifiedSchemaError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- until(Time, Duration) - Method in class org.apache.spark.streaming.Time
- unwrap_udt(Column) - Static method in class org.apache.spark.sql.functions
-
Unwrap UDT data type column into its underlying type.
- unzipFilesFromFile(FileSystem, Path, File) - Static method in class org.apache.spark.util.Utils
-
Decompress a zip file into a local dir.
- upCastFailureError(String, Expression, DataType, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UpCastRule - Class in org.apache.spark.sql.types
-
Rule that defines which upcasts are allowed in Spark.
- UpCastRule() - Constructor for class org.apache.spark.sql.types.UpCastRule
- update() - Method in class org.apache.spark.scheduler.AccumulableInfo
- update() - Method in class org.apache.spark.status.api.v1.AccumulableInfo
- update(int, int, double) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Update element at (i, j).
- update(int, int, double) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Update element at (i, j).
- update(int, Object) - Method in class org.apache.spark.sql.expressions.MutableAggregationBuffer
-
Update the ith value of this buffer.
- update(int, Object) - Method in class org.apache.spark.sql.vectorized.ColumnarArray
- update(int, Object) - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
- update(int, Object) - Method in class org.apache.spark.sql.vectorized.ColumnarRow
- update(RDD<Vector>, double, String) - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
-
Perform a k-means update on a batch of data.
- update(MutableAggregationBuffer, Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated. Updates the given aggregation buffer buffer with new input data from input.
- update(S) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Update the value of the state.
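A minimal sketch of a stateful update inside mapGroupsWithState (the types and counting logic are invented for illustration):

    import org.apache.spark.api.java.function.MapGroupsWithStateFunction;
    import org.apache.spark.sql.streaming.GroupState;

    MapGroupsWithStateFunction<String, Long, Long, String> countPerKey =
        (key, values, state) -> {
            long count = state.exists() ? state.get() : 0L;
            while (values.hasNext()) { values.next(); count++; }
            state.update(count); // persist the running count for the next trigger
            return key + ": " + count;
        };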
- update(S) - Method in interface org.apache.spark.sql.streaming.ValueState
-
Update the value of the state.
- update(S) - Method in class org.apache.spark.streaming.State
-
Update the state with a new value.
- update(Map<String, Column>) - Method in class org.apache.spark.sql.WhenMatched
-
Specifies an action to update matched rows in the DataFrame with the provided column assignments.
- update(Map<String, Column>) - Method in class org.apache.spark.sql.WhenNotMatchedBySource
-
Specifies an action to update non-matched rows in the target DataFrame with the provided column assignments when not matched by the source.
- update(Seq<String>, long, long) - Method in class org.apache.spark.status.LiveRDDPartition
- update(Function1<Object, Object>) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Update all the values of this matrix using the function f.
- update(Function1<Object, Object>) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Update all the values of this matrix using the function f.
- update(S, InternalRow) - Method in interface org.apache.spark.sql.connector.catalog.functions.AggregateFunction
-
Update the aggregation state with a new row.
- update(T1, T2) - Method in class org.apache.spark.util.MutablePair
-
Updates this pair with new values and returns itself.
- update(T, T, T) - Method in interface org.apache.spark.sql.connector.write.DeltaWriter
-
Updates a row.
- Update() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which only the rows that were updated in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
- UPDATE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableWritePrivilege
-
The privilege for changing existing rows in the table.
- UPDATE - Enum constant in enum class org.apache.spark.sql.connector.write.RowLevelOperation.Command
- UPDATE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- updateAll() - Method in class org.apache.spark.sql.WhenMatched
-
Specifies an action to update all matched rows in the DataFrame.
- updateAll() - Method in class org.apache.spark.sql.WhenNotMatchedBySource
-
Specifies an action to update all non-matched rows in the target DataFrame when not matched by the source.
- updateAttributeGroupSize(StructType, String, int) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Update the size of an ML Vector column.
- UpdateBlockInfo() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- UpdateBlockInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- UpdateBlockInfo$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo$
- updateColumnComment(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for updating the comment of a field.
- updateColumnDefaultValue(String[], String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for updating the default value of a field.
- updateColumnNullability(String[], boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for updating the nullability of a field.
- updateColumnPosition(String[], TableChange.ColumnPosition) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for updating the position of a field.
- updateColumnType(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for updating the type of a field that is nullable.
- UPDATED_BLOCK_STATUSES() - Static method in class org.apache.spark.InternalAccumulator
- UpdateDelegationTokens(byte[]) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
- UpdateDelegationTokens$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens$
- UpdateExecutorLogLevel(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorLogLevel
- UpdateExecutorLogLevel$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorLogLevel$
- updateExecutorsLogLevel(String) - Method in interface org.apache.spark.scheduler.SchedulerBackend
- UpdateExecutorsLogLevel(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorsLogLevel
- UpdateExecutorsLogLevel$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateExecutorsLogLevel$
- updateExtraColumnMeta(Connection, ResultSetMetaData, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Get extra column metadata for the given column.
- updateExtraColumnMeta(Connection, ResultSetMetaData, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
- updateExtraColumnMeta(Connection, ResultSetMetaData, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.PostgresDialect
- updateField(StructType, StructField, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Update the metadata of an existing column.
- updateLocation(BlockManagerId) - Method in interface org.apache.spark.scheduler.MapStatus
- updateMapOutput(long, BlockManagerId) - Method in class org.apache.spark.ShuffleStatus
-
Update the map output location (e.g. during migration).
- updateMetrics(TaskMetrics) - Method in class org.apache.spark.status.LiveTask
-
Update the metrics for the task and return the difference between the previous and new values.
- updateNumeric(StructType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Update the numeric metadata of an existing column.
- updateNumValues(StructType, String, int) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Update the number of values of an existing column.
- updatePrediction(Vector, double, DecisionTreeRegressionModel, double) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Add prediction from a new boosting iteration to an existing prediction.
- updatePrediction(TreePoint, double, DecisionTreeRegressionModel, double, Split[][]) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Add prediction from a new boosting iteration to an existing prediction.
- updatePredictionError(RDD<TreePoint>, RDD<Tuple2<Object, Object>>, double, DecisionTreeRegressionModel, Loss, Broadcast<Split[][]>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Update a zipped predictionError RDD (as obtained with computeInitialPredictionAndError).
- updatePredictionError(RDD<LabeledPoint>, RDD<Tuple2<Object, Object>>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
Update a zipped predictionError RDD (as obtained with computeInitialPredictionAndError).
- Updater - Class in org.apache.spark.mllib.optimization
-
Class used to perform steps (weight update) using Gradient Descent methods.
- Updater() - Constructor for class org.apache.spark.mllib.optimization.Updater
- UpdateRDDBlockTaskInfo(RDDBlockId, long) - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo
- UpdateRDDBlockTaskInfo$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo$
- UpdateRDDBlockVisibility(long, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockVisibility
- UpdateRDDBlockVisibility$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockVisibility$
- updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
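A sketch of this one-argument overload (assumes an existing JavaPairDStream<String, Integer> pairs):

    import java.util.List;
    import org.apache.spark.api.java.Optional;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.streaming.api.java.JavaPairDStream;

    Function2<List<Integer>, Optional<Integer>, Optional<Integer>> updateFn =
        (newValues, runningSum) -> {
            int sum = runningSum.orElse(0);
            for (int v : newValues) sum += v;
            return Optional.of(sum); // Optional.empty() would drop this key's state
        };
    JavaPairDStream<String, Integer> runningCounts = pairs.updateStateByKey(updateFn);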
- updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
- updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
- updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>>, Partitioner, JavaPairRDD<K, S>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
- updateStateByKey(Function1<Iterator<Tuple3<K, Seq<V>, Option<S>>>, Iterator<Tuple2<K, S>>>, Partitioner, boolean, RDD<Tuple2<K, S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
- updateStateByKey(Function1<Iterator<Tuple3<K, Seq<V>, Option<S>>>, Iterator<Tuple2<K, S>>>, Partitioner, boolean, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
- updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, int, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
- updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, Partitioner, RDD<Tuple2<K, S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
- updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, Partitioner, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
- updateStateByKey(Function2<Seq<V>, Option<S>, Option<S>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
- updateStateByKey(Function4<Time, K, Seq<V>, Option<S>, Option<S>>, Partitioner, boolean, Option<RDD<Tuple2<K, S>>>, ClassTag<S>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
- updateValue(K, V) - Method in interface org.apache.spark.sql.streaming.MapState
-
Update the value for the given user key.
- uploadArtifactToFs(Path, Path) - Method in class org.apache.spark.sql.artifact.ArtifactManager
- upper() - Method in class org.apache.spark.ml.feature.RobustScaler
- upper() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- upper() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
-
Upper quantile used to calculate the quantile range, shared by all features. Default: 0.75
- upper(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a string column to upper case.
- upperBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegression
- upperBoundsOnCoefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- upperBoundsOnCoefficients() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
The upper bounds on coefficients if fitting under bound constrained optimization.
- upperBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegression
- upperBoundsOnIntercepts() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- upperBoundsOnIntercepts() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
The upper bounds on intercepts if fitting under bound constrained optimization.
- url_decode(Column) - Static method in class org.apache.spark.sql.functions
-
Decodes a str in 'application/x-www-form-urlencoded' format using a specific encoding scheme.
- url_encode(Column) - Static method in class org.apache.spark.sql.functions
-
Translates a string into 'application/x-www-form-urlencoded' format using a specific encoding scheme.
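As a quick illustrative sketch of the two helpers round-tripping a string (the column name raw and the local session are assumptions, not part of the index):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, url_decode, url_encode}

    val spark = SparkSession.builder().master("local[*]").appName("url-demo").getOrCreate()
    import spark.implicits._

    // "a b&c" encodes to "a+b%26c" and decodes back to the original string.
    Seq("a b&c").toDF("raw")
      .select(url_encode(col("raw")).as("enc"))
      .select(url_decode(col("enc")).as("dec"))
      .show()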
- urlEncoded() - Method in class org.apache.spark.paths.SparkPath
- USE_DISK_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- USE_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- useCommitCoordinator() - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
Returns whether Spark should use the commit coordinator to ensure that at most one task for each partition commits.
- useCommitCoordinator() - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
-
Returns whether Spark should use the commit coordinator to ensure that at most one task for each partition commits.
- USED_OFF_HEAP_STORAGE_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- USED_ON_HEAP_STORAGE_MEMORY_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- usedBy() - Method in class org.apache.spark.ErrorStateInfo
- useDefinedRecordReaderOrWriterClassesError(ParserRuleContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- useDictionaryEncodingWhenDictionaryOverflowError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- useDisk() - Method in class org.apache.spark.storage.StorageLevel
- usedOffHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo
- usedOffHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl
- usedOffHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
- usedOnHeapStorageMemory() - Method in interface org.apache.spark.SparkExecutorInfo
- usedOnHeapStorageMemory() - Method in class org.apache.spark.SparkExecutorInfoImpl
- usedOnHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
- useDst - Variable in class org.apache.spark.graphx.TripletFields
-
Indicates whether the destination vertex attribute is included.
- useEdge - Variable in class org.apache.spark.graphx.TripletFields
-
Indicates whether the edge attribute is included.
- useMemory() - Method in class org.apache.spark.storage.StorageLevel
- useNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
- useNullableQuerySchema() - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
If true, mark all the fields of the query schema as nullable when executing CREATE/REPLACE TABLE ...
- useOffHeap() - Method in class org.apache.spark.storage.StorageLevel
- usePythonUDFInJoinConditionUnsupportedError(JoinType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- user() - Method in class org.apache.spark.ml.recommendation.ALS.Rating
- user() - Method in class org.apache.spark.mllib.recommendation.Rating
- user() - Static method in class org.apache.spark.sql.functions
-
Returns the user name of the current execution context.
- USER_DEFAULT() - Static method in class org.apache.spark.sql.types.DecimalType
- userClass() - Method in class org.apache.spark.mllib.linalg.VectorUDT
- userClass() - Method in class org.apache.spark.sql.types.UserDefinedType
-
Class object for the UserType
- userCol() - Method in class org.apache.spark.ml.recommendation.ALS
- userCol() - Method in class org.apache.spark.ml.recommendation.ALSModel
- userCol() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
-
Param for the column name for user ids.
- UserDefinedAggregateFunc - Class in org.apache.spark.sql.connector.expressions.aggregate
-
The general representation of a user-defined aggregate function, which implements AggregateFunc and contains the upper-cased function name, the canonical function name, the `isDistinct` flag and all the inputs.
- UserDefinedAggregateFunc(String, String, boolean, Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.UserDefinedAggregateFunc
- UserDefinedAggregateFunction - Class in org.apache.spark.sql.expressions
-
Deprecated. UserDefinedAggregateFunction is deprecated. Aggregator[IN, BUF, OUT] should now be registered as a UDF via the functions.udaf(agg) method.
- UserDefinedAggregateFunction() - Constructor for class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
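A minimal sketch of the replacement that the deprecation note points to: an Aggregator registered as a UDF via functions.udaf. The session spark and the registered name "my_sum" are assumptions:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator
    import org.apache.spark.sql.functions

    // A trivial Aggregator summing Longs: input, buffer and output are all Long.
    val sumAgg = new Aggregator[Long, Long, Long] {
      def zero: Long = 0L
      def reduce(b: Long, a: Long): Long = b + a
      def merge(b1: Long, b2: Long): Long = b1 + b2
      def finish(reduction: Long): Long = reduction
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }

    // Register it for use in SQL and DataFrame expressions.
    spark.udf.register("my_sum", functions.udaf(sumAgg, Encoders.scalaLong))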
- UserDefinedFunction - Class in org.apache.spark.sql.expressions
-
A user-defined function.
- UserDefinedFunction() - Constructor for class org.apache.spark.sql.expressions.UserDefinedFunction
- userDefinedPartitionNotFoundInJDBCRelationError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- UserDefinedScalarFunc - Class in org.apache.spark.sql.connector.expressions
-
The general representation of a user-defined scalar function, which contains the upper-cased function name, the canonical function name and all the children expressions.
- UserDefinedScalarFunc(String, String, Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.UserDefinedScalarFunc
- UserDefinedType<UserType> - Class in org.apache.spark.sql.types
-
The data type for User Defined Types (UDTs).
- UserDefinedType() - Constructor for class org.apache.spark.sql.types.UserDefinedType
- userDefinedTypeNotAnnotatedAndRegisteredError(UserDefinedType<?>) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
- userDefinedTypeNotAnnotatedAndRegisteredError(UserDefinedType<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- userFactors() - Method in class org.apache.spark.ml.recommendation.ALSModel
- userFeatures() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
- userPort(int, int) - Static method in class org.apache.spark.util.Utils
-
Returns the user port to try when trying to bind a service.
- userSpecifiedSchemaMismatchActualSchemaError(StructType, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- userSpecifiedSchemaUnsupportedByDataSourceError(TableProvider) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- userSpecifiedSchemaUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- userSpecifiedSchemaUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- useSrc - Variable in class org.apache.spark.graphx.TripletFields
-
Indicates whether the source vertex attribute is included.
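To make the useSrc/useDst/useEdge flags concrete, a small sketch with aggregateMessages, assuming an existing Graph[Double, Double] named graph; declaring TripletFields.EdgeOnly tells GraphX that neither vertex attribute needs to be shipped to where the edges live:

    import org.apache.spark.graphx.{TripletFields, VertexRDD}

    // Sum of incoming edge weights per vertex; only ctx.attr (the edge) is read.
    val inWeights: VertexRDD[Double] =
      graph.aggregateMessages[Double](
        ctx => ctx.sendToDst(ctx.attr),
        _ + _,
        TripletFields.EdgeOnly)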
- using(String) - Method in interface org.apache.spark.sql.CreateTableWriter
-
Specifies a provider for the underlying output data source.
- using(String) - Method in class org.apache.spark.sql.DataFrameWriterV2
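A brief sketch of using(...) in the DataFrameWriterV2 flow; the table name demo.events and the DataFrame df are placeholders:

    // Create a new catalog table backed by the named provider.
    df.writeTo("demo.events")
      .using("parquet")
      .create()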
- usingBoundConstrainedOptimization() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- usingUntypedScalaUDFError() - Method in interface org.apache.spark.sql.errors.CompilationErrors
- usingUntypedScalaUDFError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- Utils - Class in org.apache.spark.ml.impl
- Utils - Class in org.apache.spark.status.protobuf
- Utils - Class in org.apache.spark.util
-
Various utility methods used by Spark.
- Utils() - Constructor for class org.apache.spark.ml.impl.Utils
- Utils() - Constructor for class org.apache.spark.status.protobuf.Utils
- Utils() - Constructor for class org.apache.spark.util.Utils
- uuid() - Static method in class org.apache.spark.sql.functions
-
Returns a universally unique identifier (UUID) string.
- UUIDFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
- UUIDToJson(UUID, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
V
- V() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
- V1 - Enum constant in enum class org.apache.spark.util.sketch.BloomFilter.Version
-
BloomFilter binary format version 1.
- V1 - Enum constant in enum class org.apache.spark.util.sketch.CountMinSketch.Version
-
CountMinSketch binary format version 1.
- V1_BATCH_WRITE - Enum constant in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Signals that the table supports append writes using the V1 InsertableRelation interface.
- V1Scan - Interface in org.apache.spark.sql.connector.read
-
A trait that should be implemented by V1 DataSources that would like to leverage the DataSource V2 read code paths.
- v1Table() - Method in interface org.apache.spark.sql.connector.catalog.V2TableWithV1Fallback
- V1Write - Interface in org.apache.spark.sql.connector.write
-
A logical write that should be executed using V1 InsertableRelation interface.
- v2ColumnsToStructType(Column[]) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Converts DS v2 columns to StructType, which encodes column comment and default value to StructField metadata.
- V2ExpressionSQLBuilder - Class in org.apache.spark.sql.connector.util
-
The builder to generate SQL from V2 expressions.
- V2ExpressionSQLBuilder() - Constructor for class org.apache.spark.sql.connector.util.V2ExpressionSQLBuilder
- v2FunctionInvalidInputTypeLengthError(BoundFunction, Seq<Expression>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- v2references() - Static method in class org.apache.spark.sql.sources.AlwaysFalse
- v2references() - Static method in class org.apache.spark.sql.sources.AlwaysTrue
- v2references() - Method in class org.apache.spark.sql.sources.Filter
-
List of columns that are referenced by this filter.
- V2TableWithV1Fallback - Interface in org.apache.spark.sql.connector.catalog
-
A V2 table with V1 fallback support.
- validate() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Validates the block matrix info against the matrix data (blocks) and throws an exception if any error is found.
- validateAndTransformField(StructType, String, DataType, String) - Method in interface org.apache.spark.ml.feature.StringIndexerBase
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.BisectingKMeansParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.GaussianMixtureParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.KMeansParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.clustering.LDAParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.IDFBase
-
Validate and transform the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.ImputerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.LSHParams
-
Transforms the schema for LSH.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.MaxAbsScalerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.MinMaxScalerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.PCAParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.RobustScalerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.StandardScalerParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
Validate and transform the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.fpm.FPGrowthParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType) - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.feature.StringIndexerBase
-
Validates and transforms the input schema.
- validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Validates and transforms the input schema with the provided param map.
- validateAndTransformSchema(StructType, boolean) - Method in interface org.apache.spark.ml.regression.IsotonicRegressionBase
-
Validates and transforms input schema.
- validateAndTransformSchema(StructType, boolean, boolean) - Method in interface org.apache.spark.ml.feature.OneHotEncoderBase
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.ClassifierParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.classification.ProbabilisticClassifierParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.PredictorParams
-
Validates and transforms the input schema with the provided param map.
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.regression.LinearRegressionParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.DecisionTreeClassifierParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.DecisionTreeRegressorParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.TreeEnsembleClassifierParams
- validateAndTransformSchema(StructType, boolean, DataType) - Method in interface org.apache.spark.ml.tree.TreeEnsembleRegressorParams
- validateNoExtraCatalystFields(boolean) - Method in class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
Validate that there are no Catalyst fields which don't have a matching Avro field, throwing IncompatibleSchemaException if such extra fields are found.
- validateNoExtraRequiredAvroFields() - Method in class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
Validate that there are no Avro fields which don't have a matching Catalyst field, throwing IncompatibleSchemaException if such extra fields are found.
- validateStages(PipelineStage[]) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Check that all stages are Writable
- validateTaskCpusLargeEnough(SparkConf, int, int) - Static method in class org.apache.spark.resource.ResourceUtils
- validateURL(URI) - Static method in class org.apache.spark.util.Utils
-
Validate that a given URI is actually a valid URL as well.
- validateVectorCompatibleColumn(StructType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given column in the schema is one of the supported vector types: Vector, Array[Float].
- validationIndicatorCol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- validationIndicatorCol() - Method in class org.apache.spark.ml.classification.GBTClassifier
- validationIndicatorCol() - Method in interface org.apache.spark.ml.param.shared.HasValidationIndicatorCol
-
Param for name of the column that indicates whether each row is for training or for validation.
- validationIndicatorCol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- validationIndicatorCol() - Method in class org.apache.spark.ml.regression.GBTRegressor
- validationMetrics() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- validationTol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- validationTol() - Method in class org.apache.spark.ml.classification.GBTClassifier
- validationTol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- validationTol() - Method in class org.apache.spark.ml.regression.GBTRegressor
- validationTol() - Method in interface org.apache.spark.ml.tree.GBTParams
-
Threshold for stopping early when fit with validation is used.
- validationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
- ValidatorParams - Interface in org.apache.spark.ml.tuning
-
Common params for TrainValidationSplitParams and CrossValidatorParams.
- value - Variable in class org.apache.spark.types.variant.Variant.ObjectField
- value() - Method in class org.apache.spark.broadcast.Broadcast
-
Get the broadcasted value.
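A small sketch of reading a broadcast variable with value(), assuming an existing SparkContext sc:

    // Ship a lookup table once per executor instead of once per task.
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    val mapped = sc.parallelize(Seq("a", "b", "a"))
      .map(k => lookup.value.getOrElse(k, 0))  // read the broadcast with value()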
- value() - Method in class org.apache.spark.ComplexFutureAction
- value() - Method in interface org.apache.spark.FutureAction
-
The value of this Future.
- value() - Method in class org.apache.spark.ml.param.ParamPair
- value() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
- value() - Method in class org.apache.spark.mllib.stat.test.BinarySample
- value() - Method in class org.apache.spark.scheduler.AccumulableInfo
- value() - Method in class org.apache.spark.SerializableWritable
- value() - Method in class org.apache.spark.SimpleFutureAction
- value() - Method in class org.apache.spark.sql.connector.catalog.NamespaceChange.SetProperty
- value() - Method in class org.apache.spark.sql.connector.catalog.TableChange.SetProperty
- value() - Method in class org.apache.spark.sql.connector.catalog.ViewChange.SetProperty
- value() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
- value() - Method in class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
- value() - Method in interface org.apache.spark.sql.connector.expressions.Literal
-
Returns the literal value.
- value() - Method in interface org.apache.spark.sql.connector.metric.CustomTaskMetric
-
Returns the long value of custom task metric.
- value() - Method in class org.apache.spark.sql.sources.CollatedEqualNullSafe
- value() - Method in class org.apache.spark.sql.sources.CollatedEqualTo
- value() - Method in class org.apache.spark.sql.sources.CollatedGreaterThan
- value() - Method in class org.apache.spark.sql.sources.CollatedGreaterThanOrEqual
- value() - Method in class org.apache.spark.sql.sources.CollatedLessThan
- value() - Method in class org.apache.spark.sql.sources.CollatedLessThanOrEqual
- value() - Method in class org.apache.spark.sql.sources.CollatedStringContains
- value() - Method in class org.apache.spark.sql.sources.CollatedStringEndsWith
- value() - Method in class org.apache.spark.sql.sources.CollatedStringStartsWith
- value() - Method in class org.apache.spark.sql.sources.EqualNullSafe
- value() - Method in class org.apache.spark.sql.sources.EqualTo
- value() - Method in class org.apache.spark.sql.sources.GreaterThan
- value() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
- value() - Method in class org.apache.spark.sql.sources.LessThan
- value() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
- value() - Method in class org.apache.spark.sql.sources.StringContains
- value() - Method in class org.apache.spark.sql.sources.StringEndsWith
- value() - Method in class org.apache.spark.sql.sources.StringStartsWith
- value() - Method in class org.apache.spark.sql.util.MapperRowCounter
- value() - Method in class org.apache.spark.status.api.v1.AccumulableInfo
- value() - Method in class org.apache.spark.status.api.v1.sql.Metric
- value() - Method in class org.apache.spark.status.LiveRDDPartition
- value() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
- value() - Method in class org.apache.spark.util.AccumulatorV2
-
Defines the current value of this accumulator
- value() - Method in class org.apache.spark.util.CollectionAccumulator
- value() - Method in class org.apache.spark.util.DoubleAccumulator
- value() - Method in class org.apache.spark.util.LongAccumulator
- value() - Method in class org.apache.spark.util.SerializableConfiguration
- VALUE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- VALUE1_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- VALUE2_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- valueArray() - Method in class org.apache.spark.sql.vectorized.ColumnarMap
- valueContainsNull() - Method in class org.apache.spark.sql.types.MapType
- valueIsNullError(int) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
- valueOf(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
Deprecated.
- valueOf(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
Deprecated.
- valueOf(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
-
Deprecated.
- valueOf(int) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
Deprecated.
- valueOf(Descriptors.EnumValueDescriptor) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
Returns the enum constant of this class with the specified name.
- valueOf(Descriptors.EnumValueDescriptor) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(Descriptors.EnumValueDescriptor) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.JobExecutionStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.launcher.SparkAppHandle.State
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.QueryContextType
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Mode
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.catalog.TableCatalogCapability
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.catalog.TableWritePrivilege
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.expressions.NullOrdering
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.expressions.SortDirection
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.read.Scan.ColumnarSupportMode
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.connector.write.RowLevelOperation.Command
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.sql.SaveMode
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.api.v1.ApplicationStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.api.v1.StageStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.api.v1.TaskSorting
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.api.v1.TaskStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.storage.StorageLevelMapper
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.streaming.StreamingContextState
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.types.variant.VariantUtil.Type
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.util.sketch.BloomFilter.Version
-
Returns the enum constant of this class with the specified name.
- valueOf(String) - Static method in enum class org.apache.spark.util.sketch.CountMinSketch.Version
-
Returns the enum constant of this class with the specified name.
- values() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the values of each tuple.
- values() - Static method in class org.apache.spark.ErrorMessageFormat
- values() - Static method in enum class org.apache.spark.graphx.impl.EdgeActiveness
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.JobExecutionStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.launcher.SparkAppHandle.State
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- values() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- values() - Method in class org.apache.spark.ml.linalg.DenseMatrix
- values() - Method in class org.apache.spark.ml.linalg.DenseVector
- values() - Method in class org.apache.spark.ml.linalg.SparseMatrix
- values() - Method in class org.apache.spark.ml.linalg.SparseVector
- values() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
- values() - Method in class org.apache.spark.mllib.linalg.DenseVector
- values() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
- values() - Method in class org.apache.spark.mllib.linalg.SparseVector
- values() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- values() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- values() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- values() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- values() - Static method in enum class org.apache.spark.QueryContextType
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in class org.apache.spark.rdd.CheckpointState
- values() - Static method in class org.apache.spark.rdd.DeterministicLevel
- values() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the values of each tuple.
- values() - Static method in class org.apache.spark.RequestMethod
- values() - Static method in class org.apache.spark.scheduler.SchedulingMode
- values() - Static method in class org.apache.spark.scheduler.TaskLocality
- values() - Static method in enum class org.apache.spark.sql.avro.AvroCompressionCodec
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.catalog.procedures.ProcedureParameter.Mode
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.catalog.TableCapability
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.catalog.TableCatalogCapability
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.catalog.TableWritePrivilege
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.expressions.NullOrdering
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.expressions.SortDirection
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.read.Scan.ColumnarSupportMode
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.connector.write.RowLevelOperation.Command
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.sql.SaveMode
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Method in class org.apache.spark.sql.sources.CollatedIn
- values() - Method in class org.apache.spark.sql.sources.In
- values() - Method in interface org.apache.spark.sql.streaming.MapState
-
Get the list of values present in the map associated with the grouping key.
- values() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
- values() - Static method in enum class org.apache.spark.status.api.v1.ApplicationStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.api.v1.StageStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.api.v1.streaming.BatchStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.api.v1.TaskSorting
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.api.v1.TaskStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.DeterministicLevel
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.JobExecutionStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.status.protobuf.StoreTypes.StageStatus
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.storage.StorageLevelMapper
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- values() - Static method in enum class org.apache.spark.streaming.StreamingContextState
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in class org.apache.spark.TaskState
- values() - Static method in enum class org.apache.spark.types.variant.VariantUtil.Type
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.util.sketch.BloomFilter.Version
-
Returns an array containing the constants of this enum class, in the order they are declared.
- values() - Static method in enum class org.apache.spark.util.sketch.CountMinSketch.Version
-
Returns an array containing the constants of this enum class, in the order they are declared.
- VALUES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
- ValuesHolder<T> - Interface in org.apache.spark.storage.memory
- valueSize(byte[], int) - Static method in class org.apache.spark.types.variant.VariantUtil
- ValueState<S> - Interface in org.apache.spark.sql.streaming
-
Interface used for arbitrary stateful operations with the v2 API to capture single value state.
- valueType() - Method in class org.apache.spark.sql.types.MapType
- var_pop(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population variance of the values in a group.
- var_pop(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population variance of the values in a group.
- var_samp(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the unbiased variance of the values in a group.
- var_samp(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the unbiased variance of the values in a group.
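A short sketch contrasting the two variance aggregates, assuming a SparkSession spark: var_pop divides by n, var_samp by n - 1.

    import org.apache.spark.sql.functions.{var_pop, var_samp}
    import spark.implicits._

    val df = Seq(1.0, 2.0, 3.0, 4.0).toDF("x")
    df.agg(var_pop("x"), var_samp("x")).show()
    // population variance = 1.25, sample variance = 5/3 (about 1.6667)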
- VarcharType - Class in org.apache.spark.sql.types
- VarcharType(int) - Constructor for class org.apache.spark.sql.types.VarcharType
- variableDeclarationNotAllowedInScope(Origin, Seq<String>) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- variableDeclarationOnlyAtBeginning(Origin, Seq<String>) - Static method in class org.apache.spark.sql.errors.SqlScriptingErrors
- variance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population variance of this RDD's elements.
- variance() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Unbiased estimate of sample variance of each dimension.
- variance() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sample variance vector.
- variance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population variance of this RDD's elements.
- variance() - Method in class org.apache.spark.util.StatCounter
-
Return the population variance of the values.
- variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
- variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
- variance(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
- variance(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for var_samp.
- variance(Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- variance(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for var_samp.
- variance(Column, Column) - Static method in class org.apache.spark.ml.stat.Summarizer
- Variance - Class in org.apache.spark.mllib.tree.impurity
-
Class for calculating variance during regression
- Variance() - Constructor for class org.apache.spark.mllib.tree.impurity.Variance
- varianceCol() - Method in interface org.apache.spark.ml.param.shared.HasVarianceCol
-
Param for Column name for the biased sample variance of prediction.
- varianceCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- varianceCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- variancePower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
- variancePower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- variancePower() - Method in interface org.apache.spark.ml.regression.GeneralizedLinearRegressionBase
-
Param for the power in the variance function of the Tweedie distribution which provides the relationship between the variance and mean of the distribution.
- variancePower() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- varianceThreshold() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelector
- varianceThreshold() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- varianceThreshold() - Method in interface org.apache.spark.ml.feature.VarianceThresholdSelectorParams
-
Param for variance threshold.
- VarianceThresholdSelector - Class in org.apache.spark.ml.feature
-
Feature selector that removes all low-variance features.
- VarianceThresholdSelector() - Constructor for class org.apache.spark.ml.feature.VarianceThresholdSelector
- VarianceThresholdSelector(String) - Constructor for class org.apache.spark.ml.feature.VarianceThresholdSelector
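A minimal sketch of the selector, assuming a SparkSession spark: features whose sample variance does not exceed the threshold are dropped, so the constant middle feature below is removed.

    import org.apache.spark.ml.feature.VarianceThresholdSelector
    import org.apache.spark.ml.linalg.Vectors

    val df = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(1.0, 0.0, 7.0)),
      Tuple1(Vectors.dense(2.0, 0.0, 8.0)),
      Tuple1(Vectors.dense(3.0, 0.0, 9.0))
    )).toDF("features")

    val selector = new VarianceThresholdSelector()
      .setVarianceThreshold(0.5)
      .setFeaturesCol("features")
      .setOutputCol("selected")

    selector.fit(df).transform(df).show(truncate = false)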
- VarianceThresholdSelectorModel - Class in org.apache.spark.ml.feature
-
Model fitted by VarianceThresholdSelector.
- VarianceThresholdSelectorParams - Interface in org.apache.spark.ml.feature
-
Params for VarianceThresholdSelector and VarianceThresholdSelectorModel.
- Variant - Class in org.apache.spark.types.variant
-
This class is structurally equivalent to VariantVal.
- Variant(byte[], byte[]) - Constructor for class org.apache.spark.types.variant.Variant
- variant_get(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Extracts a sub-variant from v according to path, and then casts the sub-variant to targetType.
- Variant.ObjectField - Class in org.apache.spark.types.variant
- VariantBuilder - Class in org.apache.spark.types.variant
-
Build variant value and metadata by parsing JSON values.
- VariantBuilder(boolean) - Constructor for class org.apache.spark.types.variant.VariantBuilder
- VariantBuilder.FieldEntry - Class in org.apache.spark.types.variant
- variantSizeLimitError(int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- VariantSizeLimitException - Exception in org.apache.spark.types.variant
-
An exception indicating that we are attempting to build a variant with its value or metadata exceeding the 16MiB size limit.
- VariantSizeLimitException() - Constructor for exception org.apache.spark.types.variant.VariantSizeLimitException
- VariantType - Class in org.apache.spark.sql.types
-
The data type representing semi-structured values with arbitrary hierarchical data structures.
- VariantType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the VariantType object.
- VariantType() - Constructor for class org.apache.spark.sql.types.VariantType
- VariantUtil - Class in org.apache.spark.types.variant
-
This class defines constants related to the variant format and provides functions for manipulating variant binaries.
- VariantUtil() - Constructor for class org.apache.spark.types.variant.VariantUtil
- VariantUtil.ArrayHandler<T> - Interface in org.apache.spark.types.variant
- VariantUtil.IntervalFields - Class in org.apache.spark.types.variant
- VariantUtil.ObjectHandler<T> - Interface in org.apache.spark.types.variant
- VariantUtil.Type - Enum Class in org.apache.spark.types.variant
- vClassTag() - Method in class org.apache.spark.api.java.JavaHadoopRDD
- vClassTag() - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
- vClassTag() - Method in class org.apache.spark.api.java.JavaPairRDD
- vClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
- vClassTag() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
- vector() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
- vector() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
- Vector - Interface in org.apache.spark.ml.linalg
-
Represents a numeric vector, whose index type is Int and value type is Double.
- Vector - Interface in org.apache.spark.mllib.linalg
-
Represents a numeric vector, whose index type is Int and value type is Double.
- vector_to_array(Column, String) - Static method in class org.apache.spark.ml.functions
-
Converts a column of MLlib sparse/dense vectors into a column of dense arrays.
- VectorAssembler - Class in org.apache.spark.ml.feature
-
A feature transformer that merges multiple columns into a vector column.
- VectorAssembler() - Constructor for class org.apache.spark.ml.feature.VectorAssembler
- VectorAssembler(String) - Constructor for class org.apache.spark.ml.feature.VectorAssembler
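A small, self-contained sketch of the transformer; the column names are illustrative and a SparkSession spark is assumed:

    import org.apache.spark.ml.feature.VectorAssembler

    val df = spark.createDataFrame(Seq(
      (0, 18.0, 1.0),
      (1, 32.0, 0.0)
    )).toDF("id", "age", "clicked")

    // Merge the numeric inputs into a single "features" vector column.
    val assembler = new VectorAssembler()
      .setInputCols(Array("age", "clicked"))
      .setOutputCol("features")

    assembler.transform(df).select("id", "features").show()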
- VectorAttributeRewriter - Class in org.apache.spark.ml.feature
-
Utility transformer that rewrites Vector attribute names via prefix replacement.
- VectorAttributeRewriter(String, String, Map<String, String>) - Constructor for class org.apache.spark.ml.feature.VectorAttributeRewriter
- VectorAttributeRewriter(String, Map<String, String>) - Constructor for class org.apache.spark.ml.feature.VectorAttributeRewriter
- vectorCol() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- VectorImplicits - Class in org.apache.spark.mllib.linalg
- VectorImplicits() - Constructor for class org.apache.spark.mllib.linalg.VectorImplicits
- VectorIndexer - Class in org.apache.spark.ml.feature
-
Class for indexing categorical feature columns in a dataset of Vector.
- VectorIndexer() - Constructor for class org.apache.spark.ml.feature.VectorIndexer
- VectorIndexer(String) - Constructor for class org.apache.spark.ml.feature.VectorIndexer
- VectorIndexerModel - Class in org.apache.spark.ml.feature
-
Model fitted by VectorIndexer.
- VectorIndexerParams - Interface in org.apache.spark.ml.feature
-
Private trait for params for VectorIndexer and VectorIndexerModel
- Vectors - Class in org.apache.spark.ml.linalg
-
Factory methods for Vector.
- Vectors - Class in org.apache.spark.mllib.linalg
-
Factory methods for Vector.
- Vectors() - Constructor for class org.apache.spark.ml.linalg.Vectors
- Vectors() - Constructor for class org.apache.spark.mllib.linalg.Vectors
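For reference, a quick sketch of the factory methods (ml.linalg shown; the mllib.linalg API is analogous):

    import org.apache.spark.ml.linalg.Vectors

    val dense = Vectors.dense(1.0, 0.0, 3.0)
    // Sparse: size 3, with non-zero values at indices 0 and 2.
    val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))
    // Both represent the same vector [1.0, 0.0, 3.0] under different storage.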
- vectorSize() - Method in class org.apache.spark.ml.feature.Word2Vec
- vectorSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
The dimension of the vectors used to represent each word.
- vectorSize() - Method in class org.apache.spark.ml.feature.Word2VecModel
- VectorSizeHint - Class in org.apache.spark.ml.feature
-
A feature transformer that adds size information to the metadata of a vector column.
- VectorSizeHint() - Constructor for class org.apache.spark.ml.feature.VectorSizeHint
- VectorSizeHint(String) - Constructor for class org.apache.spark.ml.feature.VectorSizeHint
- VectorSlicer - Class in org.apache.spark.ml.feature
-
This class takes a feature vector and outputs a new feature vector with a subarray of the original features.
- VectorSlicer() - Constructor for class org.apache.spark.ml.feature.VectorSlicer
- VectorSlicer(String) - Constructor for class org.apache.spark.ml.feature.VectorSlicer
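A brief sketch of slicing by index; the column names are illustrative and a SparkSession spark is assumed:

    import org.apache.spark.ml.feature.VectorSlicer
    import org.apache.spark.ml.linalg.Vectors

    val df = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(-2.0, 2.3, 0.0))
    )).toDF("userFeatures")

    val slicer = new VectorSlicer()
      .setInputCol("userFeatures")
      .setOutputCol("features")
      .setIndices(Array(0, 2))  // keep the first and third features

    slicer.transform(df).show(truncate = false)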
- VectorTransformer - Interface in org.apache.spark.mllib.feature
-
Trait for transformation of a vector
- VectorType() - Static method in class org.apache.spark.ml.linalg.SQLDataTypes
-
Data type for Vector.
- vectorTypes(Seq<Attribute>, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
The exact java types of the columns that are output in columnar processing mode.
- VectorUDT - Class in org.apache.spark.mllib.linalg
-
:: AlphaComponent ::
- VectorUDT() - Constructor for class org.apache.spark.mllib.linalg.VectorUDT
- vendor() - Method in class org.apache.spark.resource.ExecutorResourceRequest
- vendor() - Method in class org.apache.spark.resource.ResourceRequest
- VENDOR() - Static method in class org.apache.spark.resource.ResourceUtils
- VENDOR_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- version() - Method in class org.apache.spark.api.java.JavaSparkContext
-
The version of Spark on which this application is running.
- version() - Method in class org.apache.spark.SparkContext
-
The version of Spark on which this application is running.
- version() - Method in class org.apache.spark.sql.api.SparkSession
-
The version of Spark on which this application is running.
- version() - Static method in class org.apache.spark.sql.functions
-
Returns the Spark version.
- version() - Method in class org.apache.spark.sql.SparkSession
- VERSION - Static variable in class org.apache.spark.types.variant.VariantUtil
- VERSION_MASK - Static variable in class org.apache.spark.types.variant.VariantUtil
- VersionInfo - Class in org.apache.spark.status.api.v1
- VersionUtils - Class in org.apache.spark.util
-
Utilities for working with Spark version strings
- VersionUtils() - Constructor for class org.apache.spark.util.VersionUtils
- vertcat(Matrix[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Vertically concatenate a sequence of matrices.
- vertcat(Matrix[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Vertically concatenate a sequence of matrices.
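A short sketch of vertical concatenation (ml.linalg shown; dense values are column-major):

    import org.apache.spark.ml.linalg.{Matrices, Matrix}

    val a: Matrix = Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0))
    val b: Matrix = Matrices.dense(2, 2, Array(5.0, 6.0, 7.0, 8.0))
    val stacked = Matrices.vertcat(Array(a, b))  // a 4 x 2 matrix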
- vertexAttr(long) - Method in class org.apache.spark.graphx.EdgeTriplet
-
Get the vertex object for the given vertex in the edge.
- VertexPartitionBaseOpsConstructor<T extends org.apache.spark.graphx.impl.VertexPartitionBase<Object>> - Interface in org.apache.spark.graphx.impl
-
A typeclass for subclasses of VertexPartitionBase representing the ability to wrap them in a VertexPartitionBaseOps.
- VertexRDD<VD> - Class in org.apache.spark.graphx
-
Extends RDD[(VertexId, VD)] by ensuring that there is only one entry for each vertex and by pre-indexing the entries for fast, efficient joins.
- VertexRDD(SparkContext, Seq<Dependency<?>>) - Constructor for class org.apache.spark.graphx.VertexRDD
- VertexRDDImpl<VD> - Class in org.apache.spark.graphx.impl
- vertices() - Method in class org.apache.spark.graphx.Graph
-
An RDD containing the vertices and their associated attributes.
- vertices() - Method in class org.apache.spark.graphx.impl.GraphImpl
- View - Interface in org.apache.spark.sql.connector.catalog
-
An interface representing a persisted view.
- viewAlreadyExistsError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- ViewCatalog - Interface in org.apache.spark.sql.connector.catalog
-
Catalog methods for working with views.
- ViewChange - Interface in org.apache.spark.sql.connector.catalog
-
ViewChange subclasses represent requested changes to a view.
- ViewChange.RemoveProperty - Class in org.apache.spark.sql.connector.catalog
- ViewChange.SetProperty - Class in org.apache.spark.sql.connector.catalog
- viewDepthExceedsMaxResolutionDepthError(TableIdentifier, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- viewExists(Identifier) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Test whether a view exists using an identifier from the catalog.
- ViewInfo - Class in org.apache.spark.sql.connector.catalog
-
A class that holds view information.
- ViewInfo(Identifier, String, String, String[], StructType, String[], String[], String[], Map<String, String>) - Constructor for class org.apache.spark.sql.connector.catalog.ViewInfo
- viewToSeq(KVStoreView<T>) - Static method in class org.apache.spark.status.KVUtils
-
Turns a KVStoreView into a Scala sequence.
- viewToSeq(KVStoreView<T>, int, int, Function1<T, Object>) - Static method in class org.apache.spark.status.KVUtils
-
Turns an interval of KVStoreView into a Scala sequence, applying a filter.
- viewToSeq(KVStoreView<T>, int, Function1<T, Object>) - Static method in class org.apache.spark.status.KVUtils
-
Turns a KVStoreView into a Scala sequence, applying a filter.
- visible() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockVisibility
- visit(int, int, String, String, String, String[]) - Method in class org.apache.spark.util.InnerClosureFinder
- visitAggregateFunction(String, boolean, String[]) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitAggregateFunction(String, boolean, String[]) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
- visitAggregateFunction(String, boolean, String[]) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitAggregateFunction(String, boolean, String[]) - Method in class org.apache.spark.sql.jdbc.OracleDialect.OracleSQLBuilder
- visitCast(String, DataType, DataType) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitContains(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitEndsWith(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitExtract(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitInverseDistributionFunction(String, boolean, String[], String[]) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitLiteral(Literal<?>) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitMethod(int, String, String, String, String[]) - Method in class org.apache.spark.util.InnerClosureFinder
- visitMethod(int, String, String, String, String[]) - Method in class org.apache.spark.util.ReturnStatementFinder
- visitNamedReference(NamedReference) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitOverlay(String[]) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitSortOrder(String, SortDirection, NullOrdering) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
- visitSortOrder(String, SortDirection, NullOrdering) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitSQLFunction(String, String[]) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- visitSQLFunction(String, String[]) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitStartsWith(String, String) - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLBuilder
- visitTrim(String, String[]) - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLBuilder
- vizHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
- vManifest() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
- vocabSize() - Method in class org.apache.spark.ml.clustering.LDAModel
- vocabSize() - Method in class org.apache.spark.ml.feature.CountVectorizer
- vocabSize() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- vocabSize() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Max size of the vocabulary.
- vocabSize() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
- vocabSize() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Vocabulary size (number of terms in the vocabulary).
- vocabSize() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
- vocabulary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- VocabWord - Class in org.apache.spark.mllib.feature
-
Entry in vocabulary
- VocabWord(String, long, int[], int[], int) - Constructor for class org.apache.spark.mllib.feature.VocabWord
- VoidFunction<T> - Interface in org.apache.spark.api.java.function
-
A function with no return value.
- VoidFunction2<T1, T2> - Interface in org.apache.spark.api.java.function
-
A two-argument function that takes arguments of type T1 and T2 with no return value.
- Vote() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
W
- w(boolean) - Method in class org.apache.spark.ml.param.BooleanParam
-
Creates a param pair with the given value (for Java).
- w(double) - Method in class org.apache.spark.ml.param.DoubleParam
-
Creates a param pair with the given value (for Java).
- w(float) - Method in class org.apache.spark.ml.param.FloatParam
-
Creates a param pair with the given value (for Java).
- w(int) - Method in class org.apache.spark.ml.param.IntParam
-
Creates a param pair with the given value (for Java).
- w(long) - Method in class org.apache.spark.ml.param.LongParam
-
Creates a param pair with the given value (for Java).
- w(List<Double>) - Method in class org.apache.spark.ml.param.DoubleArrayParam
-
Creates a param pair with a `java.util.List` of values (for Java and Python).
- w(List<Integer>) - Method in class org.apache.spark.ml.param.IntArrayParam
-
Creates a param pair with a `java.util.List` of values (for Java and Python).
- w(List<String>) - Method in class org.apache.spark.ml.param.StringArrayParam
-
Creates a param pair with a `java.util.List` of values (for Java and Python).
- w(List<List<Double>>) - Method in class org.apache.spark.ml.param.DoubleArrayArrayParam
-
Creates a param pair with a `java.util.List` of values (for Java and Python).
- w(T) - Method in class org.apache.spark.ml.param.Param
-
Creates a param pair with the given value (for Java).
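The w(...) methods above build ParamPair values so Java callers can set params without Scala syntax. A minimal, illustrative sketch (a training Dataset named training is assumed):

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.classification.LogisticRegressionModel;

    LogisticRegression lr = new LogisticRegression();
    // Each w(...) call pairs a Param with a value; fit applies them for this call only.
    LogisticRegressionModel model =
        lr.fit(training, lr.maxIter().w(10), lr.regParam().w(0.01));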
- waitingForAsyncReregistrationError(Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- waitingForReplicationToFinishError(Throwable) - Static method in class org.apache.spark.errors.SparkCoreErrors
- waitTillTime(long) - Method in interface org.apache.spark.util.Clock
-
Wait until the wall clock reaches at least the given time.
- waitUntilEmpty(long) - Method in class org.apache.spark.scheduler.AsyncEventQueue
-
For testing only.
- warmUp(SparkContext) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
Warms up the SparkContext on the master and executors by running tasks to force the JIT to kick in before the real workload starts.
- warnOnWastedResources(ResourceProfile, SparkConf, Option<Object>) - Static method in class org.apache.spark.resource.ResourceUtils
- weakIntern(String) - Static method in class org.apache.spark.util.Utils
-
String interning to reduce memory usage.
- weekday(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the day of the week for date/timestamp (0 = Monday, 1 = Tuesday, ..., 6 = Sunday).
- weekofyear(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the week number as an integer from a given date/timestamp/string.
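A short sketch of both date functions (illustrative; assumes a Dataset df with a date column named event_date):

    import static org.apache.spark.sql.functions.*;

    // weekday: 0 = Monday ... 6 = Sunday; weekofyear: week number within the year.
    df.select(
        weekday(col("event_date")).alias("dow"),
        weekofyear(col("event_date")).alias("week"));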
- WeibullGenerator - Class in org.apache.spark.mllib.random
-
Generates i.i.d. samples from the Weibull distribution.
- WeibullGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.WeibullGenerator
- weight() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Weighted count of instances in this aggregator.
- weight() - Method in interface org.apache.spark.scheduler.Schedulable
- weightCol() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Field in "predictions" which gives the weight of each instance.
- weightCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- weightCol() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
- weightCol() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- weightCol() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
- weightCol() - Method in class org.apache.spark.ml.classification.FMClassifier
- weightCol() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- weightCol() - Method in class org.apache.spark.ml.classification.GBTClassifier
- weightCol() - Method in class org.apache.spark.ml.classification.LinearSVC
- weightCol() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- weightCol() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
- weightCol() - Method in class org.apache.spark.ml.classification.LogisticRegression
- weightCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
- weightCol() - Method in class org.apache.spark.ml.classification.LogisticRegressionSummaryImpl
- weightCol() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationSummaryImpl
- weightCol() - Method in class org.apache.spark.ml.classification.NaiveBayes
- weightCol() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- weightCol() - Method in class org.apache.spark.ml.classification.OneVsRest
- weightCol() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- weightCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- weightCol() - Method in class org.apache.spark.ml.classification.RandomForestClassificationSummaryImpl
- weightCol() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
- weightCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeans
- weightCol() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- weightCol() - Method in class org.apache.spark.ml.clustering.GaussianMixture
- weightCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- weightCol() - Method in class org.apache.spark.ml.clustering.KMeans
- weightCol() - Method in class org.apache.spark.ml.clustering.KMeansModel
- weightCol() - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
- weightCol() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
- weightCol() - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
- weightCol() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
- weightCol() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
- weightCol() - Method in interface org.apache.spark.ml.param.shared.HasWeightCol
-
Param for weight column name.
- weightCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
- weightCol() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.FMRegressor
- weightCol() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.GBTRegressor
- weightCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
- weightCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.IsotonicRegression
- weightCol() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.LinearRegression
- weightCol() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- weightCol() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
- weightedFalseNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of false negatives.
- weightedFalsePositiveRate() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted false positive rate.
- weightedFalsePositiveRate() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- weightedFalsePositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of false positives.
- weightedFMeasure() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted averaged f1-measure.
- weightedFMeasure() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- weightedFMeasure(double) - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted averaged f-measure.
- weightedFMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns weighted averaged f-measure.
- weightedNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of negatives.
- weightedPositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of positives.
- weightedPrecision() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted averaged precision.
- weightedPrecision() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- weightedRecall() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted averaged recall.
- weightedRecall() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- weightedTrueNegatives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of true negatives.
- weightedTruePositiveRate() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns weighted true positive rate.
- weightedTruePositiveRate() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
- weightedTruePositives() - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryConfusionMatrix
-
Weighted number of true positives.
- weights() - Method in interface org.apache.spark.ml.ann.LayerModel
- weights() - Method in interface org.apache.spark.ml.ann.TopologyModel
- weights() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- weights() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
- weights() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
- weights() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
- weights() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
- weights() - Method in class org.apache.spark.mllib.classification.SVMModel
- weights() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
- weights() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
- weights() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
- weights() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
- weights() - Method in class org.apache.spark.mllib.regression.LassoModel
- weights() - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
- weights() - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
- weightSize() - Method in interface org.apache.spark.ml.ann.Layer
-
Number of weights used to allocate memory for the weights vector.
- weightSum() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- weightSum() - Method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
- weightSum() - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
- weightSum() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Sum of weights.
- weightSum() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sum of weights.
- weightSumVec() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
- WelchTTest - Class in org.apache.spark.mllib.stat.test
-
Performs Welch's 2-sample t-test.
- WelchTTest() - Constructor for class org.apache.spark.mllib.stat.test.WelchTTest
- when(Column, Object) - Method in class org.apache.spark.sql.Column
-
Evaluates a list of conditions and returns one of multiple possible result expressions.
- when(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Evaluates a list of conditions and returns one of multiple possible result expressions.
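A minimal sketch of when/otherwise chaining (illustrative; a df with a numeric age column is assumed):

    import static org.apache.spark.sql.functions.*;

    // Conditions are checked in order; otherwise(...) supplies the fallback.
    df.select(
        when(col("age").lt(18), "minor")
            .when(col("age").geq(65), "senior")
            .otherwise("adult")
            .alias("age_group"));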
- whenMatched() - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenMatched action without any condition.
- whenMatched(Column) - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenMatched action with a condition.
- WhenMatched<T> - Class in org.apache.spark.sql
-
A class for defining actions to be taken when matching rows in a DataFrame during a merge operation.
- whenNotMatched() - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenNotMatched action without any condition.
- whenNotMatched(Column) - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenNotMatched action with a condition.
- WhenNotMatched<T> - Class in org.apache.spark.sql
-
A class for defining actions to be taken when no matching rows are found in a DataFrame during a merge operation.
- whenNotMatchedBySource() - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenNotMatchedBySource action without any condition.
- whenNotMatchedBySource(Column) - Method in class org.apache.spark.sql.MergeIntoWriter
-
Initialize a WhenNotMatchedBySource action with a condition.
- WhenNotMatchedBySource<T> - Class in org.apache.spark.sql
-
A class for defining actions to be performed when there is no match by source during a merge operation in a MergeIntoWriter.
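Taken together, the builders above express a SQL MERGE. A minimal sketch (illustrative; assumes a source Dataset aliased source, a target table named target, both with an id column, and a catalog that supports MERGE):

    import static org.apache.spark.sql.functions.*;

    source.mergeInto("target", col("source.id").equalTo(col("target.id")))
        .whenMatched().updateAll()           // update rows present in both
        .whenNotMatched().insertAll()        // insert rows only in the source
        .whenNotMatchedBySource().delete()   // drop target rows missing from the source
        .merge();                            // execute the merge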
- where(String) - Method in class org.apache.spark.sql.api.Dataset
-
Filters rows using the given SQL expression.
- where(String) - Method in class org.apache.spark.sql.Dataset
- where(Column) - Method in class org.apache.spark.sql.api.Dataset
-
Filters rows using the given condition.
- where(Column) - Method in class org.apache.spark.sql.Dataset
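Both overloads express the same filter; a one-line sketch of each (illustrative):

    import static org.apache.spark.sql.functions.col;

    df.where("age > 21");          // SQL expression string
    df.where(col("age").gt(21));   // Column-based condition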
- WhileStatementExec - Class in org.apache.spark.sql.scripting
-
Executable node for WhileStatement.
- WhileStatementExec(SingleStatementExec, CompoundBodyExec, Option<String>, SparkSession) - Constructor for class org.apache.spark.sql.scripting.WhileStatementExec
- wholeStageCodegenId() - Method in class org.apache.spark.status.api.v1.sql.Node
- wholeTextFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
- wholeTextFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
- wholeTextFiles(String, int) - Method in class org.apache.spark.SparkContext
-
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
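A minimal sketch (illustrative; assumes an existing JavaSparkContext jsc, and the path is a placeholder):

    import org.apache.spark.api.java.JavaPairRDD;

    // Each element is a (fileName, fileContent) pair; 4 is a minimum-partitions hint.
    JavaPairRDD<String, String> files =
        jsc.wholeTextFiles("hdfs://namenode/data/docs", 4);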
- width() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Width of this CountMinSketch.
- width_bucket(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the bucket number into which the value of this expression would fall after being evaluated.
- window(Column, String) - Static method in class org.apache.spark.sql.functions
-
Generates tumbling time windows given a timestamp specifying column.
- window(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Bucketize rows into one or more time windows given a timestamp specifying column.
- window(Column, String, String, String) - Static method in class org.apache.spark.sql.functions
-
Bucketize rows into one or more time windows given a timestamp specifying column.
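A minimal sketch of the sliding-window overload (illustrative; a df with a timestamp column ts is assumed):

    import static org.apache.spark.sql.functions.*;

    // 10-minute windows that slide every 5 minutes, so each row lands in two windows.
    df.groupBy(window(col("ts"), "10 minutes", "5 minutes"))
      .count();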
- window(Duration) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
- window(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream which is computed based on windowed batches of this DStream.
- window(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
- window(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
- window(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream which is computed based on windowed batches of this DStream.
- window(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains all the elements seen in a sliding window of time over this DStream.
- Window - Class in org.apache.spark.sql.expressions
-
Utility functions for defining window in DataFrames.
- Window() - Constructor for class org.apache.spark.sql.expressions.Window
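A minimal sketch of building a WindowSpec and applying a window function over it (illustrative; a df with dept and salary columns is assumed):

    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.*;

    // Rank rows by salary within each department.
    WindowSpec byDept = Window.partitionBy("dept").orderBy(col("salary").desc());
    df.withColumn("rank", rank().over(byDept));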
- window_time(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the event time from the window column.
- windowAggregateFunctionWithFilterNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowFrameNotMatchRequiredFrameError(SpecifiedWindowFrame, WindowFrame) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowFunctionInAggregateFilterError(Expression, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowFunctionInsideAggregateFunctionNotAllowedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowFunctionNotAllowedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowFunctionWithWindowFrameNotOrderedError(WindowFunction) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- windowsDrive() - Static method in class org.apache.spark.util.Utils
-
Pattern for matching a Windows drive, which contains only a single alphabetic character.
- windowSize() - Method in class org.apache.spark.ml.feature.Word2Vec
- windowSize() - Method in interface org.apache.spark.ml.feature.Word2VecBase
-
The window size (context words from [-window, window]).
- windowSize() - Method in class org.apache.spark.ml.feature.Word2VecModel
- WindowSpec - Class in org.apache.spark.sql.expressions
-
A window specification that defines the partitioning, ordering, and frame boundaries.
- windowSpecificationNotDefinedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- wipe() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- withCentering() - Method in class org.apache.spark.ml.feature.RobustScaler
- withCentering() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- withCentering() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
-
Whether to center the data with median before scaling.
- withColumn(String, Column) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by adding a column or replacing the existing column that has the same name.
- withColumn(String, Column) - Method in class org.apache.spark.sql.Dataset
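A one-line sketch (illustrative; the column names are placeholders):

    import static org.apache.spark.sql.functions.*;

    // Adds full_name, or replaces it if a column with that name already exists.
    df.withColumn("full_name", concat_ws(" ", col("first"), col("last")));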
- withColumnRenamed(String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset with a column renamed.
- withColumnRenamed(String, String) - Method in class org.apache.spark.sql.Dataset
- withColumns(String[]) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
The column names, following the dialect's SQL syntax.
- withColumns(Map<String, Column>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset by adding columns or replacing the existing columns that have the same names.
- withColumns(Map<String, Column>) - Method in class org.apache.spark.sql.Dataset
- withColumns(Map<String, Column>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset by adding columns or replacing the existing columns that have the same names.
- withColumns(Map<String, Column>) - Method in class org.apache.spark.sql.Dataset
- withColumnsRenamed(Map<String, String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Java-specific) Returns a new Dataset with columns renamed.
- withColumnsRenamed(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
- withColumnsRenamed(Map<String, String>) - Method in class org.apache.spark.sql.api.Dataset
-
(Scala-specific) Returns a new Dataset with columns renamed.
- withColumnsRenamed(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
- withComment(String) - Method in class org.apache.spark.sql.types.StructField
-
Updates the StructField with a new comment value.
- withContextClassLoader(ClassLoader, Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Run a segment of code using a different context class loader in the current thread.
- withCurrentDefaultValue(String) - Method in class org.apache.spark.sql.types.StructField
-
Updates the StructField with a new current default value.
- withDefaultOwnership(Map<String, String>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
- withDummyCallSite(SparkContext, Function0<T>) - Static method in class org.apache.spark.util.Utils
-
To avoid calling Utils.getCallSite for every single RDD we create in the body, set a dummy call site that RDDs use instead.
- withEdges(EdgeRDD<?>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
- withEdges(EdgeRDD<?>) - Method in class org.apache.spark.graphx.VertexRDD
-
Prepares this VertexRDD for efficient joins with the given EdgeRDD.
- withExistenceDefaultValue(String) - Method in class org.apache.spark.sql.types.StructField
-
Updates the StructField with a new existence default value.
- withExtensions(Function1<SparkSessionExtensions, BoxedUnit>) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Inject extensions into the SparkSession.
- withField(String, Column) - Method in class org.apache.spark.sql.Column
-
An expression that adds/replaces a field in a StructType by name.
- withFitEvent(Estimator<M>, Dataset<?>, Function0<M>) - Method in interface org.apache.spark.ml.MLEvents
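A minimal sketch of Column.withField, listed just above (illustrative; a df with a struct column address is assumed):

    import static org.apache.spark.sql.functions.*;

    // Replaces (or adds) the nested field address.city by name.
    df.withColumn("address", col("address").withField("city", lit("unknown")));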
- withGroupByColumns(String[]) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Constructs the GROUP BY clause following the dialect's SQL syntax.
- withHttpConnection(URL, String, Seq<Tuple2<String, String>>, Function1<HttpURLConnection, T>) - Static method in class org.apache.spark.TestUtils
- withHttpServer(String, Function1<URL, BoxedUnit>) - Static method in class org.apache.spark.TestUtils
- withIndex(int) - Method in class org.apache.spark.ml.attribute.Attribute
-
Copy with a new index.
- withIndex(int) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- withIndex(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
- withIndex(int) - Method in class org.apache.spark.ml.attribute.NumericAttribute
- withIndex(int) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- withLimit(int) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Saves the limit value used to construct the LIMIT clause.
- withListener(SparkContext, L, Function1<L, BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Runs some code with the given listener installed in the SparkContext.
- withListener(Function1<org.apache.spark.streaming.ui.StreamingJobProgressListener, T>) - Method in interface org.apache.spark.status.api.v1.streaming.BaseStreamingAppResource
- withLoadInstanceEvent(MLReader<T>, String, Function0<T>) - Method in interface org.apache.spark.ml.MLEvents
- withLock(Function2<BlockInfo, Condition, T>) - Method in class org.apache.spark.storage.BlockInfoWrapper
- withMapStatuses(Function1<MapStatus[], T>) - Method in class org.apache.spark.ShuffleStatus
-
Helper function which provides thread-safe access to the mapStatuses array.
- withMax(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy with a new max value.
- withMean() - Method in class org.apache.spark.ml.feature.StandardScaler
- withMean() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- withMean() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
-
Whether to center the data with mean before scaling.
- withMean() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- withMergeStatuses(Function1<org.apache.spark.scheduler.MergeStatus[], T>) - Method in class org.apache.spark.ShuffleStatus
- withMetadata(String, Metadata) - Method in class org.apache.spark.sql.api.Dataset
-
Returns a new Dataset by updating an existing column with metadata.
- withMetadata(String, Metadata) - Method in class org.apache.spark.sql.Dataset
- withMetadata(Metadata) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Include the content of an existing Metadata instance.
- withMin(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy with a new min value.
- withName(String) - Static method in class org.apache.spark.ErrorMessageFormat
- withName(String) - Method in class org.apache.spark.ml.attribute.Attribute
-
Copy with a new name.
- withName(String) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- withName(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
- withName(String) - Method in class org.apache.spark.ml.attribute.NumericAttribute
- withName(String) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
- withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
- withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
- withName(String) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
- withName(String) - Static method in class org.apache.spark.rdd.CheckpointState
- withName(String) - Static method in class org.apache.spark.rdd.DeterministicLevel
- withName(String) - Static method in class org.apache.spark.RequestMethod
- withName(String) - Static method in class org.apache.spark.scheduler.SchedulingMode
- withName(String) - Static method in class org.apache.spark.scheduler.TaskLocality
- withName(String) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction with a given name.
- withName(String) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
- withName(String) - Static method in class org.apache.spark.TaskState
- withNaNCheckCode(DataType, String, String, String, Function1<String, String>) - Static method in class org.apache.spark.sql.util.SQLOpenHashSet
- withNaNCheckFunc(DataType, SQLOpenHashSet<Object>, Function1<Object, BoxedUnit>, Function1<Object, BoxedUnit>) - Static method in class org.apache.spark.sql.util.SQLOpenHashSet
- withNoProgress(StreamingQueryUIData, Function0<T>, T) - Static method in class org.apache.spark.sql.streaming.ui.UIUtils
-
Execute a block of code when there is already one completed batch in the streaming query; otherwise return the default value.
- withNullCheckCode(boolean, boolean, String, String, String, Function2<String, String, String>, String) - Static method in class org.apache.spark.sql.util.SQLOpenHashSet
- withNullCheckFunc(DataType, SQLOpenHashSet<Object>, Function1<Object, BoxedUnit>, Function0<BoxedUnit>) - Static method in class org.apache.spark.sql.util.SQLOpenHashSet
- withNumberInvalid(Function0<Object>) - Static method in class org.apache.spark.sql.streaming.ui.UIUtils
-
Check whether number is valid; if not, return 0.0d.
- withNumValues(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy with a new numValues and empty values.
- withOffset(int) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Saves the offset value used to construct the OFFSET clause.
- withoutIndex() - Method in class org.apache.spark.ml.attribute.Attribute
-
Copy without the index.
- withoutIndex() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- withoutIndex() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- withoutIndex() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- withoutIndex() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- withoutMax() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy without the max value.
- withoutMin() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy without the min value.
- withoutName() - Method in class org.apache.spark.ml.attribute.Attribute
-
Copy without the name.
- withoutName() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
- withoutName() - Method in class org.apache.spark.ml.attribute.NominalAttribute
- withoutName() - Method in class org.apache.spark.ml.attribute.NumericAttribute
- withoutName() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
- withoutNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy without the numValues.
- withoutSparsity() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy without the sparsity.
- withoutStd() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy without the standard deviation.
- withoutSummary() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy without summary statistics.
- withoutValues() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
-
Copy without the values.
- withoutValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy without the values.
- withPathFilter(double, SparkSession, long, Function0<T>) - Static method in class org.apache.spark.ml.image.SamplePathFilter
-
Sets the HDFS PathFilter flag and then restores it.
- withPosition(Origin) - Method in exception org.apache.spark.sql.AnalysisException
- withPredicates(Predicate[], JDBCPartition) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Constructs the WHERE clause following the dialect's SQL syntax.
- withRecursiveFlag(boolean, SparkSession, Function0<T>) - Static method in class org.apache.spark.ml.image.RecursiveFlag
-
Sets the Spark recursive flag and then restores it.
- withReferences(Seq<NamedReference>) - Method in class org.apache.spark.sql.connector.expressions.ClusterByTransform
- withReferences(Seq<NamedReference>) - Method in interface org.apache.spark.sql.connector.expressions.RewritableTransform
-
Creates a copy of this transform with the new analyzed references.
- withResources(ResourceProfile) - Method in class org.apache.spark.api.java.JavaRDD
-
Specify a ResourceProfile to use when calculating this RDD.
- withResources(ResourceProfile) - Method in class org.apache.spark.rdd.RDD
-
Specify a ResourceProfile to use when calculating this RDD.
- withResources(Function0<T>) - Method in class org.apache.spark.sql.artifact.ArtifactManager
- withResourcesJson(String, Function1<String, Seq<T>>) - Static method in class org.apache.spark.resource.ResourceUtils
- withSaveInstanceEvent(MLWriter, String, Function0<BoxedUnit>) - Method in interface org.apache.spark.ml.MLEvents
- withScaling() - Method in class org.apache.spark.ml.feature.RobustScaler
- withScaling() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- withScaling() - Method in interface org.apache.spark.ml.feature.RobustScalerParams
-
Whether to scale the data to quantile range.
- withSchemaEvolution() - Method in class org.apache.spark.sql.MergeIntoWriter
-
Enable automatic schema evolution for this merge operation.
- withSortOrders(String[]) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Constructs the ORDER BY clause following the dialect's SQL syntax.
- withSparkUI(String, Option<String>, Function1<org.apache.spark.ui.SparkUI, T>) - Method in interface org.apache.spark.status.api.v1.UIRoot
-
Runs some code with the current SparkUI instance for the app / attempt.
- withSparsity(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy with a new sparsity.
- withStd() - Method in class org.apache.spark.ml.feature.StandardScaler
- withStd() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- withStd() - Method in interface org.apache.spark.ml.feature.StandardScalerParams
-
Whether to scale the data to unit standard deviation.
- withStd() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
- withStd(double) - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
Copy with a new standard deviation.
- withTableSample(TableSampleInfo) - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Constructs the table sample clause following the dialect's SQL syntax.
- withTransformEvent(Transformer, Dataset<?>, Function0<Dataset<Row>>) - Method in interface org.apache.spark.ml.MLEvents
- withUI(Function1<org.apache.spark.ui.SparkUI, T>) - Method in interface org.apache.spark.status.api.v1.BaseAppResource
- withValues(String[]) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy with new values and empty numValues.
- withValues(String, String) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
-
Copy with new values.
- withValues(String, String...) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy with new values and empty numValues.
- withValues(String, Seq<String>) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Copy with new values and empty numValues.
- withWatermark(String, String) - Method in class org.apache.spark.sql.api.Dataset
-
Defines an event time watermark for this Dataset.
- withWatermark(String, String) - Method in class org.apache.spark.sql.Dataset
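A minimal sketch of a watermarked streaming aggregation (illustrative; a streaming Dataset events with an eventTime timestamp column is assumed):

    import static org.apache.spark.sql.functions.*;

    // Rows more than 10 minutes behind the max observed event time may be dropped.
    events.withWatermark("eventTime", "10 minutes")
          .groupBy(window(col("eventTime"), "5 minutes"))
          .count();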
- word() - Method in class org.apache.spark.mllib.feature.VocabWord
- Word2Vec - Class in org.apache.spark.ml.feature
-
Word2Vec trains a model of Map(String, Vector), i.e. transforms a word into a code for further natural language processing or machine learning processing.
- Word2Vec - Class in org.apache.spark.mllib.feature
-
Word2Vec creates vector representation of words in a text corpus.
- Word2Vec() - Constructor for class org.apache.spark.ml.feature.Word2Vec
- Word2Vec() - Constructor for class org.apache.spark.mllib.feature.Word2Vec
- Word2Vec(String) - Constructor for class org.apache.spark.ml.feature.Word2Vec
- Word2VecBase - Interface in org.apache.spark.ml.feature
-
Params for Word2Vec and Word2VecModel.
- Word2VecModel - Class in org.apache.spark.ml.feature
-
Model fitted by Word2Vec.
- Word2VecModel - Class in org.apache.spark.mllib.feature
-
Word2Vec model. param: wordIndex maps each word to an index, which can retrieve the corresponding vector from wordVectors. param: wordVectors array of length numWords * vectorSize; the vector corresponding to the word mapped to index i can be retrieved by the slice (i * vectorSize, i * vectorSize + vectorSize).
- Word2VecModel(Map<String, float[]>) - Constructor for class org.apache.spark.mllib.feature.Word2VecModel
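A minimal sketch of the ML Word2Vec estimator (illustrative; docs is assumed to be a Dataset with a text column of tokenized words):

    import org.apache.spark.ml.feature.Word2Vec;
    import org.apache.spark.ml.feature.Word2VecModel;

    Word2Vec w2v = new Word2Vec()
        .setInputCol("text")        // array-of-strings column
        .setOutputCol("embedding")
        .setVectorSize(100)
        .setMinCount(5);
    Word2VecModel model = w2v.fit(docs);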
- Word2VecModel.Data$ - Class in org.apache.spark.ml.feature
- Word2VecModel.Word2VecModelWriter$ - Class in org.apache.spark.ml.feature
- Word2VecModelWriter$() - Constructor for class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
- WORKER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
- workerId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
- workerRemoved(String, String, String) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Process a removed worker.
- Workspace(int) - Constructor for class org.apache.spark.mllib.optimization.NNLS.Workspace
- wrapCallerStacktrace(T, String, int) - Static method in class org.apache.spark.util.ThreadUtils
-
Adjust the exception stack trace to wrap it with the caller-side thread stack trace.
- WRAPPER_NOT_SET - Enum constant in enum class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.WrapperCase
- wrapperClass() - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- wrapRDD(RDD<Double>) - Method in class org.apache.spark.api.java.JavaDoubleRDD
- wrapRDD(RDD<Tuple2<K, V>>) - Method in class org.apache.spark.api.java.JavaPairRDD
- wrapRDD(RDD<Tuple2<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
- wrapRDD(RDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
- wrapRDD(RDD<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
- wrapRDD(RDD<T>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
- wrapRDD(RDD<T>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
- WritableByteChannelWrapper - Interface in org.apache.spark.shuffle.api
-
:: Private :: A thin wrapper around a WritableByteChannel.
- write() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
- write() - Method in class org.apache.spark.ml.classification.FMClassificationModel
- write() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
- write() - Method in class org.apache.spark.ml.classification.LinearSVCModel
- write() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Returns an MLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
- write() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
- write() - Method in class org.apache.spark.ml.classification.OneVsRest
- write() - Method in class org.apache.spark.ml.classification.OneVsRestModel
- write() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
- write() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
- write() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
- write() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
Returns an MLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.ml.clustering.KMeansModel
-
Returns a GeneralMLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.ml.clustering.LocalLDAModel
- write() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
- write() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
- write() - Method in class org.apache.spark.ml.feature.ColumnPruner
- write() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
- write() - Method in class org.apache.spark.ml.feature.IDFModel
- write() - Method in class org.apache.spark.ml.feature.ImputerModel
- write() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
- write() - Method in class org.apache.spark.ml.feature.MinHashLSHModel
- write() - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
- write() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
- write() - Method in class org.apache.spark.ml.feature.PCAModel
- write() - Method in class org.apache.spark.ml.feature.RFormulaModel
- write() - Method in class org.apache.spark.ml.feature.RobustScalerModel
- write() - Method in class org.apache.spark.ml.feature.StandardScalerModel
- write() - Method in class org.apache.spark.ml.feature.StringIndexerModel
- write() - Method in class org.apache.spark.ml.feature.UnivariateFeatureSelectorModel
- write() - Method in class org.apache.spark.ml.feature.VarianceThresholdSelectorModel
- write() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
- write() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
- write() - Method in class org.apache.spark.ml.feature.Word2VecModel
- write() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
- write() - Method in class org.apache.spark.ml.Pipeline
- write() - Method in class org.apache.spark.ml.PipelineModel
- write() - Method in class org.apache.spark.ml.recommendation.ALSModel
- write() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
- write() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
- write() - Method in class org.apache.spark.ml.regression.FMRegressionModel
- write() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
- write() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Returns an MLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
- write() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
Returns a GeneralMLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
- write() - Method in class org.apache.spark.ml.tuning.CrossValidator
- write() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
- write() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
- write() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
- write() - Method in interface org.apache.spark.ml.util.DefaultParamsWritable
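The write() methods listed here return writers for ML persistence; a minimal save/load sketch (illustrative; model is an already-fitted LogisticRegressionModel and the path is a placeholder):

    import org.apache.spark.ml.classification.LogisticRegressionModel;

    // save(...) declares IOException; handling is elided in this sketch.
    model.write().overwrite().save("/tmp/models/lr");
    // Read it back with the companion object's load method.
    LogisticRegressionModel restored = LogisticRegressionModel.load("/tmp/models/lr");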
- write() - Method in interface org.apache.spark.ml.util.GeneralMLWritable
-
Returns an MLWriter instance for this ML instance.
- write() - Method in interface org.apache.spark.ml.util.MLWritable
-
Returns an MLWriter instance for this ML instance.
- write() - Method in class org.apache.spark.sql.api.Dataset
-
Interface for saving the content of the non-streaming Dataset out into external storage.
- write() - Method in class org.apache.spark.sql.Dataset
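A one-line DataFrameWriter sketch (illustrative; the output path is a placeholder):

    // Pick a save mode and format, then a destination.
    df.write().mode("overwrite").parquet("/tmp/out/events");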
- write(byte[]) - Method in class org.apache.spark.storage.TimeTrackingOutputStream
- write(byte[], int, int) - Method in class org.apache.spark.storage.TimeTrackingOutputStream
- write(int) - Method in class org.apache.spark.storage.TimeTrackingOutputStream
- write(Kryo, Output, Iterable<?>) - Method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
- write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.clustering.InternalKMeansModelWriter
- write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.clustering.PMMLKMeansModelWriter
- write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.regression.InternalLinearRegressionModelWriter
- write(String, SparkSession, Map<String, String>, PipelineStage) - Method in class org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter
- write(String, SparkSession, Map<String, String>, PipelineStage) - Method in interface org.apache.spark.ml.util.MLWriterFormat
-
Function to write the provided pipeline stage out.
- write(ByteBuffer) - Method in class org.apache.spark.storage.CountingWritableChannel
- write(ByteBuffer, long) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Write the record to the log and return a record handle, which contains all the information necessary to read back the written record.
- write(ElementTrackingStore, long, boolean) - Method in class org.apache.spark.status.LiveExecutorStageSummary
- write(T) - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Writes one record.
- write(T) - Method in interface org.apache.spark.sql.connector.write.DeltaWriter
- Write - Interface in org.apache.spark.sql.connector.write
-
A logical representation of a data source write.
- WRITE_BYTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- WRITE_RECORDS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- WRITE_TIME() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
- WRITE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- WRITE_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- WriteAheadLog - Class in org.apache.spark.streaming.util
-
:: DeveloperApi :: This abstract class represents a write ahead log (aka journal) that is used by Spark Streaming to save the received data (by receivers) and associated metadata to a reliable storage, so that they can be recovered after driver failures.
- WriteAheadLog() - Constructor for class org.apache.spark.streaming.util.WriteAheadLog
- WriteAheadLogRecordHandle - Class in org.apache.spark.streaming.util
-
:: DeveloperApi :: This abstract class represents a handle that refers to a record written in a WriteAheadLog.
- WriteAheadLogRecordHandle() - Constructor for class org.apache.spark.streaming.util.WriteAheadLogRecordHandle
- WriteAheadLogUtils - Class in org.apache.spark.streaming.util
-
A helper class with utility functions related to the WriteAheadLog interface.
- WriteAheadLogUtils() - Constructor for class org.apache.spark.streaming.util.WriteAheadLogUtils
- writeAll(Iterator<T>) - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Writes all records provided by the given iterator.
- writeAll(Iterator<T>, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
- writeBoolean(DataOutputStream, boolean) - Static method in class org.apache.spark.api.r.SerDe
- writeBooleanArr(DataOutputStream, boolean[]) - Static method in class org.apache.spark.api.r.SerDe
- WriteBuilder - Interface in org.apache.spark.sql.connector.write
-
An interface for building the Write.
- writeByteBuffer(ByteBuffer, DataOutput) - Static method in class org.apache.spark.util.Utils
-
Primitive often used when writing ByteBuffer to DataOutput.
- writeByteBuffer(ByteBuffer, OutputStream) - Static method in class org.apache.spark.util.Utils
-
Primitive often used when writing ByteBuffer to OutputStream.
- writeBytes() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions
- writeBytes(DataOutputStream, byte[]) - Static method in class org.apache.spark.api.r.SerDe
- WriteConfigMethods<R> - Interface in org.apache.spark.sql
-
Configuration methods common to create/replace operations and insert/overwrite operations.
- writeDate(DataOutputStream, Date) - Static method in class org.apache.spark.api.r.SerDe
- writeDistributionAndOrderingNotSupportedInContinuousExecution() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writeDouble(DataOutputStream, double) - Static method in class org.apache.spark.api.r.SerDe
- writeDoubleArr(DataOutputStream, double[]) - Static method in class org.apache.spark.api.r.SerDe
- writeEmptySchemasUnsupportedByDataSourceError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writeEventLogs(String, Option<String>, ZipOutputStream) - Method in interface org.apache.spark.status.api.v1.UIRoot
-
Write the event logs for the given app to the ZipOutputStream instance.
- writeExternal(ObjectOutput) - Method in class org.apache.spark.serializer.JavaSerializer
- writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.BlockManagerId
- writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
- writeExternal(ObjectOutput) - Method in class org.apache.spark.storage.StorageLevel
- writeInt(DataOutputStream, int) - Static method in class org.apache.spark.api.r.SerDe
- writeIntArr(DataOutputStream, int[]) - Static method in class org.apache.spark.api.r.SerDe
- writeIntoTempViewNotAllowedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writeIntoV1TableNotAllowedError(TableIdentifier, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writeIntoViewNotAllowedError(TableIdentifier, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writeJObj(DataOutputStream, Object, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- writeKey(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
-
Writes the object representing the key of a key-value pair.
- writeLong(byte[], int, long, int) - Static method in class org.apache.spark.types.variant.VariantUtil
- writeMapField(String, Map<String, String>, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
Util JSON serialization methods.
- writeObject(DataOutputStream, Object, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
- writeObject(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
-
The most general-purpose method to write an object.
- writePartitionExceedConfigSizeWhenDynamicPartitionError(int, int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- writer() - Method in class org.apache.spark.ml.SaveInstanceEnd
- writer() - Method in class org.apache.spark.ml.SaveInstanceStart
- WriterCommitMessage - Interface in org.apache.spark.sql.connector.write
-
A commit message returned by DataWriter.commit() that will be sent back to the driver side as the input parameter of BatchWrite.commit(WriterCommitMessage[]) or StreamingWrite.commit(long, WriterCommitMessage[]).
- writeRecords() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions
- writeSparkEventToJson(SparkListenerEvent, JsonGenerator, JsonProtocolOptions) - Static method in class org.apache.spark.util.JsonProtocol
- writeSqlObject(DataOutputStream, Object) - Static method in class org.apache.spark.sql.api.r.SQLUtils
- writeStream() - Method in class org.apache.spark.sql.Dataset
-
Interface for saving the content of the streaming Dataset out into external storage.
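A minimal sketch of starting a streaming sink (illustrative; streamingDf is an existing streaming Dataset):

    import org.apache.spark.sql.streaming.StreamingQuery;

    StreamingQuery query = streamingDf.writeStream()
        .format("console")       // print each micro-batch
        .outputMode("append")
        .start();
    query.awaitTermination();    // declares StreamingQueryException (handling elided)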
- writeString(DataOutputStream, String) - Static method in class org.apache.spark.api.r.SerDe
- writeStringArr(DataOutputStream, String[]) - Static method in class org.apache.spark.api.r.SerDe
- writeTime() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetricDistributions
- writeTime() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
- writeTime(DataOutputStream, Time) - Static method in class org.apache.spark.api.r.SerDe
- writeTime(DataOutputStream, Timestamp) - Static method in class org.apache.spark.api.r.SerDe
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics
- writeTo(CodedOutputStream) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
- writeTo(OutputStream) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Writes out this BloomFilter to an output stream in binary format.
- writeTo(OutputStream) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Writes out this CountMinSketch to an output stream in binary format.
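A minimal round-trip sketch for these binary formats; the file path is illustrative, and the companion calls (create, putString, readFrom, mightContainString) are the public BloomFilter methods:

    import java.io.{FileInputStream, FileOutputStream}
    import org.apache.spark.util.sketch.BloomFilter

    // Build a filter sized for roughly 1000 items and record two members.
    val filter = BloomFilter.create(1000)
    filter.putString("alice")
    filter.putString("bob")

    // Persist the filter in its binary format.
    val out = new FileOutputStream("filter.bin") // illustrative path
    try filter.writeTo(out) finally out.close()

    // Restore it with the matching static readFrom and query it.
    val in = new FileInputStream("filter.bin")
    val restored = try BloomFilter.readFrom(in) finally in.close()
    assert(restored.mightContainString("alice"))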
- writeTo(String) - Method in class org.apache.spark.sql.api.Dataset
-
Create a write configuration builder for v2 sources.
- writeTo(String) - Method in class org.apache.spark.sql.Dataset
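The builder returned by writeTo(String) drives v2 table writes; a sketch, assuming a configured v2 catalog (the table identifier is a placeholder):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("writeTo-demo").getOrCreate()
    val df = spark.range(10).toDF("id")

    // Create a v2 table from the DataFrame's schema; use append() instead
    // when the table already exists. "catalog.db.events" is a placeholder.
    df.writeTo("catalog.db.events").create()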
- writeType(DataOutputStream, String) - Static method in class org.apache.spark.api.r.SerDe
- writeUnsupportedForBinaryFileDataSourceError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- writeValue(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializationStream
-
Writes the object representing the value of a key-value pair.
- writeWithSaveModeUnsupportedBySourceError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- writingJobFailedError(Throwable) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
- wrongCommandForObjectTypeError(String, String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- wrongNumArgsError(String, Seq<Object>, int, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- wrongNumberArgumentsForTransformError(String, int, SqlBaseParser.ApplyTransformContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
- wrongNumOrderingsForInverseDistributionFunctionError(String, int, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
X
- x - Variable in class org.apache.spark.sql.util.NumericHistogram.Coord
- x() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
- xml(String) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads an XML file and returns the result as a DataFrame.
- xml(String) - Method in class org.apache.spark.sql.DataFrameReader
- xml(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in XML format at the specified path.
- xml(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads an XML file stream and returns the result as a DataFrame.
- xml(String...) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads XML files and returns the result as a DataFrame.
- xml(String...) - Method in class org.apache.spark.sql.DataFrameReader
- xml(Dataset) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads a Dataset[String] storing XML objects and returns the result as a DataFrame.
- xml(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
- xml(Seq<String>) - Method in class org.apache.spark.sql.api.DataFrameReader
-
Loads XML files and returns the result as a DataFrame.
- xml(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
- xmlRowTagRequiredError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
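As the xmlRowTagRequiredError entry implies, the XML source requires a rowTag option naming the element that delimits a record; a minimal reader sketch (path and tag are illustrative):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("xml-demo").getOrCreate()

    // Each <book> element becomes one row; its child elements become columns.
    val books = spark.read
      .option("rowTag", "book") // required: names the record-delimiting element
      .xml("books.xml")         // illustrative path

    books.printSchema()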
- xpath(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a string array of values within the nodes of xml that match the XPath expression.
- xpath_boolean(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if the XPath expression evaluates to true, or if a matching node is found.
- xpath_double(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a double value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
- xpath_float(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a float value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
- xpath_int(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an integer value, or the value zero if no match is found or if a match is found but the value is non-numeric.
- xpath_long(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a long integer value, or the value zero if no match is found or if a match is found but the value is non-numeric.
- xpath_number(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a double value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
- xpath_short(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a short integer value, or the value zero if no match is found or if a match is found but the value is non-numeric.
- xpath_string(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the text contents of the first xml node that matches the XPath expression.
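All xpath_* variants take the XPath expression as a Column, so a literal query must be wrapped in lit; a small sketch:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{lit, xpath, xpath_int, xpath_string}

    val spark = SparkSession.builder().appName("xpath-demo").getOrCreate()
    import spark.implicits._

    val df = Seq("<a><b>1</b><b>2</b></a>").toDF("xml")

    df.select(
      xpath($"xml", lit("a/b/text()")),  // all matching values: ["1", "2"]
      xpath_string($"xml", lit("a/b")),  // text of the first match: "1"
      xpath_int($"xml", lit("sum(a/b)")) // numeric XPath result: 3
    ).show(false)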
- XssSafeRequest - Class in org.apache.spark.ui
- XssSafeRequest(HttpServletRequest, String) - Constructor for class org.apache.spark.ui.XssSafeRequest
- xxhash64(Column...) - Static method in class org.apache.spark.sql.functions
-
Calculates the hash code of given columns using the 64-bit variant of the xxHash algorithm, and returns the result as a long column.
- xxhash64(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Calculates the hash code of given columns using the 64-bit variant of the xxHash algorithm, and returns the result as a long column.
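A typical use is a cheap, deterministic row fingerprint; a minimal sketch:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.xxhash64

    val spark = SparkSession.builder().appName("xxhash64-demo").getOrCreate()
    import spark.implicits._

    // One 64-bit hash per row, computed across both columns.
    Seq(("alice", 1), ("bob", 2)).toDF("name", "n")
      .withColumn("row_hash", xxhash64($"name", $"n"))
      .show()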
- XZ - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
Y
- y - Variable in class org.apache.spark.sql.util.NumericHistogram.Coord
- year(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the year as an integer from a given date/timestamp/string.
- YEAR() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- YEAR_MONTH_INTERVAL - Enum constant in enum class org.apache.spark.types.variant.VariantUtil.Type
- YEAR_MONTH_INTERVAL - Static variable in class org.apache.spark.types.variant.VariantUtil
- yearMonthFields() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
- YearMonthIntervalType - Class in org.apache.spark.sql.types
-
The type represents year-month intervals of the SQL standard.
- YearMonthIntervalType(byte, byte) - Constructor for class org.apache.spark.sql.types.YearMonthIntervalType
- YearMonthIntervalUtils - Class in org.apache.spark.util
- YearMonthIntervalUtils() - Constructor for class org.apache.spark.util.YearMonthIntervalUtils
- years(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a yearly transform for a timestamp or date column.
- years(Column) - Method in class org.apache.spark.sql.functions.partitioning$
-
(Scala-specific) A transform for timestamps and dates to partition data into years.
- years(Column) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) A transform for timestamps and dates to partition data into years.
- years(NamedReference) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
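The years transforms are partitioning expressions for v2 writes rather than ordinary projections; a sketch combining one with the writeTo builder (the table identifier is a placeholder, and a configured v2 catalog is assumed):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{current_timestamp, years}

    val spark = SparkSession.builder().appName("years-demo").getOrCreate()

    val events = spark.range(100).withColumn("ts", current_timestamp())

    // Partition the new table by the year derived from the ts column.
    events.writeTo("catalog.db.events_by_year") // placeholder identifier
      .partitionedBy(years(events("ts")))
      .create()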
Z
- zero() - Method in class org.apache.spark.sql.expressions.Aggregator
-
A zero value for this aggregation.
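zero must satisfy the documented identity (merging it with any buffer b returns b); a minimal sum Aggregator sketch showing where it fits:

    import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
    import org.apache.spark.sql.expressions.Aggregator

    // Sums Long inputs; `zero` is the empty buffer each partition starts from.
    object SumAgg extends Aggregator[Long, Long, Long] {
      def zero: Long = 0L // identity: merge(zero, b) == b
      def reduce(buf: Long, in: Long): Long = buf + in
      def merge(b1: Long, b2: Long): Long = b1 + b2
      def finish(buf: Long): Long = buf
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }

    val spark = SparkSession.builder().appName("agg-demo").getOrCreate()
    import spark.implicits._
    val total = spark.range(1, 5).as[Long].select(SumAgg.toColumn).head() // 10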
- zero() - Static method in class org.apache.spark.sql.types.ByteExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.FloatExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.LongExactNumeric
- zero() - Static method in class org.apache.spark.sql.types.ShortExactNumeric
- zero(int, int) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
- zeroArgumentIndexError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
- zeroifnull(Column) - Static method in class org.apache.spark.sql.functions
-
Returns zero if col is null, or col otherwise.
- zeros(int) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a vector of all zeros.
- zeros(int) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a vector of all zeros.
- zeros(int, int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of zeros.
- zeros(int, int) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a Matrix consisting of zeros.
- zeros(int, int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of zeros.
- zeros(int, int) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a Matrix consisting of zeros.
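A quick sketch of the ml.linalg factories (the mllib.linalg ones mirror them):

    import org.apache.spark.ml.linalg.{DenseMatrix, Matrices, Vectors}

    val v = Vectors.zeros(3)        // dense vector [0.0, 0.0, 0.0]
    val m = DenseMatrix.zeros(2, 3) // 2x3 dense matrix of zeros
    val g = Matrices.zeros(2, 3)    // same shape via the generic factory
    println(s"$v\n$m\n$g")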
- zip(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
- zip(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
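zip assumes both RDDs have the same number of partitions and the same number of elements in each partition; a minimal sketch (assumes an existing SparkContext sc):

    // Both RDDs use two partitions with matching element counts.
    val ids = sc.parallelize(Seq(1, 2, 3, 4), numSlices = 2)
    val names = sc.parallelize(Seq("a", "b", "c", "d"), numSlices = 2)

    // Positional pairs: (1,"a"), (2,"b"), (3,"c"), (4,"d").
    ids.zip(names).collect().foreach(println)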
- zip_with(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Merge two given arrays, element-wise, into a single array using a function.
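The merge function receives one Column per array element pair and returns a Column; a small sketch:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.zip_with

    val spark = SparkSession.builder().appName("zip-with-demo").getOrCreate()
    import spark.implicits._

    val df = Seq((Seq(1, 2, 3), Seq(10, 20, 30))).toDF("a", "b")

    // Element-wise sum of the two arrays: [11, 22, 33].
    df.select(zip_with($"a", $"b", (x, y) => x + y)).show()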
- zipPartitions(JavaRDDLike<U, ?>, FlatMapFunction2<Iterator<T>, Iterator<U>, V>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
- zipPartitions(RDD<B>, boolean, Function2<Iterator<T>, Iterator<B>, Iterator<V>>, ClassTag<B>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
-
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
- zipPartitions(RDD<B>, RDD<C>, boolean, Function3<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
- zipPartitions(RDD<B>, RDD<C>, RDD<D>, boolean, Function4<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<D>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<D>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
- zipPartitions(RDD<B>, RDD<C>, RDD<D>, Function4<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<D>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<D>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
- zipPartitions(RDD<B>, RDD<C>, Function3<Iterator<T>, Iterator<B>, Iterator<C>, Iterator<V>>, ClassTag<B>, ClassTag<C>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
- zipPartitions(RDD<B>, Function2<Iterator<T>, Iterator<B>, Iterator<V>>, ClassTag<B>, ClassTag<V>) - Method in class org.apache.spark.rdd.RDD
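Unlike zip, zipPartitions hands the function one iterator per RDD per partition, so only the partition counts must match; a sketch of the two-RDD Scala overload (assumes an existing SparkContext sc):

    val nums = sc.parallelize(1 to 6, 2)
    val letters = sc.parallelize(Seq("a", "b", "c", "d"), 2)

    // Pair elements within each partition until the shorter iterator is drained.
    val zipped = nums.zipPartitions(letters) { (it1, it2) =>
      it1.zip(it2)
    }
    zipped.collect().foreach(println)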
- zipPartitionsWithEvaluator(RDD<T>, PartitionEvaluatorFactory<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Zip this RDD's partitions with another RDD and return a new RDD by applying an evaluator to the zipped partitions.
- zipWithIndex() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Zips this RDD with its element indices.
- zipWithIndex() - Method in class org.apache.spark.rdd.RDD
-
Zips this RDD with its element indices.
- zipWithUniqueId() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Zips this RDD with generated unique Long ids.
- zipWithUniqueId() - Method in class org.apache.spark.rdd.RDD
-
Zips this RDD with generated unique Long ids.
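The two indexing methods differ in cost and in the shape of the ids: zipWithIndex produces contiguous 0-based indices (and triggers a Spark job when the RDD has more than one partition), while zipWithUniqueId produces unique but non-contiguous ids without running a job; a sketch (assumes an existing SparkContext sc):

    val rdd = sc.parallelize(Seq("a", "b", "c", "d"), 2)

    // Contiguous indices 0..3 in partition order; runs a job to size partitions.
    rdd.zipWithIndex().collect().foreach(println)

    // Ids of the form k, n+k, 2n+k, ... for partition k of n; no extra job.
    rdd.zipWithUniqueId().collect().foreach(println)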
- ZSTANDARD - Enum constant in enum class org.apache.spark.sql.avro.AvroCompressionCodec
- ZStdCompressionCodec - Class in org.apache.spark.io
-
:: DeveloperApi :: ZStandard implementation of CompressionCodec.
- ZStdCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.ZStdCompressionCodec
_
- _1() - Method in class org.apache.spark.util.MutablePair
- _2() - Method in class org.apache.spark.util.MutablePair