Modifier and Type | Method | Description
---|---|---
SchemaPath | DynamicRootSchema.resolveTableAlias(String alias) |

Modifier and Type | Class | Description
---|---|---
class | FieldReference |

Modifier and Type | Field | Description
---|---|---
static SchemaPath | SchemaPath.STAR_COLUMN |

Modifier and Type | Method | Description
---|---|---
static SchemaPath | SchemaPath.create(UserBitShared.NamePart namePart) |
SchemaPath | SchemaPath.De.deserialize(com.fasterxml.jackson.core.JsonParser jp, com.fasterxml.jackson.databind.DeserializationContext ctxt) |
SchemaPath | SchemaPath.getChild(int index) |
SchemaPath | SchemaPath.getChild(int index, Object originalValue, TypeProtos.MajorType valueType) |
SchemaPath | SchemaPath.getChild(String childPath) |
SchemaPath | SchemaPath.getChild(String childPath, Object originalValue, TypeProtos.MajorType valueType) |
static SchemaPath | SchemaPath.getCompoundPath(int n, String... path) | Constructs a SchemaPath from the given path array, up to the nth element (inclusive).
static SchemaPath | SchemaPath.getCompoundPath(String... path) |
SchemaPath | TypedFieldExpr.getPath() |
static SchemaPath | SchemaPath.getSimplePath(String name) |
SchemaPath | SchemaPath.getUnIndexed() | Returns the schema path with array indexes removed.
static SchemaPath | SchemaPath.parseFromString(String expr) | Parses the input string using the same rules that are applied to field references in a query.
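The static factory methods above are the usual entry points for building a SchemaPath by hand. A minimal sketch of the common cases follows; it assumes SchemaPath lives in org.apache.drill.common.expression and that the string passed to parseFromString follows Drill's field-reference syntax.

```java
import org.apache.drill.common.expression.SchemaPath;

public class SchemaPathExamples {
  public static void main(String[] args) {
    // Simple top-level column: `name`
    SchemaPath simple = SchemaPath.getSimplePath("name");

    // Nested path built from segments: `address`.`city`
    SchemaPath compound = SchemaPath.getCompoundPath("address", "city");

    // Parse with the same rules used for field references in a query
    SchemaPath parsed = SchemaPath.parseFromString("orders[0].amount");

    // Drop the array index: `orders`[0].`amount` becomes `orders`.`amount`
    SchemaPath unIndexed = parsed.getUnIndexed();

    System.out.println(simple + " " + compound + " " + parsed + " " + unIndexed);
  }
}
```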
Modifier and Type | Method | Description
---|---|---
boolean | SchemaPath.contains(SchemaPath path) |
Void | ExpressionStringBuilder.visitSchemaPath(SchemaPath path, StringBuilder sb) |

Constructor | Description
---|---
FieldReference(SchemaPath sp) |
SchemaPath(SchemaPath path) |
TypedFieldExpr(SchemaPath path, TypeProtos.MajorType type) |

Modifier and Type | Method | Description
---|---|---
abstract T | SimpleExprVisitor.visitSchemaPath(SchemaPath path) |
Boolean | AggregateChecker.visitSchemaPath(SchemaPath path, ErrorCollector errors) |
Void | ExpressionValidator.visitSchemaPath(SchemaPath path, ErrorCollector errors) |
T | AbstractExprVisitor.visitSchemaPath(SchemaPath path, VAL value) |
T | ExprVisitor.visitSchemaPath(SchemaPath path, VAL value) |
T | SimpleExprVisitor.visitSchemaPath(SchemaPath path, Void value) |
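Every visitor listed here implements the same visitSchemaPath callback. A typical use is to walk a LogicalExpression tree and collect the column references it contains, which is also the pattern behind FilterEvaluatorUtils.FieldReferenceFinder listed later on this page. The following is a minimal sketch; it assumes AbstractExprVisitor sits in org.apache.drill.common.expression.visitors and that LogicalExpression is iterable over its children.

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.drill.common.expression.LogicalExpression;
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.common.expression.visitors.AbstractExprVisitor;

/** Collects every SchemaPath referenced inside a logical expression tree. */
public class SchemaPathCollector
    extends AbstractExprVisitor<Set<SchemaPath>, Void, RuntimeException> {

  @Override
  public Set<SchemaPath> visitSchemaPath(SchemaPath path, Void value) {
    // A schema path is a leaf: report it directly.
    Set<SchemaPath> paths = new HashSet<>();
    paths.add(path);
    return paths;
  }

  @Override
  public Set<SchemaPath> visitUnknown(LogicalExpression e, Void value) {
    // Any other node: recurse into its children and merge the results.
    Set<SchemaPath> paths = new HashSet<>();
    for (LogicalExpression child : e) {
      paths.addAll(child.accept(this, null));
    }
    return paths;
  }
}
```

A call site would run `expr.accept(new SchemaPathCollector(), null)` and receive the full set of referenced columns.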
Modifier and Type | Method | Description
---|---|---
SchemaPath | Unnest.getColumn() |

Constructor | Description
---|---
Unnest(SchemaPath column) |

Modifier and Type | Method | Description
---|---|---
Boolean | ConstantExpressionIdentifier.visitSchemaPath(SchemaPath path, IdentityHashMap<LogicalExpression,Object> value) |

Modifier and Type | Method | Description
---|---|---
LogicalExpression | CloneVisitor.visitSchemaPath(SchemaPath path, Void value) |
Integer | HashVisitor.visitSchemaPath(SchemaPath path, Void value) |

Constructor | Description
---|---
StatisticsProvider(Map<SchemaPath,ColumnStatistics<?>> columnStatMap, long rowCount) |

Modifier and Type | Method | Description
---|---|---
ValueHolder | InterpreterEvaluator.EvalVisitor.visitSchemaPath(SchemaPath path, Integer value) |

Modifier and Type | Method | Description
---|---|---
SchemaPath | AnalyzeFileInfoProvider.getLocationField(ColumnNamesOptions columnNamesOptions) |
SchemaPath | AnalyzeInfoProvider.getLocationField(ColumnNamesOptions columnNamesOptions) | Provides the schema path to the field that will be used as a location for specific table data; for file-based tables it may be `fqn`.

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | AnalyzeFileInfoProvider.getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) |
List<SchemaPath> | AnalyzeInfoProvider.getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) | Returns the list of fields required for ANALYZE.
List<SchemaPath> | AnalyzeParquetInfoProvider.getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) |
List<SchemaPath> | AnalyzeFileInfoProvider.getSegmentColumns(DrillTable table, ColumnNamesOptions columnNamesOptions) |
List<SchemaPath> | AnalyzeInfoProvider.getSegmentColumns(DrillTable table, ColumnNamesOptions columnNamesOptions) | Returns the list of segment column names for the specified DrillTable.
List<SchemaPath> | MetadataAggregateContext.interestingColumns() |
List<SchemaPath> | MetadataControllerContext.interestingColumns() |
List<SchemaPath> | MetadataAggregateContext.metadataColumns() |

Modifier and Type | Method | Description
---|---|---
NamedExpression | AnalyzeFileInfoProvider.getParentLocationExpression(SchemaPath locationField) |
NamedExpression | AnalyzeInfoProvider.getParentLocationExpression(SchemaPath locationField) | Returns an expression that may be used to determine the parent location for specific table data.

Modifier and Type | Method | Description
---|---|---
MetadataInfoCollector | AnalyzeFileInfoProvider.getMetadataInfoCollector(BasicTablesRequests basicRequests, TableInfo tableInfo, FormatSelection selection, PlannerSettings settings, Supplier<org.apache.calcite.rel.core.TableScan> tableScanSupplier, List<SchemaPath> interestingColumns, MetadataType metadataLevel, int segmentColumnsCount) |
MetadataInfoCollector | AnalyzeInfoProvider.getMetadataInfoCollector(BasicTablesRequests basicRequests, TableInfo tableInfo, FormatSelection selection, PlannerSettings settings, Supplier<org.apache.calcite.rel.core.TableScan> tableScanSupplier, List<SchemaPath> interestingColumns, MetadataType metadataLevel, int segmentColumnsCount) | Returns a MetadataInfoCollector instance for obtaining information about segments, files, etc.
MetadataAggregateContext.MetadataAggregateContextBuilder | MetadataAggregateContext.MetadataAggregateContextBuilder.interestingColumns(List<SchemaPath> interestingColumns) |
MetadataControllerContext.MetadataControllerContextBuilder | MetadataControllerContext.MetadataControllerContextBuilder.interestingColumns(List<SchemaPath> interestingColumns) |
MetadataAggregateContext.MetadataAggregateContextBuilder | MetadataAggregateContext.MetadataAggregateContextBuilder.metadataColumns(List<SchemaPath> metadataColumns) |

Constructor | Description
---|---
FileMetadataInfoCollector(BasicTablesRequests basicRequests, TableInfo tableInfo, FormatSelection selection, PlannerSettings settings, Supplier<org.apache.calcite.rel.core.TableScan> tableScanSupplier, List<SchemaPath> interestingColumns, MetadataType metadataLevel, int segmentColumnsCount) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | MetastoreFileTableMetadataProvider.getPartitionColumns() |
List<SchemaPath> | SimpleFileTableMetadataProvider.getPartitionColumns() |

Modifier and Type | Method | Description
---|---|---
List<PartitionMetadata> | MetastoreFileTableMetadataProvider.getPartitionMetadata(SchemaPath columnName) |
List<PartitionMetadata> | SimpleFileTableMetadataProvider.getPartitionMetadata(SchemaPath columnName) |

Modifier and Type | Field | Description
---|---|---
static List<SchemaPath> | GroupScan.ALL_COLUMNS | Columns list in GroupScan; an empty column is used for a skip-all query.
protected List<SchemaPath> | AbstractGroupScanWithMetadata.columns |
protected List<SchemaPath> | AbstractGroupScanWithMetadata.partitionColumns |

Modifier and Type | Method | Description
---|---|---
SchemaPath | AbstractDbGroupScan.getRowKeyPath() |
SchemaPath | DbGroupScan.getRowKeyPath() |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | AbstractGroupScan.getColumns() |
List<SchemaPath> | AbstractGroupScanWithMetadata.getColumns() |
List<SchemaPath> | DbGroupScan.getColumns() |
List<SchemaPath> | GroupScan.getColumns() | Returns a list of columns scanned by this group scan.
List<SchemaPath> | IndexGroupScan.getColumns() |
List<SchemaPath> | AbstractGroupScan.getPartitionColumns() |
List<SchemaPath> | AbstractGroupScanWithMetadata.getPartitionColumns() |
List<SchemaPath> | GroupScan.getPartitionColumns() | Returns a list of columns that can be used for partition pruning.

Modifier and Type | Method | Description
---|---|---
long | AbstractGroupScan.getColumnValueCount(SchemaPath column) | Throws an exception by default, since a group scan does not have an exact column value count.
long | AbstractGroupScanWithMetadata.getColumnValueCount(SchemaPath column) | Returns the column value count for the specified column.
long | GroupScan.getColumnValueCount(SchemaPath column) | Returns the number of non-null values in the specified column.
<T> T | AbstractGroupScanWithMetadata.getPartitionValue(org.apache.hadoop.fs.Path path, SchemaPath column, Class<T> clazz) |
TypeProtos.MajorType | AbstractGroupScanWithMetadata.getTypeForColumn(SchemaPath schemaPath) |
static boolean | AbstractGroupScanWithMetadata.isImplicitOrPartCol(SchemaPath schemaPath, OptionManager optionManager) |

Modifier and Type | Method | Description
---|---|---
boolean | SchemalessScan.canPushdownProjects(List<SchemaPath> columns) |
boolean | AbstractGroupScan.canPushdownProjects(List<SchemaPath> columns) |
boolean | GroupScan.canPushdownProjects(List<SchemaPath> columns) | The GroupScan should check the list of columns and report whether it can support all of them.
GroupScan | SchemalessScan.clone(List<SchemaPath> columns) |
GroupScan | AbstractGroupScan.clone(List<SchemaPath> columns) |
GroupScan | GroupScan.clone(List<SchemaPath> columns) | Returns a clone of this GroupScan, except that the new GroupScan uses the provided list of columns (see the sketch after this table).
<T extends Metadata> | AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.filterAndGetMetadata(Set<SchemaPath> schemaPathsInExpr, Iterable<T> metadataList, FilterPredicate<?> filterPredicate, OptionManager optionManager) | Filters the specified metadata using the given filter expression and returns the filtered metadata.
protected void | AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.filterFileMetadata(OptionManager optionManager, FilterPredicate<?> filterPredicate, Set<SchemaPath> schemaPathsInExpr) | Filters metadata at the file level.
protected void | AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.filterPartitionMetadata(OptionManager optionManager, FilterPredicate<?> filterPredicate, Set<SchemaPath> schemaPathsInExpr) | Filters metadata at the partition level.
protected void | AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.filterSegmentMetadata(OptionManager optionManager, FilterPredicate<?> filterPredicate, Set<SchemaPath> schemaPathsInExpr) | Filters metadata at the segment level.
protected void | AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.filterTableMetadata(FilterPredicate<?> filterPredicate, Set<SchemaPath> schemaPathsInExpr) | Filters metadata at the table level.
DbGroupScan | AbstractDbGroupScan.getRestrictedScan(List<SchemaPath> columns) |
DbGroupScan | DbGroupScan.getRestrictedScan(List<SchemaPath> columns) | If this DbGroupScan supports restricted scans, creates a restricted scan from this DbGroupScan.
void | IndexGroupScan.setColumns(List<SchemaPath> columns) |
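canPushdownProjects and clone form the projection-pushdown handshake: the planner first asks the scan whether it supports a column list, then clones the scan with that narrowed list. A minimal sketch follows; the scan instance and the projected column names are illustrative.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.physical.base.GroupScan;

public class ProjectionPushdownSketch {
  /**
   * Asks the scan whether it supports the projected columns and, if so,
   * returns a narrowed copy of it; otherwise keeps the original scan.
   */
  static GroupScan pushProjection(GroupScan scan) {
    List<SchemaPath> projected = Arrays.asList(
        SchemaPath.getSimplePath("id"),
        SchemaPath.getSimplePath("name"));
    if (scan.canPushdownProjects(projected)) {
      return scan.clone(projected); // the new scan reads only `id` and `name`
    }
    return scan; // fall back to scanning the original column list
  }
}
```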
Constructor | Description
---|---
AbstractGroupScanWithMetadata(String userName, List<SchemaPath> columns, LogicalExpression filter) |
SchemalessScan(String userName, org.apache.hadoop.fs.Path selectionRoot, List<SchemaPath> columns) |

Modifier and Type | Method | Description
---|---|---
SchemaPath | FlattenPOP.getColumn() |
SchemaPath | UnnestPOP.getColumn() |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | LateralJoinPOP.getExcludedColumns() |

Constructor | Description
---|---
FlattenPOP(PhysicalOperator child, SchemaPath column) |
UnnestPOP(PhysicalOperator child, SchemaPath column, String implicitColumn) |

Constructor | Description
---|---
LateralJoinPOP(PhysicalOperator left, PhysicalOperator right, org.apache.calcite.rel.core.JoinRelType joinType, String implicitRIDColumn, List<SchemaPath> excludedColumns) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | ScanBatch.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | SpilledRecordBatch.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
void | OrderedPartitionProjector.setup(FragmentContext context, VectorAccessible incoming, RecordBatch outgoing, List<TransferPair> transfers, VectorContainer partitionVectors, int partitions, SchemaPath outputField) |
void | OrderedPartitionProjectorTemplate.setup(FragmentContext context, VectorAccessible incoming, RecordBatch outgoing, List<TransferPair> transfers, VectorContainer partitionVectors, int partitions, SchemaPath outputField) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | PartitionerTemplate.OutgoingRecordBatch.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | BatchAccessor.getValueVectorId(SchemaPath path) |
TypedFieldId | OperatorRecordBatch.getValueVectorId(SchemaPath path) |
TypedFieldId | VectorContainerAccessor.getValueVectorId(SchemaPath path) |

Modifier and Type | Field | Description
---|---|---
List<SchemaPath> | ScanSchemaOrchestrator.ScanSchemaOptions.projection |
protected List<SchemaPath> | ScanLevelProjection.projectionList |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | ScanLevelProjection.Builder.projectionList() |
List<SchemaPath> | ScanLevelProjection.requestedCols() | Return the set of columns from the SELECT list.

Modifier and Type | Method | Description
---|---|---
static ScanLevelProjection | ScanLevelProjection.build(List<SchemaPath> projectionList, List<ScanLevelProjection.ScanProjectionParser> parsers) | Builder shortcut, primarily for tests.
static ScanLevelProjection | ScanLevelProjection.build(List<SchemaPath> projectionList, List<ScanLevelProjection.ScanProjectionParser> parsers, TupleMetadata outputSchema) | Builder shortcut, primarily for tests.
void | ScanSchemaOrchestrator.ScanOrchestratorBuilder.projection(List<SchemaPath> projection) |
ScanLevelProjection.Builder | ScanLevelProjection.Builder.projection(List<SchemaPath> projectionList) | Specify the set of columns in the SELECT list.
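A minimal sketch of the build shortcut above, assuming ScanLevelProjection resides in org.apache.drill.exec.physical.impl.scan.project; the two-argument form with an empty parser list is the simplest way to turn a SELECT list into a scan-level projection.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.physical.impl.scan.project.ScanLevelProjection;

public class ScanLevelProjectionSketch {
  public static void main(String[] args) {
    // SELECT a, b.c FROM ...
    List<SchemaPath> selectList = Arrays.asList(
        SchemaPath.getSimplePath("a"),
        SchemaPath.getCompoundPath("b", "c"));

    // Build-shortcut form with no extra per-scan parsers.
    ScanLevelProjection projection =
        ScanLevelProjection.build(selectList, Collections.emptyList());

    // The original SELECT-list columns are available back from the projection.
    System.out.println(projection.requestedCols());
  }
}
```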
Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | ScanLifecycleBuilder.projection() |

Modifier and Type | Method | Description
---|---|---
void | ScanLifecycleBuilder.projection(List<SchemaPath> projection) |

Modifier and Type | Method | Description
---|---|---
static ScanProjectionParser.ProjectionParseResult | ScanProjectionParser.parse(Collection<SchemaPath> projList) |
ScanSchemaConfigBuilder | ScanSchemaConfigBuilder.projection(List<SchemaPath> projectionList) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | UnorderedReceiverBatch.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | IteratorValidatorBatchIterator.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | WindowDataBatch.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
TypedFieldId | BatchGroup.getValueVectorId(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
static RequestedTuple | Projections.parse(Collection<SchemaPath> projList) | Parse a projection list.
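A minimal sketch of Projections.parse, assuming the class and RequestedTuple reside in org.apache.drill.exec.physical.resultSet.project; the parser turns a flat projection list into a tree, so a path such as m.key is recorded as a member of map column m rather than a separate top-level column.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.physical.resultSet.project.Projections;
import org.apache.drill.exec.physical.resultSet.project.RequestedTuple;

public class ProjectionParseSketch {
  public static void main(String[] args) {
    // SELECT a, m.key FROM ...
    List<SchemaPath> projList = Arrays.asList(
        SchemaPath.getSimplePath("a"),
        SchemaPath.parseFromString("m.key"));

    // Build the tree of requested columns from the flat SELECT list.
    RequestedTuple projection = Projections.parse(projList);
    System.out.println(projection);
  }
}
```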
Modifier and Type | Method | Description
---|---|---
TypeProtos.MajorType | PartitionDescriptor.getVectorType(SchemaPath column, PlannerSettings plannerSettings) | Returns the Major type associated with the given column.
TypeProtos.MajorType | FileSystemPartitionDescriptor.getVectorType(SchemaPath column, PlannerSettings plannerSettings) |
TypeProtos.MajorType | ParquetPartitionDescriptor.getVectorType(SchemaPath column, PlannerSettings plannerSettings) |

Modifier and Type | Method | Description
---|---|---
SchemaPath | DrillStatsTable.ColumnStatistics_v1.getName() |

Modifier and Type | Method | Description
---|---|---
Set<SchemaPath> | DrillStatsTable.getColumns() |
List<SchemaPath> | DrillRelOptUtil.ProjectPushInfo.getFields() |

Modifier and Type | Method | Description
---|---|---
static List<StatisticsHolder<?>> | DrillStatsTable.getEstimatedColumnStats(DrillStatsTable statsProvider, SchemaPath fieldName) | Returns a list of StatisticsKind and statistics values obtained from the specified DrillStatsTable for the specified column.
Histogram | DrillStatsTable.getHistogram(SchemaPath column) | Get the histogram of a given column.
Double | DrillStatsTable.getNdv(SchemaPath col) | Get the approximate number of distinct values of a given column.
Double | DrillStatsTable.getNNRowCount(SchemaPath col) | Get the non-null row count for the column. If stats are not present for the given column, null is returned.
void | DrillStatsTable.ColumnStatistics_v1.setName(SchemaPath name) |
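A minimal sketch of reading per-column statistics out of a DrillStatsTable, assuming the class resides in org.apache.drill.exec.planner.common; the statsTable argument and the column name are illustrative.

```java
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.planner.common.DrillStatsTable;

public class StatsLookupSketch {
  /** Prints NDV and non-null row count for a column, if statistics exist. */
  static void printColumnStats(DrillStatsTable statsTable, String columnName) {
    SchemaPath column = SchemaPath.getSimplePath(columnName);

    Double ndv = statsTable.getNdv(column);               // approximate distinct values
    Double nonNullRows = statsTable.getNNRowCount(column); // null when no stats exist

    System.out.println(columnName + ": ndv=" + ndv + ", nonNullRows=" + nonNullRows);
  }
}
```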
Constructor | Description
---|---
DrillScanRelBase(org.apache.calcite.plan.RelOptCluster cluster, org.apache.calcite.plan.RelTraitSet traits, org.apache.calcite.plan.RelOptTable table, List<SchemaPath> columns) |
ProjectPushInfo(List<SchemaPath> fields, Map<String,FieldsReWriterUtil.DesiredField> desiredFields) |

Modifier and Type | Method | Description
---|---|---
SchemaPath | FunctionalIndexInfo.getNewPath(SchemaPath path) | For an original path, returns the renamed '$N' path; note that there may be multiple renamed paths if multiple functional indexes refer to the original path.
SchemaPath | MapRDBFunctionalIndexInfo.getNewPath(SchemaPath path) | For an original path, returns the renamed '$N' path; note that there may be multiple renamed paths if multiple functional indexes refer to the original path.
SchemaPath | FunctionalIndexInfo.getNewPathFromExpr(LogicalExpression expr) | Returns a plain field path if the incoming index expression 'expr' is replaced by a plain field.
SchemaPath | MapRDBFunctionalIndexInfo.getNewPathFromExpr(LogicalExpression expr) | Returns a plain field path if the incoming index expression 'expr' is replaced by a plain field.

Modifier and Type | Method | Description
---|---|---
Set<SchemaPath> | FunctionalIndexInfo.allNewSchemaPaths() |
Set<SchemaPath> | MapRDBFunctionalIndexInfo.allNewSchemaPaths() |
Set<SchemaPath> | FunctionalIndexInfo.allPathsInFunction() |
Set<SchemaPath> | MapRDBFunctionalIndexInfo.allPathsInFunction() |
Map<LogicalExpression,Set<SchemaPath>> | FunctionalIndexInfo.getPathsInFunctionExpr() |
Map<LogicalExpression,Set<SchemaPath>> | MapRDBFunctionalIndexInfo.getPathsInFunctionExpr() | When the index key contains functions rather than plain columns, e.g. CAST(a as int) or CAST(b as varchar(10)), maintains a mapping from the logical expression of each function to the schema paths of the base columns involved in that function.
List<SchemaPath> | IndexCallContext.getScanColumns() |
List<SchemaPath> | IndexLogicalPlanCallContext.getScanColumns() |
List<SchemaPath> | IndexPhysicalPlanCallContext.getScanColumns() |
static List<SchemaPath> | IndexPlanUtils.rewriteFunctionColumn(List<SchemaPath> paths, FunctionalIndexInfo functionInfo, List<SchemaPath> addedPaths) | For IndexGroupScan: a column that appears only in a function subject to renaming is a to-be-replaced column, and its SchemaPath is rewritten from 'a.b' to '$1' in the list of SchemaPath.

Modifier and Type | Method | Description
---|---|---
SchemaPath | FunctionalIndexInfo.getNewPath(SchemaPath path) | For an original path, returns the renamed '$N' path; note that there may be multiple renamed paths if multiple functional indexes refer to the original path.
SchemaPath | MapRDBFunctionalIndexInfo.getNewPath(SchemaPath path) | For an original path, returns the renamed '$N' path; note that there may be multiple renamed paths if multiple functional indexes refer to the original path.
boolean | AbstractIndexCollection.isColumnIndexed(SchemaPath path) |
boolean | IndexCollection.isColumnIndexed(SchemaPath path) | Checks whether the field name is the leading key of any of the indexes in this collection.
boolean | DrillIndexDefinition.pathExactIn(SchemaPath path, Collection<LogicalExpression> exprs) |
static boolean | IndexPlanUtils.pathOnlyInIndexedFunction(SchemaPath path) |
Boolean | PathInExpr.visitSchemaPath(SchemaPath path, Void value) |
org.apache.calcite.rex.RexNode | ExprToRex.visitSchemaPath(SchemaPath path, Void value) |

Modifier and Type | Method | Description
---|---|---
static org.apache.calcite.rel.type.RelDataType | FunctionalIndexHelper.rewriteFunctionalRowType(org.apache.calcite.rel.RelNode origScan, IndexCallContext indexContext, FunctionalIndexInfo functionInfo, Collection<SchemaPath> addedPaths) | If a field in the rowType serves only the to-be-replaced column(s), it is replaced with the new name "$1"; otherwise the dataTypeField is kept and a new one is added for "$1".
static List<SchemaPath> | IndexPlanUtils.rewriteFunctionColumn(List<SchemaPath> paths, FunctionalIndexInfo functionInfo, List<SchemaPath> addedPaths) | For IndexGroupScan: a column that appears only in a function subject to renaming is a to-be-replaced column, and its SchemaPath is rewritten from 'a.b' to '$1' in the list of SchemaPath.

Constructor | Description
---|---
PathInExpr(Map<LogicalExpression,Set<SchemaPath>> pathsInExpr) |

Modifier and Type | Method | Description
---|---|---
protected boolean | AbstractIndexPlanGenerator.checkRowKey(List<SchemaPath> columns) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | DrillScanRel.getColumns() |
static List<SchemaPath> | DrillScanRel.getProjectedColumns(org.apache.calcite.plan.RelOptTable table, boolean isSelectStar) |

Modifier and Type | Method | Description
---|---|---
void | ScanFieldDeterminer.FieldList.addProjected(SchemaPath path) |
void | ScanFieldDeterminer.FieldList.addReferenced(SchemaPath path) |

Modifier and Type | Method | Description
---|---|---
void | ScanFieldDeterminer.FieldList.addProjected(Collection<SchemaPath> paths) |
void | ScanFieldDeterminer.FieldList.addReferenced(Collection<SchemaPath> paths) |

Constructor | Description
---|---
DrillScanRel(org.apache.calcite.plan.RelOptCluster cluster, org.apache.calcite.plan.RelTraitSet traits, org.apache.calcite.plan.RelOptTable table, GroupScan groupScan, org.apache.calcite.rel.type.RelDataType rowType, List<SchemaPath> columns) | Creates a DrillScanRel for a particular GroupScan.
DrillScanRel(org.apache.calcite.plan.RelOptCluster cluster, org.apache.calcite.plan.RelTraitSet traits, org.apache.calcite.plan.RelOptTable table, GroupScan groupScan, org.apache.calcite.rel.type.RelDataType rowType, List<SchemaPath> columns, boolean partitionFilterPushdown) | Creates a DrillScanRel for a particular GroupScan.
DrillScanRel(org.apache.calcite.plan.RelOptCluster cluster, org.apache.calcite.plan.RelTraitSet traits, org.apache.calcite.plan.RelOptTable table, org.apache.calcite.rel.type.RelDataType rowType, List<SchemaPath> columns) |
DrillScanRel(org.apache.calcite.plan.RelOptCluster cluster, org.apache.calcite.plan.RelTraitSet traits, org.apache.calcite.plan.RelOptTable table, org.apache.calcite.rel.type.RelDataType rowType, List<SchemaPath> columns, boolean partitionFilterPushdown) |

Modifier and Type | Method | Description
---|---|---
TypeProtos.MajorType | HivePartitionDescriptor.getVectorType(SchemaPath column, PlannerSettings plannerSettings) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | SqlMetastoreAnalyzeTable.getFieldNames() |

Modifier and Type | Method | Description
---|---|---
static List<SchemaPath> | SchemaUtil.getSchemaPaths(TupleMetadata schema) | Returns a list of SchemaPath for the fields taken from the specified schema.
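A minimal sketch of deriving the SchemaPath list from a declared schema; the SchemaBuilder used to create the TupleMetadata and the package location of SchemaUtil are assumptions.

```java
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;
import org.apache.drill.exec.util.SchemaUtil; // package assumed

public class SchemaPathsFromSchemaSketch {
  public static void main(String[] args) {
    // Describe a two-column schema...
    TupleMetadata schema = new SchemaBuilder()
        .add("id", MinorType.INT)
        .add("name", MinorType.VARCHAR)
        .buildSchema();

    // ...and derive the corresponding SchemaPath list from it.
    List<SchemaPath> paths = SchemaUtil.getSchemaPaths(schema);
    System.out.println(paths); // e.g. [`id`, `name`]
  }
}
```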
Modifier and Type | Method | Description
---|---|---
TypedFieldId | HyperVectorWrapper.getFieldIdIfMatches(int id, SchemaPath expectedPath) |
TypedFieldId | SimpleVectorWrapper.getFieldIdIfMatches(int id, SchemaPath expectedPath) |
TypedFieldId | VectorWrapper.getFieldIdIfMatches(int id, SchemaPath expectedPath) | Traverse the object graph and determine whether the provided SchemaPath matches data within the Wrapper.
TypedFieldId | RecordBatch.getValueVectorId(SchemaPath path) | Gets the value vector type and ID for the given schema path.
TypedFieldId | RecordIterator.getValueVectorId(SchemaPath path) |
TypedFieldId | SchemalessBatch.getValueVectorId(SchemaPath path) |
TypedFieldId | SimpleRecordBatch.getValueVectorId(SchemaPath path) |
TypedFieldId | VectorAccessible.getValueVectorId(SchemaPath path) | Get the value vector type and id for the given schema path.
TypedFieldId | RecordBatchLoader.getValueVectorId(SchemaPath path) |
TypedFieldId | AbstractRecordBatch.getValueVectorId(SchemaPath path) |
TypedFieldId | VectorContainer.getValueVectorId(SchemaPath path) |
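All the getValueVectorId overloads above resolve a column reference to a TypedFieldId inside a batch or container. A minimal sketch follows, with assumed package locations.

```java
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.record.TypedFieldId;
import org.apache.drill.exec.record.VectorAccessible;

public class ValueVectorLookupSketch {
  /**
   * Resolves a column name to its vector id within an incoming batch.
   * Returns null when the batch does not contain the column.
   */
  static TypedFieldId lookup(VectorAccessible batch, String columnName) {
    SchemaPath path = SchemaPath.getSimplePath(columnName);
    // The returned id carries the column's major type and its index
    // path within the container.
    return batch.getValueVectorId(path);
  }
}
```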
Modifier and Type | Field | Description
---|---|---
protected static List<SchemaPath> | AbstractRecordReader.DEFAULT_TEXT_COLS_TO_READ |

Modifier and Type | Method | Description
---|---|---
protected Collection<SchemaPath> | AbstractRecordReader.getColumns() |
protected List<SchemaPath> | AbstractRecordReader.getDefaultColumnsToRead() |
List<SchemaPath> | ColumnExplorer.getTableColumns() |
protected Collection<SchemaPath> | AbstractRecordReader.transformColumns(Collection<SchemaPath> projected) |

Modifier and Type | Method | Description
---|---|---
static boolean | ColumnExplorer.isPartitionColumn(OptionManager optionManager, SchemaPath column) | Checks whether the given column is a partition column.

Modifier and Type | Method | Description
---|---|---
AbstractGroupScan | AbstractStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns) |
AbstractGroupScan | StoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns) | Get the physical scan operator for the particular GroupScan (read) node.
AbstractGroupScan | AbstractStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns, SessionOptionManager options) |
AbstractGroupScan | StoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns, SessionOptionManager options) | Get the physical scan operator for the particular GroupScan (read) node.
AbstractGroupScan | AbstractStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns, SessionOptionManager options, MetadataProviderManager metadataProviderManager) |
AbstractGroupScan | StoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns, SessionOptionManager options, MetadataProviderManager providerManager) | Get the physical scan operator for the particular GroupScan (read) node.
protected void | AbstractRecordReader.setColumns(Collection<SchemaPath> projected) |
protected Collection<SchemaPath> | AbstractRecordReader.transformColumns(Collection<SchemaPath> projected) |

Constructor | Description
---|---
ColumnExplorer(OptionManager optionManager, List<SchemaPath> columns) | Helper class that encapsulates the logic for sorting out columns between actual table columns, partition columns, and implicit file columns.
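A minimal sketch of ColumnExplorer splitting a projection into table columns versus partition and implicit file columns; dir0 and filename are Drill's default partition and implicit column names, and the package locations are assumptions.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.server.options.OptionManager;
import org.apache.drill.exec.store.ColumnExplorer;

public class ColumnExplorerSketch {
  /** Splits a projection into table columns vs. partition/implicit columns. */
  static List<SchemaPath> tableColumnsOnly(OptionManager options) {
    List<SchemaPath> projected = Arrays.asList(
        SchemaPath.getSimplePath("name"),
        SchemaPath.getSimplePath("dir0"),      // default partition column name
        SchemaPath.getSimplePath("filename")); // default implicit file column

    ColumnExplorer explorer = new ColumnExplorer(options, projected);
    // Only the real table columns remain, e.g. [`name`]
    return explorer.getTableColumns();
  }
}
```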
Constructor | Description
---|---
BsonRecordReader(DrillBuf managedBuf, List<SchemaPath> columns, boolean readNumbersAsDouble) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | EasySubScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
boolean | EasyGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | EasyGroupScan.clone(List<SchemaPath> columns) |
AbstractGroupScan | EasyFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns) |
AbstractGroupScan | EasyFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns, MetadataProviderManager metadataProviderManager) |
RecordReader | EasyFormatPlugin.getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName) | Return a record reader for the specific file format, when using the original ScanBatch scanner.

Constructor | Description
---|---
EasyGroupScan(String userName, FileSelection selection, EasyFormatPlugin<?> formatPlugin, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, int minWidth, MetadataProviderManager metadataProvider) |
EasyGroupScan(String userName, FileSelection selection, EasyFormatPlugin<?> formatPlugin, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, MetadataProviderManager metadataProviderManager) |
EasyGroupScan(String userName, List<org.apache.hadoop.fs.Path> files, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, StoragePluginRegistry engineRegistry, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, TupleMetadata schema, int limit) |
EasySubScan(String userName, List<CompleteFileWork.FileWorkImpl> files, EasyFormatPlugin<?> plugin, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, int partitionDepth, TupleMetadata schema, int limit) |
EasySubScan(String userName, List<CompleteFileWork.FileWorkImpl> files, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, StoragePluginRegistry engineRegistry, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, int partitionDepth, TupleMetadata schema, int limit) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | DirectGroupScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
GroupScan | DirectGroupScan.clone(List<SchemaPath> columns) |
GroupScan | MetadataDirectGroupScan.clone(List<SchemaPath> columns) |

Modifier and Type | Method | Description
---|---|---
SchemaPath | DruidCompareFunctionProcessor.getPath() |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | DruidGroupScan.getColumns() |
List<SchemaPath> | DruidSubScan.getColumns() |
protected Collection<SchemaPath> | DruidRecordReader.transformColumns(Collection<SchemaPath> projectedColumns) |

Modifier and Type | Method | Description
---|---|---
Boolean | DruidCompareFunctionProcessor.visitSchemaPath(SchemaPath path, LogicalExpression valueArg) |

Modifier and Type | Method | Description
---|---|---
boolean | DruidGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | DruidGroupScan.clone(List<SchemaPath> columns) |
protected Collection<SchemaPath> | DruidRecordReader.transformColumns(Collection<SchemaPath> projectedColumns) |

Constructor | Description
---|---
DruidGroupScan(String userName, DruidScanSpec scanSpec, DruidStoragePluginConfig storagePluginConfig, List<SchemaPath> columns, int maxRecordsToRead, StoragePluginRegistry pluginRegistry) |
DruidGroupScan(String userName, DruidStoragePlugin storagePlugin, DruidScanSpec scanSpec, List<SchemaPath> columns, int maxRecordsToRead) |
DruidRecordReader(DruidSubScan.DruidSubScanSpec subScanSpec, List<SchemaPath> projectedColumns, int maxRecordsToRead, FragmentContext context, DruidStoragePlugin plugin) |
DruidSubScan(StoragePluginRegistry registry, String userName, StoragePluginConfig config, LinkedList<DruidSubScan.DruidSubScanSpec> datasourceScanSpecList, List<SchemaPath> columns, int maxRecordsToRead) |
DruidSubScan(String userName, DruidStoragePlugin plugin, List<DruidSubScan.DruidSubScanSpec> dataSourceInfoList, List<SchemaPath> columns, int maxRecordsToRead) |

Modifier and Type | Method | Description
---|---|---
protected List<SchemaPath> | JSONRecordReader.getDefaultColumnsToRead() |

Modifier and Type | Method | Description
---|---|---
RecordReader | JSONFormatPlugin.getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName) |

Constructor | Description
---|---
JSONRecordReader(FragmentContext fragmentContext, com.fasterxml.jackson.databind.JsonNode embeddedContent, DrillFileSystem fileSystem, List<SchemaPath> columns) | Create a new JSON record reader that uses an in-memory materialized JSON stream.
JSONRecordReader(FragmentContext fragmentContext, List<SchemaPath> columns) | Create a JSON record reader that uses an InputStream directly.
JSONRecordReader(FragmentContext fragmentContext, org.apache.hadoop.fs.Path inputPath, DrillFileSystem fileSystem, List<SchemaPath> columns) | Create a JSON record reader that uses a file-based input stream.

Modifier and Type | Method | Description
---|---|---
AbstractGroupScan | TextFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns, MetadataProviderManager metadataProviderManager) |
AbstractGroupScan | TextFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns, OptionManager options, MetadataProviderManager metadataProviderManager) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | EnumerableGroupScan.getColumns() |
List<SchemaPath> | EnumerableSubScan.getColumns() |

Constructor | Description
---|---
EnumerableGroupScan(String code, List<SchemaPath> columns, Map<String,Integer> fieldsMap, double rows, TupleMetadata schema, String schemaPath, ColumnConverterFactoryProvider converterFactoryProvider) |
EnumerableRecordReader(List<SchemaPath> columns, Map<String,Integer> fieldsMap, String code, String schemaPath, ColumnConverterFactoryProvider factoryProvider) |
EnumerableSubScan(String code, List<SchemaPath> columns, Map<String,Integer> fieldsMap, TupleMetadata schema, String schemaPath, ColumnConverterFactoryProvider converterFactoryProvider) |

Modifier and Type | Field | Description
---|---|---
static SchemaPath | DrillHBaseConstants.ROW_KEY_PATH |

Modifier and Type | Method | Description
---|---|---
SchemaPath | CompareFunctionsProcessor.getPath() |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | HBaseGroupScan.getColumns() |
List<SchemaPath> | HBaseSubScan.getColumns() |
protected Collection<SchemaPath> | HBaseRecordReader.transformColumns(Collection<SchemaPath> columns) | Provides the projected columns information to the HBase Scan instance.

Modifier and Type | Method | Description
---|---|---
protected void | CompareFunctionsProcessor.setPath(SchemaPath path) |
Boolean | CompareFunctionsProcessor.visitSchemaPath(SchemaPath path, LogicalExpression valueArg) |

Modifier and Type | Method | Description
---|---|---
boolean | HBaseGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | HBaseGroupScan.clone(List<SchemaPath> columns) |
protected Collection<SchemaPath> | HBaseRecordReader.transformColumns(Collection<SchemaPath> columns) | Provides the projected columns information to the HBase Scan instance.
static void | HBaseUtils.verifyColumns(List<SchemaPath> columns, org.apache.hadoop.hbase.HTableDescriptor hTableDesc) | Verifies the presence of a column family in the schema path of the HBase table, or whether the schema path is the row-key column.

Constructor | Description
---|---
HBaseGroupScan(String userName, HBaseScanSpec hbaseScanSpec, HBaseStoragePluginConfig storagePluginConfig, List<SchemaPath> columns, StoragePluginRegistry pluginRegistry) |
HBaseGroupScan(String userName, HBaseStoragePlugin storagePlugin, HBaseScanSpec scanSpec, List<SchemaPath> columns) |
HBaseRecordReader(org.apache.hadoop.hbase.client.Connection connection, HBaseSubScan.HBaseSubScanSpec subScanSpec, List<SchemaPath> projectedColumns) |
HBaseSubScan(StoragePluginRegistry registry, String userName, HBaseStoragePluginConfig hbaseStoragePluginConfig, LinkedList<HBaseSubScan.HBaseSubScanSpec> regionScanSpecList, List<SchemaPath> columns) |
HBaseSubScan(String userName, HBaseStoragePlugin hbaseStoragePlugin, List<HBaseSubScan.HBaseSubScanSpec> regionInfoList, List<SchemaPath> columns) |

Modifier and Type | Field | Description
---|---|---
protected List<SchemaPath> | HiveScan.columns |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | HiveScan.getColumns() |
List<SchemaPath> | HiveSubScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
boolean | HiveScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | HiveDrillNativeParquetScan.clone(List<SchemaPath> columns) |
GroupScan | HiveScan.clone(List<SchemaPath> columns) |
AbstractParquetRowGroupScan | HiveDrillNativeParquetRowGroupScan.copy(List<SchemaPath> columns) |
HiveScan | HiveStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns) |
HiveScan | HiveStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns, SessionOptionManager options) |

Constructor | Description
---|---
HiveDefaultRecordReader(HiveTableWithColumnCache table, HivePartition partition, Collection<org.apache.hadoop.mapred.InputSplit> inputSplits, List<SchemaPath> projectedColumns, FragmentContext context, org.apache.hadoop.hive.conf.HiveConf hiveConf, org.apache.hadoop.security.UserGroupInformation proxyUgi) | Reader constructor called by the initializer.
HiveTextRecordReader(HiveTableWithColumnCache table, HivePartition partition, Collection<org.apache.hadoop.mapred.InputSplit> inputSplits, List<SchemaPath> projectedColumns, FragmentContext context, org.apache.hadoop.hive.conf.HiveConf hiveConf, org.apache.hadoop.security.UserGroupInformation proxyUgi) | Constructor matching super.

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | HttpGroupScan.columns() |
List<SchemaPath> | HttpSubScan.columns() |

Modifier and Type | Method | Description
---|---|---
boolean | HttpGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | HttpGroupScan.clone(List<SchemaPath> columns) |

Constructor | Description
---|---
HttpGroupScan(HttpGroupScan that, List<SchemaPath> columns) | Applies columns.
HttpGroupScan(List<SchemaPath> columns, HttpScanSpec httpScanSpec, Map<String,String> filters, double selectivity, int maxRecords) | Deserialize a group scan.
HttpSubScan(HttpScanSpec tableSpec, List<SchemaPath> columns, Map<String,String> filters, int maxRecords) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | IcebergGroupScan.getColumns() |
List<SchemaPath> | IcebergSubScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
static String | IcebergGroupScan.getPath(SchemaPath schemaPath) |

Modifier and Type | Method | Description
---|---|---
IcebergGroupScan | IcebergGroupScan.clone(List<SchemaPath> columns) |
IcebergGroupScan.IcebergGroupScanBuilder | IcebergGroupScan.IcebergGroupScanBuilder.columns(List<SchemaPath> columns) |
IcebergSubScan.IcebergSubScanBuilder | IcebergSubScan.IcebergSubScanBuilder.columns(List<SchemaPath> columns) |
static org.apache.iceberg.TableScan | IcebergGroupScan.projectColumns(org.apache.iceberg.TableScan tableScan, List<SchemaPath> columns) |

Constructor | Description
---|---
IcebergGroupScan(String userName, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, List<SchemaPath> columns, TupleMetadata schema, String path, LogicalExpression condition, Integer maxRecords, StoragePluginRegistry pluginRegistry) |
IcebergSubScan(String userName, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, List<SchemaPath> columns, String path, List<IcebergWork> workList, TupleMetadata schema, LogicalExpression condition, Integer maxRecords, StoragePluginRegistry pluginRegistry) |

Modifier and Type | Method | Description
---|---|---
AbstractGroupScan | IcebergFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns) |
AbstractGroupScan | IcebergFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns, MetadataProviderManager metadataProviderManager) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | InfoSchemaGroupScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
InfoSchemaFilter.ExprNode | InfoSchemaFilterBuilder.visitSchemaPath(SchemaPath path, Void value) |

Modifier and Type | Method | Description
---|---|---
GroupScan | InfoSchemaGroupScan.clone(List<SchemaPath> columns) |
InfoSchemaGroupScan | InfoSchemaStoragePlugin.getPhysicalScan(String userName, JSONOptions selection, List<SchemaPath> columns) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | JdbcGroupScan.getColumns() |
List<SchemaPath> | JdbcSubScan.getColumns() |

Constructor | Description
---|---
JdbcBatchReader(DataSource source, String sql, List<SchemaPath> columns) |
JdbcGroupScan(String sql, List<SchemaPath> columns, StoragePluginConfig config, double rows, StoragePluginRegistry plugins) |
JdbcSubScan(String sql, List<SchemaPath> columns, StoragePluginConfig config, StoragePluginRegistry plugins) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | KafkaGroupScan.getColumns() |
List<SchemaPath> | KafkaSubScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
boolean | KafkaGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | KafkaGroupScan.clone(List<SchemaPath> columns) |

Constructor | Description
---|---
KafkaGroupScan(KafkaStoragePlugin kafkaStoragePlugin, KafkaScanSpec kafkaScanSpec, List<SchemaPath> columns) |
KafkaGroupScan(String userName, KafkaStoragePluginConfig kafkaStoragePluginConfig, List<SchemaPath> columns, KafkaScanSpec scanSpec, StoragePluginRegistry pluginRegistry) |
KafkaGroupScan(String userName, KafkaStoragePlugin kafkaStoragePlugin, List<SchemaPath> columns, KafkaScanSpec kafkaScanSpec) |
KafkaSubScan(StoragePluginRegistry registry, String userName, KafkaStoragePluginConfig kafkaStoragePluginConfig, List<SchemaPath> columns, LinkedList<KafkaPartitionScanSpec> partitionSubScanSpecList) |
KafkaSubScan(String userName, KafkaStoragePlugin kafkaStoragePlugin, List<SchemaPath> columns, List<KafkaPartitionScanSpec> partitionSubScanSpecList) |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | KuduSubScan.getColumns() |
List<SchemaPath> | KuduGroupScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
boolean | KuduGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan | KuduGroupScan.clone(List<SchemaPath> columns) |

Constructor | Description
---|---
KuduGroupScan(KuduScanSpec kuduScanSpec, KuduStoragePluginConfig kuduStoragePluginConfig, List<SchemaPath> columns, StoragePluginRegistry pluginRegistry) |
KuduGroupScan(KuduStoragePlugin kuduStoragePlugin, KuduScanSpec kuduScanSpec, List<SchemaPath> columns) |
KuduRecordReader(org.apache.kudu.client.KuduClient client, KuduSubScan.KuduSubScanSpec subScanSpec, List<SchemaPath> projectedColumns) |
KuduSubScan(KuduStoragePlugin plugin, List<KuduSubScan.KuduSubScanSpec> tabletInfoList, List<SchemaPath> columns) |
KuduSubScan(StoragePluginRegistry registry, KuduStoragePluginConfig kuduStoragePluginConfig, LinkedList<KuduSubScan.KuduSubScanSpec> tabletScanSpecList, List<SchemaPath> columns) |

Modifier and Type | Method | Description
---|---|---
protected Collection<SchemaPath> | LTSVRecordReader.transformColumns(Collection<SchemaPath> projected) |

Modifier and Type | Method | Description
---|---|---
RecordReader | LTSVFormatPlugin.getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName) |
protected Collection<SchemaPath> | LTSVRecordReader.transformColumns(Collection<SchemaPath> projected) |

Constructor | Description
---|---
LTSVRecordReader(FragmentContext fragmentContext, org.apache.hadoop.fs.Path path, DrillFileSystem fileSystem, List<SchemaPath> columns) |

Modifier and Type | Field | Description
---|---|---
static SchemaPath | PluginConstants.DOCUMENT_SCHEMA_PATH |
static SchemaPath | PluginConstants.ID_SCHEMA_PATH |

Modifier and Type | Field | Description
---|---|---
protected List<SchemaPath> | MapRDBGroupScan.columns |

Modifier and Type | Method | Description
---|---|---
List<SchemaPath> | MapRDBSubScan.getColumns() |
List<SchemaPath> | MapRDBGroupScan.getColumns() |

Modifier and Type | Method | Description
---|---|---
boolean | MapRDBGroupScan.canPushdownProjects(List<SchemaPath> columns) |
AbstractGroupScan | MapRDBFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns) |
AbstractGroupScan | MapRDBFormatPlugin.getGroupScan(String userName, FileSelection selection, List<SchemaPath> columns, com.mapr.db.index.IndexDesc indexDesc, MetadataProviderManager metadataProviderManager) |

Constructor | Description
---|---
MapRDBGroupScan(AbstractStoragePlugin storagePlugin, MapRDBFormatPlugin formatPlugin, List<SchemaPath> columns, String userName, TableMetadataProvider metadataProvider) |
MapRDBSubScan(StoragePluginRegistry engineRegistry, String userName, MapRDBFormatPluginConfig formatPluginConfig, StoragePluginConfig storageConfig, List<MapRDBSubScanSpec> regionScanSpecList, List<SchemaPath> columns, int maxRecordsToRead, String tableType, TupleMetadata schema) |
MapRDBSubScan(String userName, MapRDBFormatPlugin formatPlugin, List<MapRDBSubScanSpec> maprSubScanSpecs, List<SchemaPath> columns, int maxRecordsToRead, String tableType, TupleMetadata schema) |
MapRDBSubScan(String userName, MapRDBFormatPlugin formatPlugin, List<MapRDBSubScanSpec> maprSubScanSpecs, List<SchemaPath> columns, String tableType, TupleMetadata schema) |
RestrictedMapRDBSubScan(StoragePluginRegistry engineRegistry, String userName, MapRDBFormatPluginConfig formatPluginConfig, StoragePluginConfig storageConfig, List<RestrictedMapRDBSubScanSpec> regionScanSpecList, List<SchemaPath> columns, int maxRecordsToRead, String tableType, TupleMetadata schema) |
RestrictedMapRDBSubScan(String userName, MapRDBFormatPlugin formatPlugin, List<RestrictedMapRDBSubScanSpec> maprDbSubScanSpecs, List<SchemaPath> columns, int maxRecordsToRead, String tableType, TupleMetadata schema) |

Modifier and Type | Method | Description
---|---|---
GroupScan | BinaryTableGroupScan.clone(List<SchemaPath> columns) |

Constructor | Description
---|---
BinaryTableGroupScan(String userName, AbstractStoragePlugin storagePlugin, MapRDBFormatPlugin formatPlugin, HBaseScanSpec scanSpec, List<SchemaPath> columns, MapRDBTableStats tableStats, TableMetadataProvider metadataProvider) |
BinaryTableGroupScan(String userName, AbstractStoragePlugin storagePlugin, MapRDBFormatPlugin formatPlugin, HBaseScanSpec scanSpec, List<SchemaPath> columns, MetadataProviderManager metadataProviderManager) |
BinaryTableGroupScan(String userName, HBaseScanSpec scanSpec, FileSystemConfig storagePluginConfig, MapRDBFormatPluginConfig formatPluginConfig, List<SchemaPath> columns, TupleMetadata schema, StoragePluginRegistry pluginRegistry) |

Modifier and Type | Method | Description
---|---|---
static SchemaPath | FieldPathHelper.fieldPath2SchemaPath(org.ojai.FieldPath fieldPath) | Returns the SchemaPath equivalent of the specified FieldPath.
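A minimal sketch of converting between OJAI FieldPath and Drill SchemaPath via FieldPathHelper; the package location of FieldPathHelper is an assumption.

```java
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.store.mapr.db.util.FieldPathHelper; // package assumed
import org.ojai.FieldPath;

public class FieldPathConversionSketch {
  public static void main(String[] args) {
    // OJAI field path for a nested document field
    FieldPath fieldPath = FieldPath.parseFrom("address.city");

    // Convert to Drill's SchemaPath representation; the reverse direction
    // is covered by FieldPathHelper.schemaPath2FieldPath below.
    SchemaPath schemaPath = FieldPathHelper.fieldPath2SchemaPath(fieldPath);
    System.out.println(schemaPath);
  }
}
```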
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
JsonTableGroupScan.getColumns() |
protected Collection<SchemaPath> |
MaprDBJsonRecordReader.transformColumns(Collection<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
static org.ojai.FieldPath |
FieldPathHelper.schemaPath2FieldPath(SchemaPath column)
Returns
FieldPath equivalent of the specified SchemaPath . |
JsonScanSpec |
JsonConditionBuilder.visitSchemaPath(SchemaPath path,
Void value) |
Modifier and Type | Method and Description |
---|---|
boolean |
JsonTableGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan |
RestrictedJsonTableGroupScan.clone(List<SchemaPath> columns) |
GroupScan |
JsonTableGroupScan.clone(List<SchemaPath> columns) |
RestrictedJsonTableGroupScan |
JsonTableGroupScan.getRestrictedScan(List<SchemaPath> columns) |
void |
JsonTableGroupScan.setColumns(List<SchemaPath> columns) |
protected Collection<SchemaPath> |
MaprDBJsonRecordReader.transformColumns(Collection<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
static org.ojai.FieldPath |
FieldPathHelper.schemaPathToFieldPath(SchemaPath schemaPath)
Returns
FieldPath equivalent of the specified SchemaPath . |
Modifier and Type | Method and Description |
---|---|
AbstractGroupScan |
StreamsFormatPlugin.getGroupScan(String userName,
FileSelection selection,
List<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
boolean |
MockGroupScanPOP.canPushdownProjects(List<SchemaPath> columns) |
GroupScan |
MockGroupScanPOP.clone(List<SchemaPath> columns) |
AbstractGroupScan |
MockStorageEngine.getPhysicalScan(String userName,
JSONOptions selection,
List<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
SchemaPath |
MongoCompareFunctionProcessor.getPath() |
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
MongoSubScan.getColumns() |
List<SchemaPath> |
MongoGroupScan.getColumns() |
protected Collection<SchemaPath> |
MongoRecordReader.transformColumns(Collection<SchemaPath> projectedColumns) |
Modifier and Type | Method and Description |
---|---|
Boolean |
MongoCompareFunctionProcessor.visitSchemaPath(SchemaPath path,
LogicalExpression valueArg) |
Modifier and Type | Method and Description |
---|---|
boolean |
MongoGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan |
MongoGroupScan.clone(List<SchemaPath> columns) |
protected Collection<SchemaPath> |
MongoRecordReader.transformColumns(Collection<SchemaPath> projectedColumns) |
Constructor and Description |
---|
MongoGroupScan(String userName,
MongoScanSpec scanSpec,
MongoStoragePluginConfig storagePluginConfig,
List<SchemaPath> columns,
boolean useAggregate,
StoragePluginRegistry pluginRegistry) |
MongoGroupScan(String userName,
MongoStoragePlugin storagePlugin,
MongoScanSpec scanSpec,
List<SchemaPath> columns,
boolean useAggregate) |
MongoRecordReader(BaseMongoSubScanSpec subScanSpec,
List<SchemaPath> projectedColumns,
FragmentContext context,
MongoStoragePlugin plugin) |
MongoSubScan(StoragePluginRegistry registry,
String userName,
StoragePluginConfig mongoPluginConfig,
LinkedList<BaseMongoSubScanSpec> chunkScanSpecList,
List<SchemaPath> columns) |
MongoSubScan(String userName,
MongoStoragePlugin storagePlugin,
MongoStoragePluginConfig storagePluginConfig,
List<BaseMongoSubScanSpec> chunkScanSpecList,
List<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
OpenTSDBSubScan.getColumns() |
List<SchemaPath> |
OpenTSDBGroupScan.getColumns() |
Modifier and Type | Method and Description |
---|---|
boolean |
OpenTSDBGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan |
OpenTSDBGroupScan.clone(List<SchemaPath> columns) |
Constructor and Description |
---|
OpenTSDBGroupScan(OpenTSDBScanSpec openTSDBScanSpec,
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
List<SchemaPath> columns,
StoragePluginRegistry pluginRegistry) |
OpenTSDBGroupScan(OpenTSDBStoragePlugin storagePlugin,
OpenTSDBScanSpec scanSpec,
List<SchemaPath> columns) |
OpenTSDBRecordReader(Service client,
OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec,
List<SchemaPath> projectedColumns) |
OpenTSDBSubScan(OpenTSDBStoragePlugin plugin,
OpenTSDBStoragePluginConfig config,
List<OpenTSDBSubScan.OpenTSDBSubScanSpec> tabletInfoList,
List<SchemaPath> columns) |
OpenTSDBSubScan(StoragePluginRegistry registry,
OpenTSDBStoragePluginConfig storage,
LinkedList<OpenTSDBSubScan.OpenTSDBSubScanSpec> tabletScanSpecList,
List<SchemaPath> columns) |
Modifier and Type | Field and Description |
---|---|
protected List<SchemaPath> |
AbstractParquetRowGroupScan.columns |
Modifier and Type | Method and Description |
---|---|
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.addImplicitColumnsStatistics(Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
List<SchemaPath> columns,
List<String> partitionValues,
OptionManager optionManager,
org.apache.hadoop.fs.Path location,
boolean supportsFileImplicitColumns)
Creates new map based on specified
columnStatistics with added statistics
for implicit and partition (dir) columns. |
List<SchemaPath> |
AbstractParquetRowGroupScan.getColumns() |
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.getColumnStatistics(TupleMetadata schema,
DrillStatsTable statistics)
Returns map with schema path and
ColumnStatistics obtained from specified DrillStatsTable
for all columns from specified BaseTableMetadata . |
static Map<SchemaPath,TypeProtos.MajorType> |
ParquetTableMetadataUtils.getFileFields(MetadataBase.ParquetTableMetadataBase parquetTableMetadata,
MetadataBase.ParquetFileMetadata file)
Returns map of column names with their drill types for specified
file . |
static Map<SchemaPath,TypeProtos.MajorType> |
ParquetTableMetadataUtils.getIntermediateFields(MetadataBase.ParquetTableMetadataBase parquetTableMetadata,
MetadataBase.RowGroupMetadata rowGroup)
Returns map of column names with their Drill types for every
NameSegment in SchemaPath
in specified rowGroup . |
List<SchemaPath> |
BaseParquetMetadataProvider.getPartitionColumns() |
List<SchemaPath> |
ParquetGroupScanStatistics.getPartitionColumns() |
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.getRowGroupColumnStatistics(MetadataBase.ParquetTableMetadataBase tableMetadata,
MetadataBase.RowGroupMetadata rowGroupMetadata)
Converts specified
MetadataBase.RowGroupMetadata into the map of ColumnStatistics
instances with column names as keys. |
static Map<SchemaPath,TypeProtos.MajorType> |
ParquetTableMetadataUtils.getRowGroupFields(MetadataBase.ParquetTableMetadataBase parquetTableMetadata,
MetadataBase.RowGroupMetadata rowGroup)
Returns map of column names with their drill types for specified
rowGroup . |
Set<SchemaPath> |
FilterEvaluatorUtils.FieldReferenceFinder.visitSchemaPath(SchemaPath path,
Void value) |
Set<SchemaPath> |
FilterEvaluatorUtils.FieldReferenceFinder.visitUnknown(LogicalExpression e,
Void value) |
Modifier and Type | Method and Description |
---|---|
long |
ParquetGroupScanStatistics.getColumnValueCount(SchemaPath column) |
List<PartitionMetadata> |
BaseParquetMetadataProvider.getPartitionMetadata(SchemaPath columnName) |
static PartitionMetadata |
ParquetTableMetadataUtils.getPartitionMetadata(SchemaPath partitionColumn,
List<FileMetadata> files)
Returns
PartitionMetadata instance received by merging specified FileMetadata list. |
Map<org.apache.hadoop.fs.Path,Object> |
ParquetGroupScanStatistics.getPartitionPaths(SchemaPath column) |
Object |
ParquetGroupScanStatistics.getPartitionValue(org.apache.hadoop.fs.Path path,
SchemaPath column) |
TypeProtos.MajorType |
ParquetGroupScanStatistics.getTypeForColumn(SchemaPath schemaPath) |
Set<SchemaPath> |
FilterEvaluatorUtils.FieldReferenceFinder.visitSchemaPath(SchemaPath path,
Void value) |
Modifier and Type | Method and Description |
---|---|
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.addImplicitColumnsStatistics(Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
List<SchemaPath> columns,
List<String> partitionValues,
OptionManager optionManager,
org.apache.hadoop.fs.Path location,
boolean supportsFileImplicitColumns)
Creates new map based on specified
columnStatistics with added statistics
for implicit and partition (dir) columns. |
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.addImplicitColumnsStatistics(Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
List<SchemaPath> columns,
List<String> partitionValues,
OptionManager optionManager,
org.apache.hadoop.fs.Path location,
boolean supportsFileImplicitColumns)
Creates new map based on specified
columnStatistics with added statistics
for implicit and partition (dir) columns. |
boolean |
AbstractParquetGroupScan.canPushdownProjects(List<SchemaPath> columns) |
static ParquetReaderUtility.DateCorruptionStatus |
ParquetReaderUtility.checkForCorruptDateValuesInStatistics(org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns,
boolean autoCorrectCorruptDates)
Detect corrupt date values by looking at the min/max values in the metadata.
|
GroupScan |
ParquetGroupScan.clone(List<SchemaPath> columns) |
static boolean |
ParquetReaderUtility.containsComplexColumn(org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns)
Check whether any of columns in the given list is either nested or repetitive.
|
abstract AbstractParquetRowGroupScan |
AbstractParquetRowGroupScan.copy(List<SchemaPath> columns) |
AbstractParquetRowGroupScan |
ParquetRowGroupScan.copy(List<SchemaPath> columns) |
static ParquetReaderUtility.DateCorruptionStatus |
ParquetReaderUtility.detectCorruptDates(org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns,
boolean autoCorrectCorruptDates)
Checks for corrupted dates in a Parquet file.
|
protected void |
AbstractParquetGroupScan.RowGroupScanFilterer.filterFileMetadata(OptionManager optionManager,
FilterPredicate<?> filterPredicate,
Set<SchemaPath> schemaPathsInExpr)
Filters metadata at the file level.
|
AbstractFileGroupScan |
ParquetFormatPlugin.getGroupScan(String userName,
FileSelection selection,
List<SchemaPath> columns) |
AbstractFileGroupScan |
ParquetFormatPlugin.getGroupScan(String userName,
FileSelection selection,
List<SchemaPath> columns,
OptionManager options) |
AbstractFileGroupScan |
ParquetFormatPlugin.getGroupScan(String userName,
FileSelection selection,
List<SchemaPath> columns,
OptionManager options,
MetadataProviderManager metadataProviderManager) |
static <T extends Comparable<T>> |
FilterEvaluatorUtils.matches(FilterPredicate<T> parquetPredicate,
Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
long rowCount,
TupleMetadata fileMetadata,
Set<SchemaPath> schemaPathsInExpr) |
static RowsMatch |
FilterEvaluatorUtils.matches(LogicalExpression expr,
Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
TupleMetadata schema,
long rowCount,
UdfUtilities udfUtilities,
FunctionLookupContext functionImplementationRegistry,
Set<SchemaPath> schemaPathsInExpr) |
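
The matches overloads above reduce row-group pruning to a three-way answer. A hedged sketch of the decision, assuming RowsMatch carries the values ALL, NONE, and SOME as in current Drill; all parameters are assumed to be prepared elsewhere by the scan:

```java
// (org.apache.drill.* imports elided)
// Hypothetical helper: classify a row group against a pushed-down filter
// using only its column statistics.
static RowsMatch classify(LogicalExpression filterExpr,
                          Map<SchemaPath, ColumnStatistics<?>> columnsStatistics,
                          TupleMetadata schema, long rowCount,
                          UdfUtilities udfUtilities,
                          FunctionLookupContext functionRegistry,
                          Set<SchemaPath> referencedColumns) {
  RowsMatch match = FilterEvaluatorUtils.matches(filterExpr, columnsStatistics,
      schema, rowCount, udfUtilities, functionRegistry, referencedColumns);
  // NONE -> statistics prove no row passes: skip the row group entirely.
  // ALL  -> every row passes: the filter can be dropped for this group.
  // SOME -> the group must be read and filtered row by row.
  return match;
}
```
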
Modifier and Type | Method and Description |
---|---|
protected List<SchemaPath> |
ParquetRecordReader.getDefaultColumnsToRead() |
Constructor and Description |
---|
ParquetRecordReader(FragmentContext fragmentContext,
long numRecordsToRead,
org.apache.hadoop.fs.Path path,
int rowGroupIndex,
org.apache.hadoop.fs.FileSystem fs,
org.apache.parquet.compression.CompressionCodecFactory codecFactory,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns,
ParquetReaderUtility.DateCorruptionStatus dateCorruptionStatus) |
ParquetRecordReader(FragmentContext fragmentContext,
org.apache.hadoop.fs.Path path,
int rowGroupIndex,
org.apache.hadoop.fs.FileSystem fs,
org.apache.parquet.compression.CompressionCodecFactory codecFactory,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns,
ParquetReaderUtility.DateCorruptionStatus dateCorruptionStatus) |
ParquetRecordReader(FragmentContext fragmentContext,
org.apache.hadoop.fs.Path path,
int rowGroupIndex,
long numRecordsToRead,
org.apache.hadoop.fs.FileSystem fs,
org.apache.parquet.compression.CompressionCodecFactory codecFactory,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
List<SchemaPath> columns,
ParquetReaderUtility.DateCorruptionStatus dateCorruptionStatus) |
ParquetSchema(OptionManager options,
int rowGroupIndex,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
Collection<SchemaPath> selectedCols)
Builds the Parquet schema.
|
Modifier and Type | Field and Description |
---|---|
SchemaPath |
Metadata_V1.ColumnMetadata_v1.name |
Modifier and Type | Method and Description |
---|---|
static void |
Metadata.createMeta(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
ParquetReaderConfig readerConfig,
boolean allColumnsInteresting,
Set<SchemaPath> columnSet)
Creates the Parquet metadata file for the directory at the given path and for any subdirectories.
|
static Metadata_V4.ParquetFileAndRowCountMetadata |
Metadata.getParquetFileMetadata_v4(Metadata_V4.ParquetTableMetadata_v4 parquetTableMetadata,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
org.apache.hadoop.fs.FileStatus file,
org.apache.hadoop.fs.FileSystem fs,
boolean allColumnsInteresting,
boolean skipNonInteresting,
Set<SchemaPath> columnSet,
ParquetReaderConfig readerConfig)
Gets the metadata for a single file.
|
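
createMeta drives generation of the Parquet metadata cache. A hedged sketch of a caller, assuming that an empty columnSet combined with allColumnsInteresting = true means "collect statistics for every column" (an inference from the signature, not a documented guarantee):

```java
// (org.apache.drill.* and org.apache.hadoop.fs.* imports elided)
// Hypothetical helper: write the metadata cache for a Parquet table
// directory and its subdirectories.
static void writeCache(FileSystem fs, Path tableDir,
                       ParquetReaderConfig readerConfig) throws IOException {
  Metadata.createMeta(fs, tableDir, readerConfig,
      true,                     // allColumnsInteresting
      Collections.emptySet());  // columnSet: no explicit restriction
}
```
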
Constructor and Description |
---|
ColumnMetadata_v1(SchemaPath name,
org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName primitiveType,
org.apache.parquet.schema.OriginalType originalType,
Object max,
Object min,
Long nulls) |
Key(SchemaPath name) |
Key(SchemaPath name) |
Constructor and Description |
---|
FileMetadataCollector(org.apache.parquet.hadoop.metadata.ParquetMetadata metadata,
org.apache.hadoop.fs.FileStatus file,
org.apache.hadoop.fs.FileSystem fs,
boolean allColumnsInteresting,
boolean skipNonInteresting,
Set<SchemaPath> columnSet,
ParquetReaderConfig readerConfig) |
Constructor and Description |
---|
DrillParquetGroupConverter(OutputMutator mutator,
BaseWriter baseWriter,
org.apache.parquet.schema.GroupType schema,
Collection<SchemaPath> columns,
OptionManager options,
ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates,
boolean skipRepeated,
String parentName)
The constructor builds the converter tree and may invoke itself recursively to
create child converters when a nested field is itself a group type.
|
DrillParquetReader(FragmentContext fragmentContext,
org.apache.parquet.hadoop.metadata.ParquetMetadata footer,
RowGroupReadEntry entry,
List<SchemaPath> columns,
DrillFileSystem fileSystem,
ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates,
long recordsToRead) |
DrillParquetRecordMaterializer(OutputMutator mutator,
org.apache.parquet.schema.MessageType schema,
Collection<SchemaPath> columns,
OptionManager options,
ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates) |
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
PhoenixGroupScan.columns() |
List<SchemaPath> |
PhoenixSubScan.getColumns() |
Modifier and Type | Method and Description |
---|---|
GroupScan |
PhoenixGroupScan.clone(List<SchemaPath> columns) |
Constructor and Description |
---|
PhoenixGroupScan(PhoenixGroupScan scan,
List<SchemaPath> columns) |
PhoenixGroupScan(String user,
String sql,
List<SchemaPath> columns,
PhoenixScanSpec scanSpec,
PhoenixStoragePlugin plugin) |
PhoenixGroupScan(String userName,
String sql,
List<SchemaPath> columns,
PhoenixScanSpec scanSpec,
PhoenixStoragePluginConfig config,
StoragePluginRegistry plugins) |
PhoenixSubScan(String userName,
String sql,
List<SchemaPath> columns,
PhoenixScanSpec scanSpec,
PhoenixStoragePlugin plugin) |
PhoenixSubScan(String userName,
String sql,
List<SchemaPath> columns,
PhoenixScanSpec scanSpec,
StoragePluginConfig config,
StoragePluginRegistry registry) |
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
SplunkGroupScan.columns() |
List<SchemaPath> |
SplunkSubScan.getColumns() |
Modifier and Type | Method and Description |
---|---|
boolean |
SplunkGroupScan.canPushdownProjects(List<SchemaPath> columns) |
GroupScan |
SplunkGroupScan.clone(List<SchemaPath> columns) |
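
canPushdownProjects and clone together implement projection pushdown: the planner offers the scan a narrower column list and, if the scan accepts it, swaps in a clone that carries it. An illustrative sketch with hypothetical Splunk field names:

```java
// (org.apache.drill.* imports elided)
static GroupScan pushProjection(SplunkGroupScan scan) {
  List<SchemaPath> projected = Arrays.asList(
      SchemaPath.getSimplePath("host"),         // hypothetical fields
      SchemaPath.getSimplePath("sourcetype"));
  // Ask the scan whether it can honor the narrower projection; if so,
  // replace it with a clone carrying that column list.
  return scan.canPushdownProjects(projected) ? scan.clone(projected) : scan;
}
```
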
Constructor and Description |
---|
SplunkGroupScan(SplunkGroupScan that,
List<SchemaPath> columns)
Applies a column projection to an existing group scan.
|
SplunkGroupScan(SplunkPluginConfig config,
List<SchemaPath> columns,
SplunkScanSpec splunkScanSpec,
Map<String,ExprNode.ColRelOpConstNode> filters,
double selectivity,
int maxRecords)
Deserializes a group scan.
|
SplunkSubScan(SplunkPluginConfig config,
SplunkScanSpec splunkScanSpec,
List<SchemaPath> columns,
Map<String,ExprNode.ColRelOpConstNode> filters,
int maxRecords) |
Modifier and Type | Method and Description |
---|---|
List<SchemaPath> |
SystemTableScan.getColumns() |
Modifier and Type | Method and Description |
---|---|
GroupScan |
SystemTableScan.clone(List<SchemaPath> columns) |
AbstractGroupScan |
SystemTablePlugin.getPhysicalScan(String userName,
JSONOptions selection,
List<SchemaPath> columns) |
Modifier and Type | Method and Description |
---|---|
static Collection<SchemaPath> |
EncodedSchemaPathSet.decode(Collection<SchemaPath> encodedPaths)
Returns the decoded Collection of SchemaPath from the input, which
may contain a mix of encoded and non-encoded SchemaPaths.
|
Modifier and Type | Method and Description |
---|---|
static boolean |
EncodedSchemaPathSet.isEncodedSchemaPath(SchemaPath schemaPath) |
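
decode and isEncodedSchemaPath allow a projection list to carry entire path sets in encoded form. A short sketch, assuming decode passes plain paths through unchanged:

```java
// (org.apache.drill.* imports elided)
static Collection<SchemaPath> normalize(Collection<SchemaPath> projected) {
  boolean anyEncoded = projected.stream()
      .anyMatch(EncodedSchemaPathSet::isEncodedSchemaPath);
  // Expand encoded path sets into plain SchemaPaths; leave an already
  // plain projection untouched.
  return anyEncoded ? EncodedSchemaPathSet.decode(projected) : projected;
}
```
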
Modifier and Type | Method and Description |
---|---|
static Collection<SchemaPath> |
EncodedSchemaPathSet.decode(Collection<SchemaPath> encodedPaths)
Returns the decoded Collection of SchemaPath from the input, which
may contain a mix of encoded and non-encoded SchemaPaths.
|
static boolean |
Utilities.isStarQuery(Collection<SchemaPath> projected)
Returns true if the list of schema paths contains the star column.
|
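
isStarQuery is the standard wildcard check readers apply before deciding whether to project selectively. A minimal sketch, assuming SchemaPath.STAR_COLUMN denotes the dynamic star column:

```java
// (org.apache.drill.* imports elided)
static boolean selectsEverything(Collection<SchemaPath> projected) {
  // True when the projection contains the star column, i.e. SELECT *.
  return Utilities.isStarQuery(projected);
}

// e.g. selectsEverything(Collections.singletonList(SchemaPath.STAR_COLUMN))
// would return true.
```
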
Modifier and Type | Method and Description |
---|---|
static TypedFieldId |
FieldIdUtil.getFieldId(ValueVector vector,
int id,
SchemaPath expectedPath,
boolean hyper) |
Modifier and Type | Method and Description |
---|---|
static void |
JsonReaderUtils.ensureAtLeastOneField(BaseWriter.ComplexWriter writer,
Collection<SchemaPath> columns,
boolean allTextMode,
List<BaseWriter.ListWriter> emptyArrayWriters) |
static FieldSelection |
FieldSelection.getFieldSelection(List<SchemaPath> fields)
Generates a field selection based on a list of fields.
|
JsonReader.Builder |
JsonReader.Builder.schemaPathColumns(List<SchemaPath> columns) |
static void |
JsonReaderUtils.writeColumnsUsingSchema(BaseWriter.ComplexWriter writer,
Collection<SchemaPath> columns,
TupleMetadata schema,
boolean allTextMode)
Creates writers corresponding to the specified schema under the specified root writer.
|
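
getFieldSelection converts a projected column list into the selection tree a JSON reader consults while parsing, so unprojected fields can be skipped. A hedged sketch with hypothetical column names:

```java
// (org.apache.drill.* imports elided)
static FieldSelection projectedSelection() {
  List<SchemaPath> columns = Arrays.asList(
      SchemaPath.getSimplePath("id"),                  // top-level column
      SchemaPath.getCompoundPath("address", "city"));  // nested field
  return FieldSelection.getFieldSelection(columns);
}
```
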
Modifier and Type | Field and Description |
---|---|
protected Map<SchemaPath,ColumnStatistics<?>> |
BaseMetadata.columnsStatistics |
protected Map<SchemaPath,ColumnStatistics<?>> |
BaseMetadata.BaseMetadataBuilder.columnsStatistics |
Modifier and Type | Method and Description |
---|---|
SchemaPath |
PartitionMetadata.getColumn()
Returns the column path for this partition.
|
SchemaPath |
SegmentMetadata.getColumn() |
Modifier and Type | Method and Description |
---|---|
Map<SchemaPath,ColumnStatistics<?>> |
BaseMetadata.getColumnsStatistics() |
Map<SchemaPath,ColumnStatistics<?>> |
Metadata.getColumnsStatistics()
Returns statistics stored in the current metadata, represented
as a map of column
SchemaPaths to the corresponding ColumnStatistics. |
Map<SchemaPath,ColumnStatistics<?>> |
NonInterestingColumnsMetadata.getColumnsStatistics() |
List<SchemaPath> |
BaseTableMetadata.getInterestingColumns() |
List<SchemaPath> |
TableMetadata.getInterestingColumns() |
List<SchemaPath> |
TableMetadataProvider.getPartitionColumns()
Returns the list of partition columns for the table from this
TableMetadataProvider. |
Modifier and Type | Method and Description |
---|---|
PartitionMetadata.PartitionMetadataBuilder |
PartitionMetadata.PartitionMetadataBuilder.column(SchemaPath column) |
SegmentMetadata.SegmentMetadataBuilder |
SegmentMetadata.SegmentMetadataBuilder.column(SchemaPath column) |
ColumnMetadata |
BaseMetadata.getColumn(SchemaPath name) |
ColumnMetadata |
Metadata.getColumn(SchemaPath name)
Returns the metadata description for the specified column.
|
ColumnMetadata |
NonInterestingColumnsMetadata.getColumn(SchemaPath name) |
ColumnStatistics<?> |
BaseMetadata.getColumnStatistics(SchemaPath columnName) |
ColumnStatistics<?> |
Metadata.getColumnStatistics(SchemaPath columnName)
Returns statistics for the specified column stored in the current metadata.
|
ColumnStatistics<?> |
NonInterestingColumnsMetadata.getColumnStatistics(SchemaPath columnName) |
List<PartitionMetadata> |
TableMetadataProvider.getPartitionMetadata(SchemaPath columnName)
Returns a list of
PartitionMetadata instances which correspond to the partitions for the specified column
and provide metadata for specific partitions and their columns. |
<V> V |
BaseMetadata.getStatisticsForColumn(SchemaPath columnName,
StatisticsKind<V> statisticsKind) |
<V> V |
Metadata.getStatisticsForColumn(SchemaPath columnName,
StatisticsKind<V> statisticsKind)
Returns the value of the column statistic which corresponds to the specified
StatisticsKind
for the column with the specified columnName. |
<V> V |
NonInterestingColumnsMetadata.getStatisticsForColumn(SchemaPath columnName,
StatisticsKind<V> statisticsKind) |
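
getColumnStatistics and getStatisticsForColumn are the two granularities at which metadata consumers read statistics: the whole ColumnStatistics object, or one statistic selected by kind. A hedged sketch; ColumnStatisticsKind.NULLS_COUNT is assumed to exist as a StatisticsKind<Long>, and the column name is hypothetical:

```java
// (org.apache.drill.* imports elided)
static void inspect(Metadata metadata) {
  SchemaPath column = SchemaPath.getSimplePath("amount");
  // Full statistics object for the column (min/max/nulls, etc.).
  ColumnStatistics<?> stats = metadata.getColumnStatistics(column);
  // A single statistic value, selected by kind.
  Long nulls = metadata.getStatisticsForColumn(column,
      ColumnStatisticsKind.NULLS_COUNT);
}
```
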
Modifier and Type | Method and Description |
---|---|
BaseTableMetadata |
BaseTableMetadata.cloneWithStats(Map<SchemaPath,ColumnStatistics<?>> columnStatistics,
List<StatisticsHolder<?>> tableStatistics) |
TableMetadata |
TableMetadata.cloneWithStats(Map<SchemaPath,ColumnStatistics<?>> columnStatistics,
List<StatisticsHolder<?>> tableStatistics) |
T |
BaseMetadata.BaseMetadataBuilder.columnsStatistics(Map<SchemaPath,ColumnStatistics<?>> columnsStatistics) |
BaseTableMetadata.BaseTableMetadataBuilder |
BaseTableMetadata.BaseTableMetadataBuilder.interestingColumns(List<SchemaPath> interestingColumns) |
Constructor and Description |
---|
NonInterestingColumnsMetadata(Map<SchemaPath,ColumnStatistics<?>> columnsStatistics) |
Modifier and Type | Method and Description |
---|---|
static <T extends BaseMetadata> |
TableMetadataUtils.mergeColumnsStatistics(Collection<T> metadataList,
Set<SchemaPath> columns,
List<CollectableColumnStatisticsKind<?>> statisticsToCollect)
Merges the specified list of metadata into a map of
ColumnStatistics keyed by column. |
Modifier and Type | Method and Description |
---|---|
static void |
SchemaPathUtils.addColumnMetadata(TupleMetadata schema,
SchemaPath schemaPath,
TypeProtos.MajorType type,
Map<SchemaPath,TypeProtos.MajorType> types)
Adds a column with the specified schema path and type into the specified
TupleMetadata schema. |
static ColumnMetadata |
SchemaPathUtils.getColumnMetadata(SchemaPath schemaPath,
TupleMetadata schema)
Returns the
ColumnMetadata instance obtained from the specified TupleMetadata schema which corresponds to
the specified column schema path. |
static boolean |
SchemaPathUtils.isFieldNestedInDictOrRepeatedMap(SchemaPath schemaPath,
TupleMetadata schema)
Checks whether the field identified by the schema path is a child of either a
DICT or a REPEATED MAP. |
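
SchemaPathUtils bridges SchemaPath-addressed columns and TupleMetadata schemas. A minimal sketch, assuming Types.optional and TupleSchema behave as in current Drill; the types map is inferred from the signature to collect the Drill types of intermediate path segments:

```java
// (org.apache.drill.* imports elided)
static void addAndLookUp() {
  TupleMetadata schema = new TupleSchema();
  Map<SchemaPath, TypeProtos.MajorType> types = new HashMap<>();
  SchemaPath path = SchemaPath.getCompoundPath("address", "zip");
  // Materialize the nested column (and its intermediate map segments).
  SchemaPathUtils.addColumnMetadata(schema, path,
      Types.optional(TypeProtos.MinorType.VARCHAR), types);
  // Read it back, and check whether it sits inside a DICT or repeated MAP.
  ColumnMetadata column = SchemaPathUtils.getColumnMetadata(path, schema);
  boolean nested = SchemaPathUtils.isFieldNestedInDictOrRepeatedMap(path, schema);
}
```
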
Modifier and Type | Method and Description |
---|---|
static void |
SchemaPathUtils.addColumnMetadata(TupleMetadata schema,
SchemaPath schemaPath,
TypeProtos.MajorType type,
Map<SchemaPath,TypeProtos.MajorType> types)
Adds a column with the specified schema path and type into the specified
TupleMetadata schema. |
static <T extends BaseMetadata> |
TableMetadataUtils.mergeColumnsStatistics(Collection<T> metadataList,
Set<SchemaPath> columns,
List<CollectableColumnStatisticsKind<?>> statisticsToCollect)
Merges the specified list of metadata into a map of
ColumnStatistics keyed by column. |
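
mergeColumnsStatistics is how file- or row-group-level statistics get rolled up to coarser metadata levels. A hedged sketch; the return type is inferred from the description above, and the statistics-kind list (e.g. min/max/null counts) is assumed to be supplied by the caller:

```java
// (org.apache.drill.* imports elided)
static Map<SchemaPath, ColumnStatistics<?>> rollUp(
    Collection<FileMetadata> files,
    Set<SchemaPath> interestingColumns,
    List<CollectableColumnStatisticsKind<?>> statisticsToCollect) {
  return TableMetadataUtils.mergeColumnsStatistics(
      files, interestingColumns, statisticsToCollect);
}
```
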
Copyright © The Apache Software Foundation. All rights reserved.