Package | Description |
---|---|
org.apache.drill.exec.expr | Drill expression materialization and evaluation facilities. |
org.apache.drill.exec.expr.fn.impl | |
org.apache.drill.exec.metastore.store | |
org.apache.drill.exec.metastore.store.parquet | |
org.apache.drill.exec.physical.base | |
org.apache.drill.exec.physical.impl.scan.columns | Handles the special "columns" column used by the text reader, and available to similar readers. |
org.apache.drill.exec.physical.impl.scan.convert | Standard type conversion tools for the case in which the input types are the standard Java types already supported by the ValuesWriter interface. |
org.apache.drill.exec.physical.impl.scan.file | Handles optional file metadata columns: implicit columns and partition columns. |
org.apache.drill.exec.physical.impl.scan.framework | Defines the projection, vector continuity and other operations for a set of one or more readers. |
org.apache.drill.exec.physical.impl.scan.project | Provides run-time semantic analysis of the projection list for the scan operator. |
org.apache.drill.exec.physical.impl.scan.v3 | Provides the "version 3" scan framework (which can also be thought of as EVF version 2). |
org.apache.drill.exec.physical.impl.scan.v3.file | |
org.apache.drill.exec.physical.impl.scan.v3.lifecycle | Implements the details of the scan lifecycle for a set of readers, primarily the process of resolving the scan output schema from a variety of input schemas, then running each reader, each of which will produce some number of batches. |
org.apache.drill.exec.physical.impl.scan.v3.schema | Provides run-time semantic analysis of the projection list for the scan operator. |
org.apache.drill.exec.physical.resultSet | Provides a second-generation row set (AKA "record batch") writer used by client code to define the schema of a result set and to write data into the vectors backing a row set. |
org.apache.drill.exec.physical.resultSet.impl | Handles the details of the result set loader implementation. |
org.apache.drill.exec.physical.resultSet.model | The "row set model" provides a "dual" of the vector structure used to create, allocate and work with a collection of vectors. |
org.apache.drill.exec.physical.resultSet.model.hyper | Implementation of a row set model for hyper-batches. |
org.apache.drill.exec.physical.resultSet.model.single | Models the structure of a batch consisting of single vectors (as contrasted with a hyper batch); provides tools for metadata-based construction, allocation, reading and writing of the vectors. |
org.apache.drill.exec.physical.rowSet | Provides a set of tools to work with row sets. |
org.apache.drill.exec.record | |
org.apache.drill.exec.record.metadata | Provides a fluent schema builder. |
org.apache.drill.exec.record.metadata.schema | |
org.apache.drill.exec.record.metadata.schema.parser | |
org.apache.drill.exec.store.avro | |
org.apache.drill.exec.store.cassandra | |
org.apache.drill.exec.store.dfs.easy | |
org.apache.drill.exec.store.easy.json.loader | |
org.apache.drill.exec.store.easy.text.reader | Version 3 of the text reader. |
org.apache.drill.exec.store.elasticsearch | |
org.apache.drill.exec.store.enumerable | |
org.apache.drill.exec.store.hdf5.writers | |
org.apache.drill.exec.store.hive | |
org.apache.drill.exec.store.httpd | |
org.apache.drill.exec.store.iceberg | |
org.apache.drill.exec.store.iceberg.read | |
org.apache.drill.exec.store.log | |
org.apache.drill.exec.store.mapr.db | |
org.apache.drill.exec.store.mapr.db.binary | |
org.apache.drill.exec.store.mapr.db.json | |
org.apache.drill.exec.store.parquet | |
org.apache.drill.exec.store.pcap.schema | |
org.apache.drill.exec.store.syslog | |
org.apache.drill.exec.vector.accessor | Provides a light-weight, simplified set of column readers and writers that can be plugged into a variety of row-level readers and writers. |
org.apache.drill.exec.vector.accessor.reader | Provides the reader hierarchy as explained in the API package. |
org.apache.drill.exec.vector.accessor.writer | Implementation of the vector writers. |
org.apache.drill.exec.vector.complex.fn | |
org.apache.drill.metastore.metadata | |
org.apache.drill.metastore.util | |
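
Every entry below involves TupleMetadata, Drill's metadata description of a row or map schema. As a brief orientation, here is a minimal sketch (not taken from the Drill sources) of building one with the fluent SchemaBuilder listed under org.apache.drill.exec.record.metadata:

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;

public class SchemaBuilderSketch {
  public static TupleMetadata exampleSchema() {
    // Fluent construction of a row schema: two scalars plus a nested map.
    return new SchemaBuilder()
        .add("id", MinorType.INT)                 // required INT
        .addNullable("name", MinorType.VARCHAR)   // optional VARCHAR
        .addMap("address")                        // nested tuple ("map")
          .addNullable("city", MinorType.VARCHAR)
          .resumeSchema()                         // back to the row level
        .buildSchema();                           // materialize the TupleMetadata
  }
}
```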
Modifier and Type | Method and Description |
---|---|
static LogicalExpression |
ExpressionTreeMaterializer.materializeFilterExpr(LogicalExpression expr,
TupleMetadata fieldTypes,
ErrorCollector errorCollector,
FunctionLookupContext functionLookupContext) |
Modifier and Type | Method and Description |
---|---|
static TupleMetadata |
SchemaFunctions.getTupleMetadata(String serialized)
Wraps a static method from TupleMetadata to avoid
IncompatibleClassChangeError on JDK 9+. |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
MetastoreFileTableMetadataProvider.schema |
protected TupleMetadata |
MetastoreFileTableMetadataProvider.Builder.schema |
Modifier and Type | Method and Description |
---|---|
static TableMetadataProvider |
FileSystemMetadataProviderManager.getMetadataProviderForSchema(TupleMetadata schema)
Returns a TableMetadataProvider which provides the specified schema. |
T |
MetastoreFileTableMetadataProvider.Builder.withSchema(TupleMetadata schema) |
SimpleFileTableMetadataProvider.Builder |
SimpleFileTableMetadataProvider.Builder.withSchema(TupleMetadata schema) |
Modifier and Type | Method and Description |
---|---|
T |
ParquetMetadataProviderBuilder.withSchema(TupleMetadata schema) |
MetastoreParquetTableMetadataProvider.Builder |
MetastoreParquetTableMetadataProvider.Builder.withSchema(TupleMetadata schema) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.tableSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
AbstractGroupScanWithMetadata.getSchema() |
Modifier and Type | Method and Description |
---|---|
static FilterPredicate<?> |
AbstractGroupScanWithMetadata.getFilterPredicate(LogicalExpression filterExpr,
UdfUtilities udfUtilities,
FunctionLookupContext functionImplementationRegistry,
OptionManager optionManager,
boolean omitUnsupportedExprs,
boolean supportsFileImplicitColumns,
TupleMetadata schema)
Returns a Parquet filter predicate built from the specified filterExpr. |
B |
AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer.schema(TupleMetadata tableSchema) |
Modifier and Type | Method and Description |
---|---|
static TupleMetadata |
ColumnsScanFramework.columnsSchema() |
Modifier and Type | Method and Description |
---|---|
boolean |
ColumnsArrayManager.resolveColumn(ColumnProjection col,
ResolvedTuple outputTuple,
TupleMetadata tableSchema) |
Modifier and Type | Method and Description |
---|---|
StandardConversions.Builder |
StandardConversions.Builder.withSchema(TupleMetadata providedSchema) |
Modifier and Type | Method and Description |
---|---|
boolean |
ImplicitColumnManager.resolveColumn(ColumnProjection col,
ResolvedTuple tuple,
TupleMetadata tableSchema)
Resolves metadata columns to concrete, materialized columns with the
proper value for the present file.
|
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
SchemaNegotiatorImpl.providedSchema |
protected TupleMetadata |
SchemaNegotiatorImpl.tableSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ManagedScanFramework.outputSchema() |
TupleMetadata |
SchemaNegotiator.providedSchema()
Returns the provided schema, if defined.
|
TupleMetadata |
SchemaNegotiatorImpl.providedSchema() |
Modifier and Type | Method and Description |
---|---|
void |
SchemaNegotiator.tableSchema(TupleMetadata schema,
boolean isComplete)
Specify the table schema if this is an early-schema reader.
|
void |
SchemaNegotiatorImpl.tableSchema(TupleMetadata schema,
boolean isComplete) |
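
The SchemaNegotiator methods above are how an early-schema reader declares its table schema before reading. A minimal sketch of that handshake, assuming the "managed" framework's ManagedReader contract:

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
import org.apache.drill.exec.physical.impl.scan.framework.SchemaNegotiator;
import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;

// Hypothetical early-schema reader: the schema is known before reading data.
public class ExampleReader implements ManagedReader<SchemaNegotiator> {
  private ResultSetLoader loader;

  @Override
  public boolean open(SchemaNegotiator negotiator) {
    TupleMetadata schema = new SchemaBuilder()
        .add("id", MinorType.BIGINT)
        .addNullable("value", MinorType.VARCHAR)
        .buildSchema();
    negotiator.tableSchema(schema, true); // true: the schema is complete
    loader = negotiator.build();          // obtain the result set loader
    return true;
  }

  @Override
  public boolean next() { return false; } // no rows in this sketch

  @Override
  public void close() { }
}
```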
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
NullColumnBuilder.outputSchema |
protected TupleMetadata |
NullColumnBuilder.NullBuilderBuilder.outputSchema |
TupleMetadata |
ScanSchemaOrchestrator.ScanSchemaOptions.providedSchema |
protected TupleMetadata |
ScanLevelProjection.readerSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ScanSchemaOrchestrator.providedSchema()
Returns the provided reader schema.
|
TupleMetadata |
ScanSchemaOrchestrator.ScanOrchestratorBuilder.providedSchema() |
protected TupleMetadata |
ScanSchemaOrchestrator.ScanSchemaOptions.providedSchema() |
TupleMetadata |
ScanLevelProjection.Builder.providedSchema() |
TupleMetadata |
ScanLevelProjection.readerSchema() |
Modifier and Type | Method and Description |
---|---|
static ScanLevelProjection |
ScanLevelProjection.build(List<SchemaPath> projectionList,
List<ScanLevelProjection.ScanProjectionParser> parsers,
TupleMetadata outputSchema)
Builder shortcut, primarily for tests.
|
ResultSetLoader |
ReaderSchemaOrchestrator.makeTableLoader(CustomErrorContext errorContext,
TupleMetadata readerSchema,
long localLimit) |
ResultSetLoader |
ReaderSchemaOrchestrator.makeTableLoader(TupleMetadata readerSchema) |
void |
ScanSchemaOrchestrator.ScanOrchestratorBuilder.providedSchema(TupleMetadata providedSchema) |
ScanLevelProjection.Builder |
ScanLevelProjection.Builder.providedSchema(TupleMetadata providedSchema) |
ReaderLevelProjection |
SchemaSmoother.resolve(TupleMetadata tableSchema,
ResolvedTuple outputTuple) |
boolean |
ReaderLevelProjection.ReaderProjectionResolver.resolveColumn(ColumnProjection col,
ResolvedTuple tuple,
TupleMetadata tableSchema) |
protected void |
ReaderLevelProjection.resolveSpecial(ResolvedTuple rootOutputTuple,
ColumnProjection col,
TupleMetadata tableSchema) |
NullColumnBuilder.NullBuilderBuilder |
NullColumnBuilder.NullBuilderBuilder.setOutputSchema(TupleMetadata outputSchema) |
Constructor and Description |
---|
ExplicitSchemaProjection(ScanLevelProjection scanProj,
TupleMetadata readerSchema,
ResolvedTuple rootTuple,
List<ReaderLevelProjection.ReaderProjectionResolver> resolvers) |
SmoothingProjection(ScanLevelProjection scanProj,
TupleMetadata tableSchema,
ResolvedTuple priorSchema,
ResolvedTuple outputTuple,
List<ReaderLevelProjection.ReaderProjectionResolver> resolvers) |
WildcardProjection(ScanLevelProjection scanProj,
TupleMetadata tableSchema,
ResolvedTuple rootTuple,
List<ReaderLevelProjection.ReaderProjectionResolver> resolvers) |
WildcardSchemaProjection(ScanLevelProjection scanProj,
TupleMetadata readerSchema,
ResolvedTuple rootTuple,
List<ReaderLevelProjection.ReaderProjectionResolver> resolvers) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
ScanLifecycleBuilder.definedSchema |
protected TupleMetadata |
ScanLifecycleBuilder.providedSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ScanLifecycleBuilder.definedSchema() |
TupleMetadata |
SchemaNegotiator.inputSchema()
Returns the reader input schema: the schema which describes the
set of columns this reader should produce.
|
static TupleMetadata |
FixedReceiver.Builder.mergeSchemas(TupleMetadata providedSchema,
TupleMetadata readerSchema)
Given a desired provided schema and an actual reader schema, create a merged
schema that contains the provided column where available, but the reader
column otherwise.
|
TupleMetadata |
SchemaNegotiator.providedSchema()
Returns the provided schema, if defined.
|
TupleMetadata |
ScanLifecycleBuilder.providedSchema() |
Modifier and Type | Method and Description |
---|---|
FixedReceiver |
FixedReceiver.Builder.build(TupleMetadata readerSchema)
Create a fixed receiver for the provided schema (if any) in the
scan plan, and the given reader schema.
|
void |
ScanLifecycleBuilder.definedSchema(TupleMetadata definedSchema) |
static TupleMetadata |
FixedReceiver.Builder.mergeSchemas(TupleMetadata providedSchema,
TupleMetadata readerSchema)
Given a desired provided schema and an actual reader schema, create a merged
schema that contains the provided column where available, but the reader
column otherwise.
|
void |
ScanLifecycleBuilder.providedSchema(TupleMetadata providedSchema) |
void |
SchemaNegotiator.tableSchema(TupleMetadata schema) |
void |
SchemaNegotiator.tableSchema(TupleMetadata schema,
boolean isComplete)
Specify the table schema if this is an early-schema reader.
|
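
FixedReceiver.Builder.mergeSchemas, listed above, merges a provided schema with a reader schema, preferring the provided column where both define it. A sketch of what that merge implies (the column choices are illustrative only):

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.physical.impl.scan.v3.FixedReceiver;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;

public class MergeSchemasSketch {
  public static TupleMetadata merge() {
    // Provided schema: the user (or planner) insists "amount" is FLOAT8.
    TupleMetadata provided = new SchemaBuilder()
        .addNullable("amount", MinorType.FLOAT8)
        .buildSchema();
    // Reader schema: a text-style reader would otherwise produce VARCHAR.
    TupleMetadata reader = new SchemaBuilder()
        .addNullable("amount", MinorType.VARCHAR)
        .addNullable("comment", MinorType.VARCHAR)
        .buildSchema();
    // Per the description above: "amount" comes from the provided schema,
    // "comment" from the reader schema.
    return FixedReceiver.Builder.mergeSchemas(provided, reader);
  }
}
```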
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ImplicitColumnResolver.ParseResult.schema() |
Constructor and Description |
---|
ParseResult(List<ImplicitColumnMarker> columns,
TupleMetadata schema,
boolean isMetadataScan) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
MissingColumnHandlerBuilder.inputSchema |
protected TupleMetadata |
OutputBatchBuilder.MapSource.mapSchema |
protected TupleMetadata |
MissingColumnHandlerBuilder.outputSchema |
protected TupleMetadata |
ReaderLifecycle.readerInputSchema |
protected TupleMetadata |
SchemaNegotiatorImpl.readerSchema |
protected TupleMetadata |
StaticBatchBuilder.schema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
MissingColumnHandlerBuilder.buildSchema() |
TupleMetadata |
SchemaNegotiatorImpl.inputSchema() |
TupleMetadata |
ScanLifecycle.outputSchema() |
TupleMetadata |
SchemaNegotiatorImpl.providedSchema() |
TupleMetadata |
ReaderLifecycle.readerInputSchema() |
TupleMetadata |
ReaderLifecycle.readerOutputSchema() |
TupleMetadata |
StaticBatchBuilder.schema() |
Modifier and Type | Method and Description |
---|---|
protected void |
OutputBatchBuilder.defineSourceBatchMapping(TupleMetadata schema,
int source)
Define the mapping for one of the sources.
|
MissingColumnHandlerBuilder |
MissingColumnHandlerBuilder.inputSchema(TupleMetadata inputSchema) |
MissingColumnHandlerBuilder |
ReaderLifecycle.missingColumnsBuilder(TupleMetadata readerSchema) |
void |
SchemaNegotiatorImpl.tableSchema(TupleMetadata schema) |
void |
SchemaNegotiatorImpl.tableSchema(TupleMetadata schema,
boolean isComplete) |
Constructor and Description |
---|
BatchSource(TupleMetadata schema,
VectorContainer container) |
MapSource(TupleMetadata mapSchema,
AbstractMapVector mapVector) |
NullBatchBuilder(ResultVectorCache vectorCache,
TupleMetadata schema) |
OutputBatchBuilder(TupleMetadata outputSchema,
List<OutputBatchBuilder.BatchSource> sources,
BufferAllocator allocator) |
RepeatedBatchBuilder(ResultVectorCache vectorCache,
TupleMetadata schema,
Object[] values) |
StaticBatchBuilder(ResultVectorCache vectorCache,
TupleMetadata schema) |
Modifier and Type | Field and Description |
---|---|
TupleMetadata |
ScanProjectionParser.ProjectionParseResult.dynamicSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ScanSchemaTracker.applyImplicitCols()
Indicate that implicit column parsing is complete.
|
TupleMetadata |
AbstractSchemaTracker.applyImplicitCols() |
TupleMetadata |
ProjectedColumn.explicitMembers() |
TupleMetadata |
ScanSchemaTracker.missingColumns(TupleMetadata readerOutputSchema)
Identifies the missing columns given a reader output schema.
|
TupleMetadata |
AbstractSchemaTracker.missingColumns(TupleMetadata readerOutputSchema) |
TupleMetadata |
ScanSchemaTracker.outputSchema()
Returns the scan output schema, the result of a somewhat complicated
computation that depends on the projection type.
|
TupleMetadata |
AbstractSchemaTracker.outputSchema() |
TupleMetadata |
ScanSchemaTracker.readerInputSchema()
The schema which the reader should produce.
|
TupleMetadata |
AbstractSchemaTracker.readerInputSchema() |
TupleMetadata |
MutableTupleSchema.toSchema() |
TupleMetadata |
ProjectedColumn.tupleSchema() |
Modifier and Type | Method and Description |
---|---|
void |
ProjectionSchemaTracker.applyEarlyReaderSchema(TupleMetadata readerSchema) |
void |
ScanSchemaTracker.applyEarlyReaderSchema(TupleMetadata readerSchema)
If a reader can define a schema before reading data, apply that
schema to the scan schema.
|
void |
SchemaBasedTracker.applyEarlyReaderSchema(TupleMetadata readerSchema) |
void |
ProjectionSchemaTracker.applyProvidedSchema(TupleMetadata providedSchema) |
void |
ProjectionSchemaTracker.applyReaderSchema(TupleMetadata readerOutputSchema,
CustomErrorContext errorContext) |
void |
ScanSchemaTracker.applyReaderSchema(TupleMetadata readerOutputSchema,
CustomErrorContext errorContext)
Once a reader has read a batch, the reader will have provided a type
for each projected column which the reader knows about.
|
void |
SchemaBasedTracker.applyReaderSchema(TupleMetadata readerOutputSchema,
CustomErrorContext errorContext) |
void |
ScanSchemaResolver.applySchema(TupleMetadata sourceSchema) |
void |
MutableTupleSchema.copyFrom(TupleMetadata from) |
ScanSchemaConfigBuilder |
ScanSchemaConfigBuilder.definedSchema(TupleMetadata definedSchema) |
static boolean |
SchemaUtils.isProjectAll(TupleMetadata tuple) |
static boolean |
SchemaUtils.isProjectNone(TupleMetadata tuple) |
static boolean |
SchemaUtils.isStrict(TupleMetadata schema) |
static void |
SchemaUtils.markStrict(TupleMetadata schema) |
TupleMetadata |
ScanSchemaTracker.missingColumns(TupleMetadata readerOutputSchema)
Identifies the missing columns given a reader output schema.
|
TupleMetadata |
AbstractSchemaTracker.missingColumns(TupleMetadata readerOutputSchema) |
protected ProjectedColumn |
ScanProjectionParser.project(TupleMetadata tuple,
String colName) |
ScanSchemaConfigBuilder |
ScanSchemaConfigBuilder.providedSchema(TupleMetadata providedSchema) |
void |
ScanSchemaTracker.resolveMissingCols(TupleMetadata missingCols)
The missing-column handler obtains the list of missing columns from
missingColumns(). |
void |
AbstractSchemaTracker.resolveMissingCols(TupleMetadata missingCols) |
void |
SchemaBasedTracker.validateProjection(TupleMetadata projection)
Validate a projection list (provided as an argument) against a
defined schema already held by this tracker.
|
protected static void |
AbstractSchemaTracker.validateProjection(TupleMetadata projection,
TupleMetadata schema)
Validate a projection list against a defined-schema tuple.
|
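
Taken together, the ScanSchemaTracker methods above describe a per-reader protocol: apply any early reader schema, fold in the schema of each batch, then account for columns the reader did not produce. A sketch of that call order, assuming the tracker, schemas and error context are supplied by the scan lifecycle:

```java
import org.apache.drill.common.exceptions.CustomErrorContext;
import org.apache.drill.exec.physical.impl.scan.v3.schema.ScanSchemaTracker;
import org.apache.drill.exec.record.metadata.TupleMetadata;

public class TrackerFlowSketch {
  TupleMetadata onReaderBatch(ScanSchemaTracker tracker,
                              TupleMetadata earlyReaderSchema,
                              TupleMetadata readerOutputSchema,
                              CustomErrorContext errorContext) {
    // 1. If the reader can declare a schema before reading, apply it.
    tracker.applyEarlyReaderSchema(earlyReaderSchema);

    // 2. After the reader produces a batch, fold its output types back in.
    tracker.applyReaderSchema(readerOutputSchema, errorContext);

    // 3. Ask which projected columns the reader did not provide, then tell the
    //    tracker how the missing-column handler resolved them.
    TupleMetadata missing = tracker.missingColumns(readerOutputSchema);
    tracker.resolveMissingCols(missing);

    // 4. The scan output schema reflects the projection and all inputs so far.
    return tracker.outputSchema();
  }
}
```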
Constructor and Description |
---|
DynamicTupleFilter(TupleMetadata mapSchema,
boolean isOpen,
CustomErrorContext errorContext,
String source) |
DynamicTupleFilter(TupleMetadata projectionSet,
CustomErrorContext errorContext) |
ProjectionParseResult(int wildcardPosn,
TupleMetadata dynamicSchema) |
ProjectionSchemaTracker(TupleMetadata definedSchema,
ScanProjectionParser.ProjectionParseResult parseResult,
CustomErrorContext errorContext) |
SchemaBasedTracker(TupleMetadata definedSchema,
CustomErrorContext errorContext) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ResultSetLoader.activeSchema()
Returns the active output schema; the schema used by the writers,
minus any unprojected columns.
|
TupleMetadata |
ResultSetLoader.outputSchema()
The schema of the harvested batch.
|
TupleMetadata |
PullResultSetReader.schema()
Return the schema for this result set.
|
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
TupleState.outputSchema
Metadata description of the output container (for the row) or map
(for a map or repeated map).
|
protected TupleMetadata |
ResultSetOptionBuilder.readerSchema |
protected TupleMetadata |
TupleState.schema
Internal writer schema that matches the column list.
|
protected TupleMetadata |
ProjectionFilter.BaseSchemaProjectionFilter.schema |
protected TupleMetadata |
ResultSetLoaderImpl.ResultSetOptions.schema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
ResultSetLoaderImpl.activeSchema() |
TupleMetadata |
TupleState.outputSchema() |
TupleMetadata |
ResultSetLoaderImpl.outputSchema() |
TupleMetadata |
PullResultSetReaderImpl.schema() |
TupleMetadata |
TupleState.schema() |
Modifier and Type | Method and Description |
---|---|
protected void |
TupleState.bindOutputSchema(TupleMetadata outputSchema) |
void |
BuildFromSchema.buildTuple(TupleWriter writer,
TupleMetadata schema)
When creating a schema up front, provide the schema of the desired tuple,
then build vectors and writers to match.
|
static ProjectionFilter |
ProjectionFilter.definedSchemaFilter(TupleMetadata definedSchema,
CustomErrorContext errorContext) |
static ProjectionFilter |
ProjectionFilter.providedSchemaFilter(RequestedTuple tupleProj,
TupleMetadata providedSchema,
CustomErrorContext errorContext) |
ResultSetOptionBuilder |
ResultSetOptionBuilder.readerSchema(TupleMetadata readerSchema)
Clients can use the row set builder in several ways: provide the schema up front,
when known, by using this method; or discover the schema on the fly, adding columns
during the write operation (see the sketch following this table).
|
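
The readerSchema() description above mentions two styles: declare the schema up front, or discover it while writing. A sketch of both, assuming the ResultSetLoader is obtained from the scan framework:

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
import org.apache.drill.exec.physical.resultSet.RowSetLoader;
import org.apache.drill.exec.record.metadata.ColumnMetadata;
import org.apache.drill.exec.record.metadata.SchemaBuilder;

public class LoaderSketch {
  // The loader itself is assumed to come from the scan framework.
  void writeBatch(ResultSetLoader loader) {
    RowSetLoader writer = loader.writer();
    loader.startBatch();

    // Up-front schema: "id" and "name" were declared via readerSchema(), so
    // their writers already exist and rows can simply be written.
    writer.start();
    writer.scalar("id").setLong(1L);
    writer.scalar("name").setString("alpha");
    writer.save();

    // On-the-fly schema: add a column the first time it is seen, then write it.
    ColumnMetadata statusCol = new SchemaBuilder()
        .addNullable("status", MinorType.VARCHAR)
        .buildSchema()
        .metadata("status");
    writer.addColumn(statusCol);

    writer.start();
    writer.scalar("id").setLong(2L);
    writer.scalar("name").setString("beta");
    writer.scalar("status").setString("ok");
    writer.save();

    loader.harvest(); // produce the batch; its schema is loader.outputSchema()
  }
}
```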
Constructor and Description |
---|
RowSetLoaderImpl(ResultSetLoaderImpl rsLoader,
TupleMetadata schema) |
SchemaProjectionFilter(TupleMetadata definedSchema,
CustomErrorContext errorContext) |
TypeProjectionFilter(TupleMetadata providedSchema,
CustomErrorContext errorContext) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
BaseTupleModel.schema
Descriptive schema associated with the columns above.
|
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
BaseTupleModel.schema() |
TupleMetadata |
TupleModel.schema() |
TupleMetadata |
MetadataProvider.tuple() |
TupleMetadata |
MetadataProvider.MetadataCreator.tuple() |
TupleMetadata |
MetadataProvider.VariantSchemaCreator.tuple() |
TupleMetadata |
MetadataProvider.ArraySchemaCreator.tuple() |
TupleMetadata |
MetadataProvider.MetadataRetrieval.tuple() |
TupleMetadata |
MetadataProvider.VariantSchemaRetrieval.tuple() |
TupleMetadata |
MetadataProvider.ArraySchemaRetrieval.tuple() |
Constructor and Description |
---|
BaseTupleModel(TupleMetadata schema,
List<TupleModel.ColumnModel> columns) |
MetadataRetrieval(TupleMetadata schema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
HyperSchemaInference.infer(VectorContainer container) |
Modifier and Type | Method and Description |
---|---|
static RowSetReaderImpl |
HyperReaderBuilder.build(VectorContainer container,
TupleMetadata schema,
SelectionVector4 sv4) |
protected List<AbstractObjectReader> |
HyperReaderBuilder.buildContainerChildren(VectorContainer container,
TupleMetadata schema) |
protected List<AbstractObjectReader> |
HyperReaderBuilder.buildMapMembers(VectorAccessor va,
TupleMetadata mapSchema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
SingleSchemaInference.infer(VectorContainer container) |
Modifier and Type | Method and Description |
---|---|
void |
VectorAllocator.allocate(int rowCount,
TupleMetadata schema) |
VectorContainer |
BuildVectorsFromMetadata.build(TupleMetadata schema) |
static RowSetReaderImpl |
SimpleReaderBuilder.build(VectorContainer container,
TupleMetadata schema,
ReaderIndex rowIndex) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
AbstractRowSet.schema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
AbstractRowSet.schema() |
TupleMetadata |
RowSet.schema() |
TupleMetadata |
RowSetReaderImpl.tupleSchema() |
Modifier and Type | Method and Description |
---|---|
static RowSet |
RowSetBuilder.emptyBatch(BufferAllocator allocator,
TupleMetadata schema) |
static DirectRowSet |
DirectRowSet.fromSchema(BufferAllocator allocator,
TupleMetadata schema) |
Constructor and Description |
---|
AbstractRowSet(VectorContainer container,
TupleMetadata schema) |
AbstractSingleRowSet(VectorContainer container,
TupleMetadata schema) |
HyperRowSetImpl(TupleMetadata schema,
VectorContainer container,
SelectionVector4 sv4) |
RowSetBuilder(BufferAllocator allocator,
TupleMetadata schema) |
RowSetBuilder(BufferAllocator allocator,
TupleMetadata schema,
int capacity) |
RowSetReaderImpl(TupleMetadata schema,
ReaderIndex index,
AbstractObjectReader[] readers) |
RowSetReaderImpl(TupleMetadata schema,
ReaderIndex index,
List<AbstractObjectReader> readers) |
RowSetWriterImpl(RowSet.ExtendableRowSet rowSet,
TupleMetadata schema,
org.apache.drill.exec.physical.rowSet.RowSetWriterImpl.WriterIndexImpl index,
List<AbstractObjectWriter> writers) |
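
The RowSetBuilder constructors above take an allocator and a TupleMetadata schema; the builder is the usual way to create small row sets in tests. A minimal sketch:

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.memory.BufferAllocator;
import org.apache.drill.exec.physical.rowSet.RowSet;
import org.apache.drill.exec.physical.rowSet.RowSetBuilder;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;

public class RowSetExample {
  // Builds a tiny single-batch row set from a TupleMetadata schema.
  public static RowSet buildRowSet(BufferAllocator allocator) {
    TupleMetadata schema = new SchemaBuilder()
        .add("id", MinorType.INT)
        .addNullable("name", MinorType.VARCHAR)
        .buildSchema();
    return new RowSetBuilder(allocator, schema)
        .addRow(1, "alpha")
        .addRow(2, null)   // the nullable column accepts null
        .build();
  }
}
```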
Modifier and Type | Method and Description |
---|---|
static TupleMetadata |
SchemaUtil.fromBatchSchema(BatchSchema batchSchema) |
Modifier and Type | Method and Description |
---|---|
ColumnConverter |
ColumnConverterFactory.getConverter(TupleMetadata providedSchema,
ColumnMetadata readerSchema,
ObjectWriter writer)
Based on the column type, creates the corresponding column converter,
which holds the conversion logic and the writer used to set the converted data.
|
protected ColumnConverter |
ColumnConverterFactory.getMapConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter) |
ColumnConverter |
ColumnConverterFactory.getRootConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter) |
static List<SchemaPath> |
SchemaUtil.getSchemaPaths(TupleMetadata schema)
Returns a list of SchemaPath entries for the fields in the specified schema. |
Constructor and Description |
---|
ColumnConverterFactory(TupleMetadata providedSchema) |
MapColumnConverter(ColumnConverterFactory factory,
TupleMetadata providedSchema,
TupleWriter tupleWriter,
Map<String,ColumnConverter> converters) |
Modifier and Type | Class and Description |
---|---|
class |
TupleSchema
Defines the schema of a tuple: either the top-level row or a nested
"map" (really a structure).
|
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
AbstractMapColumnMetadata.parentTuple |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
SchemaBuilder.build() |
TupleMetadata |
SchemaBuilder.buildSchema() |
TupleMetadata |
TupleMetadata.copy() |
static TupleMetadata |
MetadataUtils.diffTuple(TupleMetadata base,
TupleMetadata subtend) |
static TupleMetadata |
TupleMetadata.of(String jsonString)
Converts the given JSON string into a TupleMetadata instance. |
TupleMetadata |
AbstractMapColumnMetadata.parentTuple() |
TupleMetadata |
AbstractMapColumnMetadata.tupleSchema() |
TupleMetadata |
AbstractColumnMetadata.tupleSchema() |
TupleMetadata |
ColumnMetadata.tupleSchema()
Schema for TUPLE columns.
|
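
Several of the methods above (copy(), of(), isEquivalent()) support round-tripping a schema. A sketch, assuming the usual jsonString() serialization counterpart to TupleMetadata.of():

```java
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.record.metadata.SchemaBuilder;
import org.apache.drill.exec.record.metadata.TupleMetadata;

public class SchemaRoundTripSketch {
  public static void main(String[] args) {
    TupleMetadata schema = new SchemaBuilder()
        .add("id", MinorType.INT)
        .addNullable("name", MinorType.VARCHAR)
        .buildSchema();

    // A copy is a distinct object with an equivalent structure.
    TupleMetadata copy = schema.copy();
    System.out.println(copy.isEquivalent(schema));      // true

    // JSON round trip (jsonString() is assumed here as the serializer
    // matching TupleMetadata.of()).
    String json = schema.jsonString();
    TupleMetadata restored = TupleMetadata.of(json);
    System.out.println(restored.isEquivalent(schema));  // true
  }
}
```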
Modifier and Type | Method and Description |
---|---|
SchemaBuilder |
SchemaBuilder.addAll(TupleMetadata from) |
void |
AbstractMapColumnMetadata.bind(TupleMetadata parentTuple) |
void |
AbstractColumnMetadata.bind(TupleMetadata parentTuple) |
void |
ColumnMetadata.bind(TupleMetadata parentTuple) |
static ColumnMetadata |
MetadataUtils.cloneMapWithSchema(ColumnMetadata source,
TupleMetadata members) |
static TupleMetadata |
MetadataUtils.diffTuple(TupleMetadata base,
TupleMetadata subtend) |
static boolean |
MetadataUtils.hasDynamicColumns(TupleMetadata schema) |
boolean |
TupleMetadata.isEquivalent(TupleMetadata other) |
boolean |
TupleSchema.isEquivalent(TupleMetadata other) |
static MapColumnMetadata |
MetadataUtils.newMap(String name,
TupleMetadata schema) |
static MapColumnMetadata |
MetadataUtils.newMap(String name,
TypeProtos.DataMode dataMode,
TupleMetadata schema) |
static ColumnMetadata |
MetadataUtils.newMapArray(String name,
TupleMetadata schema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
SchemaContainer.getSchema() |
Constructor and Description |
---|
SchemaContainer(String table,
TupleMetadata schema,
Integer version) |
Modifier and Type | Method and Description |
---|---|
static TupleMetadata |
SchemaExprParser.parseSchema(String schema)
Parses the string definition of the schema and converts it into a
TupleMetadata instance. |
TupleMetadata |
SchemaVisitor.visitColumns(SchemaParser.ColumnsContext ctx) |
TupleMetadata |
SchemaVisitor.visitSchema(SchemaParser.SchemaContext ctx) |
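
SchemaExprParser.parseSchema, above, converts a textual schema definition (column names with SQL-style types, as used for provided schemas) into a TupleMetadata. A sketch, with exception handling kept generic since the checked exception type is not shown above:

```java
import org.apache.drill.exec.record.metadata.TupleMetadata;
import org.apache.drill.exec.record.metadata.schema.parser.SchemaExprParser;

public class ParseSchemaSketch {
  public static TupleMetadata parse() throws Exception {
    // Column names with SQL-style types, as in Drill's provided-schema files.
    return SchemaExprParser.parseSchema("id INT NOT NULL, name VARCHAR");
  }
}
```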
Modifier and Type | Method and Description |
---|---|
static TupleMetadata |
AvroSchemaUtil.convert(org.apache.avro.Schema schema)
Converts an Avro schema into a Drill metadata description of the schema.
|
Modifier and Type | Method and Description |
---|---|
void |
AvroColumnConverterFactory.buildMapMembers(org.apache.avro.generic.GenericRecord genericRecord,
TupleMetadata providedSchema,
TupleWriter tupleWriter,
List<ColumnConverter> converters) |
protected ColumnConverter |
AvroColumnConverterFactory.getMapConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter)
Based on the provided schema, the converted Avro schema and the current row writer,
generates and returns a list of column converters, chosen by column type, for
AvroColumnConverterFactory.MapColumnConverter. |
List<ColumnConverter> |
AvroColumnConverterFactory.initConverters(TupleMetadata providedSchema,
TupleMetadata readerSchema,
RowSetLoader rowWriter)
Based on the converted Avro schema and the current row writer, generates a list of
column converters chosen by column type.
|
Constructor and Description |
---|
AvroColumnConverterFactory(TupleMetadata providedSchema) |
MapColumnConverter(AvroColumnConverterFactory factory,
TupleMetadata providedSchema,
TupleWriter tupleWriter,
List<ColumnConverter> converters) |
Modifier and Type | Method and Description |
---|---|
ColumnConverterFactory |
CassandraColumnConverterFactoryProvider.getFactory(TupleMetadata schema) |
protected ColumnConverter |
CassandraColumnConverterFactory.getMapConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter) |
Constructor and Description |
---|
CassandraColumnConverterFactory(TupleMetadata providedSchema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
EasySubScan.getSchema() |
TupleMetadata |
EasyGroupScan.getSchema() |
Constructor and Description |
---|
EasyGroupScan(String userName,
List<org.apache.hadoop.fs.Path> files,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
StoragePluginRegistry engineRegistry,
List<SchemaPath> columns,
org.apache.hadoop.fs.Path selectionRoot,
TupleMetadata schema,
int limit) |
EasySubScan(String userName,
List<CompleteFileWork.FileWorkImpl> files,
EasyFormatPlugin<?> plugin,
List<SchemaPath> columns,
org.apache.hadoop.fs.Path selectionRoot,
int partitionDepth,
TupleMetadata schema,
int limit) |
EasySubScan(String userName,
List<CompleteFileWork.FileWorkImpl> files,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
StoragePluginRegistry engineRegistry,
List<SchemaPath> columns,
org.apache.hadoop.fs.Path selectionRoot,
int partitionDepth,
TupleMetadata schema,
int limit) |
Modifier and Type | Method and Description |
---|---|
protected TupleMetadata |
TupleParser.providedSchema() |
Modifier and Type | Method and Description |
---|---|
ElementParser |
BaseFieldFactory.multiDimObjectArrayFor(ObjectWriter writer,
int dims,
TupleMetadata providedSchema)
Create a repeated list listener for a Map.
|
protected ElementParser |
BaseFieldFactory.objectArrayParserFor(ArrayWriter arrayWriter,
TupleMetadata providedSchema) |
protected ElementParser |
BaseFieldFactory.objectArrayParserFor(FieldDefn fieldDefn,
ColumnMetadata colSchema,
TupleMetadata providedSchema)
Create a map array column and its associated parsers and listeners
for the given column schema and optional provided schema.
|
protected ElementParser |
BaseFieldFactory.objectParserFor(FieldDefn fieldDefn,
ColumnMetadata colSchema,
TupleMetadata providedSchema)
Create a map column and its associated object value listener for the
given key and optional provided schema.
|
protected ElementParser |
BaseFieldFactory.objectParserFor(TupleWriter writer,
TupleMetadata providedSchema) |
JsonLoaderImpl.JsonLoaderBuilder |
JsonLoaderImpl.JsonLoaderBuilder.providedSchema(TupleMetadata providedSchema) |
Constructor and Description |
---|
TupleParser(JsonLoaderImpl loader,
TupleWriter tupleWriter,
TupleMetadata providedSchema) |
TupleParser(JsonStructureParser structParser,
JsonLoaderImpl loader,
TupleWriter tupleWriter,
TupleMetadata providedSchema) |
Constructor and Description |
---|
TextParsingSettings(TextFormatPlugin.TextFormatConfig config,
TupleMetadata providedSchema)
Configure the properties for this one scan based on:
|
Modifier and Type | Method and Description |
---|---|
ColumnConverterFactory |
ElasticsearchColumnConverterFactoryProvider.getFactory(TupleMetadata schema) |
protected ColumnConverter |
ElasticsearchColumnConverterFactory.getMapConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter) |
Constructor and Description |
---|
ElasticsearchColumnConverterFactory(TupleMetadata providedSchema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
EnumerableGroupScan.getSchema() |
TupleMetadata |
EnumerableSubScan.getSchema() |
Modifier and Type | Method and Description |
---|---|
ColumnConverterFactory |
DefaultColumnConverterFactoryProvider.getFactory(TupleMetadata schema) |
ColumnConverterFactory |
ColumnConverterFactoryProvider.getFactory(TupleMetadata schema) |
Constructor and Description |
---|
EnumerableGroupScan(String code,
List<SchemaPath> columns,
Map<String,Integer> fieldsMap,
double rows,
TupleMetadata schema,
String schemaPath,
ColumnConverterFactoryProvider converterFactoryProvider) |
EnumerableSubScan(String code,
List<SchemaPath> columns,
Map<String,Integer> fieldsMap,
TupleMetadata schema,
String schemaPath,
ColumnConverterFactoryProvider converterFactoryProvider) |
Modifier and Type | Field and Description |
---|---|
TupleMetadata |
WriterSpec.providedSchema |
Constructor and Description |
---|
WriterSpec(TupleWriter tupleWriter,
TupleMetadata providedSchema,
CustomErrorContext errorContext) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
HttpdParser.setupParser() |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
IcebergGroupScan.getSchema() |
TupleMetadata |
IcebergSubScan.getSchema() |
Modifier and Type | Method and Description |
---|---|
IcebergGroupScan.IcebergGroupScanBuilder |
IcebergGroupScan.IcebergGroupScanBuilder.schema(TupleMetadata schema) |
IcebergSubScan.IcebergSubScanBuilder |
IcebergSubScan.IcebergSubScanBuilder.schema(TupleMetadata schema) |
Constructor and Description |
---|
IcebergGroupScan(String userName,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
List<SchemaPath> columns,
TupleMetadata schema,
String path,
LogicalExpression condition,
Integer maxRecords,
StoragePluginRegistry pluginRegistry) |
IcebergSubScan(String userName,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
List<SchemaPath> columns,
String path,
List<IcebergWork> workList,
TupleMetadata schema,
LogicalExpression condition,
Integer maxRecords,
StoragePluginRegistry pluginRegistry) |
Modifier and Type | Method and Description |
---|---|
void |
MapColumnConverter.buildMapMembers(org.apache.iceberg.data.Record record,
TupleMetadata providedSchema,
TupleWriter tupleWriter,
Map<String,ColumnConverter> converters) |
protected ColumnConverter |
IcebergColumnConverterFactory.getMapConverter(TupleMetadata providedSchema,
TupleMetadata readerSchema,
TupleWriter tupleWriter) |
Constructor and Description |
---|
IcebergColumnConverterFactory(TupleMetadata providedSchema) |
MapColumnConverter(ColumnConverterFactory factory,
TupleMetadata providedSchema,
TupleWriter tupleWriter,
Map<String,ColumnConverter> converters) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
LogBatchReader.LogReaderConfig.providedSchema |
protected TupleMetadata |
LogBatchReader.LogReaderConfig.readerSchema |
protected TupleMetadata |
LogBatchReader.LogReaderConfig.tableSchema |
Modifier and Type | Method and Description |
---|---|
int |
LogFormatPlugin.maxErrors(TupleMetadata providedSchema) |
Constructor and Description |
---|
LogReaderConfig(LogFormatPlugin plugin,
Pattern pattern,
TupleMetadata providedSchema,
TupleMetadata tableSchema,
TupleMetadata readerSchema,
boolean asArray,
int groupCount,
int maxErrors) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
MapRDBSubScan.getSchema() |
TupleMetadata |
MapRDBGroupScan.getSchema() |
Constructor and Description |
---|
MapRDBSubScan(StoragePluginRegistry engineRegistry,
String userName,
MapRDBFormatPluginConfig formatPluginConfig,
StoragePluginConfig storageConfig,
List<MapRDBSubScanSpec> regionScanSpecList,
List<SchemaPath> columns,
int maxRecordsToRead,
String tableType,
TupleMetadata schema) |
MapRDBSubScan(String userName,
MapRDBFormatPlugin formatPlugin,
List<MapRDBSubScanSpec> maprSubScanSpecs,
List<SchemaPath> columns,
int maxRecordsToRead,
String tableType,
TupleMetadata schema) |
MapRDBSubScan(String userName,
MapRDBFormatPlugin formatPlugin,
List<MapRDBSubScanSpec> maprSubScanSpecs,
List<SchemaPath> columns,
String tableType,
TupleMetadata schema) |
RestrictedMapRDBSubScan(StoragePluginRegistry engineRegistry,
String userName,
MapRDBFormatPluginConfig formatPluginConfig,
StoragePluginConfig storageConfig,
List<RestrictedMapRDBSubScanSpec> regionScanSpecList,
List<SchemaPath> columns,
int maxRecordsToRead,
String tableType,
TupleMetadata schema) |
RestrictedMapRDBSubScan(String userName,
MapRDBFormatPlugin formatPlugin,
List<RestrictedMapRDBSubScanSpec> maprDbSubScanSpecs,
List<SchemaPath> columns,
int maxRecordsToRead,
String tableType,
TupleMetadata schema) |
Constructor and Description |
---|
BinaryTableGroupScan(String userName,
HBaseScanSpec scanSpec,
FileSystemConfig storagePluginConfig,
MapRDBFormatPluginConfig formatPluginConfig,
List<SchemaPath> columns,
TupleMetadata schema,
StoragePluginRegistry pluginRegistry) |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
AbstractParquetRowGroupScan.schema |
protected TupleMetadata |
BaseParquetMetadataProvider.schema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
AbstractParquetRowGroupScan.getSchema() |
Modifier and Type | Method and Description |
---|---|
static Map<SchemaPath,ColumnStatistics<?>> |
ParquetTableMetadataUtils.getColumnStatistics(TupleMetadata schema,
DrillStatsTable statistics)
Returns a map of schema path to ColumnStatistics obtained from the specified
DrillStatsTable for all columns in the specified BaseTableMetadata. |
static <T extends Comparable<T>> |
FilterEvaluatorUtils.matches(FilterPredicate<T> parquetPredicate,
Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
long rowCount,
TupleMetadata fileMetadata,
Set<SchemaPath> schemaPathsInExpr) |
static RowsMatch |
FilterEvaluatorUtils.matches(LogicalExpression expr,
Map<SchemaPath,ColumnStatistics<?>> columnsStatistics,
TupleMetadata schema,
long rowCount,
UdfUtilities udfUtilities,
FunctionLookupContext functionImplementationRegistry,
Set<SchemaPath> schemaPathsInExpr) |
T |
BaseParquetMetadataProvider.Builder.withSchema(TupleMetadata schema) |
Constructor and Description |
---|
AbstractParquetRowGroupScan(String userName,
List<RowGroupReadEntry> rowGroupReadEntries,
List<SchemaPath> columns,
ParquetReaderConfig readerConfig,
LogicalExpression filter,
org.apache.hadoop.fs.Path selectionRoot,
TupleMetadata schema) |
ParquetGroupScan(StoragePluginRegistry engineRegistry,
String userName,
List<ReadEntryWithPath> entries,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
List<SchemaPath> columns,
org.apache.hadoop.fs.Path selectionRoot,
org.apache.hadoop.fs.Path cacheFileRoot,
ParquetReaderConfig readerConfig,
LogicalExpression filter,
TupleMetadata schema) |
ParquetRowGroupScan(StoragePluginRegistry registry,
String userName,
StoragePluginConfig storageConfig,
FormatPluginConfig formatConfig,
LinkedList<RowGroupReadEntry> rowGroupReadEntries,
List<SchemaPath> columns,
ParquetReaderConfig readerConfig,
org.apache.hadoop.fs.Path selectionRoot,
LogicalExpression filter,
TupleMetadata schema) |
ParquetRowGroupScan(String userName,
ParquetFormatPlugin formatPlugin,
List<RowGroupReadEntry> rowGroupReadEntries,
List<SchemaPath> columns,
ParquetReaderConfig readerConfig,
org.apache.hadoop.fs.Path selectionRoot,
LogicalExpression filter,
TupleMetadata schema) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
Schema.buildSchema(SchemaBuilder builder) |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
SyslogBatchReader.buildSchema() |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
TupleReader.tupleSchema() |
TupleMetadata |
TupleWriter.tupleSchema() |
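
tupleSchema() on the readers and writers above exposes the tuple's TupleMetadata. A small sketch of inspecting it from a TupleReader:

```java
import org.apache.drill.exec.record.metadata.ColumnMetadata;
import org.apache.drill.exec.record.metadata.TupleMetadata;
import org.apache.drill.exec.vector.accessor.TupleReader;

public class SchemaInspectionSketch {
  // Walks the schema exposed by a tuple (row or map) reader.
  public static void printColumns(TupleReader reader) {
    TupleMetadata schema = reader.tupleSchema();
    for (ColumnMetadata col : schema) {          // TupleMetadata is iterable
      System.out.println(col.name() + ": " + col.type());
    }
  }
}
```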
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
DictEntryReader.tupleSchema() |
TupleMetadata |
MapReader.tupleSchema() |
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
AbstractTupleWriter.tupleSchema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
AbstractTupleWriter.tupleSchema() |
Constructor and Description |
---|
AbstractTupleWriter(TupleMetadata schema) |
AbstractTupleWriter(TupleMetadata schema,
List<AbstractObjectWriter> writers) |
Modifier and Type | Method and Description |
---|---|
static void |
JsonReaderUtils.writeColumnsUsingSchema(BaseWriter.ComplexWriter writer,
Collection<SchemaPath> columns,
TupleMetadata schema,
boolean allTextMode)
Creates writers corresponding to the specified schema for the specified root writer.
|
Modifier and Type | Field and Description |
---|---|
protected TupleMetadata |
BaseMetadata.schema |
protected TupleMetadata |
BaseMetadata.BaseMetadataBuilder.schema |
Modifier and Type | Method and Description |
---|---|
TupleMetadata |
BaseMetadata.getSchema() |
TupleMetadata |
Metadata.getSchema()
Returns the schema stored in the current metadata, represented as TupleMetadata. |
TupleMetadata |
NonInterestingColumnsMetadata.getSchema() |
Modifier and Type | Method and Description |
---|---|
T |
BaseMetadata.BaseMetadataBuilder.schema(TupleMetadata schema) |
TableMetadataProviderBuilder |
TableMetadataProviderBuilder.withSchema(TupleMetadata schema) |
Modifier and Type | Method and Description |
---|---|
static void |
SchemaPathUtils.addColumnMetadata(TupleMetadata schema,
SchemaPath schemaPath,
TypeProtos.MajorType type,
Map<SchemaPath,TypeProtos.MajorType> types)
Adds a column with the specified schema path and type to the specified
TupleMetadata schema. |
static ColumnMetadata |
SchemaPathUtils.getColumnMetadata(SchemaPath schemaPath,
TupleMetadata schema)
Returns the ColumnMetadata instance from the specified TupleMetadata schema that
corresponds to the specified column schema path. |
static boolean |
SchemaPathUtils.isFieldNestedInDictOrRepeatedMap(SchemaPath schemaPath,
TupleMetadata schema)
Checks whether the field identified by the schema path is a child of either a
DICT or a REPEATED MAP. |
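
As a closing illustration of the metastore utilities above, a sketch of looking up a nested column by schema path (the "address.city" path is illustrative only):

```java
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.record.metadata.ColumnMetadata;
import org.apache.drill.exec.record.metadata.TupleMetadata;
import org.apache.drill.metastore.util.SchemaPathUtils;

public class SchemaPathLookupSketch {
  // Looks up the column metadata for a (possibly nested) schema path.
  public static ColumnMetadata lookupCity(TupleMetadata schema) {
    SchemaPath path = SchemaPath.getCompoundPath("address", "city");
    return SchemaPathUtils.getColumnMetadata(path, schema);
  }
}
```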