Packages that use OutputMutator:

Package | Description |
---|---|
org.apache.drill.exec.physical.impl | |
org.apache.drill.exec.planner.sql.handlers | |
org.apache.drill.exec.store | |
org.apache.drill.exec.store.druid | |
org.apache.drill.exec.store.easy.json | |
org.apache.drill.exec.store.hbase | |
org.apache.drill.exec.store.hive.readers | |
org.apache.drill.exec.store.kudu | |
org.apache.drill.exec.store.ltsv | |
org.apache.drill.exec.store.mapr.db.json | |
org.apache.drill.exec.store.mock | Defines a mock data source which generates dummy test data for use in testing. |
org.apache.drill.exec.store.mongo | MongoDB storage plugin. |
org.apache.drill.exec.store.openTSDB | |
org.apache.drill.exec.store.parquet.columnreaders | |
org.apache.drill.exec.store.parquet2 | |
org.apache.drill.exec.store.pojo | |
org.apache.drill.exec.vector.complex.impl | |

Classes in org.apache.drill.exec.physical.impl that implement OutputMutator:

Modifier and Type | Class and Description |
---|---|
static class | ScanBatch.Mutator: Row set mutator implementation provided to record readers created by this scan batch. |

Methods in org.apache.drill.exec.planner.sql.handlers with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | FindLimit0Visitor.RelDataTypeReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | RecordReader.setup(OperatorContext context, OutputMutator output): Configures the RecordReader with the provided schema and the record batch to which output should be written. |
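
Since RecordReader.setup(OperatorContext, OutputMutator) is the hook through which every reader obtains its output vectors, a minimal sketch of the setup/next cycle may help. The code below is illustrative rather than Drill source: the ExampleRecordReader class is hypothetical, and it assumes the OutputMutator.addField(MaterializedField, Class) signature shown throughout these tables together with the usual value-vector mutator calls.

```java
package example;                                             // hypothetical package

import org.apache.drill.common.exceptions.ExecutionSetupException;
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.common.types.Types;
import org.apache.drill.exec.ops.OperatorContext;
import org.apache.drill.exec.physical.impl.OutputMutator;
import org.apache.drill.exec.record.MaterializedField;
import org.apache.drill.exec.store.AbstractRecordReader;
import org.apache.drill.exec.vector.NullableIntVector;

// Hypothetical reader, for illustration only: shows how a record reader
// typically uses the OutputMutator handed to setup().
public class ExampleRecordReader extends AbstractRecordReader {

  private NullableIntVector idVector;
  private boolean done;

  @Override
  public void setup(OperatorContext context, OutputMutator output) throws ExecutionSetupException {
    try {
      // Declare the column this reader produces; the mutator creates (or reuses)
      // the backing value vector inside the scan batch's output container.
      MaterializedField field = MaterializedField.create("id", Types.optional(MinorType.INT));
      idVector = output.addField(field, NullableIntVector.class);
    } catch (Exception e) {
      throw new ExecutionSetupException("Failed to create output vector", e);
    }
  }

  @Override
  public int next() {
    if (done) {
      return 0;                                   // no more batches
    }
    // Write a tiny batch of three rows into the vector created in setup().
    for (int i = 0; i < 3; i++) {
      idVector.getMutator().setSafe(i, i);
    }
    idVector.getMutator().setValueCount(3);
    done = true;
    return 3;                                     // number of records in this batch
  }

  @Override
  public void close() { }
}
```

In this contract, next() returns the number of records written into the vectors for the current batch and 0 once the reader is exhausted.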

Methods in org.apache.drill.exec.store.druid with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | DruidRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.easy.json with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | JSONRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.hbase with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | HBaseRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.hive.readers with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | HiveDefaultRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.kudu with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | KuduRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.ltsv with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | LTSVRecordReader.setup(OperatorContext context, OutputMutator output) |

Fields in org.apache.drill.exec.store.mapr.db.json declared as OutputMutator:

Modifier and Type | Field and Description |
---|---|
protected OutputMutator | MaprDBJsonRecordReader.vectorWriterMutator |

Methods in org.apache.drill.exec.store.mapr.db.json with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | MaprDBJsonRecordReader.setup(OperatorContext context, OutputMutator output) |
void | RestrictedJsonRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.mock with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | MockRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.mongo with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | MongoRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.openTSDB with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | OpenTSDBRecordReader.setup(OperatorContext context, OutputMutator output) |

Methods in org.apache.drill.exec.store.parquet.columnreaders with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | ReadState.buildReader(ParquetRecordReader reader, OutputMutator output): Creates the readers needed to read columns, either fixed-length or variable-length. |
void | ParquetSchema.createNonExistentColumns(OutputMutator output, List<NullableIntVector> nullFilledVectors): Creates "dummy" fields for columns that are selected in the SELECT clause but not present in the Parquet schema. |
void | ParquetRecordReader.setup(OperatorContext operatorContext, OutputMutator output): Prepares the Parquet reader. |
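
The createNonExistentColumns description points at a useful pattern: columns that are projected but missing from the file can be satisfied by asking the OutputMutator for nullable INT vectors that are never written, so they read back as all NULL. The helper below is a hypothetical sketch of that pattern (it is not ParquetSchema's code), and it assumes the OutputMutator.addField signature shown in these tables.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.common.types.Types;
import org.apache.drill.exec.exception.SchemaChangeException;
import org.apache.drill.exec.physical.impl.OutputMutator;
import org.apache.drill.exec.record.MaterializedField;
import org.apache.drill.exec.vector.NullableIntVector;

// Hypothetical helper, not Drill code.
public class MissingColumnExample {

  // For every projected column the file does not contain, request a nullable
  // INT vector from the OutputMutator. Nothing is ever written to these
  // vectors, so the column surfaces as NULL for every row, which is the
  // behavior described for ParquetSchema.createNonExistentColumns().
  public static List<NullableIntVector> addNullColumns(OutputMutator output,
                                                       List<SchemaPath> missingColumns)
      throws SchemaChangeException {
    List<NullableIntVector> nullFilled = new ArrayList<>();
    for (SchemaPath column : missingColumns) {
      MaterializedField field = MaterializedField.create(
          column.getRootSegment().getPath(), Types.optional(MinorType.INT));
      nullFilled.add(output.addField(field, NullableIntVector.class));
    }
    return nullFilled;
  }
}
```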

Methods in org.apache.drill.exec.store.parquet2 with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | DrillParquetReader.setup(OperatorContext context, OutputMutator output) |

Constructors in org.apache.drill.exec.store.parquet2 with parameters of type OutputMutator:

Constructor and Description |
---|
DrillParquetGroupConverter(OutputMutator mutator, BaseWriter baseWriter, org.apache.parquet.schema.GroupType schema, Collection<SchemaPath> columns, OptionManager options, ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates, boolean skipRepeated, String parentName): Builds the converter tree; may invoke itself recursively to create child converters when a nested field is itself a group-type field. |
DrillParquetGroupConverter(OutputMutator mutator, BaseWriter baseWriter, OptionManager options, ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates): Creates the converter without creating any child converters. |
DrillParquetRecordMaterializer(OutputMutator mutator, org.apache.parquet.schema.MessageType schema, Collection<SchemaPath> columns, OptionManager options, ParquetReaderUtility.DateCorruptionStatus containsCorruptedDates) |
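
To make the recursive construction described for DrillParquetGroupConverter concrete, here is a stripped-down, hypothetical converter built directly on the Parquet converter API. It is not Drill's implementation and omits all value-vector writing, but it shows how a constructor can recurse into group-typed fields to build a converter tree that mirrors the schema.

```java
import org.apache.parquet.io.api.Converter;
import org.apache.parquet.io.api.GroupConverter;
import org.apache.parquet.io.api.PrimitiveConverter;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;

// Hypothetical class: a minimal converter tree, one node per schema field.
public class ExampleGroupConverter extends GroupConverter {

  private final Converter[] children;

  public ExampleGroupConverter(GroupType schema) {
    children = new Converter[schema.getFieldCount()];
    for (int i = 0; i < schema.getFieldCount(); i++) {
      Type field = schema.getType(i);
      children[i] = field.isPrimitive()
          ? new PrimitiveConverter() { }                    // leaf: would write into a value vector
          : new ExampleGroupConverter(field.asGroupType()); // nested group: recurse
    }
  }

  @Override public Converter getConverter(int fieldIndex) { return children[fieldIndex]; }
  @Override public void start() { }                         // called at the start of each group value
  @Override public void end() { }                           // called at the end of each group value
}
```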

Methods in org.apache.drill.exec.store.pojo with parameters of type OutputMutator:

Modifier and Type | Method and Description |
---|---|
void | AbstractPojoWriter.init(OutputMutator output) |
void | PojoWriter.init(OutputMutator output): Initializes the value vector. |
protected PojoWriter | AbstractPojoRecordReader.initWriter(Class<?> type, String fieldName, OutputMutator output): Creates a writer based on the input class type and then initializes it. |
void | AbstractPojoRecordReader.setup(OperatorContext context, OutputMutator output) |
protected abstract List<PojoWriter> | AbstractPojoRecordReader.setupWriters(OutputMutator output): Sets up writers for each field in the row. |
protected List<PojoWriter> | DynamicPojoRecordReader.setupWriters(OutputMutator output): Initializes writers based on the given schema, which contains each field name and its type. |
protected List<PojoWriter> | PojoRecordReader.setupWriters(OutputMutator output): Creates writers based on POJO field class types. |
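
The pojo methods above describe a "one writer per POJO field" setup. The sketch below illustrates only that shape: the createWriterFor factory is hypothetical, and Drill's actual reader selects a concrete PojoWriter implementation for each supported Java type before letting it create its vector through PojoWriter.init(OutputMutator).

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

import org.apache.drill.exec.physical.impl.OutputMutator;
import org.apache.drill.exec.store.pojo.PojoWriter;

// Illustrative sketch only, not Drill code.
public class PojoWriterSetupExample {

  public static List<PojoWriter> setupWriters(Class<?> pojoClass, OutputMutator output)
      throws Exception {                                     // init may throw a checked exception
    List<PojoWriter> writers = new ArrayList<>();
    for (Field field : pojoClass.getDeclaredFields()) {
      if (Modifier.isStatic(field.getModifiers())) {
        continue;                                            // skip static fields
      }
      // Choose a writer from the field's Java type, then let it create its
      // value vector through the OutputMutator.
      PojoWriter writer = createWriterFor(field.getType(), field.getName()); // hypothetical factory
      writer.init(output);
      writers.add(writer);
    }
    return writers;
  }

  private static PojoWriter createWriterFor(Class<?> type, String fieldName) {
    throw new UnsupportedOperationException(
        "hypothetical: map " + type + " to a writer for field " + fieldName);
  }
}
```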

Constructors in org.apache.drill.exec.vector.complex.impl with parameters of type OutputMutator:

Constructor and Description |
---|
VectorContainerWriter(OutputMutator mutator) |
VectorContainerWriter(OutputMutator mutator, boolean unionEnabled) |
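
VectorContainerWriter is the usual bridge between an OutputMutator and the complex-writer API: readers such as the JSON and Mongo readers wrap the mutator once in setup and then write rows positionally instead of creating vectors by hand. The fragment below is a hypothetical sketch of that pattern; treat the specific writer calls (rootAsMap, bigInt, setPosition, setValueCount) as illustrative rather than authoritative.

```java
import org.apache.drill.exec.ops.OperatorContext;
import org.apache.drill.exec.physical.impl.OutputMutator;
import org.apache.drill.exec.vector.complex.impl.VectorContainerWriter;
import org.apache.drill.exec.vector.complex.writer.BaseWriter.MapWriter;

// Hypothetical reader fragment showing the complex-writer pattern.
public class ComplexWriterExample {

  private VectorContainerWriter writer;

  public void setup(OperatorContext context, OutputMutator output) {
    writer = new VectorContainerWriter(output);   // the two-argument constructor enables UNION vectors
  }

  public int next() {
    writer.allocate();
    int rows = 3;
    for (int i = 0; i < rows; i++) {
      writer.setPosition(i);                      // move to the row being written
      MapWriter row = writer.rootAsMap();
      row.start();
      row.bigInt("id").writeBigInt(i);            // one BIGINT column named "id"
      row.end();
    }
    writer.setValueCount(rows);                   // finalize the batch size
    return rows;
  }
}
```

Passing true as the second constructor argument corresponds to the unionEnabled flag listed in the constructor table above.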