public class LTSVFormatPlugin extends EasyFormatPlugin<LTSVFormatPluginConfig>

Nested classes inherited from class EasyFormatPlugin:
EasyFormatPlugin.EasyFormatConfig, EasyFormatPlugin.EasyFormatConfigBuilder

Fields inherited from class EasyFormatPlugin:
formatConfig
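For context (not part of the generated Javadoc): LTSV, "Labeled Tab-Separated Values", encodes each record as one line of tab-separated `label:value` fields, which is the shape of data this plugin reads. A minimal, self-contained sketch of parsing one LTSV line, independent of Drill's APIs (class and method names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LtsvLineParser {
    // Parse one LTSV record: tab-separated "label:value" fields.
    // Only the first ':' separates label from value, so values may contain colons.
    public static Map<String, String> parse(String line) {
        Map<String, String> row = new LinkedHashMap<>();
        for (String field : line.split("\t")) {
            int sep = field.indexOf(':');
            if (sep < 0) {
                continue;  // skip malformed fields with no label separator
            }
            row.put(field.substring(0, sep), field.substring(sep + 1));
        }
        return row;
    }

    public static void main(String[] args) {
        Map<String, String> row =
            parse("host:127.0.0.1\tident:-\ttime:[10/Oct/2000:13:55:36 -0700]");
        System.out.println(row.get("host"));  // 127.0.0.1
        System.out.println(row.get("time"));  // [10/Oct/2000:13:55:36 -0700]
    }
}
```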
Constructor and Description |
---|
LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig) |
LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, LTSVFormatPluginConfig formatPluginConfig) |
Modifier and Type | Method and Description |
---|---|
RecordReader | getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName) — Return a record reader for the specific file format, when using the original ScanBatch scanner. |
RecordWriter | getRecordWriter(FragmentContext context, EasyWriter writer) |
String | getWriterOperatorType() |
DrillStatsTable.TableStatistics | readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) |
boolean | supportsPushDown() — Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on? |
boolean | supportsStatistics() |
void | writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) |
Methods inherited from class EasyFormatPlugin:
easyConfig, frameworkBuilder, getConfig, getContext, getFsConf, getGroupScan, getGroupScan, getMatcher, getName, getOptimizerRules, getReaderBatch, getReaderOperatorType, getScanStats, getStatisticsRecordWriter, getStorageConfig, getWriter, getWriterBatch, initScanBuilder, isBlockSplittable, isCompressible, isStatisticsRecordWriter, newBatchReader, supportsAutoPartitioning, supportsFileImplicitColumns, supportsLimitPushdown, supportsRead, supportsWrite, useEnhancedScan
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface FormatPlugin:
getGroupScan, getGroupScan, getOptimizerRules
public LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig)
public LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, LTSVFormatPluginConfig formatPluginConfig)
public RecordReader getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName)

Description copied from class: EasyFormatPlugin
Return a record reader for the specific file format, when using the original ScanBatch scanner.

Overrides:
getRecordReader in class EasyFormatPlugin<LTSVFormatPluginConfig>

Parameters:
context - fragment context
dfs - Drill file system
fileWork - metadata about the file to be scanned
columns - list of projected columns (or may just contain the wildcard)
userName - the name of the user running the query

public String getWriterOperatorType()
Overrides:
getWriterOperatorType in class EasyFormatPlugin<LTSVFormatPluginConfig>
public boolean supportsPushDown()

Description copied from class: EasyFormatPlugin
Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on?

Overrides:
supportsPushDown in class EasyFormatPlugin<LTSVFormatPluginConfig>

Returns:
true if the plugin supports projection push-down, false if Drill should do the task by adding a project operator

public RecordWriter getRecordWriter(FragmentContext context, EasyWriter writer)
Overrides:
getRecordWriter in class EasyFormatPlugin<LTSVFormatPluginConfig>
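The supportsPushDown() contract above determines which side performs column projection: the reader itself, or a project operator that Drill adds. A rough, Drill-independent sketch of what a reader does when it handles projection itself (the method, the string-valued rows, and the null-for-missing convention are illustrative, not Drill's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ProjectionSketch {
    // A reader that supports projection push-down emits only the requested
    // columns, substituting null for any column absent from the source row.
    public static List<String> project(Map<String, String> row, List<String> columns) {
        List<String> out = new ArrayList<>();
        for (String col : columns) {
            out.add(row.getOrDefault(col, null));  // missing column -> null value
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of("host", "127.0.0.1", "status", "200");
        // "user" is not in the row, so the reader fills it with null.
        System.out.println(project(row, List.of("host", "user")));  // [127.0.0.1, null]
    }
}
```

When supportsPushDown() returns false, the reader instead emits every column and Drill inserts a project operator to do this same trimming downstream.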
public boolean supportsStatistics()
Specified by:
supportsStatistics in interface FormatPlugin
Overrides:
supportsStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>
public DrillStatsTable.TableStatistics readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)
Specified by:
readStatistics in interface FormatPlugin
Overrides:
readStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>
public void writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)
Specified by:
writeStatistics in interface FormatPlugin
Overrides:
writeStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>
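The readStatistics/writeStatistics pair above persists table statistics to a file path so the planner can reuse them across queries. The actual DrillStatsTable serialization is not shown in this page; as a rough, Drill-independent illustration of the same write-then-read pattern only (the single-number file layout and row-count statistic are entirely hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StatsRoundTrip {
    // Hypothetical single-value statistic: an estimated row count,
    // persisted as plain text at a stats-table path.
    public static void writeStatistics(long rowCount, Path statsTablePath) {
        try {
            Files.writeString(statsTablePath, Long.toString(rowCount));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static long readStatistics(Path statsTablePath) {
        try {
            return Long.parseLong(Files.readString(statsTablePath).trim());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("stats", ".txt");
        writeStatistics(12345L, p);
        System.out.println(readStatistics(p));  // 12345
        Files.deleteIfExists(p);
    }
}
```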
Copyright © The Apache Software Foundation. All rights reserved.