Class AbstractRecordReader

java.lang.Object
org.apache.drill.exec.store.AbstractRecordReader
All Implemented Interfaces:
AutoCloseable, RecordReader
Direct Known Subclasses:
AbstractPojoRecordReader, CommonParquetRecordReader, DruidRecordReader, FindLimit0Visitor.RelDataTypeReader, HBaseRecordReader, HiveDefaultRecordReader, JSONRecordReader, KuduRecordReader, MockRecordReader, MongoRecordReader, OpenTSDBRecordReader

public abstract class AbstractRecordReader extends Object implements RecordReader
  • Field Details

    • DEFAULT_TEXT_COLS_TO_READ

      protected static final List<SchemaPath> DEFAULT_TEXT_COLS_TO_READ
  • Constructor Details

    • AbstractRecordReader

      public AbstractRecordReader()
  • Method Details

    • toString

      public String toString()
      Overrides:
      toString in class Object
    • setColumns

      protected final void setColumns(Collection<SchemaPath> projected)
      Parameters:
      projected - the column list to be returned from this RecordReader. 1) An empty column list denotes a skip-all query; each storage plugin may choose its own policy for handling skip-all queries, and by default the * column is used. 2) NULL is NOT allowed; the planner's rules, the GroupScan, or the ScanBatchCreator must handle NULL before it reaches the reader.
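The projection policy above can be sketched in plain Java. This is a hypothetical illustration with assumed names (`ProjectionPolicy`, `resolveProjection`), not Drill's actual implementation: an empty projection list marks a skip-all query and the default policy substitutes the star column, while null must be rejected before it reaches the reader.

```java
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of the projected-column policy; not Drill's code.
class ProjectionPolicy {
    static final String STAR_COLUMN = "*";

    static Collection<String> resolveProjection(Collection<String> projected) {
        if (projected == null) {
            // NULL must be handled upstream (planner rule, GroupScan,
            // or ScanBatchCreator), never passed to the reader.
            throw new IllegalArgumentException("projection list must not be null");
        }
        if (projected.isEmpty()) {
            // Skip-all query: the default policy falls back to '*'.
            return List.of(STAR_COLUMN);
        }
        return projected;
    }
}
```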
    • getColumns

      protected Collection<SchemaPath> getColumns()
    • transformColumns

      protected Collection<SchemaPath> transformColumns(Collection<SchemaPath> projected)
    • isStarQuery

      protected boolean isStarQuery()
    • isSkipQuery

      protected boolean isSkipQuery()
      Returns true if the reader should skip all of the columns, reporting only the number of records. Handling of a skip query is storage-plugin-specific.
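The distinction between a star query and a skip query can be sketched as follows. This is a hypothetical illustration of the assumed semantics (class and method names are invented, not Drill's code): a skip query projects no columns at all, e.g. SELECT COUNT(*), while a star query projects the wildcard column.

```java
import java.util.Collection;

// Hypothetical sketch of the star-query / skip-query distinction.
class QueryKind {
    static boolean isSkipQuery(Collection<String> columns) {
        return columns.isEmpty();      // no columns projected: count records only
    }

    static boolean isStarQuery(Collection<String> columns) {
        return columns.contains("*");  // wildcard projection: read everything
    }
}
```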
    • allocate

      public void allocate(Map<String,ValueVector> vectorMap) throws OutOfMemoryException
      Specified by:
      allocate in interface RecordReader
      Throws:
      OutOfMemoryException
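The allocate contract can be sketched with simplified stand-ins for Drill's ValueVector and OutOfMemoryException (a hypothetical illustration, not Drill's implementation): every vector in the map is given buffer space before a batch is read, and an allocation failure propagates so the scan can abort the batch cleanly.

```java
import java.util.Map;

// Hypothetical sketch of the allocate contract; Vector stands in for
// Drill's ValueVector.
class AllocateSketch {
    interface Vector {
        void allocateNew(int records);  // stand-in for vector buffer allocation
    }

    static void allocate(Map<String, Vector> vectorMap, int recordsPerBatch) {
        for (Vector v : vectorMap.values()) {
            // May throw under memory pressure; the caller aborts the batch.
            v.allocateNew(recordsPerBatch);
        }
    }
}
```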
    • hasNext

      public boolean hasNext()
      Description copied from interface: RecordReader
      Checks whether the reader may have more data to read in subsequent iterations. Certain types of readers, such as repeatable readers, can be invoked multiple times, so this method lets ScanBatch check with the reader before closing it.
      Specified by:
      hasNext in interface RecordReader
      Returns:
      true if there could be more reads, false otherwise
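The hasNext contract for a repeatable reader can be sketched as below. This is a hypothetical illustration with assumed names, not Drill's code: the scan keeps the reader open and calls next() again as long as hasNext() reports that more passes remain.

```java
// Hypothetical sketch of a repeatable reader honoring the hasNext contract.
class RepeatableReader {
    private int passesRemaining;

    RepeatableReader(int passes) {
        this.passesRemaining = passes;
    }

    int next() {             // read one batch; record count stubbed out here
        passesRemaining--;
        return 0;
    }

    boolean hasNext() {      // checked by the scan before closing the reader
        return passesRemaining > 0;
    }
}
```

A scan loop would keep calling next() while hasNext() returns true, and only then close the reader.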
    • getDefaultColumnsToRead

      protected List<SchemaPath> getDefaultColumnsToRead()