Provides a variety of ways to filter columns: no filtering, filter by (parsed) projection list, or filter by projection list and provided schema.
Handles batch and overflow operations for a (possibly compound) vector.
Build the set of writers from a defined schema.
Algorithms for building a column given a metadata description of the column and the parent context that will hold the column.
Represents the write-time state for a column including the writer and the (optional) backing vector.
Primitive (non-map) column state.
Abstract representation of a container of vectors: a row, a map, a repeated map, a list or a union.
Represents the contents of a list vector.
Wrapper around the list vector (and its optional contained union).
A vector cache implementation which does not actually cache.
Do-nothing vector state for a map column which has no actual vector associated with it.
Near-do-nothing state for a vector that requires no work to allocate or roll-over, but where we do want to at least track the vector itself.
Compound filter for combining direct and provided schema projections.
Projection filter based on the (parsed) projection list.
Implied projection: either project all or project none.
Projection filter in which a schema exactly defines the set of allowed columns, and their types.
Projection based on a non-strict provided schema which enforces the type of known columns, but has no opinion about additional columns.
Represents the internal state of a RepeatedList vector.
Repeated list column state.
Track the repeated list vector.
Vector state for a scalar array (repeated scalar) vector.
Implementation of the result set loader.
Read-only set of options for the result set loader.
Builder for the options for the row set loader.
Manages an inventory of value vectors used across row batch readers.
Implementation of the row set loader.
Base class for a single vector.
State for a scalar value vector.
Special case for an offset vector.
State for a scalar value vector.
Represents the loader state for a tuple: a row or a map.
TupleState.DictVectorState&lt;T extends ValueVector&gt;
Represents a map column (either single or repeated).
Represents a tuple defined as a Drill map: single or repeated.
State for a map vector.
Handles the details of the top-level tuple, the data row itself.
Represents the contents of a union vector (or a pseudo-union for lists).
Union or list (repeated union) column state.
Vector wrapper for a union vector.
Columns move through various lifecycle states as identified by this enum.
The primary purpose of this loader, and its most complex aspect to understand and maintain, is overflow handling.
The scenarios, identified by column names above, are handled in four phases:

- At the time of overflow on row n
- As the overflow write proceeds
- At harvest time
- When starting the next batch
Arrays are a different matter: each row can have many values associated with it. Consider an array of scalars. We have:
       Row 0     Row 1     Row 2
     0  1  2   3  4  5   6  7  8
  [ [a b c]   [d e f] | [g h i] ]

Here, the letters indicate values. The brackets show the overall vector (outer brackets) and individual rows (inner brackets). The vertical line shows where overflow occurred. The same rules discussed earlier still apply, but we must consider both the row indexes and the array element indexes.
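The mapping from row indexes to array element positions can be sketched with a toy model (these are hypothetical helper names, not Drill's actual vector classes): an offsets array records where each row's elements begin, so row r's elements occupy positions OFFSETS[r] through OFFSETS[r + 1] - 1 of the values vector.

```java
// Toy model (not Drill's API) of an array-of-scalars column: a flat
// values vector plus an offsets vector that maps rows to element ranges.
public class ArrayOffsets {
    // Values a..i written contiguously across three rows.
    static final String[] VALUES = {"a","b","c","d","e","f","g","h","i"};
    // Entry r is the first element of row r; entry r+1 is one past its last.
    static final int[] OFFSETS = {0, 3, 6, 9};

    // Elements belonging to row r.
    static String[] row(int r) {
        return java.util.Arrays.copyOfRange(VALUES, OFFSETS[r], OFFSETS[r + 1]);
    }

    public static void main(String[] args) {
        // Row 2 (where overflow occurred in the diagram above) holds
        // elements starting at position OFFSETS[2] = 6, even though its
        // row index is only 2.
        System.out.println(java.util.Arrays.toString(row(2))); // [g, h, i]
    }
}
```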
Further, we must consider lists: a column may consist of a list of arrays. Or, a column may consist of an array of maps, one of which is a list of arrays. So, the above reasoning must apply recursively down the value tree.
After overflow, the same values split across the full batch and the look-ahead batch, with both row and element indexes in the look-ahead batch restarting at zero:

       Row 0     Row 1        Row 0
     0  1  2   3  4  5      0  1  2
  [ [a b c]   [d e f] ]  [ [g h i] ]
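The split itself can be sketched as follows (a simplified illustration, not Drill's implementation): completed rows stay in the full batch, while the in-flight row's elements are copied to a look-ahead batch whose indexes restart at zero.

```java
// Simplified sketch (not Drill's implementation) of splitting a
// flattened array column when a row overflows.
public class OverflowSplit {
    // Completed rows stay in the full batch; the overflow row's elements
    // move to a fresh look-ahead batch, re-based at element index 0.
    public static String[][] split(String[] values, int[] offsets, int overflowRow) {
        int start = offsets[overflowRow];
        String[] full = java.util.Arrays.copyOfRange(values, 0, start);
        String[] lookAhead = java.util.Arrays.copyOfRange(values, start, values.length);
        return new String[][] { full, lookAhead };
    }

    public static void main(String[] args) {
        String[] values = {"a","b","c","d","e","f","g","h","i"};
        int[] offsets = {0, 3, 6, 9};   // start of rows 0..2, plus the end
        String[][] batches = split(values, offsets, 2);
        System.out.println(java.util.Arrays.toString(batches[1])); // [g, h, i]
    }
}
```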
As it turns out, there is a simple recursive algorithm, an extension of the reasoning for the top-level scalar case, that can handle arrays:
Consider the writers. Each writer corresponds to a single vector. Writers are grouped into logical tree nodes. Those in the root node write to (single, scalar) columns that are either top-level columns, or nested some level down in single-value (not array) tuples. Another tree level occurs in an array: the elements of the array use a different (faster-changing) index than the top (row-level) writers. Different arrays have different indexes: a row may have, say, four elements in array A, but 20 elements in array B.
Further, arrays can be singular (a repeated int, say) or apply to an entire tuple (a repeated map). And, since Drill supports the full JSON model, in the most general case, there is a tree of array indexes that can be nested to an arbitrary level. (A row can have an array of maps which contains a column that is, itself, a list of repeated maps, a field of which is an array of ints.)
Writers handle this index tree via a tree of writer index objects, often specialized for various tasks.
Now we can get to the key concept in this section: how we update those indexes after an overflow. The top-level index reverts to zero. (We start writing the 0th row in the new look-ahead batch.) But, nested indexes (those for arrays) will start at some other position depending on the number of elements already written in the overflow row. The number of such elements is determined by a top-down traversal of the tree (to determine the start offset of each array for the row). Resetting the writer indexes is then a bottom-up process: based on the number of elements in that array, each writer index is reset to match.
This flow is the opposite of the "normal" case in which a new batch is started top-down, with each index being reset to zero.
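The two flows can be contrasted in a small sketch (hypothetical classes, not Drill's writer implementation): starting a normal batch zeroes every index top-down, while rollover resets each index to the count of overflow-row elements already rewritten, working bottom-up.

```java
// Hypothetical index tree (not Drill's writer classes) contrasting the
// normal new-batch reset with the post-overflow rollover described above.
public class IndexTree {
    static final class Index {
        int position;                         // current write position
        int overflowRowElementCount;          // overflow-row elements already rewritten
        final java.util.List<Index> children = new java.util.ArrayList<>();

        // Normal flow: a new batch resets every index to zero, top-down.
        void startBatch() {
            position = 0;
            children.forEach(Index::startBatch);
        }

        // Overflow flow: reset bottom-up; each index resumes after the
        // elements already copied into the look-ahead batch.
        void rollover() {
            children.forEach(Index::rollover);
            position = overflowRowElementCount;
        }
    }

    public static void main(String[] args) {
        Index row = new Index();              // top-level (row) index
        Index arrayA = new Index();           // nested array index
        row.children.add(arrayA);
        row.overflowRowElementCount = 0;      // rows restart at 0
        arrayA.overflowRowElementCount = 3;   // 3 elements already rewritten
        row.rollover();
        System.out.println(row.position + " " + arrayA.position); // 0 3
    }
}
```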
This work builds on the TupleModel abstraction. In particular, we use the single tuple model, which works with a single batch. This model provides a simple, uniform interface to work with columns and tuples (rows, maps), and a simple way to work with arrays. This interface reduces the above array algorithm to a simple set of recursive method calls.
Copyright © The Apache Software Foundation. All rights reserved.