All Classes and Interfaces
Class
Description
Writer for an array-valued column.
Object representation of an array writer.
A delegating wrapper around a ListenableFuture that adds support for the AbstractCheckedFuture.checkedGet() and AbstractCheckedFuture.checkedGet(long, TimeUnit) methods.
Abstract definition of column metadata.
Base class for composite vectors.
Abstract class for string-to-something conversions.
DeMuxExchange is the opposite of MuxExchange.
Helps to run a query and await its results.
Base class for schedulers (pools) for Drillbits.
Captures all properties and turns them into an object node for late bind conversion.
Abstract base class for all JSON element parsers.
Base class for writers for fixed-width vectors.
Base class for writers that use the Java int type as their native type.
Represents table group scan with metadata usage.
AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer<B extends AbstractGroupScanWithMetadata.GroupScanWithMetadataFilterer<B>>
This class is responsible for filtering different metadata levels.
AbstractHandshakeHandler<T extends com.google.protobuf.MessageLite>
Base class for the runtime execution implementation of the Hash-Join and Hash-SetOp operators.
This holds information about the spilled partitions for the build and probe side.
Abstract base class for Index collection (collection of Index descriptors)
Abstract base class for an Index descriptor
Describes a base column type for map, dict, repeated map and repeated dict.
Base class for MapVectors.
Abstract implementation of the MetadataMapper interface which contains common code for all Metastore component metadata and RDBMS table types.
Describes an operator that expects more than one child operator as its input.
Multiplexing Exchange (MuxExchange) is used when results from multiple minor fragments belonging to the same
major fragment running on a node need to be collected at one fragment on the same node before distributing the
results further.
Abstract base class for the object layer in writers.
AbstractParquetGroupScan.RowGroupScanFilterer<B extends AbstractParquetGroupScan.RowGroupScanFilterer<B>>
This class is responsible for filtering different metadata levels including row group level.
Helper class responsible for creating and managing DrillFileSystem.
Abstract base class for file system based partition descriptors and Hive
partition descriptors.
Abstract base implementation of PluginImplementor that can be used by plugin implementors which can support only a subset of all provided operations.
Parent class for all pojo readers.
Parent class for all pojo writers created for each field.
Base class for an object with properties.
Parent class for record inspectors, which are responsible for counting processed records and managing free and used value holders.
Base class-holder for a list of RelDataTypeFields.
Abstract base class for a resource manager.
Basic implementation of a row set for both the single and multiple
(hyper) varieties, both the fixed and extensible varieties.
Base class for concrete scalar column writers including actual vector
writers, and wrappers for nullable types.
Column writer implementation that acts as the basis for the
generated, vector-specific implementations.
Wraps a scalar writer and its event handler to provide a uniform
JSON-like interface for all writer types.
Abstract implementation of SchemaFactory which ensures that a given schema name is always converted to lower case.
Base class for the projection-based and defined-schema-based scan schema trackers.
Describes an operator that expects a single child operator as its input.
Implements an AbstractUnaryRecordBatch where the incoming record batch is known at the time of creation.
Base class for row sets backed by a single record batch.
AbstractSingleValueWriter<I extends org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector,W extends BaseWriter>
Parent class for all primitive value writers
To get good performance for the most commonly used pattern matches, simple pattern matchers are provided:
CONSTANT('ABC') uses SqlPatternConstantMatcher;
STARTSWITH('ABC%') uses SqlPatternStartsWithMatcher;
ENDSWITH('%ABC') uses SqlPatternEndsWithMatcher;
CONTAINS('%ABC%') uses SqlPatternContainsMatcher.
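As an illustration of the mapping above, here is a minimal sketch of how a LIKE pattern's shape might select a specialized matcher. It is an assumption-laden stand-in: Drill's real matchers operate on UTF-8 bytes in value vectors, while this version uses plain Strings and a hypothetical forPattern helper.

```java
// Illustrative sketch only: selects a matcher from the shape of a LIKE
// pattern, mirroring the mapping listed above.
interface SqlPatternMatcher {
  boolean match(String input);
}

final class SqlPatternMatchers {
  static SqlPatternMatcher forPattern(String like) {
    boolean leading = like.startsWith("%");
    boolean trailing = like.endsWith("%");
    String body = like.replace("%", "");
    if (leading && trailing) {
      return s -> s.contains(body);     // CONTAINS('%ABC%')
    } else if (trailing) {
      return s -> s.startsWith(body);   // STARTSWITH('ABC%')
    } else if (leading) {
      return s -> s.endsWith(body);     // ENDSWITH('%ABC')
    }
    return s -> s.equals(body);         // CONSTANT('ABC')
  }
}
```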
Abstract class for StorePlugin implementations.
Implements AbstractUnaryRecordBatch for operators that do not have an incoming record batch available at creation time; the input is typically set up a few steps after creation.
Task manager that does nothing.
Abstract implementation of the Transformer interface which contains common code for all Metastore component metadata types.
Reader for a tuple (a row or a map). Provides access to each column using either a name or a numeric index.
Implementation of a writer for a tuple (a row or a map). Provides access to each column using either a name or a numeric index.
Generic object wrapper for the tuple writer.
Listener (callback) to handle requests to add a new column to a tuple (row
or map).
Base class for operators that have a single input.
Represents a projected column that has not yet been bound to a
table column, special column or a null column.
Represents an unresolved table column to be provided by the reader (or filled in with nulls). May be associated with a provided schema column.
This class intercepts HTTP requests without the requisite OAuth credentials and
adds them to the request.
Provides a concurrent way to account for memory usage without locking.
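A minimal sketch of lock-free accounting of this kind, assuming a simple fixed limit; the MemoryAccountant class here is hypothetical, not Drill's implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free memory accounting against a fixed limit using compare-and-set.
final class MemoryAccountant {
  private final long limit;
  private final AtomicLong used = new AtomicLong();

  MemoryAccountant(long limit) { this.limit = limit; }

  // Returns true if the requested bytes were accounted under the limit.
  boolean reserve(long bytes) {
    while (true) {
      long current = used.get();
      long proposed = current + bytes;
      if (proposed > limit) {
        return false;                 // would exceed the limit
      }
      if (used.compareAndSet(current, proposed)) {
        return true;                  // accounted without locking
      }                               // lost a race with another thread; retry
    }
  }

  void release(long bytes) {
    used.addAndGet(-bytes);
  }
}
```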
Describes the type of outcome that occurred when trying to account for allocation of memory.
Wrapper around a DataTunnel that tracks the status of batches sent to other Drillbits.
Wrapper around a UserClientConnection that tracks the status of batches sent to the user.
Utility class that allows a group of receivers to confirm reception of a record batch as a single unit.
Evaluates if a query can be admitted to a ResourcePool or not by comparing query user/groups with the
configured users/groups policies for this selector.
Defines possible actions on the file and performs the necessary action.
AdjustOperatorsSchemaVisitor visits corresponding operators and, depending upon their functionality, adjusts their output row types.
Specialized aggregate function for SUMing the COUNTs.
A shim making an aircompressor (de)compressor available through the BytesInputCompressor
and BytesInputDecompressor interfaces.
Aliases table.
List aliases as a System Table
Representation of an entry in the System table - Aliases
Registry for public and user-owned aliases.
Class for obtaining and managing storage and table alias registries.
Target object type for which alias will be applied.
Manages the relationship between one or more allocators and a particular
UDLE.
Supports cumulative allocation reservation.
Exception thrown when a closed BufferAllocator is used.
SQLException for object-already-closed conditions, e.g., calling a method on a closed Statement.
Interface to register the AM.
Register this App Master in ZK to prevent duplicates.
Returns cluster status as a tree of JSON objects.
Stop the cluster.
Launch the AM through YARN.
Security manager for the Application Master.
Implements the three supported AM security models: Drill,
hard-coded user and password, and open access.
Defines the interface between the Application Master and YARN.
Provides a collection of web UI links for the YARN Resource Manager and the
Node Manager that is running the Drill-on-YARN AM.
Wrapper around the asynchronous versions of the YARN AM-RM and AM-NM
interfaces.
Implementation of AnalyzeInfoProvider for file-based tables.
Implementation of AnalyzeInfoProvider for easy group scan tables.
Interface for obtaining information required for analyzing tables, such as table segment columns.
Implementation of AnalyzeInfoProvider for parquet tables.
Complex selector whose value is a list of other Simple or Complex Selectors.
Describes a class that was annotated with one of the configured annotations.
A class annotation.
Abstract description of a remote process launch that describes the many
details needed to launch a process on a remote node.
Represents one level within an array.
Parses a JSON array, which consists of a list of elements, represented by a ValueListener.
Generic array reader.
Reader for an array-valued column.
Object representation of an array reader.
Index into the vector of elements for a repeated vector.
Simple map data structure for storing (int -> int) entries where the max key value is below 2^16; it avoids hashing keys by using the key directly as an array index when retrieving values.
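A sketch of the idea, using a hypothetical stand-in class rather than the Drill structure itself:

```java
import java.util.Arrays;

// An (int -> int) map for keys below 2^16 backed by a plain array, so a
// lookup is a direct array index with no hashing.
final class SmallIntIntMap {
  static final int MISSING = -1;
  private final int[] values = new int[1 << 16];  // one slot per possible key

  SmallIntIntMap() {
    Arrays.fill(values, MISSING);
  }

  void put(int key, int value) {
    values[key] = value;              // the key itself is the array index
  }

  int get(int key) {
    return values[key];               // returns MISSING (-1) when absent
  }
}
```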
Writer for values into an array.
Utilities commonly used with ASM.
Responsible for assigning a set of work units to the available slices.
A field set in an annotation (to simplify, we keep a list of string representations of the values).
Implementation of DynamicFeature.
Implementation of DynamicFeature.
AuthenticationOutcomeListener<T extends com.google.protobuf.Internal.EnumLite,C extends ClientConnection,HS extends com.google.protobuf.MessageLite,HR extends com.google.protobuf.MessageLite>
Handles SASL exchange, on the client-side.
An implementation of this factory will be initialized once at startup, if the authenticator is enabled (see AuthenticatorFactory.getSimpleName()).
Simple wrapper class that allows Locks to be released via a try-with-resources block.
A class similar to Pointer<>, but with features unique to holding
AutoCloseable pointers.
Utilities for AutoCloseable classes.
Converts and writes all map children using the provided AvroColumnConverterFactory.MapColumnConverter.converters.
Format plugin config for Avro data files.
Format plugin for Avro data files.
Utility class that provides methods to interact with Avro schema.
General mechanism for waiting on the query to be executed.
Base field factor class which handles the common tasks for
building column writers and JSON listeners.
Common implementation for both the test and production versions
of the fragment context.
Basic reader implementation for json documents.
Common provider of tuple schema, column metadata, and statistics for table, partition, file or row group.
BaseMongoSubScanSpec.BaseMongoSubScanSpecBuilder<B extends BaseMongoSubScanSpec.BaseMongoSubScanSpecBuilder<B>>
Implementation of OperatorContext that provides services needed by most run-time operators.
This OptionManager implements some of the basic methods and should be extended by concrete implementations.
Implementation of ParquetMetadataProvider which contains base methods for obtaining metadata from parquet statistics.
Column reader implementation that acts as the basis for the generated, vector-specific implementations.
Provide access to the DrillBuf for the data vector.
Column writer implementation that acts as the basis for the
generated, vector-specific implementations.
Base wrapper for algorithms that use sort comparisons.
Implementation of StatisticsKind which contains base table statistics kinds with an implemented mergeStatistics() method.
Base implementation of the TableMetadata interface.
Base implementation for a tuple model which is common to the "single" and "hyper" cases.
Base class for variable-width (VarChar, VarBinary, etc.) writers.
Base class for code-generation-based tasks.
Build a set of writers for a single (non-hyper) vector container.
BasicClient<T extends com.google.protobuf.Internal.EnumLite,CC extends ClientConnection,HS extends com.google.protobuf.MessageLite,HR extends com.google.protobuf.MessageLite>
A JSON output class that generates standard JSON.
Basic reader builder for simple non-file readers.
A server is bound to a port and is responsible for responding to various types of requests.
BasicServer.ServerHandshakeHandler<T extends com.google.protobuf.MessageLite>
Provides handy methods to retrieve Metastore Tables data for analysis.
Request metadata holder that provides request metadata types, filters and columns.
Basic metadata transformer class which can transform a given list of TableMetadataUnit into BaseTableMetadata, SegmentMetadata, FileMetadata, RowGroupMetadata, PartitionMetadata, or all metadata types returned in one holder (BasicTablesTransformer.MetadataHolder).
Provides access to the row set (record batch) produced by an operator.
Represents a group of batches spilled to disk.
Tool for printing the content of record batches to screen.
Base strategy for reading a batch of Parquet records.
Strategy for reading a record batch when all columns are
fixed-width.
Strategy for reading mock records.
Strategy for reading a record batch when at least one column is variable width.
Holder class that contains batch naming, batch and record index.
Historically BatchSchema is used to represent the schema of a batch.
This class predicts the sizes of batches given an input batch.
A factory for creating BatchSizePredictors.
Helper class to assist the Flat Parquet reader in building batches which adhere to memory sizing constraints.
A container class to hold a column batch memory usage information.
Container class which holds memory usage information about a variable length ValueVector; all values are in bytes.
Validate a batch of value vectors.
Helps to select a queue whose QueryQueueConfig.getMaxQueryMemoryInMBPerNode() is nearest to the max memory on a node required by the given query.
Listener for JSON integer values.
BigInt implements a vector of fixed width values.
Parquet value writer for passing decimal values into RecordConsumer to be stored as BINARY type.
Specialized reader for bit columns.
Specialized writer for bit columns.
Protobuf type exec.bit.control.BitControlHandshake
Protobuf type exec.bit.control.BitControlHandshake
Protobuf type exec.bit.control.BitStatus
Protobuf type exec.bit.control.BitStatus
Protobuf type exec.bit.control.Collector
Protobuf type exec.bit.control.Collector
Protobuf type exec.bit.control.CustomMessage
Protobuf type exec.bit.control.CustomMessage
Protobuf type exec.bit.control.FinishedReceiver
Protobuf type exec.bit.control.FinishedReceiver
Protobuf type exec.bit.control.FragmentStatus
Protobuf type exec.bit.control.FragmentStatus
Protobuf type exec.bit.control.InitializeFragments
Protobuf type exec.bit.control.InitializeFragments
Protobuf type exec.bit.control.PlanFragment
Protobuf type exec.bit.control.PlanFragment
Protobuf type exec.bit.control.QueryContextInformation
Protobuf type exec.bit.control.QueryContextInformation
//// BitControl RPC ///////
Protobuf type exec.bit.control.WorkQueueStatus
Protobuf type exec.bit.control.WorkQueueStatus
Protobuf type exec.bit.data.AckWithCredit
Protobuf type exec.bit.data.AckWithCredit
Protobuf type exec.bit.data.BitClientHandshake
Protobuf type exec.bit.data.BitClientHandshake
Protobuf type exec.bit.data.BitServerHandshake
Protobuf type exec.bit.data.BitServerHandshake
Protobuf type exec.bit.data.FragmentRecordBatch
Protobuf type exec.bit.data.FragmentRecordBatch
Protobuf enum exec.bit.data.RpcType
Protobuf type exec.bit.data.RuntimeFilterBDef
Protobuf type exec.bit.data.RuntimeFilterBDef
Function templates for Bit/BOOLEAN functions other than comparison
functions.
Utility class providing common methods shared between DataClient and ControlClient.
Add a system table for listing connected users on a cluster.
Bit implements a vector of bit-width values.
According to Putze et al.'s "Cache-, Hash- and Space-Efficient Bloom Filters" (see the paper for details), the main idea is to construct tiny bucketized bloom filters which benefit the CPU cache and SIMD opcodes.
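A compact sketch of the bucketized idea follows, with hypothetical parameters (512-bit buckets, 3 probe bits) rather than Drill's actual constants:

```java
// Blocked Bloom filter in the spirit of Putze et al.: each key maps to one
// cache-line-sized bucket and all probe bits are set within that bucket, so
// a lookup touches a single cache line.
final class BlockedBloomFilter {
  private static final int WORDS_PER_BUCKET = 8;   // 8 * 64 = 512-bit bucket
  private final long[] bits;
  private final int bucketCount;

  BlockedBloomFilter(int bucketCount) {
    this.bucketCount = bucketCount;
    this.bits = new long[bucketCount * WORDS_PER_BUCKET];
  }

  void add(long hash) {
    int base = (int) Long.remainderUnsigned(hash, bucketCount) * WORDS_PER_BUCKET;
    for (int i = 0; i < 3; i++) {                  // 3 probe bits per key
      int bit = (int) ((hash >>> (i * 9)) & 511);  // index within the bucket
      bits[base + (bit >>> 6)] |= 1L << (bit & 63);
    }
  }

  boolean mightContain(long hash) {
    int base = (int) Long.remainderUnsigned(hash, bucketCount) * WORDS_PER_BUCKET;
    for (int i = 0; i < 3; i++) {
      int bit = (int) ((hash >>> (i * 9)) & 511);
      if ((bits[base + (bit >>> 6)] & (1L << (bit & 63))) == 0) {
        return false;                              // definitely absent
      }
    }
    return true;                                   // possibly present
  }
}
```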
Listener for JSON Boolean fields.
Enum that contains two boolean types: TRUE and FALSE.
A decorating accessor that returns null for indices that are beyond the underlying vector's capacity.
Broadcast Sender broadcasts incoming batches to all receivers (one or more).
Wrapper class to deal with byte buffer allocation.
Represents the set of in-memory batches accumulated by
the external sort.
BufferedDirectBufInputStream reads from the underlying InputStream in blocks of data, into an internal buffer.
Manages a list of DrillBufs that can be reallocated as needed.
Build the set of writers from a defined schema.
Main class to integrate classpath scanning in the build.
Build (materialize) as set of vectors based on a provided
metadata schema.
Evaluate a substring expression for a given value; specifying the start
position, and optionally the end position.
Modeled after org.apache.hadoop.io.WritableUtils.
Class loader for "plain-old Java" generated classes.
A special type of Map with Strings as keys, where the case of a key is ignored for operations involving keys, such as CaseInsensitiveMap.put(java.lang.String, VALUE), CaseInsensitiveMap.get(java.lang.Object), etc.
Wrapper around PersistentStore to ensure all passed keys are converted to lower case and stored this way.
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/CastEmptyStringVarTypesToNullableNumeric.java
This is a master class used to generate code for HashTables.
A CharSequence is a readable sequence of char values.
Evaluate a substring expression for a given UTF-8 value; specifying the start
position, and optionally the end position.
A ClassVisitor that verifies the required call sequence described in http://asm.ow2.org/asm50/javadoc/user/org/objectweb/asm/ClassVisitor.html.
Extension of Function that allows throwing a checked exception.
A CheckedFuture is a ListenableFuture that includes versions of the get methods that can throw a checked exception.
The Java standard library does not provide a lambda function interface for functions that take no arguments but that throw an exception.
A MethodVisitor that verifies the required call sequence according to http://asm.ow2.org/asm50/javadoc/user/org/objectweb/asm/MethodVisitor.html.
A class that implements a specific type.
Represents an additional level of error context detail that
adds to that provided by some outer context.
Implements the "plain Java" method of code generation and
compilation.
Selects between the two supported Java compilers: Janino and the built-in Java compiler.
Represents a (Nullable)?(Type)Holder instance.
Plugin locator for the "classic" class-path method of locating connectors.
Build the original scanner based on the RecordReader interface.
Classpath scanning utility.
Compiles generated code, merges the resulting class with the
template class, and performs byte-code cleanup on the resulting
byte codes.
Provides a static set of contextual operations that can be configured one way
for production, a separate way for unit tests.
Creates a deep copy of a LogicalExpression.
It allows setting the current value in the iterator and can be used once after a ClosingStreamIterator.next() call.
Interface which identifies the cluster controller methods that are safe to call from the Dispatcher.
Controls the Drill cluster by reconciling the current cluster state with a desired state, taking corrective action to keep the cluster in the desired state.
Controller lifecycle state.
Pluggable interface built to manage cluster coordination.
Defined cluster tier types.
Global code compiler mechanism shared by all threads and operators.
Abstracts out the details of compiling code using the two available
mechanisms.
A code generator is responsible for generating the Java source code required
to complete the implementation of an abstract template.
This class represents kinds of column statistics which may be received as a union
of other statistics, for example column nulls count may be received as a sum of nulls counts
of underlying metadata parts.
This class represents kinds of table statistics which may be received as a union
of other statistics, for example row count may be received as a sum of row counts
of underlying metadata parts.
Aggregate function which stores incoming fields into the map.
Aggregate function which collects incoming VarChar column values into the list.
Basic accessors for most Drill vector types and modes.
Algorithms for building a column given a metadata description of the column and
the parent context that will hold the column.
Build a column schema (AKA "materialized field") based on name and a
variety of schema options.
Base class for any kind of column converter.
Converts and sets given value into the specific column writer.
Converts and writes array values using ColumnConverter.ArrayColumnConverter.valueConverter into ColumnConverter.ArrayColumnConverter.arrayWriter.
Converts and writes dict values using provided key / value converters.
Does nothing, is used when column is not projected to avoid unnecessary
column values conversions and writes.
Converts and writes all map children using the provided ColumnConverter.MapColumnConverter.converters.
Converts and writes scalar values using the provided ColumnConverter.ScalarColumnConverter.valueConverter.
Deprecated.
it is never used.
Defines a column for the "enhanced" version of the mock data
source.
Columns that give information about where file data comes from.
Columns that give internal information about file or its parts.
The class represents "cache" for partition and table columns.
Metadata description of a column including names, types and structure
information.
Rough characterization of Drill types into metadata categories.
Holds system / session options that are used for obtaining partition / implicit / special column names.
Core interface for a projected column.
Base interface for all column readers, defining a generic set of methods
that all readers provide.
Gather generated reader classes into a set of class tables to allow rapid
run-time creation of readers.
The reader structure is heavily recursive.
Handles the special case in which the entire row is returned as a
"columns" array.
Parses the `columns` array.
Scan framework for a file that supports the special "columns" column.
Implementation of the columns array schema negotiator.
Schema negotiator that supports the file scan options plus access
to the specific selected columns indexes.
Represents the write-time state for a column including the writer and the (optional)
backing vector.
Primitive (non-map) column state.
Columns move through various lifecycle states as identified by this
enum.
Represents collection of statistics values for specific column.
Implementation of CollectableColumnStatisticsKind which contains base column statistics kinds with an implemented mergeStatistics() method.
Generic information about a column writer including:
Metadata
Write position information about a writer needed by a vector overflow
implementation.
Gather generated writer classes into a set of class tables to allow rapid
run-time creation of writers.
A Drill record batch consists of a variety of vectors, including maps and lists.
Drill YARN client command line options.
Comparator type.
Comparison predicates for metadata filter pushdown.
Container that holds a complete work unit.
This function exists to help the user understand the inner schemata of maps.
It is NOT recursive (yet).
Visitor that moves non-RexFieldAccess rex nodes from the project below Uncollect to the left side of the Correlate.
Text reader that complies with the RFC 4180 standard for text/csv files.
Implementation of SqlVisitor that converts a bracketed compound SqlIdentifier to a bracket-less compound SqlIdentifier (also known as DrillCompoundIdentifier) to provide ease of use while querying complex types.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.CONCAT.
Interface that defines an implementation to get all the config file names for default, module specific, distribution specific and override files.
ConnectionMultiListener<T extends com.google.protobuf.Internal.EnumLite,CC extends ClientConnection,HS extends com.google.protobuf.MessageLite,HR extends com.google.protobuf.MessageLite,BC extends BasicClient<T,CC,HS,HR>>
ConnectionMultiListener.Builder<T extends com.google.protobuf.Internal.EnumLite,CC extends ClientConnection,HS extends com.google.protobuf.MessageLite,HR extends com.google.protobuf.MessageLite,BC extends BasicClient<T,CC,HS,HR>>
Defines a storage connector: a storage plugin config along with the
locator which can create a plugin instance given an instance of the
config.
Locates storage plugins.
Populate metadata columns: either file metadata (AKA "implicit columns") or directory metadata (AKA "partition columns"). In both cases the column type is nullable Varchar and the column value is predefined by the projection planner; this class just copies that value into each row.
Description of a constant argument of an expression.
For CSV files without headers, but with a provided schema,
handles the case where extra fields appear in the file beyond
the columns enumerated in the schema.
Describes a container request in terms of priority, memory, cores and
placement preference.
Abstract representation of a container of vectors: a row, a map, a
repeated map, a list or a union.
A mix-in used for introducing container vector-like behaviour.
Implement "current_schema" function.
Implement "session_id" function.
Implement "user", "session_user" or "system_user" function.
Provides query context information (such as query start time, query user, default schema etc.) for UDFs.
Maintains connection between two particular bits.
Service that allows one Drillbit to communicate with another.
Defines how the Controller should handle custom messages.
A simple interface that describes the nature of the response to the custom incoming message.
Interface for defining how to serialize and deserialize custom message for consumer who want to use something other
than Protobuf messages.
Manages communication tunnels between nodes.
Purely to simplify memory debugging.
Holds metrics related to bit control rpc layer
ControlTunnel.ProtoSerDe<MSG extends com.google.protobuf.MessageLite>
This rule will convert "select count(*) as mycount from table" or "select count(not-nullable-expr) as mycount from table" into a DirectScan on the table's metadata.
This rule is a logical planning counterpart to a corresponding ConvertCountToDirectScanPrule
physical rule
Converter utility class which helps to convert Metastore metadata objects from / to string value.
Convert Hive scan to use Drill's native parquet reader instead of Hive's native reader.
Rule which converts
Convert a VARCHAR column to a BIT column following the Java rules
for parsing Boolean values, then using 1 if the boolean is true, 0
if false.
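A hedged sketch of the conversion just described; the helper class is hypothetical, not Drill's converter:

```java
// Parse a VARCHAR with Java's Boolean rules, then store 1 for true, 0 for false.
public class VarCharToBitDemo {
  static int convert(String value) {
    return Boolean.parseBoolean(value) ? 1 : 0;
  }

  public static void main(String[] args) {
    System.out.println(convert("true"));   // 1
    System.out.println(convert("TRUE"));   // 1 (parseBoolean is case-insensitive)
    System.out.println(convert("yes"));    // 0 (anything other than "true" is false)
  }
}
```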
Convert a VARCHAR column to a DATE column following the Java rules
for parsing a date time, optionally using the formatter provided in
the column schema.
Convert a VARCHAR column to a DECIMAL column following the Java rules
for parsing integers (i.e.
Convert a VARCHAR column to a DOUBLE column following the Java rules
for parsing doubles (i.e.
Convert a VARCHAR column to an INT column following the Java rules
for parsing integers (i.e.
Convert a VARCHAR column to an INTERVAL column following the Java rules
for parsing a period.
Convert a VARCHAR column to a BIGINT column following the Java rules
for parsing longs (i.e.
Convert a VARCHAR column to a TIME column following the Java rules
for parsing a date time, optionally using the formatter provided in
the column schema.
Convert a VARCHAR column to a TIMESTAMP column following the Java rules
for parsing a date time, optionally using the formatter provided in
the column schema.
Protobuf type exec.DrillbitEndpoint
Protobuf type exec.DrillbitEndpoint
Protobuf enum exec.DrillbitEndpoint.State
Protobuf type exec.DrillServiceInstance
Protobuf type exec.DrillServiceInstance
Protobuf type exec.Roles
Protobuf type exec.Roles
This class is used for backward compatibility when reading older query profiles that
stored operator id instead of its name.
This class is used internally for tracking injected countdown latches.
See CountDownLatchInjection.
Degenerates to PauseInjection.pause() if initialized to zero count.
A utility class that contains helper functions used by rules that convert COUNT(*) and COUNT(col) aggregates (no group-by) to DirectScan.
Generate a covering index plan that is equivalent to the original plan.
Creates a Cpu GaugeSet
Handler for handling CREATE ALIAS statements.
Interface that provides the info needed to create a new table.
Provider of authentication credentials.
AES_DECRYPT() decrypts the encrypted string crypt_str using the key string key_str and returns the original cleartext string.
aes_encrypt()/ aes_decrypt(): implement encryption and decryption of data using the official AES (Advanced Encryption Standard) algorithm,
previously known as "Rijndael." AES_ENCRYPT() encrypts the string str using the key string key_str and returns a
binary string containing the encrypted output.
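An illustrative JDK sketch of AES encryption and decryption in the style of these functions; the key derivation shown (hashing the key string to 128 bits) is an assumption, and Drill's UDFs may handle keys differently:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class AesDemo {
  // Derive a 128-bit AES key from an arbitrary key string (assumed scheme).
  static SecretKeySpec keyFor(String keyStr) throws Exception {
    byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(keyStr.getBytes(StandardCharsets.UTF_8));
    return new SecretKeySpec(Arrays.copyOf(digest, 16), "AES");
  }

  public static void main(String[] args) throws Exception {
    SecretKeySpec key = keyFor("key_str");
    Cipher cipher = Cipher.getInstance("AES");  // ECB/PKCS5Padding by default

    cipher.init(Cipher.ENCRYPT_MODE, key);
    byte[] encrypted = cipher.doFinal("cleartext".getBytes(StandardCharsets.UTF_8));

    cipher.init(Cipher.DECRYPT_MODE, key);
    byte[] decrypted = cipher.doFinal(encrypted);
    System.out.println(new String(decrypted, StandardCharsets.UTF_8)); // cleartext
  }
}
```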
This class returns the md2 digest of a given input string.
This function returns the MD5 digest of a given input string.
sha() / sha1(): Calculates an SHA-1 160-bit checksum for the string, as described in RFC 3174 (Secure Hash Algorithm).
sha2() / sha256(): Calculates an SHA-2 256-bit checksum for the string.
This function returns the SHA384 digest of a given input string.
This function returns the SHA512 digest of a given input string.
Generates and adds a CSRF token to a HTTP session.
Generates and adds a CSRF token to a HTTP session.
All forms on site have a field with a CSRF token injected by server.
All forms on site have a field with a CSRF token injected by server.
Generic mechanism to pass error context throughout the row set
mechanism and scan framework.
Holder for store version.
Manages a connection for each endpoint.
Holds metrics related to bit data rpc layer
Listener that keeps track of the status of batches sent, and updates the SendingAccountor when status is received
for each batch
Specifies the time grouping to be used with the nearest date function
This function takes two arguments, an input date object, and an interval and returns
the previous date that is the first date in that period.
This function takes three arguments, an input date string, an input date format string, and an interval and returns
the previous date that is the first date in that period.
Very simple date value generator that produces ISO dates
uniformly distributed over the last year.
Describes the default date output format to use for JSON.
Function to check if a varchar value can be cast to a date.
Utility class for Date, DateTime, TimeStamp, Interval data types.
Utility class for Date, DateTime, TimeStamp, Interval data types.
Parse local time dates.
Date implements a vector of fixed width values.
A DbGroupScan operator represents the scan associated with a database.
Provides methods to configure database prior to data source initialization.
No-op implementation of DbHelper for those databases that do not require any preparation before data source creation.
SQLite implementation of DbHelper, which creates the database path if needed.
DBQuery is an abstraction of an openTSDB query, used for extracting data from the storage system by a POST request to the DB.
Utility class to build a debug string for an object
in a standard format.
Deprecated.
Decimal18 implements a vector of fixed width values.
Deprecated.
Decimal28Dense implements a vector of fixed width values.
Deprecated.
Decimal28Sparse implements a vector of fixed width values.
Deprecated.
Decimal38Dense implements a vector of fixed width values.
Deprecated.
Decimal38Sparse implements a vector of fixed width values.
Deprecated.
Decimal9 implements a vector of fixed width values.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_ADD_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_AGGREGATE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_AVG_AGGREGATE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_CAST.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_DIV_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_MAX_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_MOD_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_SET_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_SUM_AGGREGATE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_SUM_SCALE.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DECIMAL_ZERO_SCALE.
This class is generated by jOOQ.
Non RM version of the parallelizer.
Helps to select the first default queue in the list of all the provided queues.
Represents a default resource manager for clusters that do not provide query
queues.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.DEFAULT.
This class is generated by jOOQ.
When selector configuration is absent for a ResourcePool then it is associated with a DefaultSelector.
Collects one or more exceptions that may occur, using
suppressed exceptions.
Iceberg delete operation: deletes data based on given row filter.
Delete operation holder, it includes filter by which Metastore data will be deleted
and set of metadata types to which filter will be applied.
This is a metadata provider for Delta tables, which are read by Drill's native Parquet reader.
Composite partition location corresponds to a directory in the file system.
Facade to the distributed file system (DFS) system that implements
Drill-on-YARN related operations.
Defines a single partition in a DFS table.
Internal structure for building a dict.
Reader for a Dict entry.
Writer for a Dict entry.
A ValueVector holding key-value pairs.
Physically the writer is an array writer with a special tuple writer as its element.
Base class for Java type-based conversion.
Encapsulates a Java expression, defined as anything that is
valid in the following code:
(expr)
Reader index that points directly to each row in the row set.
Implementation of a single row set with no indirection (selection)
vector.
Dispatches YARN, timer and ZooKeeper events to the cluster controller.
Interface for an add-on to the dispatcher that
should be started at start of the run and ended
at the end of the run.
Distributed query queue which uses a Zookeeper distributed semaphore to
control queuing across the cluster.
Exposes a snapshot of internal state information for use in status
reporting, such as in the UI.
Describes an operator's endpoint assignment requirements.
Indicates double expression predicate implementations.
Indicates a FilterExpression.Operator.AND operator expression: storagePlugin = 'dfs' and workspace = 'tmp'.
Indicates a FilterExpression.Operator.OR operator expression: storagePlugin = 'dfs' or storagePlugin = 's3'.
Generates random field values uniformly distributed over the range +-1 million, with any number of digits past the decimal point.
Listener for the JSON double type.
Aggregate function interface.
Aggregation implemented in Drill.
Base class for logical and physical Aggregations implemented in Drill
Rule that converts a LogicalAggregate to a DrillAggregateRel, implemented by a Drill "segment" operation followed by a "collapseaggregate" operation.
Drill logical node for "Analyze".
Application Master for Drill.
Provides functionality comparable to Guava's Closeables for AutoCloseables.
Starts, tracks and stops all the required services for a Drillbit daemon to work.
Implementation of the storage registry context which obtains the needed resources from the DrillbitContext.
Interface to define the listener to take actions when the set of active drillbits is changed.
Drill data structure for accessing and manipulating data buffers.
An InputStream that wraps a DrillBuf and implements the seekable interface.
Thin wrapper around byte array.
An extend of AbstractByteBufAllocator that wraps a Drill BufferAllocator.
This class serves as a wrapper class for SqlAggFunction.
This class serves as a wrapper class for SqlBetweenOperator.
This class serves as a wrapper class for SqlFunction.
This class serves as a wrapper class for SqlOperator.
This class serves as a wrapper class for SqlSumEmptyIsZeroAggFunction with the same goal as DrillCalciteSqlAggFunctionWrapper, but extends SqlSumEmptyIsZeroAggFunction to allow using additional Calcite functionality designated for SqlSumEmptyIsZeroAggFunction.
This interface is meant for the users of the wrappers, DrillCalciteSqlOperatorWrapper, DrillCalciteSqlFunctionWrapper and DrillCalciteSqlAggFunctionWrapper, to access the wrapped Calcite SqlOperator without knowing exactly which wrapper it is.
This utility contains the static functions to manipulate DrillCalciteSqlWrapper, DrillCalciteSqlOperatorWrapper, DrillCalciteSqlFunctionWrapper and DrillCalciteSqlAggFunctionWrapper.
There's a bug in ASM's CheckClassAdapter.
Thin wrapper around a UserClient that handles connect/close and transforms
String into ByteBuf.
A delegating compression codec factory that returns (de)compressors based on
https://github.com/airlift/aircompressor when possible and falls back to
parquet-mr otherwise.
Drill's SQL conformance is SqlConformanceEnum.DEFAULT with a couple of deviations.
Drill-specific Connection.
NOTE: DrillConnectionConfig will be changed from a class to an interface.
Drill's implementation of Connection.
Builds a controller for a cluster of Drillbits.
Convertlet table which allows to plug-in custom rex conversion of calls to
Calcite's standard operators.
Implementation of the DrillRelOptCost, modeled similarly to VolcanoCost.
Holder containing query state counter metrics.
Drill-specific DatabaseMetaData.
Drill's implementation of DatabaseMetaData.
Extends regular Instant.parse(java.lang.CharSequence) with more formats.
Logical RelNode representing a DirectGroupScan.
Converts a join with distinct right input to a semi-join.
Stores distribution field index and field name to be used in exchange operators.
Custom ErrorHandler class for Drill's WebServer to provide a better error message when SPNEGO login fails, and what to do next.
Utility class which contains methods for conversion between Drill ProtoBuf Error and Throwable.
Minus implemented in Drill.
DrillFileSystem is the wrapper around the actual FileSystem implementation.
In the Drill file system, all directories and files that start with a dot or underscore are ignored.
Rule that transforms item star fields in a filter, replacing them with actual field references.
Base class for logical and physical Filters implemented in Drill
Rule that converts a LogicalFilter to a Drill "filter" operation.
Wrapper around FSDataInputStream to collect IO Stats.
Represents the call of a function within a query and includes
the actual arguments and a reference to the function declaration (as a
"function holder.")
The base class of hash classes used in Drill.
Extension of HiveMetaStoreClient with addition of cache and methods useful
for Drill schema.
Provides factory methods for initialization of DrillHiveMetaStoreClient instances.
DrillViewTable which may be created from Hive view metadata and will work similarly to views defined in Drill.
Accessor class that extends the ConstraintSecurityHandler to expose protected methods for start and stop of the Handler.
Context for converting a tree of DrillRel nodes into a Drill logical plan.
Intersect implemented in Drill.
Implementation of net.hydromatic.avatica.AvaticaFactory for Drill and JDBC 4.0 (corresponds to JDK 1.6).
Implementation of AvaticaFactory for Drill and JDBC 4.1 (corresponds to JDK 1.7).
Convention with a set of rules to register for the jdbc plugin.
Interface which needs to be implemented by all the join relation expressions.
Logical Join implemented in Drill.
Base class for logical and physical Joins implemented in Drill.
Rule that converts a LogicalJoin to a DrillJoinRel, which is implemented by the Drill "join" operation.
Base class for logical and physical Limits implemented in Drill.
This rule converts a Sort that has an offset and/or fetch into a Drill Sort and LimitPOP Rel.
MergeFilterRule implements the rule for combining two Filters.
Rule for merging two projects, provided the projects aren't projecting identical sets of input references.
Metadata describing a column.
Client for the Drill-on-YARN integration.
Configuration used within the Drill-on-YARN code.
Implementation of SqlOperatorTable that contains standard operators and functions provided through SqlStdOperatorTable, and Drill User Defined Functions.
Utilities for Drill's planner.
Parquet currently supports a fixed binary type INT96 for storing Hive and Impala timestamps with nanosecond precision.
Parquet currently supports a fixed binary type, which is not implemented in Drill.
SQL parser, generated from Parser.jj by JavaCC.
Token literal values and constants.
Token Manager.
Helper methods or constants used in parsing a SQL query.
Drill-specific PreparedStatement.
Project implemented in Drill.
Base class for logical and physical Project implemented in Drill
Rule that converts a LogicalProject to a Drill "project" operation.
When a table supports project push down, the rule can be applied to reduce the number of read columns, thus improving scan operator performance.
This rule implements the run-time filter pushdown via the rowkey join for queries with row-key filters.
Rule to reduce aggregates to simpler forms.
Relational expression that is implemented in Drill.
Contains factory implementation for creating various Drill Logical Rel nodes.
Utility class that is a subset of the RelOptUtil class and is a placeholder
for Drill specific static methods that are needed during either logical or
physical planning.
InputRefVisitor is a utility class used to collect all the RexInputRef nodes in a
RexNode.
Stores information about fields, their names and types.
RexFieldsTransformer is a utility class used to convert column refs in a RexNode
based on inputRefMap (input to output ref map).
LoginService used when user authentication is enabled in Drillbit.
Provider which injects DrillUserPrincipal directly instead of getting it
from SecurityContext and typecasting
Drill-specific ResultSet.
Drill's implementation of ResultSet.
Pretty-printing wrapper class around the ZK-based queue summary.
GroupScan of a Drill table.
Base class for logical/physical scan rel implemented in Drill.
Base class for logical and physical Screen implemented in Drill
Classes that can be put in the Distributed Cache must implement this interface.
Definition of a Drill function defined using the @FunctionTemplate annotation on the class which implements the function.
Sort implemented in Drill.
Base class for logical and physical Sort implemented in Drill.
Rule that converts a Sort to a DrillSortRel, implemented by a Drill "order" operation.
Custom SpnegoAuthenticator for Drill.
Custom implementation of DrillSpnegoLoginService to avoid the need of passing targetName in a config file,
to include the SPNEGO OID and the way UserIdentity is created.
SqlCall interface with addition of method to get the handler.
Sql parser tree node to represent statement:
{ DESCRIBE | DESC } tblname [col_name | wildcard ]
Drill SqlLine application configuration.
Customized SqlParseException class.
Sql parse tree node to represent statement: RESET { <NAME> | ALL }.
Sql parse tree node to represent statement: SET <NAME> [ = VALUE ].
Drill-specific Statement.
Drill's implementation of Statement.
Wraps the stats table info including schema and tableName.
Struct which contains the statistics for the entire directory structure
TableMacros must return a TranslatableTable
This class adapts the existing DrillTable to a TranslatableTable
Rule that converts a LogicalUnion to a DrillUnionRel, implemented by a "union" operation.
Union implemented in Drill.
Captures Drill user credentials and privileges of the session user.
DrillUserPrincipal for anonymous (auth disabled) mode.
Logical Values implementation in Drill.
Base class for logical and physical Values implemented in Drill.
Rule that converts a LogicalValues to a Drill "values" operation.
Gives access to the Drill version as captured during the build.
Caution: don't rely on major, minor and patch versions only to compare two Drill versions.
Interface used by Drill components such as InformationSchema generator to get view info.
Base class for logical and physical Writer implemented in Drill.
Main class of Apache Drill JDBC driver.
Optiq JDBC driver.
Handler for handling DROP ALIAS statements.
Handler for handling DROP ALL ALIASES statements.
A Class containing information to read a single druid data source.
Dummy scalar array writer that allows a client to write values into
the array, but discards all of them.
This and the DummyConvertTo class merely act as placeholders so that Optiq allows the 'convert_to()' and 'convert_from()' functions in SQL.
This and the DummyConvertFrom class merely act as placeholders so that Optiq allows the 'convert_to()' and 'convert_from()' functions in SQL.
This and the DummyConvertTo class merely act as placeholders so that Optiq allows the 'flatten()' function in SQL.
Represents a non-projected column.
Parse and ignore an unprojected value.
Used to ensure the param "batch" is a non-negative number.
A dynamic column has a name but not a type.
Dynamically reads values from the given list of records.
A utility class that converts from JsonNode to DynamicPojoRecordReader during physical plan fragment deserialization.
Wrapper around the default and/or distributed resource managers to allow dynamically enabling and disabling queueing.
Loads schemas from storage plugins later, when CalciteSchema.getSubSchema(String, boolean) is called.
Unlike SimpleCalciteSchema, DynamicSchema could have an empty or partial schemaMap, but it could maintain a map of name->SchemaFactory, and only register a schema when the corresponding name is requested.
Projection filter based on the scan schema which typically starts as fully
dynamic, then becomes more concrete as the scan progresses.
Filter for a map, represented by a TupleMetadata.
Describes how to handle candidate columns not currently in the scan schema, which turns out to be a surprisingly complex question.
Filter for the top-level dynamic schema.
Dynamic credit-based flow control: the sender initially sends batches to the receiver using the initial static credit (3).
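A minimal sketch of the credit mechanism under the description above; the class and credit count are illustrative, not Drill's implementation:

```java
import java.util.concurrent.Semaphore;

// The sender must hold a credit for each in-flight batch; the receiver's
// acknowledgement returns the credit. Starts with the static credit of 3.
final class CreditFlowControl {
  private final Semaphore credits = new Semaphore(3);

  // Blocks until a credit is available before a batch may be sent.
  void beforeSend() throws InterruptedException {
    credits.acquire();
  }

  // Called when the receiver acknowledges a batch.
  void onAck() {
    credits.release();
  }
}
```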
Create the file scan lifecycle that manages the scan.
Base class for file readers.
Defines the static, programmer-defined options for this plugin.
Implementation of RelShuttleImpl that transforms plan to fit Calcite ElasticSearch rel implementor.
Implementation of RexShuttle that replaces RexInputRef expressions with ITEM calls to _MAP field.
Rule for converting Drill project to ElasticSearch project.
Parser for a JSON element.
Query queue to be used in an embedded Drillbit.
Implementation of aliases table that doesn't hold or return information.
Represents a run of empty arrays for which we have no type information.
Represents an empty array: the case where the parser has seen only [], but no array elements which would indicate the type.
Internal implementation for a list of (possible) variants when the list has no type associated with it at all.
Tracks and populates empty values in repeated value vectors.
This class provides utility methods to encode and decode a set of user specified SchemaPaths to a set of encoded SchemaPaths with the following properties.
Context to help initializing encryption related configurations for a connection.
EndpointAffinity captures affinity value for a given single Drillbit endpoint.
Presents an interface that describes the number of bytes for a particular work unit associated with a particular DrillbitEndpoint.
LeafPrel implementation that generates java code that may be executed to obtain results for the provided plan part.
ManagedReader implementation that compiles and executes specified code, calls the method on it for obtaining the values, and reads the results using column converters.
Implementation of CredentialsProvider that obtains credential values from environment variables.
To avoid coupling the JSON structure parser with Drill's error
reporting mechanism, the caller passes in an instance of this
error factory which will build the required errors, including
filling in caller-specific context.
Utility class that handles error message generation from protobuf error objects.
Custom error listener that converts all syntax errors into ExpressionParsingException.
Visitor that generates code for eval.
Extended variable descriptor ("holding container") for the variable
which references the value holder ("FooHolder") that stores the value
from a value vector.
Reads records from the RecordValueAccessor and writes into RecordWriter.
An abstraction used for dispatching store events.
Process events serially.
Our use of listeners that deliver events directly can sometimes cause problems when events are delivered recursively in the middle of event handling by the same object.
Our use of listeners that deliver events directly can sometimes cause problems when events are delivered recursively in the middle of event handling by the same object.
Injection for a single exception.
Exchanges are fragment boundaries in the physical operator tree.
This provides the resources required by an exchange operator.
Materializer visitor to remove exchange(s).
NOTE: this Visitor does NOT set OperatorId, as after Exchange removal all operators need renumbering.
Use OperatorIdVisitor on top to set the correct OperatorId.
Protobuf type exec.bit.FragmentHandle
Protobuf type exec.bit.FragmentHandle
Prepared statement state on server side.
Prepared statement state on server side.
SQLException for execution-canceled condition.
Tracks the simulated controls that will be injected for testing purposes.
The JSON specified for the ExecConstants.DRILLBIT_CONTROL_INJECTIONS option is validated using this class.
Injects exceptions and pauses at execution time for testing.
The context that is used by a Drillbit in classes like the FragmentExecutor.
Utility class to enhance the Java ExecutorService class functionality.
Executor task wrapper to enhance task cancellation behavior.
Allows us to decorate DrillBuf to make it expandable so that we can use them in the context of the Netty framework
(thus supporting RPC level memory accounting).
A special type of concurrent map which attempts to create an object before reporting that it does not exist.
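A sketch of that create-before-miss behavior (a hypothetical helper, not the Drill class):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// A lookup that misses first tries to build and publish the value, and only
// reports absence when the factory cannot produce one.
final class AutoCreatingMap<K, V> {
  private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
  private final Function<K, V> factory;  // may return null if uncreatable

  AutoCreatingMap(Function<K, V> factory) {
    this.factory = factory;
  }

  V get(K key) {
    V value = map.get(key);
    if (value != null) {
      return value;
    }
    V created = factory.apply(key);
    if (created == null) {
      return null;                           // genuinely does not exist
    }
    V raced = map.putIfAbsent(key, created);
    return raced != null ? raced : created;  // keep the winner of the race
  }
}
```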
Iceberg table generates metadata for each modification operation:
snapshot, manifest file, table metadata file.
Perform a schema projection for the case of an explicit list of
projected columns.
Condensed form of a Drill WHERE clause expression
node.
Represents a set of AND'ed expressions in Conjunctive Normal
Form (CNF).
Semanticized form of a Calcite relational operator.
An expression node with an unlimited set of children.
Represents a set of OR'ed expressions in Disjunctive Normal Form (DNF).
This class provides an empty implementation of ExprParserListener, which can be extended to create a listener which only needs to handle a subset of the available methods.
This interface defines a complete listener for a parse tree produced by ExprParser.
Converts a LogicalExpression to a RexNode; note that the inputRel could be in an old plan, but newRowType is the newly built rowType that the new RexNode will be applied upon, so when referencing fields use newRowType, and when needing the cluster, plannerSetting, etc., use the old inputRel.
Writes JSON Output that will wrap Binary, Date, Time, Timestamp, Integer,
Decimal and Interval types with wrapping maps for better type resolution upon
deserialization.
An extended CountDownLatch which allows us to await uninterruptibly.
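The usual shape of an uninterruptible await is sketched below; this is the standard pattern (swallow interrupts while waiting, then restore the interrupt flag), not necessarily Drill's exact code:

```java
import java.util.concurrent.CountDownLatch;

final class ExtendedLatchDemo {
  private final CountDownLatch latch = new CountDownLatch(1);

  void awaitUninterruptibly() {
    boolean interrupted = false;
    try {
      while (true) {
        try {
          latch.await();                  // wait for countdown to reach zero
          return;
        } catch (InterruptedException e) {
          interrupted = true;             // remember the interrupt, keep waiting
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();  // restore the interrupt status
      }
    }
  }

  void countDown() {
    latch.countDown();
  }
}
```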
Extended form of the mock record reader that uses generator class
instances to create the mock values.
Extends the original Option iterator.
Wrapper class for Extended Option Value
Based on V1 of the Mongo extended type spec.
Names of Mongo extended types.
External sort batch: a sort batch which can spill to disk in
order to operate within a defined memory footprint.
This handler fails any request on the connection.
An OptionManager which allows for falling back onto another OptionManager when retrieving options.
Describes a new field within an object.
A class's fields.
Extensible mechanism to build fields for a JSON object (a Drill
row or Map).
Interface which all mock column data generators must
implement.
Creates a field parser given a field description and an optional field
listener.
This class manages the projection pushdown for a complex path.
Holder class to store field information (name and type) with the list of nodes this field is used in.
Replaces original node with provided in mapper, otherwise returns original node.
Written file information holder.
Describes one file within a scan and is used to populate implicit columns.
FileGroupScan operator represents all data which will be scanned from FileSystem by a given physical plan.
Specify the file name and optional selection root.
Metadata which corresponds to the file level of table.
Collects file metadata for the given parquet file.
Represents projection column which resolved to a file metadata
(AKA "implicit") column such as "filename", "fqn", etc.
Definition of a file metadata (AKA "implicit") column for this query.
Parses the implicit file metadata columns out of a project list,
and marks them for special handling by the file metadata manager.
Implementation of MetadataInfoCollector for file-based tables.
Iterates over the splits for the present scan.
This class is generated by jOOQ.
The file scan framework adds into the scan framework support for implicit
reading from DFS splits (a file and a block).
Iterates over the splits for the present scan.
Options for a file-based scan.
The file schema negotiator adds no behavior at present, but is
created as a placeholder anticipating the need for file-specific
behavior later.
Implementation of the file-level schema negotiator.
The file scan framework adds into the scan framework support for
reading from DFS splits (a file and a block) and for the file-related
implicit and partition columns.
The file schema negotiator provides access to the Drill file system
and to the file split which the reader is to consume.
Implementation of the file-level schema negotiator which holds the
file split which the reader is to process.
Jackson serializable description of a file selection.
This class is generated by jOOQ.
Implements the CreateTableEntry interface to create new tables in FileSystem storage.
Implementation of MetadataProviderManager which uses file system providers and returns builders for file system based TableMetadataProvider instances.
Partition descriptor for file system based tables.
A Storage engine associated with a Hadoop FileSystem Implementation.
This is the top level schema that responds to root level path requests.
Helper class that provides methods to list directory statuses, file statuses, or both.
Performs the file upload portion of the operation by uploading an archive to
the target DFS system and directory.
Generic interface that provides functionality to write data into the file.
A visitor which visits a materialized logical expression and builds a FilterPredicate. If a visitXXX method returns null, that means the corresponding filter branch is not qualified for push down.
Evaluates information schema for the given condition.
Evaluates necessity to visit certain type of information_schema data using provided filter.
Evaluates necessity to visit certain type of information_schema data based
on given schema type.
Search through a LogicalExpression, finding all internal schema path references and returning them in a set.
Interface which defines filter expression types by which Metastore data can be read or deleted.
Indicates list of supported operators that can be used in filter expressions.
Transforms FilterExpression implementations into a representation suitable for the Metastore implementation.
Visits FilterExpression implementations and transforms them into an Iceberg Expression.
Visits FilterExpression implementations and transforms them into Bson implementations.
Call-back (listener) implementation for a push-down filter.
Listener for one specific group scan.
Generalized filter push-down strategy which performs all the tree-walking
and tree restructuring work, allowing a "listener" to do the work needed
for a particular scan.
Transforms given input into an Iceberg Expression which is used as a filter to retrieve, overwrite or delete Metastore component data.
Transforms given input into a Mongo Document which is used as a filter to retrieve, overwrite or delete Metastore component data.
A visitor class that analyzes a filter condition (typically an index condition) and a supplied input collation and determines what the output collation would be after applying the filter.
A visitor that is very similar to FindLimit0SqlVisitor in that it looks for a LIMIT 0 in the root portion of the query tree for the sake of enabling optimisations, but that differs in the following aspects.
Visitor that will identify whether the root portion of the RelNode tree contains a limit 0 pattern.
Reader for column names and types.
Parquet value writer for passing decimal values into RecordConsumer to be stored as FIXED_LEN_BYTE_ARRAY type.
Layer above the ResultSetLoader which handles standard conversions for scalar columns where the schema is known up front (i.e.
Float4 implements a vector of fixed width values.
Float8 implements a vector of fixed width values.
A Maven plugin to run the FreeMarker generation incrementally (if output has not changed, the files are not touched).
Foreman manages all the fragments (local and remote) for a single query where this
is the driving/root node.
Responsible for instantiating format plugins
Provides the ability to transform format location before creating FileSelection if required.
Similar to a storage engine but built specifically to work within a FileSystem context.
Interface for defining a Drill format plugin.
Provides the resources required by a non-exchange operator to execute.
This is the core Context which implements all the Context interfaces:
FragmentContext: A context provided to non-exchange operators.
ExchangeFragmentContext: A context provided to exchange operators.
RootFragmentContext: A context provided to fragment roots.
ExecutorFragmentContext: A context used by the Drillbit.
The interfaces above expose resources to varying degrees.
Fragment context interface: separates implementation from definition.
Runs a single fragment on a single Drillbit.
A Physical Operator that can be the leaf node of one particular execution
fragment.
The Fragment Manager is responsible for managing incoming data and executing a fragment.
OptionManager that holds options within FragmentContextImpl.
Generic interface to provide different parallelization strategies for MajorFragments.
Describes the root operation within a particular Fragment.
Is responsible for submitting query fragments for running (locally and remotely).
Holds statistics of a particular (minor) fragment.
The status reporter is responsible for receiving changes in fragment state and propagating the status back to the
Foreman either through a control tunnel or locally.
Wrapper class for a major fragment profile.
WindowFramer implementation that supports the FRAME clause.
Maintains state while traversing a finite state machine described by
an FsmDescriptor.
Describes a finite state machine in terms of a mapping of tokens to
characters, a regular expression describing the valid transitions
using the characters, and an end state.
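As an illustration of the token-to-character scheme described above (all names here are hypothetical, not the FsmDescriptor API): each token maps to one character, observed tokens append their characters to a history string, and a regular expression over those characters decides whether the completed sequence was legal.

    import java.util.Map;
    import java.util.regex.Pattern;

    class TinyFsm {
        private final Map<String, Character> tokenMap; // token -> character
        private final Pattern validSequences;          // legal transition strings
        private final StringBuilder history = new StringBuilder();

        TinyFsm(Map<String, Character> tokenMap, String regex) {
            this.tokenMap = tokenMap;
            this.validSequences = Pattern.compile(regex);
        }

        // Record one observed token.
        void transition(String token) {
            Character c = tokenMap.get(token);
            if (c == null) {
                throw new IllegalStateException("Unknown token: " + token);
            }
            history.append(c);
        }

        // At the end state, the accumulated string must match the regex.
        boolean isValid() {
            return validSequences.matcher(history).matches();
        }
    }

For example, mapping OPEN to '(' and CLOSE to ')' with the regex "(\\(\\))+" accepts only alternating open/close pairs.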
Is used to provide schema based on table location on file system and the default schema file name SchemaProvider.DEFAULT_SCHEMA_NAME.
Definition of a function as presented to code generation.
FunctionalIndexInfo collects functional fields in IndexDescriptor and derives information needed for index planning, e.g.
Attributes of a function used in code generation and optimization.
Converts FunctionCalls to Java Expressions.
Holder class that contains: the function name; the function signature, which is a string representation of the function name and its input parameters; and the DrillFuncHolder associated with the function.
Represents an actual call (a reference) to a declared function.
Registry for functions.
To avoid the cost of initializing all functions up front, this class contains all information required to initialize a function when it is used.
Function registry holder stores function implementations by jar name and function name.
An implementing class of FunctionResolver provides its own algorithm to choose a DrillFuncHolder from a given list of candidates, with respect to a given FunctionCall.
List functions as a System Table
Representation of an entry in the System table - Functions
Function scope is used to indicate function output rows relation:
simple / scalar (1 -> 1) or aggregate (n -> 1).
Estimates the average size of the output
produced by a function that produces variable length output
Return type enum is used to indicate which return type calculation logic
should be used for functions.
FutureBitCommand<T extends com.google.protobuf.MessageLite,C extends RemoteConnection,E extends com.google.protobuf.Internal.EnumLite,M extends com.google.protobuf.MessageLite>
Binary form, returns the interval between `right` and `left`.
Unary form; subtracts `right` from midnight of the current date, so it is equivalent to `select age(current_date, right)`.
Binary form, returns the interval between `right` and `left`.
Binary form, returns the interval between `right` and `left`.
Binary form, returns the interval between `right` and `left`.
Unary form; subtracts `right` from midnight of the current date, so it is equivalent to `select age(current_date, right)`.
Binary form, returns the interval between `right` and `left`.
Binary form, returns the interval between `right` and `left`.
Binary form, returns the interval between `right` and `left`.
Unary form; subtracts `right` from midnight of the current date, so it is equivalent to `select age(current_date, right)`.
Binary form, returns the interval between `right` and `left`.
Binary form, returns the interval between `right` and `left`.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
This class merely acts as a placeholder so that Calcite allows the 'trunc('truncationUnit', col)' function in SQL.
Protobuf type exec.rpc.Ack
Protobuf type exec.rpc.Ack
Protobuf type exec.rpc.CompleteRpcMessage
Protobuf type exec.rpc.CompleteRpcMessage
Protobuf type exec.rpc.RpcHeader
Protobuf type exec.rpc.RpcHeader
Protobuf enum exec.rpc.RpcMode
The code generator works with four conceptual methods which can
have any actual names.
This class is the representation of a GoogleSheets column.
This class represents the actual tab within a GoogleSheets document.
The GoogleSheets storage plugin accepts filters which are: a single column = value expression, or an AND'ed set of such expressions, where the value is one with an unambiguous conversion to a string.
This class is used to construct a range with the GoogleSheet reader in Drill.
Represents the possible data types found in a GoogleSheets document
A GroupScan operator represents all data which will be scanned by a given physical
plan.
Utility class which contains methods for conversion between Guava and shaded Guava classes.
Implementation of CredentialsProvider that obtains credential values from Configuration properties.
Implementation of FragmentParallelizer where the fragment requires running on a given set of endpoints.
Describes a physical operator that has affinity to particular nodes.
Implement this interface if a Prel has distribution affinity requirements.
hash32 function definitions for numeric data types.
hash32 with seed function definitions for numeric data types.
Implements the runtime execution for the Hash-Join operator supporting INNER,
LEFT OUTER, RIGHT OUTER, and FULL OUTER joins
This calculator class is used when the Hash-Join helper is not used (i.e., it returns a size of zero).
This class is responsible for managing the memory calculations for the HashJoin operator.
The interface representing the HashJoinStateCalculator corresponding to the HashJoinState.BUILD_SIDE_PARTITIONING state.
This class represents the memory size statistics for an entire set of partitions.
The interface representing the HashJoinStateCalculator corresponding to the HashJoinState.POST_BUILD_CALCULATIONS state.
At this point we need to reserve memory for the following: an incoming batch, and an incomplete batch for each partition. If there is available memory we keep the batches for each partition in memory.
In this state, we need to make sure there is enough room to spill probe side batches, if
spilling is necessary.
A HashJoinStateCalculator is a piece of code that computes the memory requirements for one of the states in the HashJoinState enum.
Overview
Contains utility methods for creating hash expression for either distribution (in PartitionSender) or for HashTable.
Interface for creating different forms of hash expression types.
Implements the runtime execution for the Hash-SetOp operator supporting EXCEPT,
EXCEPT ALL, INTERSECT, and INTERSECT ALL
A singleton class which manages the lifecycle of HBase connections.
Deprecated. Will be removed in 1.7; use HBasePersistentStoreProvider instead.
Information for reading a single HBase region
This class represents an HDF5 attribute and is used when the attributes are projected.
Text output that implements a header reader/parser.
Interface for dumping object state in a hierarchical fashion during
debugging.
Prints a complex object structure in a quasi-JSON format for use
in debugging.
A column specific histogram
Utility class that can be used to record activity within a class for later logging and debugging.
Reader which uses complex writer underneath to fill in value vectors with data read from Hive.
Class which provides methods to get metadata of given Hive table selection.
Contains stats.
Contains group of input splits along with the partition.
This is the metadata provider for Hive Parquet tables, which are read by the Drill native reader.
This class is a wrapper of the Partition class, used to store such additional information as the index of the list in the column lists cache.
Helper class that stores partition values per key.
This class is a wrapper of the Table class, used to store such additional information as the column lists cache.
Wrapper for the ColumnListsCache class.
Wrapper for the Partition class.
Wrapper for the StorageDescriptor class.
Reading a Hive table stored in text format may require skipping a few header/footer records.
This class is responsible for data type conversions from FieldSchema instances to RelDataType instances.
The writer is used to abstract writing of row values or values embedded into a row (for example, elements of the List complex type).
Factory used by reader to create Hive writers for columns.
Wrapper around a representation of a "Holder" to represent that
Holder as an expression.
Defines all attributes and values that can be injected by various wrapper classes in org.apache.drill.exec.server.rest.*
Implementation of UserAuthenticator that reads passwords from an htpasswd
formatted file.
Config variable to determine how POST variables are sent to the downstream API
In the HTTP storage plugin, users can define specific connections or APIs.
Implement HTTP Basic authentication for REST API access
HTTP proxy settings.
The HTTP storage plugin accepts filters which are: a single column = value expression, where the column is a filter column from the config, or an AND'ed set of such expressions, where the value is one with an unambiguous conversion to a string.
Base reader builder for a hyper-batch.
Vector accessor used by the column accessors to obtain the vector for
each column value.
Read-only row index into the hyper row set with batch and index
values mapping via an SV4.
Implements a row set wrapper around a collection of "hyper vectors."
A hyper-vector is a logical vector formed by a series of physical vectors
stacked on top of one another.
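A sketch of that two-level indirection (the 16/16 bit split shown is illustrative): the upper bits of a 4-byte index select the physical batch and the lower bits select the row within it.

    public class Sv4Demo {
        public static void main(String[] args) {
            int batchIndex = 2, recordIndex = 100;      // hypothetical coordinates
            int sv4 = (batchIndex << 16) | (recordIndex & 0xFFFF);
            int batch  = sv4 >>> 16;                    // which stacked physical vector
            int record = sv4 & 0xFFFF;                  // row within that batch
            System.out.println(batch + ":" + record);   // prints 2:100
        }
    }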
Infer the schema for a hyperbatch.
Drill Iceberg Metastore configuration which is defined in the MetastoreConfigConstants.MODULE_RESOURCE_FILE_NAME file.
Implementation of the Metadata interface.
Iceberg Drill Metastore implementation that inits / loads Iceberg tables which correspond to Metastore components: tables, views, etc.
Provides Iceberg Metastore component tools to transform, read or write data from / in Iceberg table.
Specific Iceberg Drill Metastore runtime exception to indicate exceptions thrown
during Iceberg Drill Metastore code execution.
Implementation of the Modify interface based on the AbstractModify parent class.
An Iceberg operation's main goal is to add itself to the given transaction.
Implementation of the Read interface based on the AbstractRead parent class.
Metastore Tables component which stores tables metadata in the corresponding Iceberg table.
Provides Iceberg table schema and its partition specification for specific component.
Special deserializer for the IcebergWork class that deserializes the scanTask field from a byte array string created using Serializable.
Special serializer for the IcebergWork class that serializes the scanTask field to a byte array string created using Serializable, since CombinedScanTask doesn't use any Jackson annotations.
The class mainly processes schema definition and index binding, and sets up the vector (Column Writers) values.
Responsible for processing the image GenericMetadataDirectory metadata and creating data types based on different tags.
Responsible for processing the list-map with an array writer.
Responsible for processing the map writer (nested structure).
Although each image format can contain different metadata,
they also have common basic information.
Utilities for impersonation purposes.
Create RecordBatch tree (PhysicalOperator implementations) for a given
PhysicalOperator tree.
Manages the insertion of file metadata (AKA "implicit" and partition) columns.
Marks a column as implicit and provides a function to resolve an
implicit column given a description of the input file.
Marker for a file-based, non-internal implicit column that
extracts parts of the file name as defined by the implicit
column definition.
Partition column defined by a partition depth from the scan
root folder.
Manages the resolution of implicit file metadata and partition columns.
The result of scanning the scan output schema to identify implicit and
partition columns.
This class represents an implicit column in a dataset.
Manages implicit columns for files and partition columns for
directories.
Represents a wildcard: SELECT * when used at the root tuple.
Helper class to manage inbound impersonation.
Validator for impersonation policies.
Determines when a particular fragment has enough data for each of its receiving exchanges to commence execution.
An incoming batch of data.
The filter expressions that could be indexed. Other than SchemaPaths, which represent columns of a table and could be indexed, we consider only function expressions, and specifically the CAST function.
Interface used to describe an index collection
Types of index collections: NATIVE_SECONDARY_INDEX_COLLECTION, EXTERNAL_SECONDARY_INDEX_COLLECTION
Top level interface used to define an index.
Types of an index: PRIMARY_KEY_INDEX, NATIVE_SECONDARY_INDEX, EXTERNAL_SECONDARY_INDEX
IndexDefinition + functions to access the materialized index (index table/scan, etc.).
The SchemaFactory of a storage plugin that can be used to store index tables should expose this interface to allow IndexDiscovers to discover the index table without adding a dependency on the storage plugin.
IndexDiscoverBase is the layer to read index configurations of tables on storage plugins; then, based on the properties it collected, it gets the StoragePlugin from the StoragePluginRegistry and, together with index information, builds an IndexCollection.
With this factory, we allow the user to load a different IndexDiscover class to obtain index information.
Encapsulates one or more IndexProperties representing (non)covering or intersecting indexes.
An IndexGroupScan operator represents the scan associated with an Index.
IndexScanIntersectGenerator generates an index plan against multiple index tables; the input indexes are assumed to be ranked by selectivity (low to high) already.
IndexProperties encapsulates the various metrics of a single index that are related to
the current query.
IndexProperties encapsulates the various metrics of a single index that are related to
the current query.
Extension of the container accessor that holds an optional selection
vector, presenting the batch row count as the selection vector
count.
Reader index that points to each row indirectly through the
selection vector.
Single row set coupled with an indirection (selection) vector,
specifically an SV2.
Create Drill field listeners based on the observed look-ahead
tokens in JSON.
Builds an InfoSchemaFilter out of the Filter condition.
Generates records for POJO RecordReader by scanning the given schema.
Base class for tables in INFORMATION_SCHEMA.
Layout for the CATALOGS table.
Layout for the COLUMNS table.
Layout for the FILES table.
Layout for the PARTITIONS table.
Layout for the SCHEMATA table.
Layout for the TABLES table.
Layout for the VIEWS table.
The set of tables / views in INFORMATION_SCHEMA.
The base class for all types of injections (currently, pause and exception).
An Exception thrown when injection configuration is incorrect.
Key Deserializer for InjectionSite.
Is used to provide schema when passed using table function.
This is an OptionManager that holds options in memory rather than in a persistent store.
An ASM ClassVisitor that strips the class access bits that are only possible on inner classes (ACC_PROTECTED, ACC_PRIVATE, and ACC_FINAL).
The input batch group gathers batches buffered in memory before
spilling.
Converts a list of Metastore component units into a WriteData holder.
Converts a list of Metastore component units into a Document.
Constructs the plan to be executed for inserting data into the table.
Parquet value writer for passing decimal values into RecordConsumer to be stored as INT32 type.
Parquet value writer for passing decimal values into RecordConsumer to be stored as INT64 type.
IntervalDay implements a vector of fixed width values.
Drill-specific extension for a time interval (AKA time
span or time period).
Interval implements a vector of fixed width values.
IntervalYear implements a vector of fixed width values.
Generates integer values uniformly randomly distributed over the entire 32-bit integer range from Integer.MIN_VALUE to Integer.MAX_VALUE.
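For reference, java.util.Random#nextInt() with no argument already produces values uniformly over that full range, so a sketch of such a generator (hypothetical names) can simply delegate:

    import java.util.Random;

    class RandomIntGen {
        private final Random random = new Random(12345L); // fixed seed for repeatable mocks

        // Uniform over Integer.MIN_VALUE..Integer.MAX_VALUE inclusive.
        int next() {
            return random.nextInt();
        }
    }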
Int implements a vector of fixed width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
Exception for malformed connection string from client
Raised when a conversion from one type to another is supported at setup time, but a value provided at runtime is not valid for that conversion.
SQLException for invalid-cursor-state conditions, e.g., calling a column accessor method before calling ResultSet#next() or after ResultSet#next() returns false.
An InvalidIndexDefinitionException may be thrown if Drill does not recognize the type or expression of the index during the index discovery phase.
JdbcApiSqlException for invalid-parameter-value conditions.
Indicates IS predicate implementations.
Indicates FilterExpression.Operator.IS_NOT_NULL operator expression: storagePlugin is not null.
Indicates FilterExpression.Operator.IS_NULL operator expression: storagePlugin is null.
Utility class which contains methods for interacting with Jackson.
Holder class that contains: the jar name; the scan of packages, classes, and annotations found in the jar; and the unique jar ClassLoader.
SQLException for JDBC API calling-sequence/state problems.
Interface for different implementations of databases connected using the
JdbcStoragePlugin.
Prel used to represent a JDBC Conversion within an expression tree.
Represents a JDBC Plan once the children nodes have been rewritten into SQL.
Adapter for compatibility of metrics-jms for 3 and 4 versions.
For the int type control, the meaning of each bit, starting from the lowest: bit 0: intersect or not, 0 -- default (no intersect), 1 -- INTERSECT (DISTINCT as default); bit 1: intersect type, 0 -- default (DISTINCT), 1 -- INTERSECT_ALL.
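A worked decoding of that control int (illustrative only, not the operator's actual code):

    public class SetOpControlDemo {
        public static void main(String[] args) {
            int control = 0b11;                              // both bits set
            boolean isIntersect    = (control & 0b01) != 0;  // true: INTERSECT
            boolean isIntersectAll = (control & 0b10) != 0;  // true: ALL variant
            // control == 0b01 would mean INTERSECT DISTINCT (the default kind).
            System.out.println(isIntersect + " " + isIntersectAll);
        }
    }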
Base class for MergeJoinPrel and HashJoinPrel
Maintain join state.
Merge Join implementation using RecordIterator.
Abstract implementation of the StatisticsRecordWriter interface which exposes #writeHeader(List) and #addField(int,String) to output the data in string format instead of implementing addField for each type holder.
EVF based reader.
The two functions defined here, convert_toJSON and convert_toEXTENDEDJSON, are almost identical.
Enhanced second-generation JSON loader which takes an input source and creates a series of record batches using the ResultSetLoader abstraction.
Revised JSON loader that is based on the ResultSetLoader abstraction.
Extends the JsonStructureOptions class, which provides JSON syntactic options, with a number of semantic options enforced at the JSON loader level.
MessageReader class which will convert a ConsumerRecord into JSON and write it to the VectorContainerWriter of the JsonReader.
Interface through which UDFs, RecordWriters and other systems can write out a
JSON output.
Abstract implementation of the RecordWriter interface which exposes #writeHeader(List) and #addField(int,String) to output the data in string format instead of implementing addField for each type holder.
Deprecated.
Deprecated.
EVF based JSON reader which uses input stream as data source.
Input to the JSON structure parser which defines guidelines
for low-level parsing as well as listeners for higher-level
semantics.
Parser for a subset of the jsonlines format.
Parses an arbitrary JSON value (which can be a subtree of any
complexity) into a JSON string.
Closes Kafka resources asynchronously when the query result does not depend on the close method, in order to improve query execution performance.
This class is generated using Freemarker and the KeyAccessors.java template.
A class modelling foreign key relationships and constraints of tables of the schema.
Information for reading a single Kudu tablet
A MutableWrappedByteBuf that also maintains a metric of the number of huge buffer bytes and counts.
Contract between the Lateral Join and any operator on its right side consuming the input from the left side.
RecordBatch implementation for the lateral join operator.
LateralUnnestRowIDVisitor traverses the physical plan and modifies all the operators in the
pipeline of Lateral and Unnest operators to accommodate IMPLICIT_COLUMN.
Visitor for RelNodes which applies the specified RexShuttle visitor to every node in the tree.
Abstract description of a remote process launch that describes the many details needed to launch a process on a remote node.
details needed to launch a process on a remote node.
Operator which specifically is a lowest level leaf node of a query plan
across all possible fragments.
Prel without children.
ListeningCommand<T extends com.google.protobuf.MessageLite,C extends RemoteConnection,E extends com.google.protobuf.Internal.EnumLite,M extends com.google.protobuf.MessageLite>
Indicates list predicate implementations which have a column and a list of values.
Indicates FilterExpression.Operator.IN operator expression: storagePlugin in ('dfs', 's3').
Indicates FilterExpression.Operator.NOT_IN operator expression: storagePlugin not in ('dfs', 's3').
Represents the contents of a list vector.
Wrapper around the list vector (and its optional contained union).
"Non-repeated" LIST vector.
List writer, which is basically an array writer, with the addition
that each list element can be null.
Registry of Drill functions.
Local persistent store stores its data on the given file system.
A really simple provider that stores data in the local file system, one value per file.
A syncable local extension of the Hadoop FileSystem
A metadata which has specific location.
A mock class to avoid NoClassDefFoundError after excluding Apache commons-logging from Hadoop dependency.
The three configuration options for a field are:
The field name
The data type (fieldType).
Helper class for parsing logical expression.
A programmatic builder for logical plans.
Visitor class designed for traversal of an operator tree.
A simple Writer that will forward whole lines (lines ending with a newline) to
a Logger.
MajorTypeInLogicalExpression is a LogicalExpression which wraps a given TypeProtos.MajorType.
Responsible for breaking a plan into its constituent Fragments.
Extended version of a record reader which uses a size-aware batch mutator.
Extended version of a record reader which uses a size-aware batch mutator.
Exception thrown from the constructor if the data source is empty and
can produce no data or schema.
Basic scan framework for a "managed" reader which uses the scan schema
mechanisms encapsulated in the scan schema orchestrator.
Creates a batch reader on demand.
Internal structure for building a map.
Describes a map and repeated map.
Reader for a Drill Map type.
An implementation of map that supports constant time look-up by a generic key or an ordinal.
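A minimal sketch of such a dual-keyed structure, assuming it pairs a hash map of key-to-ordinal with an array of values (all names hypothetical):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class OrdinalMap<K, V> {
        private final Map<K, Integer> index = new HashMap<>(); // key -> ordinal
        private final List<V> values = new ArrayList<>();      // ordinal -> value

        void put(K key, V value) {
            Integer i = index.get(key);
            if (i == null) {
                index.put(key, values.size());
                values.add(value);
            } else {
                values.set(i, value);
            }
        }

        V get(K key) {
            Integer i = index.get(key);
            return i == null ? null : values.get(i);
        }

        V get(int ordinal) {
            return values.get(ordinal);
        }
    }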
Writer for a Drill Map type.
Writer for an array of maps.
Writer for a single (non-array) map.
Meta-data description of a column characterized by a name and a type
(including both data type and cardinality AKA mode).
A data loader for Maven
A visitor to compute memory requirements for each operator in a minor fragment.
Utility class which helps in parsing memory configuration string using the passed in pattern to get memory value in
bytes.
A join operator that merges two sorted streams using a record iterator.
Wrapper around the "MSorter" (in memory merge sorter).
Merges pre-sorted record batches from remote senders.
Optional custom parser for the portion of a JSON message that
surrounds the data "payload".
MessageReader interface provides mechanism to handle various Kafka Message
Formats like JSON, AVRO or custom message formats.
This is a utility class, a holder for Parquet Table Metadata and ParquetReaderConfig.
Provider of tuple schema, column metadata, and statistics for table, partition, file or row group.
Provides Metastore component implementation metadata,
including information about versioning support if any
and current properties applicable to the Metastore component instance.
A struct that contains the metadata for a column in a parquet file
Struct which contains the metadata for a single parquet file
A struct that contains the metadata for a parquet row group
A struct that contains the metadata for a column in a parquet file
Struct which contains the metadata for a single parquet file
Struct which contains the metadata for an entire parquet directory structure
A struct that contains the metadata for a parquet row group
A struct that contains the metadata for a column in a parquet file
Struct which contains the metadata for a single parquet file
A struct that contains the metadata for a parquet row group
A struct that contains the metadata for a column in a parquet file.
Struct which contains the metadata for a single parquet file
A struct that contains the metadata for a parquet row group
Implementation of RelCollationImpl with field name.
Implementation of LogicalOperator for the MetadataAggRel rel node.
Class which provides information required for producing metadata aggregation when performing analyze.
Helper class for constructing aggregate value expressions required for metadata collecting.
Basic class for parquet metadata.
Resolved value for a metadata column (implicit file or partition column). Resolution here means identifying a value for the column.
A metadata context that holds state across multiple invocations of
the Parquet metadata APIs.
Implementation of LogicalOperator for the MetadataControllerRel rel node.
Terminal operator for producing the ANALYZE statement.
Class which provides information required for storing metadata to the Metastore when performing analyze.
Represents direct scan based on metadata information.
Metadata runtime exception to indicate issues connected with table metadata.
MetaData fields provide additional information about each message.
Implementation of LogicalOperator for the MetadataHandlerRel rel node.
Responsible for handling metadata returned by incoming aggregate operators and fetching required metadata from the Metastore.
Class which provides information required for handling results of metadata aggregation when performing analyze.
Class that specifies metadata type and metadata information
which will be used for obtaining specific metadata from metastore.
Interface for obtaining information about segments, files etc which should be handled in Metastore
when producing incremental analyze.
Queries can contain a wildcard (*), table columns, or special
system-defined columns (the file metadata columns AKA implicit
columns, the `columns` column of CSV, etc.).
Provides various mapping, transformation methods for the given
RDBMS table and Metastore component metadata unit.
Util class that contains helper methods for converting paths in the table and directory metadata structures
Interface for retrieving and/or creating metadata given a vector.
Contains worker Runnable classes for providing the metadata and related helper methods.
Base interface for passing and obtaining SchemaProvider, DrillStatsTable and TableMetadataProvider, responsible for creating the required TableMetadataProviderBuilder which constructs the required TableMetadataProvider based on specified providers.
Operator which adds aggregate calls for all incoming columns to calculate required metadata and produces aggregations.
Enum with possible types of metadata.
Provides list of supported metadata types for concrete Metastore component unit
and validates if given metadata types are supported.
A collection of utility methods for working with column and tuple metadata.
Supported metadata versions.
Drill Metastore interface contains methods needed to be implemented by Metastore implementations.
Constructs plan to be executed for collecting metadata and storing it to the Metastore.
Metastore column definition; contains all Metastore columns and their names to make their usage in the code unique.
Holds Metastore configuration files names and their properties names.
Metastore ConfigFileInfo implementation which provides names of Metastore specific configuration files.
Drill Metastore runtime exception to indicate that an exception was caused by the Drill Metastore.
Annotation used to determine to which metadata types Metastore units fields belong.
Implementation of TableMetadataProvider which uses the Drill Metastore for providing table metadata for file-based tables.
Implementation of MetadataProviderManager which uses Drill Metastore providers.
Implementation of TableMetadataProvider which uses the Drill Metastore for providing table metadata for parquet tables.
Class responsible for returning an instance of the Metastore class which will be initialized based on the MetastoreConfigConstants.IMPLEMENTATION_CLASS config property value.
Holds metastore table metadata information, including table information, exists status, last modified time and metastore version.
MethodAnalyzer<V extends org.objectweb.asm.tree.analysis.Value>
Analyzer that allows us to inject additional functionality into ASM's basic analysis.
Interface that defines a metric.
MinorFragmentEndpoint represents a fragment's MinorFragmentId and the Drillbit endpoint to which the fragment is assigned for execution.
Builds the handler which provides values for columns in
an explicit project list but for which
the reader provides no values.
Each storage plugin requires a unique config class to allow
config --> impl lookups to be unique.
Describes a "group" scan of a (logical) mock table.
Describes a physical scan operation for the mock data source.
Structure of a mock table definition file.
Meta-data description of the columns we wish to create during a simulated
scan.
Describes one simulated file (or block) within the logical file scan
described by this group scan.
A tiny wrapper class to add required DrillTableSelection behaviour to
the entries list.
Drill Metastore Modify interface contains methods to be implemented in order
to provide modify functionality in the Metastore component.
Generates a mock money field as a double over the range 0
to 1 million.
Parses a binary.
Drill Mongo Metastore configuration which is defined in the MetastoreConfigConstants.MODULE_RESOURCE_FILE_NAME file.
Parses a Mongo date in the V1 format:
Mongo delete operation: deletes documents based on given row filter.
Implementation of the Metadata interface.
Mongo Drill Metastore implementation.
Provides Mongo Metastore component tools to transform, read or write data from / into Mongo collections.
Specific Mongo Drill Metastore runtime exception to indicate exceptions thrown
during Mongo Drill Metastore code execution.
Implementation of the Modify interface based on the AbstractModify parent class.
Mongo operation
Implementation of PluginImplementor for Mongo.
Implementation of the Read interface based on the AbstractRead parent class.
Metastore Tables component which stores tables metadata in a Mongo collection
Provides methods to read and modify tables metadata.
In-memory sorter.
Simple port of MurmurHash with some state management.
MurmurHash3 was written by Austin Appleby, and is placed in the public
domain.
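For flavor, the well-known 32-bit finalization mix (avalanche step) from MurmurHash3; shown here as an illustration of the algorithm family, not as Drill's port:

    public class Fmix32 {
        // fmix32: avalanche the bits of h so nearby inputs diverge.
        static int fmix32(int h) {
            h ^= h >>> 16;
            h *= 0x85ebca6b;
            h ^= h >>> 13;
            h *= 0xc2b2ae35;
            h ^= h >>> 16;
            return h;
        }

        public static void main(String[] args) {
            System.out.println(Integer.toHexString(fmix32(42)));
        }
    }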
A mutable form of a tuple schema.
Holder for a column to allow inserting and replacing columns within
the top-level project list.
ThreadFactory for ExecutorServices that names threads sequentially.
Interface for the nested loop join operator.
This function returns the number of IP addresses in the input CIDR block.
This function returns the broadcast address of a given CIDR block.
This function gets the numerically highest IP address in an input CIDR block.
This function converts an IPv4 address into a BigInt.
This function converts a BigInt IPv4 into dotted decimal notation.
This function takes two arguments, an input IPv4 and a CIDR, and returns true if the IP is in the given CIDR block (see the sketch after these network functions).
This function returns true if a given IPv4 address is private, false if not.
Returns true if the input string is a valid IP address
Returns true if the input string is a valid IPv4 address
Returns true if the input string is a valid IPv6 address
This function gets the numerically lowest IP address in an input CIDR block.
This function gets the netmask of the input CIDR block.
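A hedged sketch of the CIDR membership test referenced above (not Drill's implementation): convert the IPv4 address to an int and compare the network prefixes under the mask.

    public class CidrDemo {
        static boolean inCidr(String ip, String cidr) {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
            return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
        }

        // "192.168.1.5" -> 0xC0A80105
        static int toInt(String ipv4) {
            int result = 0;
            for (String octet : ipv4.split("\\.")) {
                result = (result << 8) | Integer.parseInt(octet);
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(inCidr("192.168.1.5", "192.168.1.0/24")); // true
        }
    }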
This function decodes URL strings.
This function encodes URL strings.
Creates an AM-side inventory of cluster nodes.
This class abstracts the resources like cpu and memory used up by the operators.
Provides resources for a node in cluster.
WindowFramer implementation that doesn't support the FRAME clause (will
assume the default frame).
Generate a non-covering index plan that is equivalent to the original plan.
Represents metadata for the non-interesting columns.
Indicates NonNullable nature
This manager determines when to run a non-root fragment node.
Implementation of AliasRegistry that does nothing.
An injector that does not inject any controls, useful when not testing (i.e.
Do-nothing implementation of the metadata manager.
Simple selector whose value is another Simple or Complex Selector.
NullableBigInt implements a vector of values which could be null.
NullableBit implements a vector of values which could be null.
NullableDate implements a vector of values which could be null.
Deprecated.
NullableDecimal18 implements a vector of values which could be null.
Deprecated.
NullableDecimal28Dense implements a vector of values which could be null.
Deprecated.
NullableDecimal28Sparse implements a vector of values which could be null.
Deprecated.
NullableDecimal38Dense implements a vector of values which could be null.
Deprecated.
NullableDecimal38Sparse implements a vector of values which could be null.
Deprecated.
NullableDecimal9 implements a vector of values which could be null.
Old versions of Drill were writing a non-standard format for date.
Old versions of Drill were writing a non-standard format for date.
NullableFloat4 implements a vector of values which could be null.
NullableFloat8 implements a vector of values which could be null.
NullableIntervalDay implements a vector of values which could be null.
NullableInterval implements a vector of values which could be null.
NullableIntervalYear implements a vector of values which could be null.
NullableInt implements a vector of values which could be null.
NullableSmallInt implements a vector of values which could be null.
NullableTimeStamp implements a vector of values which could be null.
NullableTime implements a vector of values which could be null.
NullableTinyInt implements a vector of values which could be null.
NullableUInt1 implements a vector of values which could be null.
NullableUInt2 implements a vector of values which could be null.
NullableUInt4 implements a vector of values which could be null.
NullableUInt8 implements a vector of values which could be null.
NullableVar16Char implements a vector of values which could be null.
NullableVarBinary implements a vector of values which could be null.
NullableVarChar implements a vector of values which could be null.
NullableVarDecimal implements a vector of values which could be null.
Manages null columns by creating a null column loader for each
set of non-empty null columns.
Create and populate null columns for the case in which a SELECT statement
refers to columns that do not exist in the actual table.
Parser for a field that contains only nulls.
A vector cache implementation which does not actually cache.
Internal mechanism to detect if a value is null.
Handle the awkward situation with complex types.
Holder for the NullableVector wrapper around a bits vector and a
data vector.
Null state that handles the strange union semantics that both
the union and the values can be null.
Holder for the NullableVector wrapper around a bits vector and a
data vector.
Dummy implementation of a null state reader for cases in which the
value is never null.
Extract null state from the union vector's type vector.
Parses nulls.
Do-nothing vector state for a map column which has no actual vector
associated with it.
Near-do-nothing state for a vector that requires no work to
allocate or roll-over, but where we do want to at least track
the vector itself.
A column specific equi-depth histogram which is meant for numeric data types
This class enables Drill to access file systems which use OAuth 2.0 for
authorization.
While a builder may seem like overkill for a class that is little more than a small struct, it allows us to wrap new instances in an Optional, while using constructors does not.
Class for managing OAuth tokens.
Writer for an array of either a map or another array.
The implementation represents the writer as an array writer
with special dict entry writer as its element writer.
Deprecated.
Parses a JSON object:
{ name : value ...
Defines a reader to get values for value vectors using
a simple, uniform interface modeled after a JSON object.
Type of writer.
Represents a column within a tuple.
Reader for an offset vector.
Interface for specialized operations on an offset vector.
Specialized column writer for the (hidden) offset vector used
with variable-length or repeated vectors.
Interface to track opening and closing of files.
Client for API requests to openTSDB
Types in openTSDB records,
used for converting openTSDB data to Sql representation
Each Metastore component must provide mechanisms which return
component metadata, allow reading / writing data from / into Metastore.
Base class to transform given input into IcebergOperation implementations.
Base class to transform given input into MongoOperation implementations.
Per-operator services available for operator implementations.
State machine that drives the operator executable.
Core protocol for a Drill operator execution.
Visitor to renumber operators - needed after materialization is done as some operators may be removed
using @ExtendedMaterializerVisitor
Registry of operator metrics.
Modular implementation of the standard Drill record batch iterator
protocol.
Interface for updating a statistic.
Utility methods, formerly on the OperatorContext class, that work with
operators.
Wrapper class for profiles of ALL operator instances of the same operator type within a major fragment.
This holds all the information about an option.
Wrapper class for OptionValue to add Status
Manager for Drill options.
Contains information about the scopes in which an option can be set, and an option's visibility.
Immutable set of options accessible by name or validator.
Validates the values provided to Drill options.
An option value is used internally by an OptionManager to store a run-time setting.
Defines where an option can be configured.
This defines where an option was actually configured.
Representation of a SQL <sort specification>.
OrderedMuxExchange is a version of MuxExchange where the incoming batches are sorted and a merge operation is performed to produce a sorted stream as output.
OrderedMuxExchangePrel is a mux exchange created to multiplex the streams for a MergeReceiver.
Generates an ordered partition, rather than a random hash partition.
Class implementing OrderedPrel interface guarantees to provide ordered
output on certain columns.
Complex selector whose value is a list of other Simple or Complex Selectors.
This is thrown in various cases when Drill cannot allocate Direct Memory.
Describes the field that will provide output from the given function.
Builds an output batch based on an output schema and one or more input
schemas.
Describes an input batch with a schema and a vector container.
Source map as a map schema and map vector.
Base class to convert a list of Records into Metastore component units for the given list of column names.
Base class to convert a list of Documents into Metastore component units for the given list of column names.
Interface that allows a record reader to modify the current schema.
Return type calculation interface for functions that have their return type set with the enum FunctionTemplate.ReturnType.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.CONCAT.
OutputWidthExpressions are used to capture the information required to calculate the width of the output produced by a variable-width expression.
Used to represent fixed-width values used in an expression.
FunctionCallExpr captures the details required to calculate the width of the output produced by a function
that produces variable-width output.
IfElseWidthExpr is used to capture an IfExpression.
VarLenReadExpr captures the inputColumnName and the readExpression used to read a variable length column.
An exception that is used to signal that an allocation request in bytes is greater than the maximum allowed by the allocator.
Iceberg overwrite operation: overwrites data with a given data file based on a given row filter.
Mongo overwrite operation: overwrites data with given document based on given row filter.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.PAD.
Base class for a tree of web pages (or REST resources) represented as POJOs.
This class is the abstraction for the Paginator class.
Implements UserAuthenticator based on Pluggable Authentication Module (PAM) configuration.
Implements UserAuthenticator based on Pluggable Authentication Module (PAM) configuration.
Captures parallelization parameters for a given operator/fragments.
Collects/merges one or more ParallelizationInfo instances.
Interface to implement for passing parameters to FragmentParallelizer.
Marker annotation to determine which fields should be included as parameters for the function.
A parent class and its implementations that were specifically searched for during scanning
Represents a single column read from the Parquet file by the record reader.
ByteBufferAllocator implementation that uses Drill's BufferAllocator to allocate and release ByteBuffer objects.
To properly release an allocated DrillBuf, this class keeps track of its corresponding ByteBuffer that was passed to the Parquet library.
Parquet File Writer implementation.
Internal implementation of the Parquet file writer as a block container
Note: this is temporary Drill-Parquet class needed to write empty parquet files.
Note: this is temporary Drill-Parquet class needed to write empty parquet files.
Holds common statistics about data in parquet group scan,
including information about total row count, columns counts, partition columns.
Interface for providing table, partition, file etc.
Base interface for builders of ParquetMetadataProvider.
Abstract implementation of the RecordWriter interface which exposes #writeHeader(List) and #addField(int,String) to output the data in string format instead of implementing addField for each type holder.
PartitionDescriptor that describes partitions based on column names instead of directory structure
Stores consolidated parquet reading configuration.
Utility class where we can capture common logic between the two parquet readers.
For most recently created parquet files, we can determine if we have corrupted dates (see DRILL-4203) based on the file metadata.
Utilities for converting from parquet INT96 binary (impala, hive timestamp)
to date time value.
Creates file system only if it was not created before, otherwise returns already created instance.
Mapping from the schema of the Parquet file to that of the record reader
to the schema that Drill and the Parquet reader uses.
Interface for providing table, partition, file etc.
Builder for ParquetTableMetadataProvider.
Utility class for converting parquet metadata classes to Metastore metadata classes.
This exception is thrown when parse errors are encountered.
The parse_query function splits up a query string and returns a map of the key-value pairs.
The parse_url function takes a URL and returns a map of components of the URL.
Used internally to keep track of partitions and frames.
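Illustrative sketch of the query-string-to-map split performed by parse_query (e.g. "a=1&b=2" yields {a=1, b=2}); this is not the function's actual code:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ParseQueryDemo {
        static Map<String, String> parseQuery(String query) {
            Map<String, String> result = new LinkedHashMap<>();
            for (String pair : query.split("&")) {
                int eq = pair.indexOf('=');
                if (eq > 0) {
                    result.put(pair.substring(0, eq), pair.substring(eq + 1));
                }
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(parseQuery("a=1&b=2"));  // {a=1, b=2}
        }
    }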
Represents a partition column (usually dir0, dir1, ...).
Interface used to describe partitions.
Decorator class to hide the existence of multiple Partitioners from the caller. Since this class involves multithreaded processing of incoming batches as well as flushing, it needs special handling of OperatorStats, since stats are not suitable for use in a multithreaded environment. The algorithm to figure out processing versus wait time is based on the following formula: totalWaitTime = totalAllPartitionersProcessingTime - max(sum(processingTime) by partitioner).
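A worked instance of that formula, with made-up numbers:

    public class WaitTimeDemo {
        public static void main(String[] args) {
            long[] perPartitioner = {40, 55, 30};  // hypothetical processing sums
            long total = 0, max = 0;
            for (long t : perPartitioner) {
                total += t;
                max = Math.max(max, t);
            }
            long totalWaitTime = total - max;      // 125 - 55 = 70
            System.out.println(totalWaitTime);
        }
    }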
Helper interface to generalize functionality executed in the thread, since it is absolutely the same for partitionBatch and flushOutgoingBatches.
protected is for testing purposes
Exposes partition information to UDFs to allow queries to limit reading
partitions dynamically.
Helps to perform limit in a partition within a record batch.
Interface to define a partition.
Represents a metadata for the table part, which corresponds to the specific partition key.
This class is generated by jOOQ.
This class is generated by jOOQ.
Class PathInExpr recursively analyzes expression trees with a map of indexed expressions collected from the indexDescriptor, e.g.
Is used to provide schema using given schema file name and path.
Used to represent path to nested field within schema as a chain of path segments.
Path serializer to simple String path.
A convenience class used to expedite ZooKeeper path manipulations.
Injection for a single pause.
Deprecated.
Deprecated.
This represents a persisted OptionValue.
This deserializer only fetches the relevant information we care about from a store, which is the value of an option.
Implementation of aliases table that updates its version in persistent store after modifications.
Implementation of AliasRegistry that persists alias tables to the pre-configured persistent store.
Stores and retrieves instances of a given value type.
An abstraction for configurations that are used to create a store.
Defines the operation mode of a PersistentStore instance.
A factory used to create store instances.
Abstract base class for schedulers that work with persistent (long-running) tasks.
Implementation of TokenRegistry that persists token tables to the preconfigured persistent store.
Implementation of tokens table that updates its version in persistent store after modifications.
Phoenix’s Connection objects are different from most other JDBC Connections
due to the underlying HBase connection.
The Caverphone function is a phonetic matching function.
The Caverphone function is a phonetic matching function.
Encodes a string into a Cologne Phonetic value.
Encodes a string into a Daitch-Mokotoff Soundex value.
Implements the Double Metaphone phonetic algorithm (https://en.wikipedia.org/wiki/Metaphone),
and calculates a given string's Double Metaphone value.
Match Rating Approach Phonetic Algorithm Developed by Western Airlines in 1977.
Implements the Metaphone phonetic algorithm (https://en.wikipedia.org/wiki/Metaphone),
and calculates a given string's Metaphone value.
The New York State Identification and Intelligence System Phonetic Code, commonly known as NYSIIS, is a phonetic algorithm devised in 1970 as part of the New York State Identification and Intelligence System (now a part of the New York State Division of Criminal Justice Services).
Encodes a string into a Refined Soundex value.
Encodes a string into a Soundex value.
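These encoders mirror the language package of Apache commons-codec, which presumably backs these functions; for example, assuming commons-codec on the classpath:

    import org.apache.commons.codec.language.Soundex;

    public class SoundexDemo {
        public static void main(String[] args) {
            Soundex soundex = new Soundex();
            System.out.println(soundex.encode("Robert"));  // "R163"
        }
    }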
Visitor class designed for traversal of an operator tree.
Implementation of CredentialsProvider that holds credentials provided by the user.
Returns the RuleSet for a concrete planner phase.
Logical plan meta properties.
Helper class to return PlanFragments based on the query plan or based on a split query plan. As of now it is only invoked once per query, and it is therefore cheap to create the PlanSplitter object on the heap.
Builds a string in Drill's "plan string" format: that shown in the text version of EXPLAIN PLAN FOR output.
Aggregate implementation for Drill plugins.
The rule that converts provided aggregate operator to plugin-specific implementation.
Generalized interface for bootstrapping or upgrading the plugin persistent store.
Loads the set of bootstrap plugin configurations for new systems.
Abstract base class for a rule that converts provided operator to plugin-specific implementation.
PluginCost describes the cost factors to be used when costing for the specific storage/format plugin
An interface to check if a parameter provided by user is valid or not.
Class which checks whether the provided parameter value is greater than or equal to a minimum limit.
Table implementation based on DynamicDrillTable to be used by Drill plugins.
Filter implementation for Drill plugins.
The rule that converts provided filter operator to plugin-specific implementation.
Represents a storage plugin, defined by a (name, config) pair.
Callback for the implementation process that checks whether a specific operator can be converted and converts a tree of PluginRel nodes into expressions that can be consumed by the storage plugin.
Prel used to represent a Plugin Conversion within an expression tree.
Join implementation for Drill plugins.
The rule that converts provided join operator to plugin-specific implementation.
Limit implementation for Drill plugins.
Represents a plugin-specific plan once children nodes have been pushed down into group scan.
Project implementation for Drill plugins.
The rule that converts provided project operator to plugin-specific implementation.
Provides a loose coupling of the plugin registry to the resources it needs
from elsewhere.
Relational expression that uses specific plugin calling convention.
Provides rules required for adding support of specific operator pushdown for storage plugin.
Sort implementation for Drill plugins.
The rule that converts provided sort operator to plugin-specific implementation.
Union implementation for Drill plugins.
The rule that converts provided union operator to plugin-specific implementation.
This class uses reflection of a Java class to construct a RecordDataType.
Reads values from the given list of pojo instances.
Pojo writer interface for writers based on types supported for pojo.
Pojo writer for boolean.
Pojo writer for decimal.
Pojo writer for double.
Pojo writer for Enum.
Pojo writer for float.
Pojo writer for int.
Pojo writer for long.
Pojo writer for Long.
Pojo writer for Boolean.
Pojo writer for Double.
Pojo writer for Float.
Pojo writer for Integer.
Pojo writer for Timestamp.
Pojo writer for String.
Interface for objects that are polled on each
controller clock tick in order to perform
time-based tasks.
The base allocator that we use for all of Drill's memory management.
Cost estimates per physical relation.
A marker interface that means that this node should be finalized before execution planning.
Debug-time class that prints a PRel tree to the console for
inspection.
Contains a worker Runnable for creating a prepared statement and helper methods.
Runnable that creates a prepared statement for a given UserProtos.CreatePreparedStatementReq and sends the response at the end.
This class rewrites all the project expressions that contain convert_to/convert_from into actual implementations.
Primitive (non-map) column.
Manages a PriorityQueueCopier instance produced from code generation.
We've gathered a set of batches, each of which has been sorted.
Indicates private plugins which will be excluded from automatic plugin
discovery.
System table listing completed profiles
Base class for Profile Iterators
System table listing completed profiles as JSON documents
Wrapper class for a query profile, so it can be presented through the web UI.
Implements callbacks to build the physical vectors for the project record batch.
record batch.
Enhanced form of a dynamic column which records all information from
the project list.
Utility class to check if a column is consistent with the projection
requested for a query.
Provides a variety of ways to filter columns: no filtering, filter
by (parsed) projection list, or filter by projection list and
provided schema.
Schema-based projection.
Compound filter for combining direct and provided schema projections.
Projection filter based on the (parsed) projection list.
Implied projection: either project all or project none.
Projection filter in which a schema exactly defines the set of allowed
columns, and their types.
Projection based on a non-strict provided schema which enforces the type of known
columns, but has no opinion about additional columns.
Converts a projection list passed to an operator into a scan projection list,
coalescing multiple references to the same column into a single reference.
Schema tracker for the "normal" case in which schema starts from a simple
projection list of column names, optionally with a provided schema.
ProjectMemoryManager (PMM) is used to estimate the size of rows produced by ProjectRecordBatch.
A physical Prel node for Project operator.
This FieldWriter implementation delegates all FieldWriter API calls to an inner FieldWriter.
Interface for an object that defines properties.
Utilities to get/set typed values within a propertied object
Modified version of ProtobufVarint32FrameDecoder that avoids a bytebuf copy.
Creates a Drill field listener based on a provided schema.
Iterates over the set of batches in a result set, providing
a row set reader to iterate over the rows within each batch.
Protocol
Clock driver that calls a callback once each pulse period.
Interface implemented to receive calls on each clock "tick."
Push-based result set reader, in which the caller obtains batches
and registers them with the implementation.
Represents one level of qualifier for a column.
Query is an abstraction of an openTSDB subQuery
and is an integral part of DBQuery
Indicates that an external source has cancelled the query.
Per-compilation unit class loader that holds both caching and compilation
steps.
Packages a batch from the Screen operator to send to its
user connection.
Represents a batch of data with a schema.
Package that contains only a query ID.
Provides SQL queries executor configured based on given data source and SQL dialect.
Each Foreman holds its own QueryManager.
OptionManager that holds options within QueryContext.
Parallelizes the query plan.
Interface which defines a queue implementation for query queues.
Exception thrown for all non-timeout error conditions.
The opaque lease returned once a query is admitted
for execution.
Exception thrown if a query exceeds the configured wait time
in the query queue.
Interface which defines an implementation for managing queue configuration of a leaf ResourcePool.
Parses and initializes QueueConfiguration for a ResourcePool.
Manages resources for an individual query in conjunction with the global ResourceManager.
Extends a QueryResourceAllocator to provide queueing support.
Model class for Query page
Model class for Results page
Encapsulates the future management of query submissions.
Query runner for streaming JSON results.
Is responsible for query transition from one state to another,
incrementing / decrementing query status counters.
Definition of a minor fragment that contains the (unserialized) fragment operator
tree and the (partially built) fragment.
Used to keep track of the selected leaf and all rejected ResourcePools for the provided query.
Parallelizer specialized for managing resources for a query based on queues.
Used in case of error while selecting a queue for a given query
Interface that defines all the implementations of a QueueSelectionPolicy supported by ResourceManagement
Factory to return an instance of QueueSelectionPolicy based on the configured policy name.
Randomly selects a queue from the list of all the provided queues.
A RangePartitionExchange provides the ability to divide up the rows into separate ranges or 'buckets'
based on the values of a set of columns (the range partitioning columns).
Provides the ability to divide up the input rows into a fixed number of
separate ranges or 'buckets' based on the values of a set of columns (the
range partitioning columns).
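A minimal sketch of the underlying idea, assuming a single numeric partitioning key and hypothetical class names (this is not Drill's operator, just the bucketing concept):

    import java.util.Arrays;

    // Rows are assigned to one of N buckets by comparing a partitioning
    // key against sorted split points.
    public class RangeBucketer {
      private final long[] splitPoints; // sorted bucket boundaries

      public RangeBucketer(long[] splitPoints) {
        this.splitPoints = splitPoints.clone();
        Arrays.sort(this.splitPoints);
      }

      // Returns a bucket index in [0, splitPoints.length]
      public int bucketOf(long key) {
        int pos = Arrays.binarySearch(splitPoints, key);
        return pos >= 0 ? pos : -pos - 1;
      }

      public static void main(String[] args) {
        RangeBucketer b = new RangeBucketer(new long[] {10, 100, 1000});
        System.out.println(b.bucketOf(5));    // 0
        System.out.println(b.bucketOf(50));   // 1
        System.out.println(b.bucketOf(5000)); // 3
      }
    }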
A batch buffer is responsible for queuing incoming batches until a consumer is ready to receive them.
Drill RDBMS Metastore configuration which is defined in the MetastoreConfigConstants.MODULE_RESOURCE_FILE_NAME file.
Visits FilterExpression implementations and transforms them into a JOOQ Condition.
Implementation of Metadata interface.
RDBMS Drill Metastore implementation that creates necessary tables using Liquibase and initializes the data source using the provided config.
Provides RDBMS Metastore component tools to transform, read or write data from / into RDBMS tables.
Specific RDBMS Drill Metastore runtime exception to indicate exceptions thrown
during RDBMS Drill Metastore code execution.
Implementation of Modify interface based on AbstractModify parent class.
RDBMS operation whose main goal is to execute SQL code using the provided query executor.
Executes delete operation steps for the given table.
Executes overwrite operation steps for the given table.
Implementation of Read interface based on AbstractRead parent class.
Metastore Tables component which stores tables metadata in the corresponding RDBMS tables:
TABLES, SEGMENTS, FILES, ROW_GROUPS, PARTITIONS.
Drill Metastore Read interface contains methods to be implemented in order
to provide read functionality from the Metastore component.
Internal operations to wire up a set of readers.
Creates a batch reader on demand.
Row set index base class used when indexing rows within a row
set for a row set reader.
Computes the full output schema given a table (or batch)
schema.
Reader-level projection is customizable.
Manages the schema and batch construction for a managed reader.
Orchestrates projection tasks for a single reader within the set that the
scan operator manages.
Factory for creation of Hive record readers used by HiveScanBatchCreator.
Holds all system / session options that are used during data reads from Kafka.
Internal state for reading from a Parquet file.
A receiver is one half of an exchange operator.
Manager all connections between two particular bits.
A record batch contains a set of field values for a particular range of
records.
Describes the outcome of incrementing RecordBatch forward by a call to RecordBatch.next().
Holds the data for a particular record batch for later manipulation.
Holds record batch loaded from record batch message.
Logic for handling batch record overflow; this class essentially serializes overflow vector data in a
compact manner so that it is reused for building the next record batch.
Builder class to construct a RecordBatchOverflow object.
Field overflow definition
Record batch definition
Given a record batch or vector container, determines the actual memory
consumed by each column, the average row, and the entire record batch.
This class is tasked with managing all aspects of flat Parquet reader record batch sizing logic.
Field memory quota
An abstraction to allow column readers attach custom field overflow state
Container object to hold current field overflow state
Container object to supply variable columns statistics to the batch sizer
Utility class to capture key record batch statistics.
Indicates whether a record batch is Input or Output
Helper class which loads contextual record batch logging options
Provides methods to collect various information_schema data.
Provides information_schema data based on information stored in AbstractSchema.
Provides information_schema data based on information stored in Drill Metastore.
Defines names and data types of columns in a static drill table.
RecordIterator iterates over incoming record batches one record at a time.
For new implementations please use new
ManagedReader
Pojo object for a record in INFORMATION_SCHEMA.CATALOGS
Pojo object for a record in INFORMATION_SCHEMA.COLUMNS
Pojo object for a record in INFORMATION_SCHEMA.FILES
Pojo object for a record in INFORMATION_SCHEMA.PARTITIONS
Pojo object for a record in INFORMATION_SCHEMA.SCHEMATA
Pojo object for a record in INFORMATION_SCHEMA.TABLES
Pojo object for a record in INFORMATION_SCHEMA.VIEWS
Wrapper around VectorAccessible to iterate over the records and fetch fields within a record.
RecordWriter interface.
Utilities for converting SQL LIKE and SIMILAR operators to regular expressions.
Callback from the ZooKeeper registry to announce events related to Drillbit registration.
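The LIKE-to-regex conversion mentioned above can be sketched as follows; this is a simplified, hypothetical version in which '%' becomes '.*' and '_' becomes '.', with escape characters and SIMILAR TO omitted:

    import java.util.regex.Pattern;

    // Simplified LIKE-pattern compiler: wildcards are translated,
    // everything else is quoted literally.
    public class LikeToRegex {
      public static Pattern compile(String likePattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : likePattern.toCharArray()) {
          switch (c) {
            case '%': regex.append(".*"); break;   // any sequence of characters
            case '_': regex.append('.'); break;    // exactly one character
            default:  regex.append(Pattern.quote(String.valueOf(c)));
          }
        }
        return Pattern.compile(regex.toString(), Pattern.DOTALL);
      }

      public static void main(String[] args) {
        System.out.println(compile("dri%_bit").matcher("drillbit").matches()); // true
      }
    }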
Fixed set of Drill relational operators, using well-defined
names.
Is responsible for remote function registry management.
It is necessary to start Drillbit.
RepeatedBigInt implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedBit implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedDate implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal18 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal28Dense implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal28Sparse implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal38Dense implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal38Sparse implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Deprecated.
RepeatedDecimal9 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
A ValueVector mix-in that can be used in conjunction with RepeatedValueVector subtypes.
RepeatedFloat4 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedFloat8 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedIntervalDay implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedInterval implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedIntervalYear implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedInt implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Builder for a repeated list.
Represents the internal state of a RepeatedList vector.
Repeated list column state.
Track the repeated list vector.
Implements a writer for a repeated list.
RepeatedSmallInt implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedTimeStamp implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedTime implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedTinyInt implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedUInt1 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedUInt2 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedUInt4 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedUInt8 implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Represents repeated (AKA "array") value vectors.
RepeatedVar16Char implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedVarBinary implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Class is responsible for generating record batches for text file inputs.
RepeatedVarChar implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
RepeatedVarDecimal implements a vector with multiple values per row (e.g. JSON array or repeated protobuf field).
Vector state for a scalar array (repeated scalar) vector.
BasicValue with additional tracking information used to determine
the replaceability of the value (a holder, or boxed value) for scalar replacement purposes.
Plan-time properties of a requested column.
Represents one name element.
Represents the set of columns projected for a tuple (row or map.)
Each column may have structure: a set of referenced names or
array indices.
Represents an explicit projection at some tuple level.
Note that if a handler maintains any internal state, the state will be disposed if the handler on the connection
changes.
Converts a SqlNode representing: "ALTER ..".
Modified implementation of a countdown latch that allows a barrier to be unilaterally opened and closed.
Aliases table that falls back from user alias calls to public aliases
if an alias is not found in the user aliases table.
A resolved column has a name, and a specification for how to project
data from a source vector to a vector in the final output container.
Represents a column which is implicitly a map (because it has children
in the project list), but which does not match any column in the table.
Projected column that serves as both a resolved column (provides projection
mapping) and a null column spec (provides the information needed to create
the required null vectors.)
Column that matches one provided by the table.
Drill rows are made up of a tree of tuples, with the row being the root
tuple.
Represents a map implied by the project list, whether or not the map
actually appears in the table schema.
Represents a map tuple (not the map column, rather the value of the
map column.) When projecting, we create a new repeated map vector,
but share the offsets vector from input to output.
Represents the top-level tuple which is projected to a
vector container.
Drillbit-wide resource manager shared by all queries.
Builds the proper resource manager and queue implementation for the configured
system options.
Interface which defines an implementation of ResourcePool configuration for ResourcePoolTree.
Parses and initializes all the provided configuration for a ResourcePool defined in the RM configuration.
Interface that defines implementation for selectors assigned to a ResourcePool.
Interface which defines the implementation of a hierarchical configuration for all the ResourcePool that will be
used for ResourceManagement
Parses and initializes configuration for ResourceManagement in Drill.
Responsible for setting the configured ExecConstants.HTTP_JETTY_SERVER_RESPONSE_HEADERS on the HttpServletResponse object.
Copies rows from an input batch to an output batch.
Builds a result set (series of zero or more row sets) based on a defined
schema which may
evolve (expand) over time.
Implementation of the result set loader.
Read-only set of options for the result set loader.
Builder for the options for the row set loader.
Interface for a cache that implements "vector persistence" across
multiple result set loaders.
Manages an inventory of value vectors used across row batch readers.
An ASM ClassVisitor that allows for a late-bound delegate.
Special exception to be caught by caller, who is supposed to free memory by
spilling and try again
Return type calculation interface for functions that have their return type set with the enum FunctionTemplate.ReturnType.
Rewrites an expression tree, replacing OR and AND operators that have more than 2 operands with chained operators, each with only 2 operands.
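The n-ary-to-binary rewrite described above amounts to left-folding the operand list. A small sketch with a stand-in expression type (not Drill's LogicalExpression; Java 16+ for records):

    import java.util.List;

    // Rewrites an n-ary boolean operator into a left-deep chain of binary
    // operators, e.g. OR(a, b, c) -> OR(OR(a, b), c).
    public class BinarizeBooleans {
      interface Expr {}
      record Leaf(String name) implements Expr {}
      record Op(String op, Expr left, Expr right) implements Expr {}

      static Expr binarize(String op, List<Expr> operands) {
        Expr result = operands.get(0);
        for (int i = 1; i < operands.size(); i++) {
          result = new Op(op, result, operands.get(i));
        }
        return result;
      }

      public static void main(String[] args) {
        Expr e = binarize("OR", List.of(new Leaf("a"), new Leaf("b"), new Leaf("c")));
        // Op[op=OR, left=Op[op=OR, left=Leaf[name=a], right=Leaf[name=b]], right=Leaf[name=c]]
        System.out.println(e);
      }
    }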
Rewrites an expression tree, replacing chained OR and AND operators with a single N-ary operator
e.g.
This class is an enhanced version of DrillOptiq: (1) it can convert expressions one more layer (projects that contain expressions) above the project-scan, while DrillOptiq can only convert expressions that directly reference the scan's row type; (2) it can record the generated LogicalExpression of each RexNode in the RexNode tree for future reference, so the result can serve future rewrites that need to locate particular LogicalExpressions.
Defines all the default values used for the optional configurations for ResourceManagement
Used in cases of any error with the ResourceManagement configuration
Marker interface describe the root of a query plan.
The root allocator for using direct memory inside a Drillbit.
Node which is the last processing node in a query plan.
Provides services needed by the FragmentExecutor.
This manager determines when to run a root fragment node.
The root parsers are special: they must detect EOF.
Parser for data embedded within a message structure which is
encoded as an array of objects.
Parser for data embedded within a message structure which is encoded
as a single JSON object.
Parser for a compliant JSON data set which consists of an
array at the top level, where each element of the array is a
JSON object that represents a data record.
Parser for a jsonlines-style
data set which consists of a series of objects.
Extended version of a record reader used by the revised
scan batch operator.
Implementation of Calcite's ROW(col1, col2, ..., colN) constructor function.
Metadata which corresponds to the row group level of table.
This class is generated by jOOQ.
This class is generated by jOOQ.
Interface for a row key join
Enum for RowKeyJoin internal state.
A row set is a collection of rows stored as value vectors.
Single row set which is empty and allows writing.
Row set comprised of multiple single row sets, along with
an indirection vector (SV4).
Row set that manages a single batch of rows.
Fluent builder to quickly build up an row set (record batch)
programmatically.
Helper class to obtain string representation of RowSet.
Interface for writing values to a row set.
Implementation of the row set loader.
Reader for all types of row sets: those with or without
a selection vector.
Reader implementation for a row set.
Interface for writing values to a row set.
Implementation of a row set writer.
Defines the validity of a row group against a filter.
ALL: all rows match the filter (cannot drop the row group and can prune the filter).
NONE: no rows match the filter (can drop the row group).
SOME: only some rows match the filter, or the filter cannot be applied (can drop neither the row group nor the filter).
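A sketch of how a reader might act on this three-valued outcome; the names are illustrative, not Drill's actual enum or pruning code:

    // Three-valued relationship between a filter and a row group, and the
    // pruning decision each value implies.
    public class RowGroupPruning {
      enum RowsMatch { ALL, NONE, SOME }

      static String decide(RowsMatch match) {
        switch (match) {
          case ALL:  return "read row group, drop the (now redundant) filter";
          case NONE: return "skip row group entirely";
          default:   return "read row group and keep the filter";
        }
      }

      public static void main(String[] args) {
        System.out.println(decide(RowsMatch.NONE)); // skip row group entirely
      }
    }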
The Rpc Bus deals with incoming and outgoing communication and is used on both the server and the client side of a
system.
RpcCommand<T extends com.google.protobuf.MessageLite,C extends RemoteConnection,E extends com.google.protobuf.Internal.EnumLite,M extends com.google.protobuf.MessageLite>
RpcConfig.RpcMessageType<SEND extends com.google.protobuf.MessageLite,RECEIVE extends com.google.protobuf.MessageLite,T extends com.google.protobuf.Internal.EnumLite>
Parent class for all rpc exceptions.
Holder interface for all the metrics used in RPC layer
Contains rule instances which use custom RelBuilder.
A RuntimeFilterRecordBatch steps over the ScanBatch.
A reporter to send out the bloom filters to their receivers.
This class manages the RuntimeFilter routing information of the pushed down join predicate
of the partitioned exchange HashJoin.
This sink receives the RuntimeFilters from the netty thread,
aggregates them in an async thread, and broadcasts the final aggregated
one to the RuntimeFilterRecordBatch.
This visitor does two major things:
1) finds the possible HashJoinPrel and adds a RuntimeFilterDef to it.
A binary wire transferable representation of the RuntimeFilter which contains
the runtime filter definition and its corresponding data.
Describes the field that will provide output from the given function.
Utility to scan classpath at runtime
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.SAME_IN_OUT_LENGTH.
Writer for a column that holds an array of scalars.
Base class for scalar field listeners
Defines a reader to obtain values from value vectors using
a simple, uniform interface.
Reference list of classes we will perform scalar replacement on.
Parses true | false | null | integer | float | string | embedded-object and simply passes the value token on to the listener.
Parses true | false | null | integer | float | string | embedded-object and simply passes the value token on to the listener.
Represents a scalar value: a required column, a nullable column,
or one element within an array of scalars.
Record batch used for a particular scan.
Row set mutator implementation provided to record readers created by
this scan batch.
Binds the scan lifecycle to the scan operator.
This visitor will walk a logical plan and record in a map the list of field references associated to each scan.
Parses and analyzes the projection list passed to the scanner.
Interface for add-on parsers, avoids the need to create
a single, tightly-coupled parser for all types of columns.
Identifies the kind of projection done for this scan.
Basic scan framework for a set of "managed" readers and which uses the
scan schema tracker to evolve the scan output schema.
Gathers options for the ScanLifecycle, then builds a scan lifecycle instance.
Interface to the set of readers, and reader schema, that the scan operator
manages.
Implementation of the revised scan operator that uses a mutator aware of
batch sizes.
Parse the projection list into a dynamic tuple schema.
The root doc of the scan result
Builds the configuration given to the ScanSchemaTracker.
Performs projection of a record reader, along with a set of static
columns, to produce the final "public" result set (record batch)
for the scan operator.
Resolves a schema against the existing scan schema.
Indicates the source of the schema to be analyzed.
Computes scan output schema from a variety of sources.
Cost estimate for a scan.
The scheduler describes the set of tasks to run.
The cluster state for tasks managed by a scheduler.
Represents the set of commands called by the cluster controller to manage the
state of tasks within a task group.
Manages a the set of tasks associated with a scheduler.
Abstraction for representing structure of openTSDB table
Simple "tracker" based on a defined, fixed schema.
Builder of a row set schema expressed as a list of materialized
fields.
A reusable builder that supports the creation of BatchSchemas.
Contains information needed by AbstractSchema implementations.
Interface to implement to provide required info for SchemaConfig
Holder class that contains table name, schema definition and current schema container version.
Schema container version holder contains version in int representation.
Protobuf enum
exec.ValueMode
Storage plugins implements this interface to register the schemas they provide.
Aggregate function which accepts a VarChar column with string representations of TupleMetadata and returns a string representation of TupleMetadata with the merged schema.
Aggregate function which infers schema from incoming data and returns a string representation of TupleMetadata with the incoming schema.
Parent class for CREATE / DROP / DESCRIBE / ALTER SCHEMA handlers.
ALTER SCHEMA ADD command handler.
CREATE SCHEMA command handler.
DESCRIBE SCHEMA FOR TABLE command handler.
Wrapper to output schema in a form of table with one column named `schema`.
DROP SCHEMA command handler.
ALTER SCHEMA REMOVE command handler.
Empty batch without schema and data.
The operator for creating SchemalessBatch instances.
The type of scan operator which allows scanning schemaless tables (DynamicDrillTable with null selection).
Negotiates the table schema with the scanner framework and provides
context information for the reader.
Negotiates the table schema with the scanner framework and provides
context information for the reader.
Implementation of the schema negotiation between scan operator and
batch reader.
Implementation of the schema negotiation between scan operator and
batch reader.
This class provides an empty implementation of SchemaParserVisitor, which can be extended to create a visitor which only needs to handle a subset of the available methods.
This interface defines a complete generic visitor for a parse tree produced by SchemaParser.
Is thrown when parsing a schema using the ANTLR4 parser.
Exposes partition information for a particular schema.
This is the path for the column in the table
Provides mechanisms to manage schema: store / read / delete.
Factory class responsible for creating different instances of schema provider based on given parameters.
Implements a "schema smoothing" algorithm.
Exception thrown if the prior schema is not compatible with the
new table schema.
Tracks changes to schemas via "snapshots" over time.
Creates new schema trees.
Utility class for dealing with changing schemas
Set of schema utilities that don't fit well as methods on the column
or tuple classes.
Visits a schema and stores metadata about its columns into the TupleMetadata class.
Visits a column definition and adds column properties to ColumnMetadata if present.
Visits various types of columns (primitive, struct, map, array) and stores their metadata into the ColumnMetadata class.
Visits schema or column properties.
Transfer batches to a user connection.
A ByteArrayInputStream that supports the HDFS Seekable API.
Metadata which corresponds to the segment level of table.
This class is generated by jOOQ.
This class is generated by jOOQ.
This class extends from TableScan.
A selection vector that fronts, at most, 64K values.
A wrapper for Runnables that provides a hook to do cleanup.
A sender is one half of an exchange operator.
Account for whether all messages sent have been completed.
This helper class holds any custom Jackson serializers used when outputting
the data in JSON format.
Serializes execution of multiple submissions to a single target, while still
using a thread pool to execute those submissions.
ServerAuthenticationHandler<S extends ServerConnection<S>,T extends com.google.protobuf.Internal.EnumLite>
Handles SASL exchange, on the server-side.
Contains worker Runnable for returning server meta information.
Runnable that creates server meta information for a given ServerMetaReq and sends the response at the end.
An enumeration of server methods and the version in which they were introduced; this allows new methods to be introduced without changing the protocol, with clients able to gracefully handle cases where a method is not handled by the server.
OptionManager that holds options within UserSession context.
Converts a SqlNode representing: "ALTER ..".
Representation of "SET property.name" query result.
Represents a layer of row batch reader that works with a
result set loader and schema manager to structure the data
read by the actual row batch reader.
Original SHOW FILES command result holder, used as a wrapper over the new Records.File holder to maintain backward compatibility with the ODBC driver etc.
Format plugin config for shapefile data files.
Describes the field that will provide output from the given function.
Base class for scalar and object arrays.
An implementation of interface CharStream, where the stream is assumed to
contain only ASCII characters (without unicode processing).
Representation of a millisecond duration in a human-readable format
Parses a Mongo extended type of the form:
Metadata provider which supplies only table schema and / or table statistics if available.
Performs the actual HTTP requests for the HTTP Storage Plugin.
Intercepts requests and adds authentication headers to the request
This interceptor is used in pagination situations or elsewhere when APIs have burst throttling.
Shim for a list that holds a single type, but may eventually become a
list of variants.
A message parser which accepts a path to the data encoded as a
slash-separated string.
The simple parallelizer determines the level of parallelization of a plan
based on the cost of the underlying operations.
Designed to setup initial values for arriving fragment accounting.
Abstract class for simple partition.
Indicates simple predicate implementations which have a column and one value.
Indicates FilterExpression.Operator.EQUAL operator expression: storagePlugin = 'dfs'.
Indicates FilterExpression.Operator.GREATER_THAN operator expression: index > 1.
Indicates FilterExpression.Operator.GREATER_THAN_OR_EQUAL operator expression: index >= 1.
Indicates FilterExpression.Operator.LESS_THAN operator expression: index < 1.
Indicates FilterExpression.Operator.LESS_THAN_OR_EQUAL operator expression: index <= 1.
Indicates FilterExpression.Operator.NOT_EQUAL operator expression: storagePlugin != 'dfs'.
Builds a set of readers for a single (non-hyper) batch.
Wrap a VectorContainer into a record batch.
Rewrites a RexNode with these policies: 1) fields renamed.
This class goes through the RexNode, collects all the field names, and marks the starting positions (RexNode) of fields so this information can be used later, e.g.
Indicates single expression predicate implementations.
Indicates FilterExpression.Operator.NOT operator expression: not(storagePlugin = 'dfs').
SimpleOperator is an operator that has one input at most.
Produce a metadata schema from a vector container.
Sender that pushes all data to a single destination node.
Base class for a single vector.
State for a scalar value vector.
Special case for an offset vector.
State for a scalar value vector.
An operator that cannot be subscribed to.
Use this class to keep track of the number of Drill logical expressions that are
put into a JBlock.
To implement skip-footer logic, this records inspector buffers N incoming records in a queue
and ensures they are skipped when the input is fully processed.
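The buffering trick is easy to see in isolation: hold back the last N records in a queue and emit a record only once N newer ones have arrived, so the final N (the footer) are never emitted. A minimal sketch with hypothetical names:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Holds back the last `footerSize` records seen so far.
    public class FooterSkipper {
      private final int footerSize;
      private final Deque<String> buffer = new ArrayDeque<>();

      public FooterSkipper(int footerSize) { this.footerSize = footerSize; }

      // Returns the record to process, or null if it is still buffered
      public String offer(String record) {
        buffer.addLast(record);
        return buffer.size() > footerSize ? buffer.removeFirst() : null;
      }

      public static void main(String[] args) {
        FooterSkipper s = new FooterSkipper(2);
        for (String rec : new String[] {"r1", "r2", "r3", "footer1", "footer2"}) {
          String out = s.offer(rec);
          if (out != null) {
            System.out.println(out); // prints r1, r2, r3 only
          }
        }
      }
    }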
SmallInt implements a vector of fixed width values.
Resolve a table schema against the prior schema.
Implementation of FragmentParallelizer where the fragment has zero or more endpoints with affinities.
Rule that converts a Sort to a physical SortPrel, implemented by a Drill "order" operation.
Single-batch sorter using a generated implementation based on the
schema and sort specification.
Implementation of the external sort which is wrapped into the Drill "next" protocol by the ExternalSortBatch class.
Returns results for a single input batch.
Iterates over the final sorted results.
Computes the memory needs for input batches, spill batches and merge
batches.
Rule that converts a logical DrillSortRel to a physical sort.
An operator that produces data without any parents.
This interface represents the metadata for a spilled partition.
Replaces "incoming" - instead scanning a spilled partition file
Holds a set of spilled batches, represented by a file on disk.
Represents the set of spilled batches, including methods to spill and/or
merge a set of batches to produce a new spill file.
Manages the spilling information for an operator.
This class is used to update the spilled state.
Generates the set of spill files for this sort session.
Wrapper around an input stream to collect the total bytes
read through the stream for use in reporting performance
metrics.
Wrapper around an output stream to collect the total bytes
written through the stream for use in reporting performance
metrics.
SimpleParallelizerMultiPlans class is an extension to SimpleParallelizer to
help with getting PlanFragments for a split plan.
There are two known time columns in Splunk: _time and _indextime.
This class wraps the functionality of the Splunk connection for Drill.
The Splunk storage plugin accepts filters which are: a single column = value expression, or an AND'ed set of such expressions, if the value is one with an unambiguous conversion to a string.
These are special fields that alter the queries sent to Splunk.
This implementation of RawBatchBuffer starts writing incoming batches to disk once the buffer size reaches a threshold.
Column-data accessor that implements JDBC's Java-null-when-SQL-NULL mapping.
SQL tree for ANALYZE statement.
Class responsible for managing:
parsing - SqlConverter.parse(String)
validation - SqlConverter.validate(SqlNode)
conversion to rel - SqlConverter.toRel(SqlNode)
Enum which indicates type of CREATE statement.
Enum for metadata types to drop.
SQL Pattern Contains implementation
Sql parse tree node to represent statement:
REFRESH TABLE METADATA tblname
Parent class for CREATE, DROP, DESCRIBE, ALTER SCHEMA commands.
CREATE SCHEMA sql call.
DESCRIBE SCHEMA FOR TABLE sql call.
Enum which specifies format of DESCRIBE SCHEMA FOR table output.
DROP SCHEMA sql call.
Sql parse tree node to represent statement:
SHOW FILES [{FROM | IN} db_name] [LIKE 'pattern' | WHERE expr]
Sql parse tree node to represent statement:
SHOW {DATABASES | SCHEMAS} [LIKE 'pattern' | WHERE expr]
Sql parse tree node to represent statement:
SHOW TABLES [{FROM | IN} db_name] [LIKE 'pattern' | WHERE expr]
Sql parse tree node to represent USE SCHEMA statement.
Configures SslContextFactory when HTTPS is enabled for the Web UI.
Convenient way of obtaining and manipulating stack traces for debugging.
Factory for standard conversions as outlined in the package header.
Definition of a conversion including conversion type and the standard
conversion class (if available.)
Indicates the type of conversion needed.
Launches a drill cluster by uploading the Drill archive then launching the
Drill Application Master (AM).
Base class for columns that take values based on the
reader, not individual rows.
Base class for columns that take values based on the
reader, not individual rows.
A DrillTable with a defined schema.
Currently, this is a wrapper class for SystemTable.
The log2m parameter defines the accuracy of the counter.
The log2m parameter defines the accuracy of the counter.
The log2m parameter defines the accuracy of the counter.
The log2m parameter defines the accuracy of the counter.
Class-holder for statistics kind and its value.
Class represents a kind of statistics or metadata; for example, it may be the min value for a column,
or the row count for a table.
Example input and output:
Schema of incoming batch:
Interface for collecting and obtaining statistics.
Reads records from the RecordValueAccessor and writes into StatisticsRecordCollector.
Visitor to collect stats such as cost and parallelization info of operators within a fragment.
Listener that keeps track of the status of batches sent, and updates the SendingAccountor when status is received
for each batch
Data Model for rendering /options on webUI
Returns a geometry that represents all points whose distance from this Geometry
is less than or equal to radius
Returns true if and only if no points of B lie in the exterior of A,
and at least one point of the interior of B lies in the interior of A.
Returns TRUE if the supplied geometries have some, but not all, interior points in common
Given geometries A and B, this function returns a geometry that represents
the part of geometry A that does not intersect with geometry B
Returns TRUE if two Geometries do not "spatially intersect" - if they do not share any space
For geometry types, returns the 2D Cartesian distance between two geometries in projected units (based on spatial ref).
Returns a geometry representing the double precision (float8) bounding box of the supplied geometry.
Returns true if the given geometries represent the same geometry.
Returns TRUE if the Geometries/Geography "spatially intersect in 2D" - (share any portion of space) and FALSE if they don't (they are Disjoint)
Perform a semi-graceful shutdown of the Drill-on-YARN AM.
Interface for all implementations of the storage plugins.
The standardised authentication modes that storage plugins may offer.
Indicates an error when decoding a plugin from JSON.
Indicates the requested plugin was not found.
Plugin registry.
Helper class that can be used to obtain rules required for pushing down operators that a specific plugin supports, configured using StoragePluginRulesSupplier.StoragePluginRulesSupplierBuilder.
Map of storage plugin *configurations* indexed by name.
Interface to the storage mechanism used to store storage plugin
configurations, typically in JSON format.
Concrete storage plugin (configuration) store based on the PersistentStore abstraction.
Storage plugin table scan rel implementation.
Holds storage properties used when writing schema container.
Model class for Storage Plugin and Credentials page.
Contains list of parameters that will be used to store path / files on file system.
An interface which supports storing a record stream.
A Store interface used to store and retrieve instances of given value type.
Returns TRUE if the Geometries share space, are of the same dimension, but are not completely contained by each other
The Aggregator can return one of the following outcomes:
Streams the results of a query to a REST client as JSON, following the schema defined by QueryResult to maintain backward compatibility.
Returns true if this Geometry is spatially related to anotherGeometry, by testing for intersections between
the Interior, Boundary and Exterior of the two geometries as specified by the values in the intersectionMatrixPattern.
64-bit integer (BIGINT) listener with conversions only from
numbers and strings.
32-bit integer (INT) listener with conversions only from
numbers and strings.
Return type calculation implementation for functions with return type set as FunctionTemplate.ReturnType.STRING_CAST.
This function calculates the cosine distance between two strings.
This function calculates the cosine distance between two strings.
The hamming distance between two strings of equal length is the number of
positions at which the corresponding symbols are different.
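For reference, a minimal Java sketch of the Hamming distance; this is illustrative, not Drill's UDF:

    // Counts positions where two equal-length strings differ.
    public class Hamming {
      public static int distance(String a, String b) {
        if (a.length() != b.length()) {
          throw new IllegalArgumentException("strings must have equal length");
        }
        int diff = 0;
        for (int i = 0; i < a.length(); i++) {
          if (a.charAt(i) != b.charAt(i)) {
            diff++;
          }
        }
        return diff;
      }

      public static void main(String[] args) {
        System.out.println(distance("karolin", "kathrin")); // 3
      }
    }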
Measures the Jaccard distance of two sets of character sequence.
A similarity algorithm indicating the percentage of matched characters between two character sequences.
An algorithm for measuring the difference between two character sequences.
The Longest common subsequence algorithm returns the length of the longest subsequence that two strings have in common.
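The classic dynamic-programming recurrence for the LCS length, as a small illustrative sketch (not Drill's UDF):

    // dp[i][j] = LCS length of a[0..i) and b[0..j).
    public class Lcs {
      public static int length(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) {
          for (int j = 1; j <= b.length(); j++) {
            dp[i][j] = a.charAt(i - 1) == b.charAt(j - 1)
                ? dp[i - 1][j - 1] + 1
                : Math.max(dp[i - 1][j], dp[i][j - 1]);
          }
        }
        return dp[a.length()][b.length()];
      }

      public static void main(String[] args) {
        System.out.println(length("AGGTAB", "GXTXAYB")); // 4 ("GTAB")
      }
    }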
Convert string to ASCII from another encoding input.
Returns the ASCII code of the first character of input string
Returns the char corresponding to ASCII code input.
Remove the longest string containing only characters from "from" from the start of "text"
Remove the longest string containing only character " " from the start of "text"
Remove the longest string containing only characters from "from" from the start of "text"
Remove the longest string containing only character " " from the start of "text"
Returns the input char sequence repeated nTimes.
Returns the reverse string for given input.
Fill up the string to length "length" by appending the characters 'fill' at the end of 'text'
If the string is already longer than length then it is truncated.
Fill up the string to length "length" by appending the characters ' ' at the end of 'text'
If the string is already longer than length then it is truncated.
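The pad-or-truncate semantics described above, as a small illustrative sketch (not Drill's rpad implementation):

    // Appends the fill character until the string reaches the target
    // length, or truncates if it is already longer.
    public class Rpad {
      public static String rpad(String text, int length, char fill) {
        if (text.length() >= length) {
          return text.substring(0, length); // already long enough: truncate
        }
        StringBuilder sb = new StringBuilder(text);
        while (sb.length() < length) {
          sb.append(fill);
        }
        return sb.toString();
      }

      public static void main(String[] args) {
        System.out.println(rpad("drill", 8, '*'));    // drill***
        System.out.println(rpad("drillbit", 5, '*')); // drill
      }
    }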
Remove the longest string containing only characters from "from" from the end of "text"
Remove the longest string containing only character " " from the end of "text"
Return the string part at index after splitting the input string using the
specified delimiter.
Return the string part from start to end after splitting the input string
using the specified delimiter.
Generates a mock string field of the given length.
Abstract implementation of the RecordWriter interface which exposes the interface: StringOutputRecordWriter.startNewSchema(BatchSchema) and StringOutputRecordWriter.addField(int,String) to output the data in string format instead of implementing addField for each type holder.
Returns TRUE if the geometries have at least one point in common, but their interiors do not intersect
Return a new geometry with its coordinates transformed to a different spatial reference
Returns a geometry that represents the point set union of the Geometries
Returns a geometry that represents the point set union of the Geometries
Return the X coordinate of the point, or NaN if not available
Returns X maxima of a bounding box 2d or 3d or a geometry
Returns X minima of a bounding box 2d or 3d or a geometry
Return the Y coordinate of the point, or NaN if not available
Returns Y maxima of a bounding box 2d or 3d or a geometry
Returns Y minima of a bounding box 2d or 3d or a geometry
A SubScan operator represents the data scanned by a particular major/minor
fragment.
Removes RelSubset nodes from the plan.
Visits the Prel tree.
OptionManager that holds options within DrillbitContext.
Indicates system plugins which will be dynamically initialized during the storage plugin registry init stage.
Locates system storage plugins.
A collection of utility methods to retrieve and parse the values of Java system properties.
An enumeration of all tables in Drill's system ("sys") schema.
This class creates batches based on the type of SystemTable.
A "storage" plugin for system tables.
A namesake plugin configuration for system tables.
General table information.
Is used to uniquely identify Drill table in Metastore Tables component
based on storage plugin, workspace and table name.
Metadata which corresponds to the table level.
Base interface for providing table, partition, file etc.
Base interface for builders of TableMetadataProvider.
Class that represents one row in Drill Metastore Tables which is a generic representation of metastore metadata
suitable to any metastore table metadata type (table, segment, file, row group, partition).
Contains schema metadata, including lists of columns which belong to table, segment, file, row group
or partition.
Definition of table parameters, contains parameter name, class type, type status (optional / required).
Metastore Tables component implementation which allows
reading / writing tables metadata.
Convenience access to all tables in
This class is generated by jOOQ.
Describes table and parameters that can be used during table initialization and usage.
TablesMetadataMapper<R extends org.jooq.Record>
Abstract implementation of AbstractMetadataMapper for the RDBMS Metastore tables component.
TablesMetadataMapper implementation for the Tables.FILES table.
TablesMetadataMapper implementation for the Tables.PARTITIONS table.
TablesMetadataMapper implementation for the Tables.ROW_GROUPS table.
TablesMetadataMapper implementation for the Tables.SEGMENTS table.
TablesMetadataMapper implementation for the Tables.TABLES table.
Implementation of MetadataTypeValidator interface which provides the list of supported metadata types for the Metastore Tables component.
Metastore Tables component operations transformer that provides a mechanism to convert TableMetadataUnit data to Metastore overwrite / delete operations.
Metastore Tables component operations transformer that provides a mechanism to convert TableMetadataUnit data to Metastore overwrite / delete operations.
Metastore Tables component output data transformer that transforms Record into TableMetadataUnit.
Metastore Tables component output data transformer that transforms Document into TableMetadataUnit.
.This class is generated by jOOQ.
Implementation of CollectableColumnStatisticsKind which contains base table statistics kinds with an implemented mergeStatistics() method.
Computes the size of each region for a given table.
Metastore Tables component filter, data and operations transformer.
Metastore Tables component filter, data and operations transformer.
Transformer implementation for RDBMS Metastore tables component.
Simple selector whose value is a string representing a tag.
AM-side state of individual containers.
Tracking plugin state.
Represents the behaviors associated with each state in the lifecycle
of a task.
Task for which a termination request has been sent to the Drill-bit, but
confirmation has not yet been received from the Node Manager.
Task for which a forced termination request has been sent to the Node
Manager, but a stop message has not yet been received.
This class is used to record the status of the TCP Handshake.
This class is the representation of a TCP session.
Defines a code generation "template" which consists of:
An interface that defines the generated class.
A template class which implements the interface to provide
"generic" methods that need not be generated.
A signature that lists the methods and vector holders used
by the template.
A simplified byte wrapper similar to Hadoop's Text class without all the dependencies.
Text format plugin for CSV and other delimited text formats.
A byte-based Text parser implementation.
This function calculates the Shannon Entropy of a given string of text, normed for the string length.
Punctuation pattern is useful for comparing log entries.
This function calculates the Shannon Entropy of a given string of text.
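For reference, Shannon entropy in bits per character is H = -sum over distinct characters c of p(c) * log2(p(c)); a minimal illustrative sketch (not Drill's UDF):

    import java.util.HashMap;
    import java.util.Map;

    // Computes the Shannon entropy of a string in bits per character.
    public class ShannonEntropy {
      public static double entropy(String text) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : text.toCharArray()) {
          counts.merge(c, 1, Integer::sum);
        }
        double h = 0.0;
        for (int count : counts.values()) {
          double p = (double) count / text.length();
          h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
      }

      public static void main(String[] args) {
        System.out.println(entropy("aaaa")); // 0.0
        System.out.println(entropy("abab")); // 1.0
      }
    }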
Global resource manager that provides basic admission control (AC) via a
configured queue: either the Zookeeper-based distributed queue or the
in-process embedded Drillbit queue.
Per-query resource manager.
Searches a fragment operator tree to find buffered operators within that fragment.
This function is used for facilitating time series analysis by creating buckets of time intervals.
This function is used for facilitating time series analysis by creating buckets of time intervals.
This function is used for facilitating time series analysis by creating buckets of time intervals.
This function is used for facilitating time series analysis by creating buckets of time intervals.
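The bucketing itself is simple arithmetic: truncate each timestamp down to the start of its interval, so rows can be grouped per bucket. A minimal sketch with epoch milliseconds (illustrative, not Drill's function):

    // Truncates an epoch-millisecond timestamp to its interval start.
    public class TimeBucket {
      public static long bucket(long epochMillis, long intervalMillis) {
        return epochMillis - (epochMillis % intervalMillis);
      }

      public static void main(String[] args) {
        long fiveMinutes = 5 * 60 * 1000L;
        // Two timestamps 148 seconds apart land in the same 5-minute bucket:
        System.out.println(bucket(1_700_000_851_000L, fiveMinutes)); // 1700000700000
        System.out.println(bucket(1_700_000_999_000L, fiveMinutes)); // 1700000700000
      }
    }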
Allows parallel executions of tasks in a simplified way.
Drill-flavored version of a timestamp parser.
TimeStamp implements a vector of fixed width values.
Drill-specific extension to allow times only.
Time implements a vector of fixed width values.
TinyInt implements a vector of fixed width values.
Describes the input token stream.
Internal exception to unwind the stack when a syntax
error is detected within a record.
Token Manager Error.
Persistent Registry for OAuth Tokens
Operator Batch which implements the TopN functionality.
Adds non-trivial top project to ensure the final output field names are preserved.
Contains value vectors which are exactly the same
as the incoming record batch's value vectors.
Proxy driver for tracing calls to a JDBC driver.
Provides various mechanism implementations to transform filters, data and operations.
Provides various mechanism implementations to transform filters, data and operations.
Provides various methods for RDBMS Metastore data, filters, operations transformation.
An abstraction for storing, retrieving and observing transient (key, value) pairs in a distributed environment.
Represents an event created as a result of an operation over a particular (key, value) entry in a store instance.
Types of store events.
Factory that is used to obtain a store instance.
A listener used for observing transient store events.
TransportCheck decides whether or not to use the native EPOLL mechanism for communication.
Internal tuple builder shared by the schema and map builders.
Metadata description of the schema of a row or a map.
Common interface to access a tuple backed by a vector container or a
map vector.
Common interface to access a column vector, its metadata, and its
tuple definition (for maps.) Provides a visitor interface for common
vector tasks.
Tuple-model interface for the top-level row (tuple) structure.
Implementation of a tuple name space.
Accepts { name : value ...
Interface for reading from tuples (rows or maps).
Defines the schema of a tuple: either the top-level row or a nested
"map" (really structure).
Represents the loader state for a tuple: a row or a map.
Represents a map column (either single or repeated).
Represents a tuple defined as a Drill map: single or repeated.
State for a map vector.
Handles the details of the top-level tuple, the data row itself.
Writer for a tuple.
Unchecked exception thrown when attempting to access a column writer by
name for an undefined column.
Represents the type of a field.
Declares a value vector field, providing metadata about the field.
Type functions for all types.
Protobuf enum
common.DataMode
Protobuf type
common.MajorType
Protobuf type
common.MajorType
Protobuf enum
common.MinorType
Unless explicitly changed by the user previously, the admin user
groups can only be determined at runtime
Unless explicitly changed by the user previously, the admin user
can only be determined at runtime
Validator that checks if the given DateTime format template is valid.
Validator that checks if the given value is included in a list of acceptable values.
Max width is a special validator which computes and validates
the maxwidth.
This class attempts to infer the data type of a value whose type is unknown.
Defines the query state and shared resources available to UDFs through
injectables.
UInt1 implements a vector of fixed width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
UInt2 implements a vector of fixed width values.
UInt4 implements a vector of fixed width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
UInt8 implements a vector of fixed width values.
Wrapper store that delegates operations to PersistentStore.
Builds unions or (non-repeated) lists (which implicitly contain
unions.)
Contains additional functions for union types in addition to those in
GUnionFunctions
Returns zero if the inputs have equivalent types.
Reader for a union vector.
Unions are overly complex.
Represents the contents of a union vector (or a pseudo-union for lists).
Union or list (repeated union) column state.
Vector wrapper for a union vector.
Union vector writer for writing list of union-type values
Lists can operate in three modes: no type, one type or many
types (that is, a list of unions.) This shim implements the
variant writer when the backing vector is a union or a list
backed by a union.
ListWriter-like writer, with the only difference that it acts purely as a factory
for concrete type writers for the UnionVector data vector.
Writer to a union vector.
Placeholder for future unnest implementation that may require code generation.
Contains the actual unnest operation.
UnorderedDeMuxExchange is a version of DeMuxExchange where the incoming batches are not sorted.
UnorderedMuxExchange is a version of MuxExchange where the incoming batches are not sorted.
Unpivot maps.
The underlying class we use for little-endian access to memory.
Raised when a column accessor reads or writes the value using the wrong
Java type (which may indicate a data inconsistency in the input data).
UntypedNullVector represents a value vector with TypeProtos.MinorType.NULL; all values in the vector carry two semantic implications: 1) the value is unknown, 2) the type is unknown.
Is used for case-insensitive lexing.
Interface to provide various username/password based implementations for authentication.
Factory class which provides a UserAuthenticator implementation based on the BOOT options.
Annotation for UserAuthenticator implementations to identify the implementation type.
Protobuf type
exec.shared.DrillPBError
Protobuf type
exec.shared.DrillPBError
Protobuf enum
exec.shared.DrillPBError.ErrorType
Protobuf type
exec.shared.ExceptionWrapper
Protobuf type
exec.shared.ExceptionWrapper
Protobuf enum
exec.shared.FragmentState
Jar contains jar name and list of function signatures.
Jar contains jar name and list of function signatures.
Protobuf type
exec.shared.MajorFragmentProfile
Protobuf type
exec.shared.MajorFragmentProfile
Protobuf type
exec.shared.MetricValue
Protobuf type
exec.shared.MetricValue
Protobuf type
exec.shared.MinorFragmentProfile
Protobuf type
exec.shared.MinorFragmentProfile
Protobuf type
exec.shared.NamePart
Protobuf type
exec.shared.NamePart
Protobuf enum
exec.shared.NamePart.Type
Protobuf type
exec.shared.NodeStatus
Protobuf type
exec.shared.NodeStatus
Protobuf type
exec.shared.OperatorProfile
Protobuf type
exec.shared.OperatorProfile
Protobuf type
exec.shared.ParsingError
Protobuf type
exec.shared.ParsingError
Used by the server when sending query result data batches to the client
Used by the server when sending query result data batches to the client
Protobuf type
exec.shared.QueryId
Protobuf type
exec.shared.QueryId
Protobuf type
exec.shared.QueryInfo
Protobuf type
exec.shared.QueryInfo
Protobuf type
exec.shared.QueryProfile
Protobuf type
exec.shared.QueryProfile
Used by the server to report information about the query state to the client
Used by the server to report information about the query state to the client
Protobuf enum
exec.shared.QueryResult.QueryState
Protobuf enum
exec.shared.QueryType
Protobuf type
exec.shared.RecordBatchDef
Protobuf type
exec.shared.RecordBatchDef
Registry that contains list of jars, each jar contains its name and list of function signatures.
Registry that contains list of jars, each jar contains its name and list of function signatures.
Protobuf enum
exec.shared.RpcChannel
Protobuf type
exec.shared.SaslMessage
Protobuf type
exec.shared.SaslMessage
Protobuf enum
exec.shared.SaslStatus
Protobuf type
exec.shared.SerializedField
Protobuf type
exec.shared.SerializedField
Protobuf type
exec.shared.StackTraceElementWrapper
Protobuf type
exec.shared.StackTraceElementWrapper
Protobuf type
exec.shared.StreamProfile
Protobuf type
exec.shared.StreamProfile
Protobuf type
exec.shared.UserCredentials
Protobuf type
exec.shared.UserCredentials
Interface for getting user session properties and interacting with user
connection.
Base class for all user exception.
Builder class for DrillUserException.
Provides utilities (such as retrieving hints) to add more context to UserExceptions.
While a builder may seem like overkill for a class that is little more than a small struct,
it allows us to wrap new instances in an Optional, while using constructors does not.
While a builder may seem like overkill for a class that is little more than a small struct,
it allows us to wrap new instances in an Optional, while using constructors does not.
Protobuf type
exec.user.BitToUserHandshake
Protobuf type
exec.user.BitToUserHandshake
Message encapsulating metadata for a Catalog.
Message encapsulating metadata for a Catalog.
Protobuf enum
exec.user.CollateSupport
Message encapsulating metadata for a Column.
Message encapsulating metadata for a Column.
How a column can be used in WHERE clause
Whether a column can be updatable.
Protobuf type
exec.user.ConvertSupport
Protobuf type
exec.user.ConvertSupport
Protobuf enum
exec.user.CorrelationNamesSupport
Request message to create a prepared statement.
Request message to create a prepared statement.
Response message for CreatePreparedStatementReq.
Response message for CreatePreparedStatementReq.
Protobuf enum
exec.user.DateTimeLiteralsSupport
Request message for getting the metadata for catalogs satisfying the given optional filter.
Request message for getting the metadata for catalogs satisfying the given optional filter.
Response message for GetCatalogReq.
Response message for GetCatalogReq.
Request message for getting the metadata for columns satisfying the given optional filters.
Request message for getting the metadata for columns satisfying the given optional filters.
Response message for GetColumnsReq.
Response message for GetColumnsReq.
Protobuf type
exec.user.GetQueryPlanFragments
Protobuf type
exec.user.GetQueryPlanFragments
Request message for getting the metadata for schemas satisfying the given optional filters.
Request message for getting the metadata for schemas satisfying the given optional filters.
Response message for GetSchemasReq.
Response message for GetSchemasReq.
Request message for getting server metadata
Request message for getting server metadata
Response message for GetServerMetaReq
Response message for GetServerMetaReq
Request message for getting the metadata for tables satisfying the given optional filters.
Request message for getting the metadata for tables satisfying the given optional filters.
Response message for GetTablesReq.
Response message for GetTablesReq.
Protobuf enum
exec.user.GroupBySupport
Protobuf enum
exec.user.HandshakeStatus
Protobuf enum
exec.user.IdentifierCasing
Simple filter which encapsulates the SQL LIKE ...
Simple filter which encapsulates the SQL LIKE ...
Protobuf enum
exec.user.NullCollation
Protobuf enum
exec.user.OrderBySupport
Protobuf enum
exec.user.OuterJoinSupport
Prepared statement.
Prepared statement.
Server state of prepared statement.
Server state of prepared statement.
Protobuf type
exec.user.Property
Protobuf type
exec.user.Property
Protobuf type
exec.user.QueryPlanFragments
Protobuf type
exec.user.QueryPlanFragments
Protobuf enum
exec.user.QueryResultsMode
Protobuf type
exec.user.RequestResults
Protobuf type
exec.user.RequestResults
Enum indicating the request status.
Metadata of a column in query result set
Metadata of a column in query result set
Protobuf type
exec.user.RpcEndpointInfos
Protobuf type
exec.user.RpcEndpointInfos
//// User <-> Bit RPC ///////
Request message for running a query.
Request message for running a query.
Protobuf enum
exec.user.SaslSupport
Message encapsulating metadata for a Schema.
Message encapsulating metadata for a Schema.
Protobuf type
exec.user.ServerMeta
Protobuf type
exec.user.ServerMeta
Protobuf enum
exec.user.SubQuerySupport
Message encapsulating metadata for a Table.
Message encapsulating metadata for a Table.
Protobuf enum
exec.user.UnionSupport
Protobuf type
exec.user.UserProperties
Protobuf type
exec.user.UserProperties
Protobuf type
exec.user.UserToBitHandshake
Protobuf type
exec.user.UserToBitHandshake
Wraps a DrillPBError object so we don't need to rebuild it multiple times
when sending it to the client.
Holds metrics related to bit user rpc layer
Utility class for User RPC
Implementations of this interface are allowed to increment queryCount.
Drill-specific extension to allow dates only, expressed in UTC
to be consistent with Mongo timestamps.
Per the
V1 docs:
In Strict mode, <date>
is an ISO-8601 date format with a mandatory time zone field
following the template YYYY-MM-DDTHH:mm:ss.mmm<+/-Offset>.
Description of a JSON value as inferred from looking ahead in
the JSON stream.
Description of JSON types as derived from JSON tokens.
Constructs a ValueDef by looking ahead on the input stream.
Identifies a method parameter based on the given name and type.
Wrapper object for an individual value in Drill.
Represents a JSON scalar value, either a direct object field, or level
within an array.
Parses a JSON value.
Represents a declared variable (parameter) in a Drill function.
Physical Values implementation in Drill.
Represents the primitive types supported to read and write data
from value vectors.
An abstraction that is used to store a sequence of values in an individual
column.
Reads from this vector instance.
Writes into this vector instance.
This class is responsible for formatting ValueVector elements.
Wraps a value vector field to be read, providing metadata about the field.
Writer for a scalar value.
Var16CharVector implements a vector of variable width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
VarBinaryVector implements a vector of variable width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
Value listener for JSON string values.
VarCharVector implements a vector of variable width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
VarDecimalVector implements a vector of variable width values.
Helper class to buffer container mutation as a means to optimize native memory copy operations.
Describes the contents of a list or union field.
Parser which accepts all JSON values and converts them to actions on a
UNION vector writer.
Reader for a Drill "union vector." The union vector is presented
as a reader over a set of variants.
Writer for a Drill "union vector." The union vector is presented
as a writer over a set of variants.
Class which handles reading a batch of rows from a set of variable columns
A bulk input entry enables us to process potentially multiple VL values in one shot (especially for very
small values); please refer to org.apache.drill.exec.vector.VarBinaryVector.BulkInput.
Allows the caller to provide input in a bulk manner while abstracting the underlying data structure
to provide performance optimization opportunities.
Enables caller (such as wrapper vector objects) to include more processing logic as the data is being
streamed.
Implements the VarLenBulkInput interface to optimize data copy.
This class is responsible for processing serialized overflow data (generated in a previous batch); this way
overflow data becomes an input source and is thus a) efficiently re-loaded into the current
batch ValueVector and b) subjected to the same batching constraint rules.
Implementation of CredentialsProvider that obtains credential values from Vault
(a minimal provider sketch follows this group of entries).
Implements UserAuthenticator based on HashiCorp Vault.
A wrapper around a VectorAccessible.
VectorAccessible is an interface.
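As promised above, a minimal sketch of a Vault-backed CredentialsProvider; the VaultClient type is a hypothetical stand-in defined inline, and only the getCredentials() contract is assumed from Drill's CredentialsProvider interface.

    // Hedged sketch: credentials resolved from Vault at lookup time.
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.drill.common.logical.security.CredentialsProvider;

    public class VaultCredentialsSketch implements CredentialsProvider {

      // Hypothetical stand-in for a real Vault client.
      interface VaultClient {
        String read(String path, String key);
      }

      private final VaultClient vault;
      private final String secretPath;   // e.g. "secret/drill/storage" (illustrative)

      public VaultCredentialsSketch(VaultClient vault, String secretPath) {
        this.vault = vault;
        this.secretPath = secretPath;
      }

      @Override
      public Map<String, String> getCredentials() {
        Map<String, String> creds = new HashMap<>();
        creds.put("username", vault.read(secretPath, "username"));
        creds.put("password", vault.read(secretPath, "password"));
        return creds;
      }
    }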
Collection of vector accessors.
Vector accessor for RepeatedVector → data vector
Vector accessor for RepeatedVector → offsets vector
Vector accessor used by the column accessors to obtain the vector for
each column value.
Vector accessor for ListVector → bits vector
Vector accessor for AbstractMapVector → member vector
Vector accessor for NullableVector → bits vector
Vector accessor for NullableVector → values vector
Vector accessor for UnionVector → data vector
Vector accessor for UnionVector → type vector
Vector accessor for VariableWidthVector → offsets vector
Given a vector container, and a metadata schema that matches the container,
walk the schema tree to allocate new vectors according to a given
row count and the size information provided in column metadata.
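A rough sketch of the size-driven allocation idea (a simple per-column loop, not the recursive schema walk itself); AllocationHelper.allocate and the container iteration are assumptions about Drill's vector utilities.

    // Hedged sketch: pre-allocate every vector in a container for rowCount rows.
    import org.apache.drill.exec.record.VectorContainer;
    import org.apache.drill.exec.record.VectorWrapper;
    import org.apache.drill.exec.vector.AllocationHelper;

    public class AllocateSketch {
      // bytesPerValue is the expected average width of variable-width values.
      static void allocate(VectorContainer container, int rowCount, int bytesPerValue) {
        for (VectorWrapper<?> w : container) {
          AllocationHelper.allocate(w.getValueVector(), rowCount, bytesPerValue);
        }
      }
    }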
Wraps a vector container and optional selection vector in an interface
simpler than the entire RecordBatch.
Deprecated.
Prototype mechanism to allocate vectors based on expected data sizes.
Indicates that an attempt to write to a vector overflowed the vector
bounds: either the limit on values or the size of the buffer backing
the vector.
Handy tool to visualize string and offset vectors for
debugging.
Serializes vector containers to an output stream or from
an input stream.
Read one or more vector containers from an input stream.
Writes multiple VectorAccessible or VectorContainer
objects to an output stream.
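A hedged round-trip sketch for this serialization machinery; the VectorAccessibleSerializable class and its constructor/stream methods are assumptions based on Drill's cache and spill utilities, not verified API.

    // Hedged sketch: write a batch to a file, then read it back.
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import org.apache.drill.exec.cache.VectorAccessibleSerializable;
    import org.apache.drill.exec.memory.BufferAllocator;
    import org.apache.drill.exec.record.WritableBatch;

    public class SpillSketch {
      static void roundTrip(BufferAllocator allocator, WritableBatch batch) throws Exception {
        VectorAccessibleSerializable out = new VectorAccessibleSerializable(batch, allocator);
        try (FileOutputStream os = new FileOutputStream("/tmp/spill.bin")) {
          out.writeToStream(os);               // serialize the batch (assumed method)
        }
        VectorAccessibleSerializable in = new VectorAccessibleSerializable(allocator);
        try (FileInputStream is = new FileInputStream("/tmp/spill.bin")) {
          in.readFromStream(is);               // deserialize it back (assumed method)
        }
      }
    }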
Generic mechanism for retrieving vectors from a source tuple when
projecting columns to the output tuple.
Handles batch and overflow operation for a (possibly compound) vector.
Encapsulates version information and provides ordering
Versioned store that delegates operations to PersistentStore and keeps versioning,
incrementing version each time write / delete operation is triggered.
Extension to the Store interface that supports versions
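A generic sketch of the version-on-mutation behavior described above (hypothetical names, not Drill's actual versioned-store code): every successful write or delete bumps a counter that callers can use for optimistic checks.

    // Illustrative sketch: wrap a key/value map and advance a version on mutation.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class VersionedStoreSketch<V> {
      private final Map<String, V> delegate = new ConcurrentHashMap<>();
      private final AtomicInteger version = new AtomicInteger();

      public int version() { return version.get(); }
      public V get(String key) { return delegate.get(key); }

      public void put(String key, V value) {
        delegate.put(key, value);
        version.incrementAndGet();     // every write advances the version
      }

      public void delete(String key) {
        if (delegate.remove(key) != null) {
          version.incrementAndGet();   // so does every delete
        }
      }
    }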
The vertex simply holds the child nodes but contains its own traits.
Overrides Viewable to create a model which contains additional info about which control to display in the menu bar.
Contains context information about view expansion(s) in a query.
Handler for Create View DDL command
Handler for Drop View [If Exists] DDL command.
Class that represents one row in Drill Metastore Views,
which is a representation of metastore view metadata.
Metastore Views component implementation which allows
reading and writing view metadata.
Marker annotation to determine which fields should be included as parameters for the function.
Marker annotation to determine which fields should be included as parameters for the function.
Holds various constants used by WebServer components.
Wrapper class around the Jetty-based web server.
Wrapper around the Jetty web server.
Holds various constants used by WebServer components.
Holds the resources required for a Web User Session.
The Drill AM web UI.
Passes information to the acknowledgement page.
Displays a warning page to ask the user if they want to cancel
a Drillbit.
Display the configuration page which displays the contents of
DoY and selected Drill config as name/value pairs.
Passes information to the confirmation page.
Displays the list of Drillbits showing details for each Drillbit.
Displays a history of completed tasks which indicates failed or cancelled
Drillbits.
Pages, adapted from Drill, that display the login and logout pages.
Page that lets the admin change the cluster size or shut down the cluster.
DoY provides a link to YARN to display the AM UI.
Confirm that the user wants to resize the cluster.
Main DoY page that displays cluster status, and the status of
the resource groups.
Confirmation page when the admin asks to stop the cluster.
WebUserConnectionWrapper, which represents the UserClientConnection
between WebServer and Foreman for the WebUser submitting the query.
Perform a wildcard projection.
Perform a wildcard projection with an associated output schema.
Support for OVER(PARTITION BY expression1, expression2, ...).
Implementation of a table macro that generates a table based on parameters.
Manages the running fragments in a Drillbit.
Describes the field that will provide output from the given function.
Stores the workspace related config.
Wrapper class that allows us to add additional information to each fragment
node for planning purposes.
A specialized version of record batch that can move out buffers and prep
them for writing.
Provides records in the format used to store them in Iceberg, along with their partition information.
Writer physical operator
Internal interface used to control the behavior
of writers.
Listener (callback) for vector overflow events.
Tracks the write state of a tuple or variant to allow applying the correct
operations to newly-added columns to synchronize them with the rest
of the writers.
Position information about a writer used during vector overflow.
Write the RecordBatch to the given RecordWriter.
Encapsulates the information needed to handle implicit type conversions
for scalar fields.
Exceptions thrown from the YARN facade: the wrapper around the YARN AM
interfaces.
YARN resource manager client implementation for Drill.
ZIP codec implementation which can read or create a single entry.
Defines the functions required by ZKACLProviderDelegate to access ZK-ACL related information.
This class defines the methods that are required to specify ACLs on Drill ZK nodes.
This class hides the ZKACLProvider from Curator-specific functions.
This is done so that ACL providers have to be aware only of ZK ACLs and the Drill ZKACLProvider
interface; ACL providers should not be concerned with the framework (Curator) used by Drill to access ZK.
This factory returns a ZKACLProviderDelegate which will be used to set ACLs on Drill ZK nodes.
If secure ACLs are required, the ZKACLProviderFactory looks up and instantiates a ZKACLProviderDelegate
specified in the config file.
Annotation for ZKACLProviderDelegate implementation to identify the implementation type.
Manages cluster coordination utilizing ZooKeeper.
Manages cluster coordination utilizing ZooKeeper.
Driver class for the ZooKeeper cluster coordinator.
ZKDefaultACLProvider provides the ACLs for znodes created in an unsecured installation (a minimal provider sketch follows this entry).
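To illustrate the contract these ACL classes express, a minimal provider sketch using ZooKeeper's standard ACL types; the two method names are assumptions about the ZKACLProvider interface, while ZooDefs.Ids.OPEN_ACL_UNSAFE is standard ZooKeeper API.

    // Hedged sketch: open (world:anyone) ACLs, as an unsecured setup would use.
    import java.util.List;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.data.ACL;

    public class OpenAclProviderSketch {
      // ACLs for Drill's data znodes (assumed method name).
      public List<ACL> getDrillAclForData() {
        return ZooDefs.Ids.OPEN_ACL_UNSAFE;
      }

      // ACLs for the remaining Drill cluster znodes (assumed method name).
      public List<ACL> getAclForDrillClusterPath() {
        return ZooDefs.Ids.OPEN_ACL_UNSAFE;
      }
    }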
Deprecated.
Will be removed in 1.7; use ZookeeperPersistentStoreProvider instead.
AM-specific implementation of a Drillbit registry backed by ZooKeeper.
State of each Drillbit that we've discovered through ZK or launched via the
AM.
A Drillbit can be in one of four states.
ZKSecureACLProvider restricts access to znodes created by Drill in a secure installation.
A namespace-aware ZooKeeper client.
ZooKeeper-based implementation of PersistentStore.