Interface which defines an implementation for managing queue configuration of a leaf ResourcePool.
Interface which defines an implementation of ResourcePool configuration for ResourceManagement.
Interface which defines the implementation of a hierarchical configuration for all the ResourcePools that will be used for ResourceManagement.
Parses and initializes QueueConfiguration for a leaf ResourcePool.
Used to keep track of the selected leaf pool and all the rejected ResourcePools for a query.
Parses and initializes all the provided configuration for a ResourcePool defined in RM configuration.
Parses and initializes configuration for ResourceManagement in Drill.
Defines all the default values used for the optional configurations for ResourceManagement
ConfigConstants. However, whether the feature is enabled or disabled is still controlled by a configuration,
ExecConstants.RM_ENABLED, available in Drill's main configuration file. The RM config files are parsed and loaded only when the feature is enabled. The configuration is a hierarchical tree of
ResourcePools. At the top is the root pool, which represents all the resources (only memory in version 1) available to the ResourceManager for admitting queries. It is assumed that all the nodes in the Drill cluster are homogeneous and are given the same amount of memory resources. The root pool can be further divided into child ResourcePools to split the resources among multiple child pools. Each child pool gets a resource share from its parent resource pool. In theory there is no limit on the number of ResourcePools that can be configured to divide the cluster resources.
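The pool hierarchy described above might be expressed as a HOCON fragment like the following sketch. This is illustrative only: the concrete key strings (root, child_pools, pool_name, memory) are assumptions about what the ConfigConstants and ResourcePoolImpl constants resolve to, and should be checked against the Drill release in use.

```hocon
# Hypothetical RM pool hierarchy: root split into two child pools.
# All key names here are assumed, not taken from a verified Drill config.
drill.exec.rm: {
  root: {
    # The root pool implicitly owns all of the cluster's memory.
    child_pools: [
      {
        pool_name: "marketing",
        memory: 0.5       # 50% share of the parent (root) pool's memory
      },
      {
        pool_name: "engineering",
        memory: 0.5       # the remaining 50% share
      }
    ]
  }
}
```

Because every child's share is relative to its parent, nesting another child_pools list under one of these pools would subdivide that pool's 50% further.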
In addition to other parameters defined later, the root ResourcePool also supports a configuration which
helps to select exactly one leaf pool out of all the possible options available for a query. For details please
see the package-info.java of the queue selection policy package. A queue-selection method (taking
org.apache.drill.exec.resourcemgr.NodeResources) is used by the parallelizer to get the queue that will be used
to admit a query. The selected queue's resource constraints are then used by the parallelizer to allocate proper resources
to the query so that it remains within those bounds.
The ResourcePools fall under two categories: intermediate pools and leaf pools. Each pool supports the following configurations:
ResourcePoolImpl.POOL_MEMORY_SHARE_KEY: Percentage of the parent ResourcePool's memory assigned to this pool.
ResourcePoolImpl.POOL_SELECTOR_KEY: A selector assigned to this pool. For details please see the package-info.java of the selector package.
ResourcePoolImpl.POOL_QUEUE_KEY: Queue configuration associated with this pool. It should be configured only for a leaf pool; if configured on an intermediate pool it is ignored.
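Putting the three pool-level keys together, a leaf pool might look like the following hypothetical fragment. The literal strings memory, selector, and queue are assumed mappings for the POOL_MEMORY_SHARE_KEY, POOL_SELECTOR_KEY, and POOL_QUEUE_KEY constants, and the tag-based selector shape is likewise an assumption:

```hocon
# Hypothetical leaf pool combining the three documented pool-level keys.
{
  pool_name: "adhoc",
  memory: 0.3,              # POOL_MEMORY_SHARE_KEY: 30% of the parent pool's memory
  selector: {               # POOL_SELECTOR_KEY: decides which queries this pool accepts
    tag: "adhoc_queries"
  },
  queue: {                  # POOL_QUEUE_KEY: meaningful only on a leaf pool
    max_query_memory_per_node: 4096MB
  }
}
```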
A queue always has a 1:1 relationship with a leaf pool. Queries are admitted and executed with a resource slice from the queue. It supports the following configurations:
QueryQueueConfigImpl.MAX_ADMISSIBLE_KEY: Upper bound on the total number of queries that can be admitted into a queue. After this limit is reached, new queries are moved to the waiting state.
QueryQueueConfigImpl.MAX_WAITING_KEY: Limits the total number of queries that can be in the waiting state inside a queue. After this limit is reached, new queries fail immediately.
QueryQueueConfigImpl.MAX_QUERY_MEMORY_PER_NODE_KEY: Limits the maximum memory any query in this queue can consume on any node in the cluster. This prevents a single query in one queue from consuming all the resources on a node, so that queries from other queues also have resources available. Ideally, the sum of this parameter's value across all queues should not exceed the total memory of a node.
QueryQueueConfigImpl.WAIT_FOR_PREFERRED_NODES_KEY: Decides whether an admitted query in a queue should wait until resources are available on all the nodes assigned to it by the planner for its execution. By default it is true. When set to false, nodes that do not have available resources for the query are replaced with other nodes that have enough resources.
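The four queue settings above can be combined in a single queue block; a hedged sketch follows, where the key strings are assumed values of the QueryQueueConfigImpl constants. With these numbers, up to 10 queries run concurrently, up to 20 more wait, and a further arrival fails immediately; and if, say, four such queues exist on 32 GB nodes, the 8 GB per-node cap keeps their combined worst-case per-node usage within a node's memory, in line with the sizing advice above.

```hocon
# Hypothetical queue block exercising all four documented queue settings.
queue: {
  max_admissible: 10,                 # up to 10 queries admitted (running) at once
  max_waiting: 20,                    # up to 20 queries may wait; beyond that, fail fast
  max_query_memory_per_node: 8192MB,  # each query capped at 8 GB on any single node
  wait_for_preferred_nodes: true      # wait for the planner-assigned nodes to free up
}
```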
QueueAssignmentResult. Later, the selected pools are passed to the configured QueueSelectionPolicy to select one queue for the query. The planner uses that selected queue's max-query-memory-per-node parameter to limit the resources assigned to all the fragments of a query on a node. After a query is planned with these resource constraints, it is sent to the leader of that queue to ask for admission. If admitted, the resources required by the query are reserved in the global state store and the query is executed on the cluster. For details please see the design document and functional spec linked in DRILL-7026.