public class HiveUtilities extends Object
| Constructor and Description |
|---|
| `HiveUtilities()` |
| Modifier and Type | Method and Description |
|---|---|
| `static void` | `addConfToJob(org.apache.hadoop.mapred.JobConf job, Properties properties)` Utility method which adds the given configs to the `JobConf` object. |
| `static Object` | `convertPartitionType(org.apache.hadoop.hive.serde2.typeinfo.TypeInfo typeInfo, String value, String defaultPartitionValue)` Converts a partition value, received in string format, into an object of the given type. |
| `static HiveTableWrapper.HivePartitionWrapper` | `createPartitionWithSpecColumns(HiveTableWithColumnCache table, org.apache.hadoop.hive.metastore.api.Partition partition)` Helper method which stores partition columns in the table `columnListCache`. |
| `static org.apache.hadoop.hive.conf.HiveConf` | `generateHiveConf(org.apache.hadoop.hive.conf.HiveConf hiveConf, Map<String,String> properties)` Creates a `HiveConf` based on the properties in the given `HiveConf` and the configuration properties. |
| `static org.apache.hadoop.hive.conf.HiveConf` | `generateHiveConf(Map<String,String> properties)` Creates a `HiveConf` based on the given list of configuration properties. |
| `static ColumnMetadata` | `getColumnMetadata(HiveToRelDataTypeConverter dataTypeConverter, org.apache.hadoop.hive.metastore.api.FieldSchema column)` Converts the specified `FieldSchema` column into `ColumnMetadata`. |
| `static ColumnMetadata` | `getColumnMetadata(String name, org.apache.calcite.rel.type.RelDataType relDataType)` Converts the specified `RelDataType` into `ColumnMetadata`. |
| `static Class<? extends org.apache.hadoop.mapred.InputFormat<?,?>>` | `getInputFormatClass(org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.hive.metastore.api.StorageDescriptor sd, org.apache.hadoop.hive.metastore.api.Table table)` Utility method which gets the table or partition `InputFormat` class. |
| `static TypeProtos.MajorType` | `getMajorTypeFromHiveTypeInfo(org.apache.hadoop.hive.serde2.typeinfo.TypeInfo typeInfo, OptionSet options)` Obtains the major type from the given type info holder. |
| `static TypeProtos.MinorType` | `getMinorTypeFromHivePrimitiveTypeInfo(org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo primitiveTypeInfo, OptionSet options)` Obtains the minor type from the given primitive type info holder. |
| `static Properties` | `getPartitionMetadata(HivePartition partition, HiveTableWithColumnCache table)` Wrapper around `MetaStoreUtils#getPartitionMetadata(org.apache.hadoop.hive.metastore.api.Partition, Table)` which also adds parameters from the table to the properties returned by that method. |
| `static Properties` | `getTableMetadata(HiveTableWithColumnCache table)` Wrapper around `MetaStoreUtils#getSchema(StorageDescriptor, StorageDescriptor, Map, String, String, List)` which also sets columns from the table cache on the table and returns the properties produced by that method. |
| `static boolean` | `hasHeaderOrFooter(HiveTableWithColumnCache table)` Checks if the given table has a header or footer. |
| `static boolean` | `nativeReadersRuleMatches(org.apache.calcite.plan.RelOptRuleCall call, Class tableInputFormatClass)` |
| `static void` | `populateVector(ValueVector vector, DrillBuf managedBuffer, Object val, int start, int end)` Populates the vector with the given value based on its type. |
| `static void` | `restoreColumns(HiveTableWithColumnCache table, HivePartition partition)` Sets columns from the table cache on the table and partition. |
| `static int` | `retrieveIntProperty(Properties tableProperties, String propertyName, int defaultValue)` Returns an integer property value, or the default if the property is absent. |
| `static void` | `throwUnsupportedHiveDataTypeError(String unsupportedType)` Generates an unsupported-type exception message with the list of supported types and throws a user exception. |
| `static void` | `verifyAndAddTransactionalProperties(org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.hive.metastore.api.StorageDescriptor sd)` Checks whether the table is transactional and sets the necessary properties in `JobConf`. If schema evolution properties aren't set in the job conf for the input format, sets the column names and types from the table/partition properties or the storage descriptor. |
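As the summary above notes, `convertPartitionType` turns a partition value received as a string into a typed object. A hedged, stdlib-only sketch of that kind of conversion follows; the real method takes a Hive `TypeInfo`, which is modeled here as a plain type-name string, and mapping the default-partition sentinel to `null` is an assumption of this sketch, not something stated in this reference:

```java
import java.util.Objects;

// Hypothetical stand-in for HiveUtilities.convertPartitionType: the type is
// modeled as a type-name string instead of a Hive TypeInfo.
class PartitionValueSketch {

    // Converts a partition value received in string format into a typed object.
    // Assumption: a value equal to defaultPartitionValue means "no value" and
    // is mapped to null here.
    static Object convertPartitionValue(String typeName, String value, String defaultPartitionValue) {
        if (Objects.equals(value, defaultPartitionValue)) {
            return null; // the default-partition sentinel carries no typed value
        }
        switch (typeName) {
            case "int":     return Integer.parseInt(value);
            case "bigint":  return Long.parseLong(value);
            case "double":  return Double.parseDouble(value);
            case "boolean": return Boolean.parseBoolean(value);
            case "string":  return value;
            default:
                throw new UnsupportedOperationException("Unsupported partition type: " + typeName);
        }
    }
}
```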
public static Object convertPartitionType(org.apache.hadoop.hive.serde2.typeinfo.TypeInfo typeInfo, String value, String defaultPartitionValue)

Parameters:
- typeInfo - type info
- value - partition value
- defaultPartitionValue - default partition value

public static void populateVector(ValueVector vector, DrillBuf managedBuffer, Object val, int start, int end)

Parameters:
- vector - vector instance
- managedBuffer - Drill buffer
- val - value
- start - start position
- end - end position

public static TypeProtos.MajorType getMajorTypeFromHiveTypeInfo(org.apache.hadoop.hive.serde2.typeinfo.TypeInfo typeInfo, OptionSet options)

Parameters:
- typeInfo - type info holder
- options - session options

public static TypeProtos.MinorType getMinorTypeFromHivePrimitiveTypeInfo(org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo primitiveTypeInfo, OptionSet options)

Parameters:
- primitiveTypeInfo - primitive type info holder
- options - session options

public static Class<? extends org.apache.hadoop.mapred.InputFormat<?,?>> getInputFormatClass(org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.hive.metastore.api.StorageDescriptor sd, org.apache.hadoop.hive.metastore.api.Table table) throws Exception

Utility method which gets the table or partition InputFormat class. First it tries to get the class name from the given StorageDescriptor object. If the descriptor doesn't contain it, the method tries to get it from the StorageHandler class set in the table properties. If it is not found there either, an exception is thrown.

Parameters:
- job - JobConf instance, needed in case the table is a StorageHandler-based table
- sd - StorageDescriptor instance of the partition currently being read, or of the table (for non-partitioned tables)
- table - Table object

Throws:
- Exception
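The resolution order described for `getInputFormatClass` (storage descriptor first, then the storage handler set in the table properties, otherwise fail) can be sketched with plain stdlib types. The supplier parameters below are hypothetical stand-ins for the two lookups; this is a sketch of the documented fallback logic, not of the actual implementation:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Sketch of the lookup order: storage descriptor first, then storage handler
// from table properties; if neither yields a class name, throw.
class InputFormatLookupSketch {

    static String resolveInputFormat(Supplier<Optional<String>> fromStorageDescriptor,
                                     Supplier<Optional<String>> fromStorageHandler) {
        return fromStorageDescriptor.get()
                .or(fromStorageHandler) // consulted only when the descriptor has no class name
                .orElseThrow(() -> new IllegalStateException("InputFormat class not found"));
    }
}
```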
public static void addConfToJob(org.apache.hadoop.mapred.JobConf job, Properties properties)

Utility method which adds the given configs to the JobConf object.

Parameters:
- job - JobConf instance
- properties - new config properties

public static Properties getPartitionMetadata(HivePartition partition, HiveTableWithColumnCache table)

Wrapper around MetaStoreUtils#getPartitionMetadata(org.apache.hadoop.hive.metastore.api.Partition, Table) which also adds parameters from the table to the properties returned by that method.

Parameters:
- partition - the source of partition level parameters
- table - the source of table level parameters

public static void restoreColumns(HiveTableWithColumnCache table, HivePartition partition)

Sets columns from the table cache on the table and partition.

Parameters:
- table - the source of the column lists cache
- partition - partition whose column list will be set

public static Properties getTableMetadata(HiveTableWithColumnCache table)

Wrapper around MetaStoreUtils#getSchema(StorageDescriptor, StorageDescriptor, Map, String, String, List) which also sets columns from the table cache on the table and returns the properties produced by that method.

Parameters:
- table - Hive table with cached columns

public static void throwUnsupportedHiveDataTypeError(String unsupportedType)

Generates an unsupported-type exception message with the list of supported types and throws a user exception.

Parameters:
- unsupportedType - unsupported type

public static int retrieveIntProperty(Properties tableProperties, String propertyName, int defaultValue)

Returns the property value, or the default if the property is absent.

Parameters:
- tableProperties - table properties
- propertyName - property name
- defaultValue - default value used in case the property is absent

Throws:
- NumberFormatException - if the property value is not numeric

public static boolean hasHeaderOrFooter(HiveTableWithColumnCache table)

Checks if the given table has a header or footer.

Parameters:
- table - table with column cache instance

public static void verifyAndAddTransactionalProperties(org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.hive.metastore.api.StorageDescriptor sd)

Checks whether the table is transactional and sets the necessary properties in JobConf. If schema evolution properties aren't set in the job conf for the input format, sets the column names and types from the table/partition properties or the storage descriptor.

Parameters:
- job - the job to update
- sd - storage descriptor

public static boolean nativeReadersRuleMatches(org.apache.calcite.plan.RelOptRuleCall call, Class tableInputFormatClass)

Parameters:
- call - rule call

public static org.apache.hadoop.hive.conf.HiveConf generateHiveConf(Map<String,String> properties)

Creates a HiveConf based on the given list of configuration properties.

Parameters:
- properties - config properties

public static org.apache.hadoop.hive.conf.HiveConf generateHiveConf(org.apache.hadoop.hive.conf.HiveConf hiveConf, Map<String,String> properties)

Creates a HiveConf based on the properties in the given HiveConf and the configuration properties.

Parameters:
- hiveConf - hive conf
- properties - config properties

public static HiveTableWrapper.HivePartitionWrapper createPartitionWithSpecColumns(HiveTableWithColumnCache table, org.apache.hadoop.hive.metastore.api.Partition partition)

Helper method which stores partition columns in the table columnListCache.

Parameters:
- table - hive table instance
- partition - partition instance

public static ColumnMetadata getColumnMetadata(String name, org.apache.calcite.rel.type.RelDataType relDataType)

Converts the specified RelDataType into ColumnMetadata. For the case when the specified relDataType is a struct, a map with recursively converted children will be created.

Parameters:
- name - field name
- relDataType - field type

Returns:
- ColumnMetadata which corresponds to the specified RelDataType
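The behavior documented earlier for `retrieveIntProperty` (return a default when the property is absent, throw `NumberFormatException` when the stored value is not numeric) can be sketched with `java.util.Properties` alone; this is an illustrative re-implementation, not the actual Drill code:

```java
import java.util.Properties;

// Sketch of retrieveIntProperty's documented contract.
class IntPropertySketch {

    static int retrieveIntProperty(Properties tableProperties, String propertyName, int defaultValue) {
        Object value = tableProperties.get(propertyName);
        if (value == null) {
            return defaultValue; // property absent: fall back to the default
        }
        return Integer.parseInt(value.toString()); // non-numeric value: NumberFormatException propagates
    }
}
```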
public static ColumnMetadata getColumnMetadata(HiveToRelDataTypeConverter dataTypeConverter, org.apache.hadoop.hive.metastore.api.FieldSchema column)

Converts the specified FieldSchema column into ColumnMetadata. For the case when the specified type is a struct, a map with recursively converted children will be created.

Parameters:
- dataTypeConverter - converter to obtain Calcite's types from Hive's ones
- column - column to convert

Returns:
- ColumnMetadata which corresponds to the specified FieldSchema column
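Finally, `hasHeaderOrFooter`, described earlier, decides whether a table declares header or footer lines. A hedged sketch of that check over table properties follows; the property keys `skip.header.line.count` and `skip.footer.line.count` are the conventional Hive ones and are an assumption here, since this reference does not show which keys `HiveUtilities` actually reads:

```java
import java.util.Properties;

// Sketch: a table "has a header or footer" when either skip count is positive.
// The property keys are assumed, not taken from the reference above.
class HeaderFooterSketch {

    static boolean hasHeaderOrFooter(Properties tableProperties) {
        return intProp(tableProperties, "skip.header.line.count") > 0
            || intProp(tableProperties, "skip.footer.line.count") > 0;
    }

    private static int intProp(Properties props, String name) {
        String value = props.getProperty(name);
        return value == null ? 0 : Integer.parseInt(value);
    }
}
```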
Copyright © 1970 The Apache Software Foundation. All rights reserved.