Deprecated Classes

- It is never used, so it can be removed in Drill 1.21.0.
- Will be removed in 1.7; use ZookeeperPersistentStoreProvider instead.
Deprecated Fields

- Use ExprLexer.VOCABULARY instead.
- Use ExprParser.VOCABULARY instead.
- This option no longer takes any effect. The option, added as part of DRILL-4577, was used to mark that Hive tables should be loaded for all table names at once. Then, as part of DRILL-4826, an option was added to regulate the bulk size, because a large number of views was causing performance degradation. After the improvements for DRILL-7115, both options (ExecConstants.ENABLE_BULK_LOAD_TABLE_LIST_KEY and ExecConstants.BULK_LOAD_TABLE_LIST_BULK_SIZE_KEY) became obsolete and may be removed in future releases.
- Use SchemaLexer.VOCABULARY instead.
- Use SchemaParser.VOCABULARY instead.
- No longer populated in order to achieve reproducible builds.
- No longer populated in order to achieve reproducible builds.
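The VOCABULARY migrations above follow the standard ANTLR 4 pattern of replacing the deprecated static `tokenNames` array with a `Vocabulary` lookup. A minimal sketch, assuming a generated ANTLR 4 lexer; the `Identifier` token constant is a hypothetical example, not taken from the list above:

```java
// Old (deprecated): indexing the static tokenNames array.
//   String name = ExprLexer.tokenNames[tokenType];

// New: resolve the token name through the generated Vocabulary.
// Vocabulary.getDisplayName(int) is standard ANTLR 4 runtime API.
int tokenType = ExprLexer.Identifier;  // hypothetical token constant
String name = ExprLexer.VOCABULARY.getDisplayName(tokenType);
```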
Deprecated Methods

- Use UserException.Builder.build(Logger) instead. If the error is a system error, the error message is logged to UserException.logger.
- Use DrillClient#getServerVersion().
- Use GroupScan.getMinParallelizationWidth() to determine whether this GroupScan spans more than one fragment.
- Use the method with a DELETE request, StorageResources.deletePlugin(String), instead.
- Use the method with a POST request, StorageResources.enablePlugin(java.lang.String, java.lang.Boolean), instead.
- Use StoragePluginRegistry.resolveFormat(StoragePluginConfig, FormatPluginConfig, Class), which provides type safety. Retained for compatibility with older plugins.
- Use StoragePluginRegistry.resolve(StoragePluginConfig, Class), which provides type safety. Retained for compatibility with older plugins.
- Prefer using RepeatedListVector.addOrGetVector(org.apache.drill.exec.vector.VectorDescriptor) instead.
- Use BaseWriter.MapOrListWriter.varBinary(String) instead.
- This has nothing to do with value vector abstraction and should be removed.
- Will be removed in 2.0.0; use ParquetFileWriter.appendFile(InputFile) instead.
- Will be removed in 2.0.0; use ParquetFileWriter.appendRowGroup(SeekableInputStream, BlockMetaData, boolean) instead.
- Will be removed in 2.0.0; use ParquetFileWriter.appendRowGroups(SeekableInputStream, List, boolean) instead.
- Metadata files are not recommended and will be removed in 2.0.0.
- Metadata files are not recommended and will be removed in 2.0.0.
- This method does not support writing column indexes; use ParquetFileWriter.writeDataPage(int, int, BytesInput, Statistics, long, Encoding, Encoding, Encoding) instead.
- org.apache.parquet.hadoop.ParquetFileWriter.writeMergedMetadataFile(List<Path>, Path, Configuration): metadata files are not recommended and will be removed in 2.0.0.
- Metadata files are not recommended and will be removed in 2.0.0.
- Metadata files are not recommended and will be removed in 2.0.0.
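The ParquetFileWriter.appendFile(InputFile) migration above can be sketched as follows. This is a hedged illustration assuming the parquet-hadoop API; the helper method, writer, and path are assumptions supplied by surrounding code, and appendFile must be called between the writer's start() and end():

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.io.InputFile;

// Hypothetical helper: append an existing Parquet file into an open writer.
static void appendExisting(ParquetFileWriter writer, Configuration conf, Path part)
    throws IOException {
  // Old (deprecated, removed in 2.0.0):
  //   writer.appendFile(conf, part);

  // New: wrap the Hadoop path as an InputFile first, then append.
  InputFile input = HadoopInputFile.fromPath(part, conf);
  writer.appendFile(input);
}
```

The InputFile abstraction decouples the writer from Hadoop-specific path handling, which is why the (Configuration, Path) overloads are being retired.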
Deprecated Constructors

- Use RowSetBuilder(BufferAllocator, TupleMetadata) instead.
- Will be removed in 2.0.0.
- Will be removed in 2.0.0.
- Will be removed in 2.0.0.
- Will be removed in 2.0.0.
- Use HBasePersistentStoreProvider instead.
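The RowSetBuilder(BufferAllocator, TupleMetadata) replacement can be sketched as below. A minimal sketch assuming Drill's test row-set framework (SchemaBuilder producing a TupleMetadata); the column names, values, and the `allocator` variable are illustrative assumptions:

```java
// Build a TupleMetadata schema, then rows, using the non-deprecated constructor.
TupleMetadata schema = new SchemaBuilder()
    .add("id", TypeProtos.MinorType.INT)        // illustrative columns
    .add("name", TypeProtos.MinorType.VARCHAR)
    .buildSchema();

// allocator: a BufferAllocator obtained from the test fixture (assumption).
RowSet rows = new RowSetBuilder(allocator, schema)
    .addRow(1, "first")
    .addRow(2, "second")
    .build();
```

Passing TupleMetadata rather than a materialized BatchSchema lets the builder carry extended column metadata, which is the motivation for preferring this constructor.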