Export
Grammar file
export_stmt ::=
    KW_EXPORT KW_TABLE base_table_ref:tblRef
    where_clause:whereExpr
    KW_TO STRING_LITERAL:path
    opt_properties:properties
    opt_broker:broker
    {: RESULT = new ExportStmt(tblRef, whereExpr, path, properties, broker); :}
    ;
EXPORT has syntax similar to LOAD's; the grammar breaks down into five segments:
● KW_EXPORT KW_TABLE base_table_ref:tblRef
● where_clause - an optional filter predicate
● KW_TO STRING_LITERAL:path - the destination path
● opt_properties - PROPERTIES (xxx)
● opt_broker - WITH xxx, where xxx can be a Broker, or HDFS, or S3; every variant is ultimately converted into a BrokerDesc
EXPORT TABLE test_export TO "hdfs://ctyunns/tmp/doris/"
WITH BROKER "hdfs_broker" (
    "hadoop.security.authentication"="kerberos",
    "kerberos_principal"="[email protected]",
    "kerberos_keytab"="/etc/security/keytabs/hdfs_export.keytab",
    'dfs.nameservices'='ctyunns',
    'dfs.ha.namenodes.ctyunns'='nn1,nn2',
    'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
    'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
    'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
);
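The five segments above map one-to-one onto the constructor arguments of ExportStmt. A minimal sketch of that mapping (field types simplified to strings and maps; this is not the real Doris class):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: the five grammar segments become the five constructor
// arguments of ExportStmt, in the same order the grammar lists them.
public class ExportStmtSketch {
    static class ExportStmt {
        final String tableRef;                 // base_table_ref
        final String whereExpr;                // where_clause (may be null)
        final String path;                     // KW_TO STRING_LITERAL
        final Map<String, String> properties;  // opt_properties
        final Map<String, String> broker;      // opt_broker, later wrapped in a BrokerDesc

        ExportStmt(String tableRef, String whereExpr, String path,
                   Map<String, String> properties, Map<String, String> broker) {
            this.tableRef = tableRef;
            this.whereExpr = whereExpr;
            this.path = path;
            this.properties = properties;
            this.broker = broker;
        }
    }

    public static void main(String[] args) {
        // Mirrors the EXPORT example above (no WHERE clause).
        ExportStmt stmt = new ExportStmt("test_export", null,
                "hdfs://ctyunns/tmp/doris/", new HashMap<>(), new HashMap<>());
        System.out.println(stmt.path);
    }
}
```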
Repository
Grammar file
Repository has similar-looking syntax, but a closer look shows it is actually different.
CREATE REPOSITORY `default_repo`
WITH BROKER `hdfs_broker`
ON LOCATION "hdfs://ctyunns/dorisRepo"
PROPERTIES (
    'dfs.nameservices'='ctyunns',
    'dfs.ha.namenodes.ctyunns'='nn1,nn2',
    'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
    'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
    'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
    "hadoop.security.authentication"="kerberos",
    "kerberos_principal"="[email protected]",
    "kerberos_keytab"="/etc/security/keytabs/hdfs_export.keytab"
);
Here, XXX ON LOCATION xxx PROPERTIES forms a single unit, which becomes a StorageBackend.
| KW_CREATE opt_read_only:isReadOnly KW_REPOSITORY ident:repoName KW_WITH storage_backend:storage
    {: RESULT = new CreateRepositoryStmt(isReadOnly, repoName, storage); :}

storage_backend ::=
    KW_BROKER ident:brokerName KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {: RESULT = new StorageBackend(brokerName, location, StorageBackend.StorageType.BROKER, properties); :}
    | KW_S3 KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {: RESULT = new StorageBackend("", location, StorageBackend.StorageType.S3, properties); :}
    | KW_HDFS KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {: RESULT = new StorageBackend("", location, StorageBackend.StorageType.HDFS, properties); :}
    | KW_LOCAL KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {: RESULT = new StorageBackend("", location, StorageBackend.StorageType.LOCAL, properties); :}
    ;
Both syntax objects ultimately convert to the same storage abstraction:
● StorageBackend -> BlobStorage
● BrokerDesc -> BlobStorage
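The convergence can be sketched as follows. This is a simplified model, not the real Doris classes: both StorageBackend (built by CREATE REPOSITORY) and BrokerDesc (built by EXPORT) carry a storage type plus a property map, so both can be lowered to one BlobStorage that does the actual I/O:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: two different syntax-level objects, one common storage target.
public class BlobStorageSketch {
    enum StorageType { BROKER, S3, HDFS, LOCAL }

    // Common target both syntax objects convert to.
    static class BlobStorage {
        final StorageType type;
        final Map<String, String> properties;
        BlobStorage(StorageType type, Map<String, String> properties) {
            this.type = type;
            this.properties = properties;
        }
    }

    // Built by the CREATE REPOSITORY grammar (WITH BROKER/S3/HDFS/LOCAL ... ON LOCATION ...).
    static class StorageBackend {
        final String brokerName;
        final String location;
        final StorageType type;
        final Map<String, String> properties;
        StorageBackend(String brokerName, String location,
                       StorageType type, Map<String, String> properties) {
            this.brokerName = brokerName;
            this.location = location;
            this.type = type;
            this.properties = properties;
        }
        BlobStorage toBlobStorage() { return new BlobStorage(type, properties); }
    }

    // Built by the EXPORT grammar (WITH BROKER/HDFS/S3 ...).
    static class BrokerDesc {
        final String name;  // broker process name; empty for direct HDFS/S3 access
        final StorageType type;
        final Map<String, String> properties;
        BrokerDesc(String name, StorageType type, Map<String, String> properties) {
            this.name = name;
            this.type = type;
            this.properties = properties;
        }
        BlobStorage toBlobStorage() { return new BlobStorage(type, properties); }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("hadoop.security.authentication", "kerberos");
        BlobStorage fromRepo = new StorageBackend("hdfs_broker", "hdfs://ctyunns/dorisRepo",
                StorageType.BROKER, props).toBlobStorage();
        BlobStorage fromExport = new BrokerDesc("hdfs_broker",
                StorageType.BROKER, props).toBlobStorage();
        System.out.println(fromRepo.type == fromExport.type);
    }
}
```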
Resource
Grammar file
A Resource can be understood as a named collection of configuration properties.
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES (
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.jars" = "xxx.jar,yyy.jar",
    "spark.files" = "/tmp/aaa,/tmp/bbb",
    "spark.executor.memory" = "1g",
    "spark.yarn.queue" = "queue0",
    "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
    "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
    "working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
    "broker" = "broker0",
    "broker.username" = "user0",
    "broker.password" = "password0"
);
The grammar is as follows:
KW_CREATE opt_external:isExternal KW_RESOURCE opt_if_not_exists:ifNotExists ident_or_text:resourceName opt_properties:properties
    {: RESULT = new CreateResourceStmt(isExternal, ifNotExists, resourceName, properties); :}
Creation process
The creation flow is:
● Resource.fromStmt
○ Instantiates the concrete Resource subclass for the requested type, e.g. SparkResource or JdbcResource
○ Calls Resource.setProperties to push the properties from the statement into that subclass
● createResource - stores the Resource in in-memory state
The supported Resource types can be seen in Resource.getResourceInstance.
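The flow above can be sketched as a small factory. The names fromProperties, SparkResource, and JdbcResource are modeled on Doris's Resource.fromStmt / getResourceInstance, but the bodies here are simplified placeholders, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: pick the Resource subclass from the "type" property,
// then push the remaining properties into the instance.
public class ResourceSketch {
    static abstract class Resource {
        final String name;
        Resource(String name) { this.name = name; }
        abstract void setProperties(Map<String, String> properties);
    }

    static class SparkResource extends Resource {
        String sparkMaster;
        SparkResource(String name) { super(name); }
        @Override void setProperties(Map<String, String> p) {
            sparkMaster = p.get("spark.master");
        }
    }

    static class JdbcResource extends Resource {
        String jdbcUrl;
        JdbcResource(String name) { super(name); }
        @Override void setProperties(Map<String, String> p) {
            jdbcUrl = p.get("jdbc_url");
        }
    }

    // Analogue of Resource.getResourceInstance + fromStmt rolled into one.
    static Resource fromProperties(String name, Map<String, String> props) {
        String type = props.get("type");
        Resource r;
        switch (type) {
            case "spark": r = new SparkResource(name); break;
            case "jdbc":  r = new JdbcResource(name); break;
            default: throw new IllegalArgumentException("unsupported resource type: " + type);
        }
        r.setProperties(props);
        return r;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("type", "spark");
        props.put("spark.master", "yarn");
        Resource r = fromProperties("spark0", props);
        System.out.println(r.getClass().getSimpleName()); // prints "SparkResource"
    }
}
```

In the real code the dispatch lives in getResourceInstance and createResource then registers the instance in memory; the sketch keeps only the dispatch-then-setProperties shape.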
From: https://www.cnblogs.com/xutaoustc/p/17503753.html