Geomesa Installation (HBase + Hadoop, Single Node)

Posted: 2022-09-28

Tags: bin, Geomesa, 2dquickstart, Hadoop, hadoop, geomesa, hbase, gdelt, HBase

Install the JDK and Development Tools

Remove existing JDK files

Uninstall OpenJDK:
[root@iZ4zeaehxxqhrn553tblkkZ /]# yum -y remove java-1.8.0-openjdk*

Uninstall tzdata-java:
[root@iZ4zeaehxxqhrn553tblkkZ /]# yum -y remove tzdata-java.noarch

List the available Java packages:
[root@iZ4zeaehxxqhrn553tblkkZ /]# yum -y list java*

Install the chosen Java version (RPM package):
[root@iZ4zeaehxxqhrn553tblkkZ /]# yum -y install java-1.8.0-openjdk

After installation, check the Java version:
[root@iZ4zeaehxxqhrn553tblkkZ /]# java -version

Install the development tools:
yum list | grep jdk-devel
yum -y install java-1.8.0-openjdk-devel.x86_64

Edit /etc/profile and add the following (note: this JAVA_HOME points at a manually unpacked JDK under /data; if you use the yum-installed OpenJDK, point it at the JDK directory under /usr/lib/jvm instead):

vim /etc/profile

export JAVA_HOME=/data/jdk1.8.0_281
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

ln -s /data/jdk1.8.0_281/bin/java /usr/bin/java

I. Hadoop Installation

Unpack and set environment variables

1. tar -zxvf hadoop-3.1.2.tar.gz -C /hadoop/
2. Configure the Hadoop environment variables in /etc/profile:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-2.el8_5.x86_64/jre/
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/hadoop/hadoop-3.1.2
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
 

Set up passwordless SSH

ssh-keygen -t rsa
Append the public key to authorized_keys:
cd /root/.ssh
cat ./id_rsa.pub >> ./authorized_keys
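The steps above can be wrapped into an idempotent sketch. The scratch-directory default is only so the commands can be demonstrated safely; on a real node set SSH_DIR to /root/.ssh. The chmod calls matter: sshd silently ignores authorized_keys with loose permissions.

```shell
# Passwordless-SSH setup sketch. SSH_DIR defaults to a scratch directory
# for safe demonstration; set SSH_DIR=/root/.ssh on the actual node.
SSH_DIR="${SSH_DIR:-$(mktemp -d)/.ssh}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Generate a key only if one does not already exist (empty passphrase)
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -t rsa -N '' -q -f "$SSH_DIR/id_rsa"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"   # sshd rejects group/world-writable files
```

Afterwards `ssh localhost` should log in without a password prompt.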

Verify Hadoop

hadoop version

Edit core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/hadoop/hadoop-3.1.2/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

Edit hdfs-site.xml
<configuration>
<property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/hadoop-3.1.2/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/hadoop-3.1.2/tmp/dfs/data</value>
    </property>
</configuration>

Edit yarn-site.xml
<configuration>

<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>VM-0-15-centos</value>
</property>

<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>

<property>  
        <name>yarn.resourcemanager.webapp.address</name>  
        <value>0.0.0.0:8088</value>  
</property> 

</configuration>

Copy the template and edit mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

<configuration>
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
</configuration>
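On Hadoop 3.x the bundled example jobs often fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster" unless MapReduce knows its install directory. If the wordcount test below hits that error, a common fix is adding the following to mapred-site.xml (the path is this guide's install location):

```xml
<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/hadoop/hadoop-3.1.2</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/hadoop/hadoop-3.1.2</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/hadoop/hadoop-3.1.2</value>
</property>
```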

Edit hadoop-env.sh

Change the existing line export JAVA_HOME=${JAVA_HOME} to:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-2.el8_5.x86_64/jre/

Edit start-dfs.sh and stop-dfs.sh

Add the following (note: on Hadoop 3.x, HADOOP_SECURE_DN_USER is deprecated in favor of HDFS_DATANODE_SECURE_USER; the old name still works but logs a warning):
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

For start-yarn.sh and stop-yarn.sh, add the following parameters:

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Format the NameNode

cd /hadoop/hadoop-3.1.2
./bin/hdfs namenode -format

Access the web UI

http://localhost:9870

(On Hadoop 3.x the NameNode web UI moved from port 50070 to 9870.)

Test

Create a test file:
vim test.txt
hello world.
i am a master!
hello Hadoop

hdfs dfs -mkdir /input
hdfs dfs -put test.txt /input
hdfs dfs -ls /input

cd /hadoop/hadoop-3.1.2/share/hadoop/mapreduce 		# enter the mapreduce examples directory
hadoop jar hadoop-mapreduce-examples-3.1.2.jar wordcount /input/test.txt  /output

hdfs dfs -ls /output/
hdfs dfs -cat /output/part-r-00000
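As a sanity check, the same counts can be reproduced locally with standard shell tools (wordcount tokenizes on whitespace, so "world." keeps its period as part of the token):

```shell
# Recreate the test file and count words locally, mirroring
# wordcount's whitespace tokenization
printf 'hello world.\ni am a master!\nhello Hadoop\n' > /tmp/test.txt
tr ' ' '\n' < /tmp/test.txt | sort | uniq -c | awk '{print $2"\t"$1}'
# 'hello' should appear with count 2, matching part-r-00000
```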

II. ZooKeeper Installation

tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /zookeeper
mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7
cd /zookeeper/zookeeper-3.5.7/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
maxClientCnxns=600
dataDir=/zk_data
dataLogDir=/zk_data
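After the edits, the effective zoo.cfg (the sample file's defaults plus the three changes above; clientPort 2181 matches the hbase.zookeeper.quorum setting used later) looks roughly like:

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/zk_data
dataLogDir=/zk_data
maxClientCnxns=600
```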

III. HBase Installation

tar -zxvf phoenix-hbase-2.2-5.1.2-bin.tar.gz -C /phoenix

tar -zxvf hbase-2.2.0-bin.tar.gz -C /hbase/

Add the following to conf/hbase-env.sh:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-2.el8_5.x86_64/jre/

export HBASE_CLASSPATH=/hbase/hbase-2.2.0/conf

export HBASE_MANAGES_ZK=false

export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true

sudo ln -s /usr/bin/python3 /usr/bin/python

Edit conf/hbase-site.xml:

<configuration>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
 <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/zk_data</value>
 </property>
<property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
  </property>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<property>
      <name>hbase.zookeeper.quorum</name>
     <value>localhost:2181</value>
</property>

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>

</configuration>

Add entries to the hosts file (required on both client and server):
1.116.64.236 VM-0-15-centos (client)
<local IP> VM-0-15-centos (server; use the machine's own IP)

// create: table name, column family 1, column family 2
create 'test','user','userInfo'
// put: table name, rowkey, family:column, value
put 'test','001','user:name','入门小站'
put 'test','001','user:type','1'

IV. GeoMesa Installation

tar -zxvf geomesa-hbase_2.11-3.4.1-bin.tar.gz -C /geomesa

cd /geomesa/geomesa-hbase_2.11-3.4.1

Copy the distributed runtime jar into HBase's lib directory. HBase 2.2.0 is used here, so the hbase2 runtime jar is the one that applies (the hbase1 jar in the same dist directory targets HBase 1.x):

cp /geomesa/geomesa-hbase_2.11-3.4.1/dist/hbase/geomesa-hbase-distributed-runtime-hbase2_2.11-3.4.1.jar /hbase/hbase-2.2.0/lib/

Alternatively, the jar can be uploaded to HBase's dynamic jars directory on HDFS (hbase.dynamic.jars.dir, which defaults to ${hbase.rootdir}/lib):

hadoop fs -put /geomesa/geomesa-hbase_2.11-3.4.1/dist/hbase/geomesa-hbase-distributed-runtime-hbase2_2.11-3.4.1.jar /hbase/lib/


Add environment variables:
export HADOOP_HOME=/hadoop/hadoop-3.1.2
export HBASE_HOME=/hbase/hbase-2.2.0
export GEOMESA_HBASE_HOME=/geomesa/geomesa-hbase_2.11-3.4.1
export PATH="${PATH}:${GEOMESA_HBASE_HOME}/bin"

Edit hbase-site.xml and add:

<property>
    <name>hbase.coprocessor.user.region.classes</name>
    <value>org.locationtech.geomesa.hbase.coprocessor.GeoMesaCoprocessor</value>
</property>

Bundle hbase-site.xml into the datastore jar. Run zip from inside the conf directory so the file lands at the jar root (where it is found on the classpath), not under its full directory path:

cd /hbase/hbase-2.2.0/conf
zip -r /hbase/hbase-2.2.0/lib/geomesa-hbase-datastore_2.11-$VERSION.jar hbase-site.xml
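Whether the config actually landed at the jar root can be checked with unzip -l. The sketch below demonstrates the point on a throwaway archive: zip -j (or zipping from inside the directory) stores the bare filename instead of the full path:

```shell
# Demonstrate that -j stores the file at the archive root
WORK=$(mktemp -d)
echo '<configuration/>' > "$WORK/hbase-site.xml"
zip -q -j "$WORK/demo.jar" "$WORK/hbase-site.xml"
unzip -l "$WORK/demo.jar"   # lists 'hbase-site.xml' with no directory prefix
```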

cd /geomesa/geomesa-hbase_2.11-3.4.1

Due to licensing restrictions, the dependencies for shapefile support must be installed separately. Do so with:
./bin/install-shapefile-support.sh

/geomesa/geomesa-hbase_2.11-3.4.1/bin/geomesa-hbase

V. Starting the Cluster

/hadoop/hadoop-3.1.2/sbin/start-all.sh
/zookeeper/zookeeper-3.5.7/bin/zkServer.sh start
/hbase/hbase-2.2.0/bin/start-hbase.sh
/hbase/hbase-2.2.0/bin/hbase shell

VI. Common HBase Commands


// create: table name, column family 1, column family 2
> create 'myuser','user','userInfo'

// put: table name, rowkey, family:column, value
> put 'myuser','001','user:name','入门小站'
> put 'myuser','001','user:type','1'

// table name, row key
> get 'myuser','001'

// table name, row key, column family
> get 'myuser','001','user'

> scan 'myuser',{COLUMN=>'user:name'}

// STARTROW/ENDROW bound the scan; adding LIMIT=>1 returns a single row, VERSIONS=>1 only the latest version
> scan 'myuser',{STARTROW=>'0001',ENDROW=>'0003'}


Display Chinese characters readably:

scan 'myuser',{FORMATTER => 'toString'}


describe 'geomesa005'

describe 'geomesa005_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8'

# truncate also removes the table's region splits
truncate 'myuser'

# truncate_preserve clears the data but keeps the region splits
truncate_preserve 'namespace:tableName'


drop view "myuser"
create view "myuser"("pk" varchar primary key,"f1"."id" unsigned_int,"f1"."age" unsigned_int,"f1"."name" varchar,"f2"."sex" varchar,"f2"."address" varchar,"f2"."phone" varchar,"f2"."say" varchar);

######## GeoMesa test

describe "geomesa"
scan "geomesa"

describe "geomesa_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8"
scan ""geomesa_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8""
scan 'geomesa_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8',{STARTROW=>'010',ENDROW=>'050'}

describe "geomesa_gdelt_2dquickstart_id_v4"
scan "geomesa_gdelt_2dquickstart_id_v4"
scan "geomesa_gdelt_2dquickstart_id_v4",{FORMATTER => 'toString',STARTROW=>'719027270',ENDROW=>'719027288'}

describe "geomesa_gdelt_2dquickstart_z2_geom_v5"
scan "geomesa_gdelt_2dquickstart_z2_geom_v5"
scan "geomesa_gdelt_2dquickstart_z2_geom_v5",{FORMATTER => 'toString'}

describe "geomesa_gdelt_2dquickstart_z3_geom_dtg_v7"
scan "geomesa_gdelt_2dquickstart_z3_geom_dtg_v7"
scan "geomesa_gdelt_2dquickstart_z3_geom_dtg_v7"


drop table "geomesa"
drop view "geomesa"
create view "geomesa"("pk" varchar primary key,"m"."v" varchar);

drop view "geomesa_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8"
create view "geomesa_gdelt_2dquickstart_attr_EventCode_geom_dtg_v8"("pk" varchar primary key,"m" varchar);

drop view "geomesa_gdelt_2dquickstart_id_v4"
create view "geomesa_gdelt_2dquickstart_id_v4"("pk" varchar primary key,"d" varchar);

drop view "geomesa_gdelt_2dquickstart_z2_geom_v5"
create view "geomesa_gdelt_2dquickstart_z2_geom_v5"("pk" varchar primary key,"d" varchar);

drop view "geomesa_gdelt_2dquickstart_z3_geom_dtg_v7"
create view "geomesa_gdelt_2dquickstart_z3_geom_dtg_v7"("pk" varchar primary key,"d" varchar);

The GDELT quickstart reports the schema it creates:

Creating schema: GLOBALEVENTID:String,Actor1Name:String,Actor1CountryCode:String,Actor2Name:String,Actor2CountryCode:String,EventCode:String,NumMentions:Integer,NumSources:Integer,NumArticles:Integer,ActionGeo_Type:Integer,ActionGeo_FullName:String,ActionGeo_CountryCode:String,dtg:Date,geom:Point:srid=4326


From: https://www.cnblogs.com/littlewrong/p/16737214.html
