
Starting and Stopping a Hadoop Cluster

Posted: 2024-05-22 14:08:54  Views: 27
Tags: java Hadoop hive hadoop cluster shutdown apache org root

Starting the Cluster

[root@master ~]# start-all.sh

Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [master]
Starting resourcemanager
Starting nodemanagers
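Once start-all.sh returns, the five daemons it launches can be confirmed with jps. The check below is a minimal sketch: so it can run standalone here, it parses a sample jps listing baked into a variable; on the cluster, replace that variable with the real output, `jps_output=$(jps)`.

```shell
# Stand-in for the real `jps` output on master; on a live cluster use:
#   jps_output=$(jps)
jps_output="2101 NameNode
2235 DataNode
2462 SecondaryNameNode
2710 ResourceManager
2843 NodeManager
3190 Jps"

# Every daemon started by start-all.sh should appear in the listing.
missing=""
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$jps_output" | grep -q "^[0-9]* $daemon\$" || missing="$missing $daemon"
done

if [ -z "$missing" ]; then
  echo "all expected daemons are running"
else
  echo "missing daemons:$missing"
fi
```

If any daemon is missing, its log under $HADOOP_HOME/logs is the first place to look before retrying.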


Note: the hive command can be run directly on master to start Hive; there is no need to first start the service on clone1, the node that provides it.

[root@master ~]# hive

which: no hbase in (/root/perl5/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/jdk1.8.0_401/bin:/root/hadoop-3.4.0/bin:/root/hadoop-3.4.0/sbin:/root/hadoop-3.4.0/libexec:/root/apache-hive-3.1.3-bin/bin:/root/bin)

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 0a50dc28-c396-4a92-af5e-7d28f5994fed

Logging initialized using configuration in jar:file:/root/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 8ad70ef5-c8b5-4d47-a90d-ce4fc03831b6
hive> exit;
[root@master ~]# 
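The "multiple SLF4J bindings" warning in the session above is harmless but noisy; a common workaround is to sideline Hive's bundled log4j-slf4j-impl jar so that only Hadoop's slf4j-reload4j binding remains on the classpath. The sketch below rehearses the move against a scratch directory; on a real node, HIVE_LIB would be /root/apache-hive-3.1.3-bin/lib (the path shown in the warning).

```shell
# Rehearse the fix in a scratch directory; on the cluster, point HIVE_LIB
# at the real Hive lib directory: /root/apache-hive-3.1.3-bin/lib
HIVE_LIB=$(mktemp -d)
touch "$HIVE_LIB/log4j-slf4j-impl-2.17.1.jar"   # stand-in for the real jar

# Rename rather than delete, so the change is easy to roll back
# if Hive's logging behaves differently afterwards.
mv "$HIVE_LIB/log4j-slf4j-impl-2.17.1.jar" \
   "$HIVE_LIB/log4j-slf4j-impl-2.17.1.jar.bak"

ls "$HIVE_LIB"
```

After the rename, restarting hive should print only a single binding line.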

Stopping the Cluster

[root@master ~]# stop-all.sh

Stopping namenodes on [master]
Stopping datanodes
Stopping secondary namenodes [master]
Stopping nodemanagers
Stopping resourcemanager

With the cluster stopped, an attempt to start Hive now fails: the Hive CLI must reach the NameNode at master:9000 to create its session directories on HDFS, and that endpoint is no longer listening.

[root@master ~]# hive
which: no hbase in (/root/perl5/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/jdk1.8.0_401/bin:/root/hadoop-3.4.0/bin:/root/hadoop-3.4.0/sbin:/root/hadoop-3.4.0/libexec:/root/apache-hive-3.1.3-bin/bin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 808c1a17-4f74-4470-918b-687521f0985c

Logging initialized using configuration in jar:file:/root/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From master/192.168.10.10 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:651)
	at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:330)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:245)
Caused by: java.net.ConnectException: Call From master/192.168.10.10 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:948)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:863)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1588)
	at org.apache.hadoop.ipc.Client.call(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1426)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.lambda$getFileInfo$41(ClientNamenodeProtocolTranslatorPB.java:820)
	at org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc(ShadedProtobufHelper.java:160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:820)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1770)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1828)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1825)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1840)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1860)
	at org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:4486)
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:760)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:701)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627)
	... 9 more
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:205)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:601)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:668)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:789)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:364)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1649)
	at org.apache.hadoop.ipc.Client.call(Client.java:1473)
	... 36 more
[root@master ~]# 
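The ConnectException above simply means the NameNode RPC endpoint (master:9000, as configured in fs.defaultFS) stops listening once stop-all.sh completes. A quick way to distinguish the two states before launching Hive is to probe the port. This sketch uses bash's /dev/tcp redirection and defaults to localhost so it can run anywhere; on the cluster, pass the host and port from core-site.xml (master and 9000 in this setup).

```shell
# Probe the NameNode RPC port before starting hive.
# Defaults are placeholders; on the cluster run:  ./probe.sh master 9000
host=${1:-localhost}
port=${2:-9000}

# /dev/tcp is a bash feature, so the probe runs under an explicit bash -c;
# timeout guards against a firewalled (silently dropped) connection attempt.
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  msg="NameNode port $port on $host is open; hive should connect"
else
  msg="cannot reach $host:$port; run start-all.sh first"
fi
echo "$msg"
```

A "cannot reach" result after start-all.sh points at a NameNode that failed to come up (check its log) rather than at Hive itself.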

From: https://www.cnblogs.com/used-conduit-onion/p/18183485
