Shell Operations
I. Basic Syntax
hadoop fs <specific command>
hdfs dfs <specific command>
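The two forms are interchangeable when operating on HDFS: hadoop fs is the generic client for any Hadoop-supported filesystem, while hdfs dfs applies to HDFS only. For example, both of the following list the HDFS root directory:
[user@hadoop102 ~]$ hadoop fs -ls /
[user@hadoop102 ~]$ hdfs dfs -ls /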
II. Full Command List
[user@hadoop102 ~]$ hadoop fs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-v] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-head <file>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] [-s <sleep interval>] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
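To see detailed help for any single command, pass its name to -help, e.g.:
[user@hadoop102 ~]$ hadoop fs -help rm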
III. Common Commands
1. Preparation
(1) Start the Hadoop cluster
[user@hadoop102 ~]$ myhadoop.sh start
(2) Create the /sanguo directory
[user@hadoop102 ~]$ hadoop fs -mkdir /sanguo
2. Common Upload Commands
Command | Description |
---|---|
-moveFromLocal | Cut and paste from the local filesystem to HDFS (removes the local copy) |
-copyFromLocal | Copy a file from the local filesystem to an HDFS path |
-put | Equivalent to -copyFromLocal |
-appendToFile | Append a local file to the end of an existing HDFS file |
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -copyFromLocal ./weiguo.txt /sanguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -put ./wuguo.txt /sanguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -appendToFile ./liubei.txt /sanguo/shuguo.txt
- Result of the append: the contents of liubei.txt now sit at the end of /sanguo/shuguo.txt.
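To verify the append, read the file back with -cat (the same check is shown with its output in section 4 below):
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt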
3. Common Download Commands
Command | Description |
---|---|
-copyToLocal | Copy from HDFS to the local filesystem |
-get | Equivalent to -copyToLocal |
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt
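Related: -getmerge (from the command list above) downloads several HDFS files concatenated into a single local file. A minimal sketch, assuming /sanguo still holds the three .txt files; the local name sanguo_all.txt is just an illustration:
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -getmerge /sanguo ./sanguo_all.txt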
4. Common Direct Operations
Command | Description |
---|---|
-ls | List directory contents |
-cat | Display a file's contents |
-chgrp, -chmod, -chown | Change a file's group, permissions, or owner; same usage as in the Linux filesystem |
-mkdir | Create a directory |
-cp | Copy from one HDFS path to another HDFS path |
-mv | Move a file within HDFS |
-tail | Display the last 1 KB of a file (the end of the file holds the newest data) |
-rm | Delete a file or directory |
-rm -r | Recursively delete a directory and everything in it |
-du | Report directory sizes; add -s to show only the total for the given directory rather than each child, and -h to print sizes in human-readable form |
-setrep | Set the replication factor of a file in HDFS |
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo
Found 3 items
-rw-r--r-- 3 user supergroup 15 2024-09-13 18:49 /sanguo/shuguo.txt
-rw-r--r-- 3 user supergroup 7 2024-09-13 18:43 /sanguo/weiguo.txt
-rw-r--r-- 3 user supergroup 6 2024-09-13 18:47 /sanguo/wuguo.txt
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
2024-09-13 19:04:38,127 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
shuoguo
liubei
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -chown user:user /sanguo/shuguo.txt
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /jinguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -tail /jinguo/shuguo.txt
2024-09-13 19:21:04,653 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
shuoguo
liubei
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -rm /sanguo/shuguo.txt
Deleted /sanguo/shuguo.txt
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /sanguo
Deleted /sanguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -du -s -h /jinguo
28 84 /jinguo
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h /jinguo
15 45 /jinguo/shuguo.txt
7 21 /jinguo/weiguo.txt
6 18 /jinguo/wuguo.txt
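In the -du output, the first column is the file's own size and the second is the total disk space consumed by all of its replicas; with the replication factor of 3 here, 15 × 3 = 45, 7 × 3 = 21, and 6 × 3 = 18, which sum to the 28 and 84 shown in the -s view.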
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -setrep 10 /jinguo/shuguo.txt
Replication 10 set: /jinguo/shuguo.txt
- About the replication setting:
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since the cluster currently has only 3 machines, there can be at most 3 replicas; the count will only reach 10 once the cluster grows to 10 nodes.
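To check the replication factor recorded for a file, -stat with the %r format specifier works; hdfs fsck additionally reports how many replicas actually exist on the DataNodes (output omitted here):
[user@hadoop102 hadoop-3.1.3]$ hadoop fs -stat %r /jinguo/shuguo.txt
[user@hadoop102 hadoop-3.1.3]$ hdfs fsck /jinguo/shuguo.txt -files -blocks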