One-Click ELK Log Platform Deployment Script for Linux

This script automates the deployment of a complete ELK logging platform on a single Linux host. The environment, the script's features, and the full script follow.

Environment

Operating system: CentOS Linux release 7.8.2003

Software versions

elasticsearch:elasticsearch-7.5.1-linux-x86_64.tar.gz

kibana:kibana-7.5.1-linux-x86_64.tar.gz

logstash:logstash-7.5.1.tar.gz

filebeat:filebeat-7.5.1-linux-x86_64.tar.gz

JDK:jdk-11.0.1_linux-x64_bin.tar.gz

Nginx:nginx-1.18.0.tar.gz

Zookeeper:zookeeper-3.4.10.tar.gz

Kafka:kafka_2.12-2.5.0.tgz

Script features

1) Installs Elasticsearch, Kibana, Logstash, and Filebeat in one pass

2) Installs Zookeeper

3) Installs Kafka

4) Installs Nginx

5) Automatically creates the nginx_access and nginx_error indices

6) Automatically sets the Elasticsearch user password
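
The script assumes a root shell on a CentOS 7 host with internet access. A quick pre-flight sketch (illustrative, not part of the original script) to confirm the basics before launching it:

[root@localhost ~]# whoami                        # must be root
[root@localhost ~]# cat /etc/redhat-release       # expect CentOS Linux release 7.x
[root@localhost ~]# free -g                       # Elasticsearch, Logstash and Kafka each want several GB of RAM
[root@localhost ~]# curl -sI https://artifacts.elastic.co >/dev/null && echo network OK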

[root@localhost ~]# vim install_elk_filebeat_kafka.sh


#!/bin/bash
#Date:2019-5-20 13:14:00
#Author Blog:
# https://www.yangxingzhen.com
# https://www.i7ti.cn
#Author WeChat:
# WeChat official account: 小柒博客
#Author mirrors site:
# https://mirrors.yangxingzhen.com
#About the Author
# BY:YangXingZhen
# Mail:[email protected]
User="elk"
Elasticsearch_User="elastic"
Elasticsearch_Passwd="www.yangxingzhen.com"
IPADDR=$(hostname -I |awk '{print $1}')
Elasticsearch_DIR="/data/elasticsearch"
Kafka_IP=$(hostname -I |awk '{print $1}')
Zookeeper_IP=$(hostname -I |awk '{print $1}')
Elasticsearch_IP=$(hostname -I |awk '{print $1}')

# Define JDK path variables
JDK_URL=https://mirrors.yangxingzhen.com/jdk
JDK_File=jdk-11.0.1_linux-x64_bin.tar.gz
JDK_File_Dir=jdk-11.0.1
JDK_Dir=/usr/local/jdk-11.0.1

# Define Zookeeper path variables
Zookeeper_URL=http://archive.apache.org/dist/zookeeper/zookeeper-3.4.10
Zookeeper_File=zookeeper-3.4.10.tar.gz
Zookeeper_File_Dir=zookeeper-3.4.10
Zookeeper_PREFIX=/usr/local/zookeeper

# Define Kafka path variables
Kafka_URL=https://archive.apache.org/dist/kafka/2.5.0
Kafka_File=kafka_2.12-2.5.0.tgz
Kafka_File_Dir=kafka_2.12-2.5.0
Kafka_Dir=/usr/local/kafka

# Define Nginx path variables
Nginx_URL=http://nginx.org/download
Nginx_File=nginx-1.18.0.tar.gz
Nginx_File_Dir=nginx-1.18.0
Nginx_Dir=/usr/local/nginx

# Define Elasticsearch path variables
Elasticsearch_URL=https://artifacts.elastic.co/downloads/elasticsearch
Elasticsearch_File=elasticsearch-7.5.1-linux-x86_64.tar.gz
Elasticsearch_File_Dir=elasticsearch-7.5.1
Elasticsearch_Dir=/usr/local/elasticsearch

# Define Filebeat path variables
Filebeat_URL=https://artifacts.elastic.co/downloads/beats/filebeat
Filebeat_File=filebeat-7.5.1-linux-x86_64.tar.gz
Filebeat_File_Dir=filebeat-7.5.1-linux-x86_64
Filebeat_Dir=/usr/local/filebeat

# Define Logstash path variables
Logstash_URL=https://artifacts.elastic.co/downloads/logstash
Logstash_File=logstash-7.5.1.tar.gz
Logstash_File_Dir=logstash-7.5.1
Logstash_Dir=/usr/local/logstash

# Define Kibana path variables
Kibana_URL=https://artifacts.elastic.co/downloads/kibana
Kibana_File=kibana-7.5.1-linux-x86_64.tar.gz
Kibana_File_Dir=kibana-7.5.1-linux-x86_64
Kibana_Dir=/usr/local/kibana

# Raise resource limits
cat >>/etc/security/limits.conf <<EOF
* soft nofile 65537
* hard nofile 65537
* soft nproc 65537
* hard nproc 65537
EOF

if [ $(grep -wc "4096" /etc/security/limits.d/20-nproc.conf) -eq 0 ];then
cat >>/etc/security/limits.d/20-nproc.conf <<EOF
* soft nproc 4096
EOF
fi

cat >/etc/sysctl.conf <<EOF
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_fin_timeout = 120
net.ipv4.tcp_keepalive_time = 120
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_tw_buckets = 30000
fs.file-max = 655350
vm.max_map_count = 262144
net.core.somaxconn = 65535
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
EOF

# Apply the settings with sysctl -p
sysctl -p >/dev/null
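
# (Optional check, not in the original script) Spot-check the values that
# matter most for Elasticsearch; the nofile limit applies to new login
# sessions only, so re-login before checking ulimit.
#sysctl -n vm.max_map_count    # expect 262144
#ulimit -n                     # expect 65537 in a fresh shell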

# Create the elk user
[ $(grep -wc "elk" /etc/passwd) -eq 0 ] && useradd elk >/dev/null

# Install the JDK if Java is not already present
java -version >/dev/null 2>&1
if [ $? -ne 0 ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${JDK_URL}/${JDK_File}
tar xf ${JDK_File}
mv ${JDK_File_Dir} ${JDK_Dir}
cat >>/etc/profile <<EOF
export JAVA_HOME=${JDK_Dir}
export CLASSPATH=\$CLASSPATH:\$JAVA_HOME/lib:\$JAVA_HOME/jre/lib
export PATH=\$JAVA_HOME/bin:\$JAVA_HOME/jre/bin:\$PATH:\$HOME/bin
EOF
fi

# Load the new environment variables
source /etc/profile >/dev/null

# Install Zookeeper
if [ -d ${Zookeeper_PREFIX} ];then
echo -e "\033[31mZookeeper is already installed...\033[0m"
exit 1
else
wget -c ${Zookeeper_URL}/${Zookeeper_File}
tar xf ${Zookeeper_File}
\mv ${Zookeeper_File_Dir} ${Zookeeper_PREFIX}
chown -R root.root ${Zookeeper_PREFIX}
mkdir -p ${Zookeeper_PREFIX}/{data,logs}
\cp ${Zookeeper_PREFIX}/conf/zoo_sample.cfg ${Zookeeper_PREFIX}/conf/zoo.cfg
cat >${Zookeeper_PREFIX}/conf/zoo.cfg <<EOF
# Heartbeat interval, in milliseconds, between servers or between client and server
tickTime=2000
# Maximum number of ticks a follower may take to connect and sync to the leader initially
initLimit=10
# Maximum number of ticks allowed between a follower's request and the leader's answer
syncLimit=5
# Port on which Zookeeper listens for client connections
clientPort=2181
# Data directory
dataDir=${Zookeeper_PREFIX}/data
# Transaction log directory
dataLogDir=${Zookeeper_PREFIX}/logs
# Cluster definition: server.<id>=<IP>:<follower-to-leader sync port>:<leader-election port>
server.1=${IPADDR}:2888:3888
EOF

# Write this node's server ID
echo "1" > ${Zookeeper_PREFIX}/data/myid

# Start Zookeeper
source /etc/profile >/dev/null && ${Zookeeper_PREFIX}/bin/zkServer.sh start
fi
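
# (Optional check, not in the original script) Confirm Zookeeper answers.
# "ruok" is one of its four-letter admin commands; a healthy server replies "imok".
#echo ruok | nc ${Zookeeper_IP} 2181
#${Zookeeper_PREFIX}/bin/zkServer.sh status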

# Install Kafka
if [ ! -d ${Kafka_Dir} ];then
wget -c ${Kafka_URL}/${Kafka_File}
tar xf ${Kafka_File}
mv ${Kafka_File_Dir} ${Kafka_Dir}
# Write the broker configuration
cat >${Kafka_Dir}/config/server.properties <<EOF
listeners=PLAINTEXT://${IPADDR}:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=${IPADDR}:2181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
EOF

# Block until Zookeeper is listening before starting the broker
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for Zookeeper to start...\033[0m"
# Check the Zookeeper client port
[ -f /usr/bin/netstat ] || yum -y install net-tools >/dev/null
netstat -lntup |grep "2181" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
# ${Code} expands to "break" once the port is open, ending the loop
${Code}
done

# Start Kafka
source /etc/profile >/dev/null && ${Kafka_Dir}/bin/kafka-server-start.sh -daemon ${Kafka_Dir}/config/server.properties

# Block until Kafka is listening
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for Kafka to start...\033[0m"
# Check the Kafka listener port
netstat -lntup |grep "9092" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
${Code}
done

else
echo -e "\033[31mKafka is already installed...\033[0m"
exit 1
fi
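
# (Optional check, not in the original script) Smoke-test the broker with the
# CLI tools that ship with Kafka 2.5:
#${Kafka_Dir}/bin/kafka-topics.sh --bootstrap-server ${Kafka_IP}:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
#${Kafka_Dir}/bin/kafka-topics.sh --bootstrap-server ${Kafka_IP}:9092 --list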

# Install Elasticsearch
if [ ! -d ${Elasticsearch_Dir} ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${Elasticsearch_URL}/${Elasticsearch_File}
tar xf ${Elasticsearch_File}
mv ${Elasticsearch_File_Dir} ${Elasticsearch_Dir}
else
echo -e "\033[31mElasticsearch is already installed...\033[0m"
exit 1
fi

# Install Kibana
if [ ! -d ${Kibana_Dir} ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${Kibana_URL}/${Kibana_File}
tar xf ${Kibana_File}
mv ${Kibana_File_Dir} ${Kibana_Dir}
else
echo -e "\033[31mKibana is already installed...\033[0m"
exit 1
fi

# Configure Elasticsearch
mkdir -p ${Elasticsearch_DIR}/{data,logs}
cat >${Elasticsearch_Dir}/config/elasticsearch.yml <<EOF
# Node name
node.name: es-master
# Data directory (created above)
path.data: ${Elasticsearch_DIR}/data
# Log directory (created above)
path.logs: ${Elasticsearch_DIR}/logs
# Bind address
network.host: ${Elasticsearch_IP}
# Transport (TCP) port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Initial master-eligible nodes; list every master node in a multi-node cluster
cluster.initial_master_nodes: ["${Elasticsearch_IP}:9300"]
# Whether this node may act as master
node.master: true
# Whether this node stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
# X-Pack security
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
EOF

# Configure Kibana
cat >${Kibana_Dir}/config/kibana.yml <<EOF
server.port: 5601
server.host: "${Elasticsearch_IP}"
elasticsearch.hosts: ["http://${Elasticsearch_IP}:9200"]
elasticsearch.username: "${Elasticsearch_User}"
elasticsearch.password: "${Elasticsearch_Passwd}"
logging.dest: ${Kibana_Dir}/logs/kibana.log
i18n.locale: "zh-CN"
EOF

# Create the Kibana log directory
[ -d ${Kibana_Dir}/logs ] || mkdir ${Kibana_Dir}/logs

# Give the elk user ownership of Elasticsearch; Kibana stays owned by root
chown -R ${User}.${User} ${Elasticsearch_Dir}
chown -R ${User}.${User} ${Elasticsearch_DIR}
chown -R root.root ${Kibana_Dir}

# Start Elasticsearch directly (superseded by the systemd unit below)
#su ${User} -c "source /etc/profile >/dev/null && ${Elasticsearch_Dir}/bin/elasticsearch -d"

# Create a systemd unit file
cat >/usr/lib/systemd/system/elasticsearch.service <<EOF
[Unit]
Description=elasticsearch
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
LimitCORE=infinity
LimitNOFILE=655360
LimitNPROC=655360
User=${User}
Group=${User}
PIDFile=${Elasticsearch_Dir}/logs/elasticsearch.pid
ExecStart=${Elasticsearch_Dir}/bin/elasticsearch
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID
RestartSec=30
Restart=always
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# Start Elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

# Block until Elasticsearch is listening
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for Elasticsearch to start...\033[0m"
# Check the Elasticsearch HTTP and transport ports
netstat -lntup |egrep "9200|9300" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
${Code}
done
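
# (Optional check, not in the original script) Security is enabled but the
# built-in passwords are not set yet, so an anonymous request should be
# rejected with HTTP 401:
#curl -s -o /dev/null -w "%{http_code}\n" http://${Elasticsearch_IP}:9200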

# Set passwords for the built-in Elasticsearch users.
# elasticsearch-setup-passwords prompts for six users (elastic, kibana,
# logstash_system, beats_system, apm_system, remote_monitoring_user); because
# exp_continue re-arms the patterns, one Enter/Reenter pair answers every prompt.
cat >/tmp/config_elasticsearch_passwd.exp <<EOF
spawn su ${User} -c "source /etc/profile >/dev/null && ${Elasticsearch_Dir}/bin/elasticsearch-setup-passwords interactive"
set timeout 60
expect {
-timeout 20
"y/N" {
send "y\n"
exp_continue
}
"Enter password *:" {
send "${Elasticsearch_Passwd}\n"
exp_continue
}
"Reenter password *:" {
send "${Elasticsearch_Passwd}\n"
exp_continue
}
}
EOF

[ -f /usr/bin/expect ] || yum -y install expect >/dev/null
expect /tmp/config_elasticsearch_passwd.exp
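
# (Optional check, not in the original script) Confirm the new credentials work:
#curl -u ${Elasticsearch_User}:${Elasticsearch_Passwd} "http://${Elasticsearch_IP}:9200/_cluster/health?pretty"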

# Create a systemd unit file
cat >/usr/lib/systemd/system/kibana.service <<EOF
[Unit]
Description=kibana
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
PIDFile=/var/run/kibana.pid
ExecStart=/usr/local/kibana/bin/kibana --allow-root
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID
PrivateTmp=false

[Install]
WantedBy=multi-user.target
EOF

# Start Kibana
systemctl daemon-reload
systemctl enable kibana
systemctl start kibana

# Block until Kibana answers over HTTP
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for Kibana to start...\033[0m"
# Probe the Kibana login page
CODE=$(curl -s -w "%{http_code}" -o /dev/null http://${IPADDR}:5601/login)
if [ ${CODE} -eq 200 ];then
Code="break"
fi
${Code}
done

# Install Filebeat
if [ ! -d ${Filebeat_Dir} ];then
wget -c ${Filebeat_URL}/${Filebeat_File}
tar xf ${Filebeat_File}
mv ${Filebeat_File_Dir} ${Filebeat_Dir}
else
echo -e "\033[31mFilebeat is already installed...\033[0m"
exit 1
fi

# Install Logstash
if [ ! -d ${Logstash_Dir} ];then
wget -c ${Logstash_URL}/${Logstash_File}
tar xf ${Logstash_File}
mv ${Logstash_File_Dir} ${Logstash_Dir}
else
echo -e "\033[31mLogstash is already installed...\033[0m"
exit 1
fi

# Install Nginx from source
if [ ! -d ${Nginx_Dir} ];then
# Install build dependencies
yum -y install pcre pcre-devel openssl openssl-devel gcc gcc-c++
wget -c ${Nginx_URL}/${Nginx_File}
tar zxf ${Nginx_File}
cd ${Nginx_File_Dir}
# Hide the version string in the server banner
sed -i 's/1.18.0/ /;s/nginx\//nginx/' src/core/nginx.h
useradd -s /sbin/nologin www
./configure --prefix=${Nginx_Dir} \
--user=www \
--group=www \
--with-http_ssl_module \
--with-http_stub_status_module \
--with-stream
if [ $? -eq 0 ];then
make -j$(nproc) && make install
echo -e "\033[32mNginx installed successfully...\033[0m"
else
echo -e "\033[31mNginx installation failed...\033[0m"
exit 1
fi
else
echo -e "\033[31mNginx is already installed...\033[0m"
exit 1
fi

# Configure Nginx
ln -sf ${Nginx_Dir}/sbin/nginx /usr/sbin
cat >${Nginx_Dir}/conf/nginx.conf <<EOF
user www www;
worker_processes auto;
pid /usr/local/nginx/logs/nginx.pid;
events {
    use epoll;
    worker_connections 10240;
    multi_accept on;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format json '{"@timestamp":"\$time_iso8601",'
        '"host":"\$server_addr",'
        '"clientip":"\$remote_addr",'
        '"remote_user":"\$remote_user",'
        '"request":"\$request",'
        '"http_user_agent":"\$http_user_agent",'
        '"size":\$body_bytes_sent,'
        '"responsetime":\$request_time,'
        '"upstreamtime":"\$upstream_response_time",'
        '"upstreamhost":"\$upstream_addr",'
        '"http_host":"\$host",'
        '"requesturi":"\$request_uri",'
        '"url":"\$uri",'
        '"domain":"\$host",'
        '"xff":"\$http_x_forwarded_for",'
        '"referer":"\$http_referer",'
        '"status":"\$status"}';
    access_log logs/access.log json;
    error_log logs/error.log warn;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 120;
    tcp_nodelay on;
    server_tokens off;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 64k;
    gzip_http_version 1.1;
    gzip_comp_level 4;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    large_client_header_buffers 4 4k;
    client_header_buffer_size 4k;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://${IPADDR}:5601;
            proxy_set_header Host \$host;
            proxy_set_header X-Real-IP \$remote_addr;
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        }
    }
}
EOF
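
# (Optional check, not in the original script) Validate the generated config
# before the service is started:
#${Nginx_Dir}/sbin/nginx -t -c ${Nginx_Dir}/conf/nginx.conf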

# Create a systemd unit file
cat >/usr/lib/systemd/system/nginx.service <<EOF
[Unit]
Description=Nginx Server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=${Nginx_Dir}/logs/nginx.pid
ExecStart=${Nginx_Dir}/sbin/nginx -c ${Nginx_Dir}/conf/nginx.conf
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID

[Install]
WantedBy=multi-user.target
EOF

# Start Nginx
systemctl daemon-reload
systemctl enable nginx
systemctl start nginx
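
# (Optional check, not in the original script) Nginx should now proxy Kibana;
# expect 200, or 302 if Kibana redirects to its login page:
#curl -s -o /dev/null -w "%{http_code}\n" http://${IPADDR}/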

# Configure Filebeat
cat >${Filebeat_Dir}/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${Nginx_Dir}/logs/access.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}'
    negate: true
    match: after
  fields:
    log_topics: nginx_access-log
    logtype: nginx_access
- type: log
  enabled: true
  paths:
    - ${Nginx_Dir}/logs/error.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}'
    negate: true
    match: after
  fields:
    log_topics: nginx_error-log
    logtype: nginx_error
output.kafka:
  enabled: true
  hosts: ["${Kafka_IP}:9092"]
  topic: '%{[fields][log_topics]}'
EOF
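
# (Optional check, not in the original script) Filebeat can validate its own
# configuration and its Kafka output before being started:
#cd ${Filebeat_Dir} && ./filebeat test config -c filebeat.yml
#cd ${Filebeat_Dir} && ./filebeat test output -c filebeat.yml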

# Configure the Logstash pipeline
cat >${Logstash_Dir}/config/nginx.conf <<EOF
input {
  kafka {
    bootstrap_servers => "${Kafka_IP}:9092"
    group_id => "logstash-group"
    topics => ["nginx_access-log","nginx_error-log"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    codec => json
  }
}

filter {
  if [fields][logtype] == "nginx_access" {
    json {
      source => "message"
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" }
    }
    date {
      match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
      target => "@timestamp"
    }
  }
  if [fields][logtype] == "nginx_error" {
    json {
      source => "message"
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" }
    }
    date {
      match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
      target => "@timestamp"
    }
  }
}

output {
  if [fields][logtype] == "nginx_access" {
    elasticsearch {
      hosts => ["${Elasticsearch_IP}:9200"]
      user => "${Elasticsearch_User}"
      password => "${Elasticsearch_Passwd}"
      action => "index"
      index => "nginx_access.log-%{+YYYY.MM.dd}"
    }
  }
  if [fields][logtype] == "nginx_error" {
    elasticsearch {
      hosts => ["${Elasticsearch_IP}:9200"]
      user => "${Elasticsearch_User}"
      password => "${Elasticsearch_Passwd}"
      action => "index"
      index => "nginx_error.log-%{+YYYY.MM.dd}"
    }
  }
}
EOF
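
# (Optional check, not in the original script) Have Logstash parse the pipeline
# and exit, to catch syntax errors early (slow: it starts a JVM):
#${Logstash_Dir}/bin/logstash -f ${Logstash_Dir}/config/nginx.conf --config.test_and_exit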

# Create the Filebeat log directory
[ -d ${Filebeat_Dir}/logs ] || mkdir ${Filebeat_Dir}/logs

# Give the elk user ownership of Filebeat and Logstash
chown -R ${User}.${User} ${Filebeat_Dir}
chown -R ${User}.${User} ${Logstash_Dir}

# Start Filebeat
su ${User} -c "cd ${Filebeat_Dir} && nohup ./filebeat -e -c filebeat.yml >>${Filebeat_Dir}/logs/filebeat.log 2>&1 &"

# Start Logstash
su ${User} -c "cd ${Logstash_Dir}/bin && nohup ./logstash -f ${Logstash_Dir}/config/nginx.conf >/dev/null 2>&1 &"

# Block until Logstash is listening
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for Logstash to start...\033[0m"
# Check the Logstash monitoring API port
netstat -lntup |grep "9600" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
${Code}
done

echo -e "\033[32mThe ELK log platform is ready... \nOpen http://${IPADDR} in your browser\nUsername: elastic\nPassword: www.yangxingzhen.com\033[0m"
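
# (Optional check, not in the original script) Once some requests have hit
# Nginx, the daily indices should show up:
#curl -u ${Elasticsearch_User}:${Elasticsearch_Passwd} "http://${Elasticsearch_IP}:9200/_cat/indices/nginx_*?v"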

Run the script as follows:

[root@localhost ~]# sh install_elk_filebeat_kafka.sh
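
The run takes a while, since Nginx is compiled from source; teeing the output to a file (a convenient habit, not part of the original post) makes troubleshooting easier afterwards:

[root@localhost ~]# sh install_elk_filebeat_kafka.sh 2>&1 | tee elk_install.log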

[Screenshots of the script execution from the original post are omitted here.]

From: https://blog.51cto.com/u_12018693/5980610
