
One-click automated deployment script for an ELK + Filebeat + Nginx + Redis log platform on Linux


This is an automated script that deploys an ELK + Filebeat + Nginx + Redis log platform on Linux with a single command. Feel free to use it as a reference; the script is shown below.

Environment preparation

Operating system: CentOS Linux release 7.8.2003

Software versions

Elasticsearch: elasticsearch-7.5.1-linux-x86_64.tar.gz

Kibana: kibana-7.5.1-linux-x86_64.tar.gz

Logstash: logstash-7.5.1.tar.gz

Filebeat: filebeat-7.5.1-linux-x86_64.tar.gz

JDK: jdk-11.0.1_linux-x64_bin.tar.gz

Nginx: nginx-1.18.0.tar.gz

Redis: redis-5.0.7.tar.gz

Script features

1) One-click installation of Elasticsearch, Kibana, Logstash, and Filebeat

2) One-click installation of Redis

3) One-click installation of Nginx

4) Automatic creation of the nginx_access and nginx_error indices

5) Automatic configuration of the Elasticsearch user password
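
Before running the script, it can help to confirm the host matches the environment above. The commands below are an optional, illustrative pre-flight check; they are not part of the original script, and the port list simply reflects what the stack listens on after installation:

#!/bin/bash
# Optional pre-flight check (illustrative only; not part of the deployment script)
# The script must run as root on the CentOS 7.x release it was written for
[ "$(id -u)" -eq 0 ] || { echo "Please run as root"; exit 1; }
grep -qs "CentOS Linux release 7" /etc/redhat-release || echo "Warning: the script targets CentOS 7.x"

# netstat (net-tools) is used by the script's wait loops
[ -f /usr/bin/netstat ] || yum -y install net-tools >/dev/null

# Ports the stack listens on should be free:
# 80 (Nginx), 5601 (Kibana), 6379 (Redis), 9200/9300 (Elasticsearch), 9600 (Logstash)
netstat -lntup | egrep -w "80|5601|6379|9200|9300|9600" && echo "Warning: one or more required ports are already in use"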

[root@localhost ~]# vim install_elk_filebeat_redis.sh


#!/bin/bash
#Date:2019-5-20 13:14:00
#Author Blog:
# https://www.yangxingzhen.com
# https://www.i7ti.cn
#Author WeChat:
# WeChat official account: 小柒博客
#Author mirrors site:
# https://mirrors.yangxingzhen.com
#About the Author
# BY:YangXingZhen
# Mail:[email protected]
#Auto Install ELK log analysis platform

User="elk"
Elasticsearch_User="elastic"
Elasticsearch_Passwd="www.yangxingzhen.com"
IPADDR=$(hostname -I |awk '{print $1}')
Elasticsearch_DIR="/data/elasticsearch"
Kafka_IP=$(hostname -I |awk '{print $1}')
Zookeeper_IP=$(hostname -I |awk '{print $1}')
Elasticsearch_IP=$(hostname -I |awk '{print $1}')

# Define JDK path variables
JDK_URL=https://mirrors.yangxingzhen.com/jdk
JDK_File=jdk-11.0.1_linux-x64_bin.tar.gz
JDK_File_Dir=jdk-11.0.1
JDK_Dir=/usr/local/jdk-11.0.1

# Define Redis path variables
Redis_URL=http://download.redis.io/releases
Redis_File=redis-5.0.7.tar.gz
Redis_File_Dir=redis-5.0.7
Redis_Prefix=/usr/local/redis

# Define Nginx path variables
Nginx_URL=http://nginx.org/download
Nginx_File=nginx-1.18.0.tar.gz
Nginx_File_Dir=nginx-1.18.0
Nginx_Dir=/usr/local/nginx

# Define Elasticsearch path variables
Elasticsearch_URL=https://artifacts.elastic.co/downloads/elasticsearch
Elasticsearch_File=elasticsearch-7.5.1-linux-x86_64.tar.gz
Elasticsearch_File_Dir=elasticsearch-7.5.1
Elasticsearch_Dir=/usr/local/elasticsearch

# Define Filebeat path variables
Filebeat_URL=https://artifacts.elastic.co/downloads/beats/filebeat
Filebeat_File=filebeat-7.5.1-linux-x86_64.tar.gz
Filebeat_File_Dir=filebeat-7.5.1-linux-x86_64
Filebeat_Dir=/usr/local/filebeat

# Define Logstash path variables
Logstash_URL=https://artifacts.elastic.co/downloads/logstash
Logstash_File=logstash-7.5.1.tar.gz
Logstash_File_Dir=logstash-7.5.1
Logstash_Dir=/usr/local/logstash

# Define Kibana path variables
Kibana_URL=https://artifacts.elastic.co/downloads/kibana
Kibana_File=kibana-7.5.1-linux-x86_64.tar.gz
Kibana_File_Dir=kibana-7.5.1-linux-x86_64
Kibana_Dir=/usr/local/kibana

# Configure kernel parameters and resource limits
cat >>/etc/security/limits.conf <<EOF
* soft nofile 65537
* hard nofile 65537
* soft nproc 65537
* hard nproc 65537
EOF

if [ $(grep -wc "4096" /etc/security/limits.d/20-nproc.conf) -eq 0 ];then
cat >>/etc/security/limits.d/20-nproc.conf <<EOF
* soft nproc 4096
EOF
fi

cat >/etc/sysctl.conf <<EOF
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_fin_timeout = 120
net.ipv4.tcp_keepalive_time = 120
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_tw_buckets = 30000
fs.file-max=655350
vm.max_map_count = 262144
net.core.somaxconn= 65535
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6=1
EOF

# Apply the sysctl settings
sysctl -p >/dev/null

# Create the elk user
[ $(grep -wc "elk" /etc/passwd) -eq 0 ] && useradd elk >/dev/null

# Install the JDK environment
java -version >/dev/null 2>&1
if [ $? -ne 0 ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${JDK_URL}/${JDK_File}
tar xf ${JDK_File}
mv ${JDK_File_Dir} ${JDK_Dir}
cat >>/etc/profile <<EOF
export JAVA_HOME=${JDK_Dir}
export CLASSPATH=\$CLASSPATH:\$JAVA_HOME/lib:\$JAVA_HOME/jre/lib
export PATH=\$JAVA_HOME/bin:\$JAVA_HOME/jre/bin:\$PATH:\$HOME/bin
EOF
fi

# Load environment variables
source /etc/profile >/dev/null

# Install Redis
if [ ! -d ${Redis_Prefix} ];then
[ -f /usr/bin/openssl ] || yum -y install openssl openssl-devel
yum -y install wget gcc gcc-c++
wget -c ${Redis_URL}/${Redis_File}
tar zxf ${Redis_File}
\mv ${Redis_File_Dir} ${Redis_Prefix}
cd ${Redis_Prefix} && make
if [ $? -eq 0 ];then
echo -e "\033[32mThe Redis Install Success...\033[0m"
else
echo -e "\033[31mThe Redis Install Failed...\033[0m"
fi
else
echo -e "\033[31mThe Redis has been installed...\033[0m"
exit 1
fi

# Generate a random Redis password
Passwd=$(openssl rand -hex 12)

# Config Redis
ln -sf ${Redis_Prefix}/src/redis-* /usr/bin
sed -i "s/127.0.0.1/0.0.0.0/g" ${Redis_Prefix}/redis.conf
sed -i "/daemonize/s/no/yes/" ${Redis_Prefix}/redis.conf
sed -i "s/dir .*/dir \/data\/redis/" ${Redis_Prefix}/redis.conf
sed -i "s/logfile .*/logfile \/usr\/local\/redis\/redis.log/" ${Redis_Prefix}/redis.conf
sed -i '/appendonly/s/no/yes/' ${Redis_Prefix}/redis.conf
sed -i "s/# requirepass foobared/requirepass ${Passwd}/" ${Redis_Prefix}/redis.conf
echo never > /sys/kernel/mm/transparent_hugepage/enabled
sysctl vm.overcommit_memory=1

# Create data directory
[ -d /data/redis ] || mkdir -p /data/redis

# Create the systemd unit file
cat >/usr/lib/systemd/system/redis.service <<EOF
[Unit]
Description=Redis Server
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/bin/redis-server ${Redis_Prefix}/redis.conf
ExecStop=/usr/bin/redis-cli -h 127.0.0.1 -p 6379 -a ${Passwd} shutdown
User=root
Group=root

[Install]
WantedBy=multi-user.target
EOF

# Enable Redis at boot and start it
systemctl daemon-reload
systemctl enable redis
systemctl start redis

# Install Elasticsearch
if [ ! -d ${Elasticsearch_Dir} ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${Elasticsearch_URL}/${Elasticsearch_File}
tar xf ${Elasticsearch_File}
mv ${Elasticsearch_File_Dir} ${Elasticsearch_Dir}
else
echo -e "\033[31mThe Elasticsearch is already installed...\033[0m"
exit 1
fi

# Install Kibana
if [ ! -d ${Kibana_Dir} ];then
# Install Package
[ -f /usr/bin/wget ] || yum -y install wget >/dev/null
wget -c ${Kibana_URL}/${Kibana_File}
tar xf ${Kibana_File}
mv ${Kibana_File_Dir} ${Kibana_Dir}
else
echo -e "\033[31mThe Kibana is already installed...\033[0m"
exit 1
fi

# Configure Elasticsearch
mkdir -p ${Elasticsearch_DIR}/{data,logs}
cat >${Elasticsearch_Dir}/config/elasticsearch.yml <<EOF
# Node name
node.name: es-master
# Data directory (created above)
path.data: ${Elasticsearch_DIR}/data
# Log directory (created above)
path.logs: ${Elasticsearch_DIR}/logs
# Node IP
network.host: ${Elasticsearch_IP}
# Transport TCP port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# List of master-eligible nodes; adjust if there are multiple master nodes
cluster.initial_master_nodes: ["${Elasticsearch_IP}:9300"]
# Whether this node may act as master
node.master: true
# Whether this node stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
# X-Pack security settings
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
EOF

# Configure Kibana
cat >${Kibana_Dir}/config/kibana.yml <<EOF
server.port: 5601
server.host: "${Elasticsearch_IP}"
elasticsearch.hosts: ["http://${Elasticsearch_IP}:9200"]
elasticsearch.username: "${Elasticsearch_User}"
elasticsearch.password: "${Elasticsearch_Passwd}"
logging.dest: ${Kibana_Dir}/logs/kibana.log
i18n.locale: "zh-CN"
EOF

# Create the Kibana log directory
[ -d ${Kibana_Dir}/logs ] || mkdir ${Kibana_Dir}/logs

# Set ownership: the elk user manages Elasticsearch; Kibana runs as root
chown -R ${User}.${User} ${Elasticsearch_Dir}
chown -R ${User}.${User} ${Elasticsearch_DIR}
chown -R root.root ${Kibana_Dir}

# Start Elasticsearch (manual alternative, kept commented out)
#su ${User} -c "source /etc/profile >/dev/null && ${Elasticsearch_Dir}/bin/elasticsearch -d"

# Create the systemd unit file
cat >/usr/lib/systemd/system/elasticsearch.service <<EOF
[Unit]
Description=elasticsearch
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
LimitCORE=infinity
LimitNOFILE=655360
LimitNPROC=655360
User=${User}
Group=${User}
PIDFile=${Elasticsearch_Dir}/logs/elasticsearch.pid
ExecStart=${Elasticsearch_Dir}/bin/elasticsearch
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID
RestartSec=30
Restart=always
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# Start the Elasticsearch service
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

# Wait until the Elasticsearch service is up before continuing
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for the Elasticsearch service to start...\033[0m"
# Check the Elasticsearch service ports
netstat -lntup |egrep "9200|9300" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
${Code}
done

# Set the Elasticsearch built-in user passwords
# (exp_continue re-enters the expect block, so a single Enter/Reenter pattern pair answers every prompt)
cat >/tmp/config_elasticsearch_passwd.exp <<EOF
spawn su ${User} -c "source /etc/profile >/dev/null && ${Elasticsearch_Dir}/bin/elasticsearch-setup-passwords interactive"
set timeout 60
expect {
-timeout 20
"y/N" {
send "y\n"
exp_continue
}
"Enter password *:" {
send "${Elasticsearch_Passwd}\n"
exp_continue
}
"Reenter password *:" {
send "${Elasticsearch_Passwd}\n"
exp_continue
}
}
EOF

[ -f /usr/bin/expect ] || yum -y install expect >/dev/null
expect /tmp/config_elasticsearch_passwd.exp

# Create the systemd unit file
cat >/usr/lib/systemd/system/kibana.service <<EOF
[Unit]
Description=kibana
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
PIDFile=/var/run/kibana.pid
ExecStart=/usr/local/kibana/bin/kibana --allow-root
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID
PrivateTmp=false

[Install]
WantedBy=multi-user.target
EOF

# Start Kibana
systemctl daemon-reload
systemctl enable kibana
systemctl start kibana

# Wait until the Kibana service is up before continuing
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for the Kibana service to start...\033[0m"
# Check the Kibana login page HTTP status code
CODE=$(curl -s -w "%{http_code}" -o /dev/null http://${IPADDR}:5601/login)
if [ ${CODE} -eq 200 ];then
Code="break"
fi
${Code}
done

# Install Filebeat
if [ ! -d ${Filebeat_Dir} ];then
wget -c ${Filebeat_URL}/${Filebeat_File}
tar xf ${Filebeat_File}
mv ${Filebeat_File_Dir} ${Filebeat_Dir}
else
echo -e "\033[31mThe Filebeat is already installed...\033[0m"
exit 1
fi

# Install Logstash
if [ ! -d ${Logstash_Dir} ];then
wget -c ${Logstash_URL}/${Logstash_File}
tar xf ${Logstash_File}
mv ${Logstash_File_Dir} ${Logstash_Dir}
else
echo -e "\033[31mThe Logstash is already installed...\033[0m"
exit 1
fi

# Install Nginx Soft
if [ ! -d ${Nginx_Dir} ];then
# Install Package
yum -y install pcre pcre-devel openssl openssl-devel gcc gcc-c++
wget -c ${Nginx_URL}/${Nginx_File}
tar zxf ${Nginx_File}
cd ${Nginx_File_Dir}
# Obscure the Nginx version string in src/core/nginx.h
sed -i 's/1.18.0/ /;s/nginx\//nginx/' src/core/nginx.h
useradd -s /sbin/nologin www
./configure --prefix=${Nginx_Dir} \
--user=www \
--group=www \
--with-http_ssl_module \
--with-http_stub_status_module \
--with-stream
if [ $? -eq 0 ];then
make -j$(nproc) && make install
echo -e "\033[32mThe Nginx Install Success...\033[0m"
else
echo -e "\033[31mThe Nginx Install Failed...\033[0m"
exit 1
fi
else
echo -e "\033[31mThe Nginx is already installed...\033[0m"
exit 1
fi

#Config Nginx
ln -sf ${Nginx_Dir}/sbin/nginx /usr/sbin
cat >${Nginx_Dir}/conf/nginx.conf <<EOF
user www www;
worker_processes auto;
pid /usr/local/nginx/logs/nginx.pid;
events {
use epoll;
worker_connections 10240;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
log_format json '{"@timestamp":"\$time_iso8601",'
'"host":"\$server_addr",'
'"clientip":"\$remote_addr",'
'"remote_user":"\$remote_user",'
'"request":"\$request",'
'"http_user_agent":"\$http_user_agent",'
'"size":\$body_bytes_sent,'
'"responsetime":\$request_time,'
'"upstreamtime":"\$upstream_response_time",'
'"upstreamhost":"\$upstream_addr",'
'"http_host":"\$host",'
'"requesturi":"\$request_uri",'
'"url":"\$uri",'
'"domain":"\$host",'
'"xff":"\$http_x_forwarded_for",'
'"referer":"\$http_referer",'
'"status":"\$status"}';
access_log logs/access.log json;
error_log logs/error.log warn;
sendfile on;
tcp_nopush on;
keepalive_timeout 120;
tcp_nodelay on;
server_tokens off;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 64k;
gzip_http_version 1.1;
gzip_comp_level 4;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
large_client_header_buffers 4 4k;
client_header_buffer_size 4k;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://${IPADDR}:5601;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
}
EOF

# Create the systemd unit file
cat >/usr/lib/systemd/system/nginx.service <<EOF
[Unit]
Description=Nginx Server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=${Nginx_Dir}/logs/nginx.pid
ExecStart=${Nginx_Dir}/sbin/nginx -c ${Nginx_Dir}/conf/nginx.conf
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s TERM \$MAINPID

[Install]
WantedBy=multi-user.target
EOF

# Start Nginx
systemctl daemon-reload
systemctl enable nginx
systemctl start nginx

# Configure Filebeat
cat >${Filebeat_Dir}/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${Nginx_Dir}/logs/access.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}'
    negate: true
    match: after
  fields:
    logtype: nginx_access
- type: log
  enabled: true
  paths:
    - ${Nginx_Dir}/logs/error.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}'
    negate: true
    match: after
  fields:
    logtype: nginx_error
output.redis:
  enabled: true
  hosts: ["${IPADDR}:6379"]
  password: "${Passwd}"
  key: "all-access-log"
  db: 0
  timeout: 10
EOF

# Configure Logstash
cat >${Logstash_Dir}/config/nginx.conf <<EOF
input {
redis {
host => "${IPADDR}"
port => "6379"
db => "0"
password => "${Passwd}"
data_type => "list"
key => "all-access-log"
codec => "json"
}
}

filter {
if [fields][logtype] == "nginx_access" {
json {
source => "message"
}

grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" }
}

date {
match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "@timestamp"
}
}
if [fields][logtype] == "nginx_error" {
json {
source => "message"
}

grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" }
}

date {
match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "@timestamp"
}
}
}

output {
if [fields][logtype] == "nginx_access" {
elasticsearch {
hosts => ["${Elasticsearch_IP}:9200"]
user => "${Elasticsearch_User}"
password => "${Elasticsearch_Passwd}"
action => "index"
index => "nginx_access.log-%{+YYYY.MM.dd}"
}
}
if [fields][logtype] == "nginx_error" {
elasticsearch {
hosts => ["${Elasticsearch_IP}:9200"]
user => "${Elasticsearch_User}"
password => "${Elasticsearch_Passwd}"
action => "index"
index => "nginx_error.log-%{+YYYY.MM.dd}"
}
}
}
EOF

# Create the Filebeat log directory
[ -d ${Filebeat_Dir}/logs ] || mkdir ${Filebeat_Dir}/logs

# Grant the elk user ownership of Filebeat and Logstash
chown -R ${User}.${User} ${Filebeat_Dir}
chown -R ${User}.${User} ${Logstash_Dir}

# Start Filebeat
su ${User} -c "cd ${Filebeat_Dir} && nohup ./filebeat -e -c filebeat.yml >>${Filebeat_Dir}/logs/filebeat.log 2>&1 &"

# Start Logstash
su ${User} -c "cd ${Logstash_Dir}/bin && nohup ./logstash -f ${Logstash_Dir}/config/nginx.conf >/dev/null 2>&1 &"

# Wait until the Logstash service is up before finishing
Code=""
while sleep 10
do
echo -e "\033[32m$(date +'%F %T') Waiting for the Logstash service to start...\033[0m"
# Check the Logstash service port
netstat -lntup |grep "9600" >/dev/null
if [ $? -eq 0 ];then
Code="break"
fi
${Code}
done

echo -e "\033[32mThe ELK log analysis platform has been deployed.\nOpen http://${IPADDR} in a browser.\nUsername: ${Elasticsearch_User}\nPassword: ${Elasticsearch_Passwd}\033[0m"

How to run the script:

[root@localhost ~]# sh install_elk_filebeat_redis.sh
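
Once the script finishes, the individual services can be checked before relying on the platform. The commands below are a minimal verification sketch, not part of the original script; they assume the default paths, ports, and the elastic password configured above:

# Confirm the systemd units created by the script are running
systemctl status redis elasticsearch kibana nginx --no-pager

# Elasticsearch should answer on port 9200 with the elastic user and the password set in the script
curl -u elastic:www.yangxingzhen.com http://$(hostname -I | awk '{print $1}'):9200

# The Redis password is generated randomly and written into redis.conf by the script
grep "^requirepass" /usr/local/redis/redis.conf

# Kibana is reverse-proxied by Nginx on port 80
curl -s -o /dev/null -w "%{http_code}\n" http://$(hostname -I | awk '{print $1}')/login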

[Screenshots of the script execution process]

At this point, the one-click deployment of the ELK + Filebeat + Nginx + Redis log platform on Linux is complete.
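
To confirm that the nginx_access and nginx_error indices are actually being created, one option is to generate a little traffic and query Elasticsearch's _cat/indices API. This is only a sketch; the index names follow the Logstash output section above and the credentials are the ones set by the script:

# Generate a request so Filebeat has fresh Nginx access-log events to ship
curl -s -o /dev/null http://127.0.0.1/

# After Logstash flushes events to Elasticsearch, indices named
# nginx_access.log-YYYY.MM.dd and nginx_error.log-YYYY.MM.dd should appear
curl -u elastic:www.yangxingzhen.com "http://$(hostname -I | awk '{print $1}'):9200/_cat/indices?v" | grep nginx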

From: https://blog.51cto.com/u_12018693/5980608
