1. Download Logstash
To keep it short: I'm using version 7.17.6.
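Since step 3 runs Logstash in Docker, the simplest way to "download" it is to pull the matching image. A minimal sketch, assuming the Docker Hub logstash image (the official registry tag docker.elastic.co/logstash/logstash:7.17.6 works the same way):

docker pull logstash:7.17.6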
2. Add the config file
Add logstash.conf under logstash/pipeline:
input {
  jdbc {
    # Connection string
    jdbc_connection_string => "jdbc:mysql://192.168.1.1:3306/kintech-cloud-bo?characterEncoding=UTF-8&useSSL=false"
    # Username
    jdbc_user => "root"
    # Password
    jdbc_password => "xxxx"
    # Location of the MySQL driver inside the Docker container
    jdbc_driver_library => "/app/mysql.jar"
    # Driver class
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    # Query; :sql_last_value is Logstash's built-in placeholder
    statement => "SELECT * FROM student where update_time>:sql_last_value"
    # Sync schedule: every minute
    schedule => "* * * * *"
    # Track a column value (incremental sync)
    use_column_value => true
    # Type of the tracked column (update_time) (incremental sync)
    tracking_column_type => "timestamp"
    # Name of the tracked column (incremental sync)
    tracking_column => "update_time"
  }
}

output {
  elasticsearch {
    # ES address; don't use localhost or 127.0.0.1
    hosts => "192.168.1.2:9200"
    # Index name
    index => "bo_sop_content"
  }
}
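The incremental sync relies on the source table having an update_time column that is bumped on every write; Logstash persists :sql_last_value between runs (by default in a .logstash_jdbc_last_run file in the Logstash user's home directory), so each run only fetches rows changed since the last one. As a minimal sketch of what the table might look like (a hypothetical schema; the real kintech-cloud-bo table will differ):

-- Hypothetical student table for illustration only
CREATE TABLE student (
  id          BIGINT PRIMARY KEY AUTO_INCREMENT,
  name        VARCHAR(64),
  -- bumped on every write so the update_time > :sql_last_value filter picks the row up
  update_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  -- index keeps the incremental query cheap as the table grows
  KEY idx_update_time (update_time)
);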
3. Start the Docker container
Mount logstash.conf, logstash.yml (no changes needed there), and mysql.jar as data volumes:
docker run -d \
  -v /root/docker/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v /root/docker/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v /root/lib/mysql.jar:/app/mysql.jar \
  --name=logstash logstash:7.17.6
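Once the container is up, tailing its logs is a quick way to confirm the pipeline loaded and the scheduled query fires; the executed SELECT should show up roughly once a minute:

docker logs -f logstash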
4. Set the ES analyzer
# Configure the analyzer
PUT 192.168.1.247:9200/default
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english_analyzer": {
          "type": "ik_smart",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}
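Note that the settings call above only defines an analyzer; for it to affect search, the text fields must actually use an IK analyzer in the index mapping. A minimal sketch for the two fields queried in step 5 (field names assumed from that query; this must be run before data is indexed, since mappings on existing fields can't be changed in place):

PUT /bo_sop_content
{
  "mappings": {
    "properties": {
      "sop_title":   { "type": "text", "analyzer": "ik_smart" },
      "sop_content": { "type": "text", "analyzer": "ik_smart" }
    }
  }
}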
5. Query in Kibana
GET /bo_sop_content/_search
{
  "query": {
    "multi_match": {
      "query": "归档,hello,sop content",
      "fields": [ "sop_title", "sop_content" ]
    }
  }
}
Two documents match the query.
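If the search comes back empty instead, it's worth checking whether the sync has populated the index at all before debugging the query itself; a document count is the quickest probe:

GET /bo_sop_content/_count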