
Introduction to ELK and Integration with Spring Boot

Posted: 2024-04-02 11:45:10


Logstash

Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into destinations of your choice, cleansing and democratizing all your data for diverse advanced downstream analytics and visualization use cases.

Logstash is generally used together with Elasticsearch and Kibana, serving as the data-processing stage of the stack.

Official introduction: https://www.elastic.co/guide/en/logstash/7.10/introduction.html

Installing Logstash

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cd /etc/yum.repos.d/
vi logstash.repo

Then put the following content into logstash.repo:

[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

After that, Logstash can be installed:

sudo yum install logstash

Logstash Directory Layout

Taking version 7.17.19 (the latest 7.x release at the time of writing) as an example:

$ whereis logstash
logstash: /etc/logstash /usr/share/logstash
$ cd /usr/share/logstash/
$ ls
bin  CONTRIBUTORS  data  Gemfile  Gemfile.lock  jdk  lib  LICENSE.txt  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack
$ cd /etc/logstash/
$ ls
conf.d  jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options

In summary, the layout can be described as follows:

| Type | Description | Default location | Setting |
| --- | --- | --- | --- |
| home | Home directory of the Logstash installation | /usr/share/logstash | |
| bin | Binary scripts, including logstash to start Logstash and logstash-plugin to install plugins | /usr/share/logstash/bin | |
| settings | Configuration files, including logstash.yml, jvm.options, and startup.options | /etc/logstash | path.settings |
| conf | Logstash pipeline configuration files | /etc/logstash/conf.d/*.conf | /etc/logstash/pipelines.yml defines where the conf files live |
| logs | Log files | /var/log/logstash | path.logs changes the log file location |
| plugins | Local, non-Ruby-Gem plugin files, one subdirectory per plugin; recommended for development only | /usr/share/logstash/plugins | path.plugins |
| data | Data files used by Logstash and its plugins for any persistence needs | /var/lib/logstash | path.data |

Starting and Testing Logstash

Test the Logstash installation by running the simplest possible Logstash pipeline.

A Logstash pipeline has two required elements, input and output, and one optional element, filter. Input plugins consume data from a source, filter plugins modify the data as you specify, and output plugins write the data to a destination.

(figure: basic Logstash pipeline — inputs → filters → outputs)

To test the Logstash installation, run the most basic Logstash pipeline. For example:

cd /usr/share/logstash
bin/logstash -e 'input { stdin { } } output { stdout {} }'

Once Logstash has started, type some characters into the console, and Logstash echoes the event back to the console:

(screenshot: the typed line echoed back as a structured event)
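The -e flag accepts a filter stage as well. A minimal sketch that tags each stdin line before printing it (the mutate filter and the note field are just for illustration):

```conf
bin/logstash -e 'input { stdin { } } filter { mutate { add_field => { "note" => "demo" } } } output { stdout { codec => rubydebug } }'
```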

Parsing Logs with Filebeat + Logstash

In the example above, we created a basic Logstash pipeline to test the Logstash setup. In practice, a Logstash pipeline is more complex: it typically has one or more input, filter, and output plugins. For example, suppose server A runs Logstash/ES/Kibana, but the log files to collect (nginx logs, Apache logs) live on servers B, C, D, and so on. In that case you can configure Filebeat to ship log lines to Logstash. Compared with Logstash, Filebeat uses very few host resources to collect and forward logs, and the Beats input plugin (which must also be configured on the Logstash side) minimizes the resource demands on the Logstash instance.

Installing Filebeat

To install Filebeat on the data source machine, taking CentOS as an example:


$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.13.0-x86_64.rpm
$ sudo rpm -vi filebeat-8.13.0-x86_64.rpm
# list the available modules
$ filebeat modules list

After installing Filebeat, it needs to be configured. Open the filebeat.yml file in the Filebeat installation directory (/etc/filebeat) and replace its contents with the following lines. Make sure paths points to the sample Apache log file logstash-tutorial.log you downloaded earlier:

filebeat.inputs:
- type: log
  paths:
    - /path/to/file/logstash-tutorial.log 
output.logstash:
  hosts: ["localhost:5044"]

Switch to the Filebeat home directory and run Filebeat:

cd /usr/share/filebeat/bin
sudo ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

At this point, Logstash is not yet configured and running, so Filebeat's errors can be ignored for now. Once Logstash is configured later, restart Filebeat to test. To force Filebeat to read the log file from the beginning, shut down Filebeat (press Ctrl+C), delete the registry file, and restart Filebeat with the following commands:

$ cd /usr/share/filebeat/bin
$ sudo rm -rf data/registry
$ sudo ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

Configuring Logstash for Filebeat Input

Next, create a Logstash pipeline configuration that uses the Beats input plugin to receive events from Beats.

$ cd /usr/share/logstash
$ vi first-pipeline.conf

Paste the following into first-pipeline.conf. Note that if Elasticsearch is not on the same host, change the address to the corresponding IP:Port:


input {
    beats {
        port => "5044"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
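The grok filter above applies the built-in COMBINEDAPACHELOG pattern to each message, splitting an Apache combined-format access-log line into named fields such as clientip, verb, and response. As a rough illustration of what that pattern extracts, here is a simplified Python approximation — the regex below is a sketch, not the exact grok pattern, and the sample line is in Apache combined format like the tutorial data:

```python
import re

# Simplified approximation of grok's COMBINEDAPACHELOG pattern
# (the real pattern is more permissive about malformed requests).
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] '
        '"GET /presentations/logstash-monitorama-2013/images/kibana-search.png HTTP/1.1" '
        '200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" '
        '"Mozilla/5.0"')

fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
# → 83.149.9.216 GET 200
```

In the real pipeline, grok additionally keeps the raw line in message and tags unmatched events with _grokparsefailure.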

To verify that the configuration works, run the following command to test it:

$ cd /usr/share/logstash
$ bin/logstash -f first-pipeline.conf --config.test_and_exit

You should then see output like the following; the final Config Validation Result: OK. line indicates the test passed:

...
[WARN ] 2024-03-28 15:30:38.715 [LogStash::Runner] beats - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
Configuration OK
[INFO ] 2024-03-28 15:30:38.862 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Once the configuration file passes the test, start Logstash with the following command:

$ bin/logstash -f first-pipeline.conf --config.reload.automatic

The --config.reload.automatic option enables automatic config reloading, so you don't have to stop and restart Logstash every time you modify the configuration file.

Finally, watch the console for Logstash's startup messages:

[root@node01 logstash]# bin/logstash -f first-pipeline.conf --config.reload.automatic
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2024-03-28 15:52:14.821 [main] runner - Starting Logstash {"logstash.version"=>"7.17.19", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.22+7 on 11.0.22+7 +indy +jit [linux-x86_64]"}
[INFO ] 2024-03-28 15:52:14.828 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[WARN ] 2024-03-28 15:52:15.055 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-03-28 15:52:15.878 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2024-03-28 15:52:16.621 [Converge PipelineAction::Create<main>] Reflections - Reflections took 52 ms to scan 1 urls, producing 119 keys and 419 values 
[WARN ] 2024-03-28 15:52:17.152 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2024-03-28 15:52:17.182 [Converge PipelineAction::Create<main>] beats - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2024-03-28 15:52:17.276 [Converge PipelineAction::Create<main>] geoip - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2024-03-28 15:52:17.293 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2024-03-28 15:52:17.326 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2024-03-28 15:52:17.379 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2024-03-28 15:52:17.570 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2024-03-28 15:52:17.724 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2024-03-28 15:52:17.739 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (7.10.0) {:es_version=>7}
[WARN ] 2024-03-28 15:52:17.742 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2024-03-28 15:52:17.807 [Ruby-0-Thread-10: :1] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[INFO ] 2024-03-28 15:52:17.810 [[main]-pipeline-manager] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[INFO ] 2024-03-28 15:52:17.865 [Ruby-0-Thread-10: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2024-03-28 15:52:17.923 [Ruby-0-Thread-10: :1] elasticsearch - Installing Elasticsearch template {:name=>"logstash"}
[INFO ] 2024-03-28 15:52:18.403 [Ruby-0-Thread-10: :1] elasticsearch - Created rollover alias {:name=>"<logstash-{now/d}-000001>"}
[INFO ] 2024-03-28 15:52:18.422 [Ruby-0-Thread-10: :1] elasticsearch - Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} {:name=>"logstash-policy"}
[INFO ] 2024-03-28 15:52:18.956 [[main]-pipeline-manager] downloadmanager - new database version detected? false
[INFO ] 2024-03-28 15:52:19.056 [[main]-pipeline-manager] databasemanager - By not manually configuring a database path with `database =>`, you accepted and agreed MaxMind EULA. For more details please visit https://www.maxmind.com/en/geolite2/eula
[INFO ] 2024-03-28 15:52:19.056 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/data/plugins/filters/geoip/1711611578/GeoLite2-City.mmdb"}
[WARN ] 2024-03-28 15:52:19.078 [[main]-pipeline-manager] grok - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2024-03-28 15:52:19.227 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/first-pipeline.conf"], :thread=>"#<Thread:0x169d9587 run>"}
[INFO ] 2024-03-28 15:52:19.834 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.61}
[INFO ] 2024-03-28 15:52:19.860 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2024-03-28 15:52:19.881 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2024-03-28 15:52:19.974 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2024-03-28 15:52:19.984 [[main]<beats] Server - Starting server on port: 5044

Check the Elasticsearch indices; seeing something like logstash-2024.03.28-000001 means Logstash has collected log data and sent it to ES:

$ curl 'localhost:9200/_cat/indices?v'
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2024.03.28-000001      hdm0BRwcR1meto90wLe20g   1   1          0            0       208b           208b
green  open   .apm-custom-link                SrBPEdFBRvik8_DK9NJoQQ   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1          9z9EfaY2QF2VN6UD5HXFJA   1   0          5         4144    904.6kb        904.6kb
green  open   kibana_sample_data_ecommerce    g5A2oEGuS4KYJKuG1y6C2g   1   0       4675            0        4mb            4mb
green  open   .apm-agent-configuration        HqI3PsIVSqyUT-o_Qb-q6w   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.10.0-000001 Ihcvnmq4RBycyEP1TnVFmg   1   0          2            0       11kb           11kb
green  open   .async-search                   b0h1k_JST7GSCKphozwgkQ   1   0          0            0       228b           228b
green  open   .kibana_1                       b2qz4UxyQt6rMrHsG_BO3Q   1   0         84            8     11.3mb         11.3mb

View the log contents:

curl -XGET 'localhost:9200/logstash-2024.03.28-000001/_search'

curl -XGET 'localhost:9200/logstash-2024.03.28-000001/_search?pretty&q=xxx'

Logstash's TCP Input Plugin

The TCP input plugin reads events over a TCP socket. As with the stdin and file inputs, each event is assumed to be one line of text. Depending on mode, it either accepts connections from clients or connects to a server. Log4j2 can send JSON over a socket, which we can combine with the TCP input to accept logs.

The configuration is as follows:

$ cd /etc/logstash/conf.d
$ vi tcp-logstash.conf
input{
    tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 5000
        codec => json_lines
    }
}
output{
    stdout{
       codec => rubydebug
    }
    elasticsearch {
       hosts => ["localhost:9200"]
    }
}

After that, restart Logstash to test:

$ service logstash restart
# send a JSON event to Logstash's listening port 5000, then check it in ES
$ echo '{"message": "Hello, Logstash!", "level": "INFO"}' | nc localhost 5000
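The echo | nc one-liner above can also be reproduced programmatically. A minimal Python sketch (host and port match the tcp input config above; the key detail is that the json_lines codec expects one newline-terminated JSON document per event):

```python
import json
import socket

def encode_event(event: dict) -> bytes:
    # The json_lines codec expects one JSON document per line,
    # terminated by a newline.
    return (json.dumps(event) + "\n").encode("utf-8")

def send_event(event: dict, host: str = "localhost", port: int = 5000) -> None:
    # Open a TCP connection to the Logstash tcp input and write one event.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_event(event))

payload = encode_event({"message": "Hello, Logstash!", "level": "INFO"})
```

Calling send_event({"message": "Hello, Logstash!", "level": "INFO"}) then behaves like the nc command.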

Verify the data:

$ curl -XGET 'localhost:9200/logstash-2024.03.28-000001/_search?pretty&q=connecting'
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    },
    "max_score" : 10.967979,
    "hits" : [
      {
        "_index" : "logstash-2024.03.28-000001",
        "_type" : "_doc",
        "_id" : "R352hI4Bz5Eh2WrJgx5n",
        "_score" : 10.967979,
        "_source" : {
          "@version" : "1",
          "host" : "localhost",
          "@timestamp" : "2024-03-28T09:49:05.523Z",
          "message" : "Hello,this is tcp connecting to Logstash!",
          "level" : "INFO",
          "port" : 36786,
          "tags" : [
            "_grokparsefailure",
            "_geoip_lookup_failure"
          ]
        }
      },
      {
        "_index" : "logstash-2024.03.28-000001",
        "_type" : "_doc",
        "_id" : "SH52hI4Bz5Eh2WrJgx5w",
        "_score" : 10.967979,
        "_source" : {
          "@version" : "1",
          "host" : "localhost",
          "@timestamp" : "2024-03-28T09:49:05.523Z",
          "message" : "Hello,this is tcp connecting to Logstash!",
          "level" : "INFO",
          "port" : 36786,
          "tags" : [
            "_grokparsefailure",
            "_geoip_lookup_failure"
          ]
        }
      }
    ]
  }
}
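The search response above is plain JSON, so extracting fields from it is straightforward. A small sketch (the document below is abridged from the response shown above):

```python
import json

# Abridged copy of the _search response shown above.
resp = json.loads("""
{
  "hits": {
    "total": {"value": 2, "relation": "eq"},
    "hits": [
      {
        "_index": "logstash-2024.03.28-000001",
        "_source": {
          "message": "Hello,this is tcp connecting to Logstash!",
          "level": "INFO"
        }
      }
    ]
  }
}
""")

total = resp["hits"]["total"]["value"]  # number of matching documents
messages = [h["_source"]["message"] for h in resp["hits"]["hits"]]
```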

Shipping Spring Boot Logs to ES

Spring Boot connects to Logstash over TCP, so only the ELK environment is needed (no Filebeat). Configure Logstash as follows:

[root@node01 conf.d]# cat tcp-logstash.conf
input{
    tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 5000
        codec => json_lines
    }
}
output{
    stdout{
       codec => rubydebug
    }
    elasticsearch {
       hosts => ["localhost:9200"]
    }
}

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.18</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo-logstash</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo-logstash</name>
    <description>demo-logstash</description>
    <properties>
        <java.version>8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Logstash Appender for Logback -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>5.3</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <image>
                        <builder>paketobuildpacks/builder-jammy-base:latest</builder>
                    </image>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />

    <property name="LOGSTASH_HOST" value="${LOGSTASH_HOST:-${DOCKER_HOST:-192.168.137.21}}" />
    <property name="LOGSTASH_PORT" value="${LOGSTASH_PORT:-5000}" />

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${LOGSTASH_HOST}:${LOGSTASH_PORT}</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
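For production use, the TCP appender can be hardened a little. A sketch using options supported by logstash-logback-encoder's LogstashTcpSocketAppender — note the backup-logstash host is hypothetical and the delay value is illustrative:

```xml
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- primary destination, plus a hypothetical fallback host; the
         appender fails over between the listed destinations -->
    <destination>${LOGSTASH_HOST}:${LOGSTASH_PORT}</destination>
    <destination>backup-logstash:5000</destination>
    <!-- wait before retrying a broken connection -->
    <reconnectionDelay>10 seconds</reconnectionDelay>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
```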

application.properties

# Logstash appender configuration
logging.config=classpath:logback-spring.xml

BasicController

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class BasicController {
    private static final Logger logger = LoggerFactory.getLogger(BasicController.class);

    @RequestMapping("/hello")
    public String hello(@RequestParam(name = "name", defaultValue = "unknown user") String name) {
        logger.info("This is a sample log message sent to Logstash, {}", name);
        return "Hello " + name;
    }
}

After that, call the hello endpoint and the log messages appear in ES.

From: https://www.cnblogs.com/chq3272991/p/18110248
