Deploying RuoYi with Docker
Overview: a back-office management system with a separated frontend and backend, built on Spring Boot, Spring Security, JWT and Vue.
Frontend: Vue
Backend: Java Spring Boot (packaged as a jar)
Project source: https://gitee.com/y_project/RuoYi-Vue
Project to deploy: RuoYi-Vue
Environment (Docker must already be installed on every host)
Host | IP | Role |
---|---|---|
frontend01 | 10.0.0.140 | frontend |
frontend02 | 10.0.0.141 | frontend |
backend01 | 10.0.0.142 | backend |
backend02 | 10.0.0.143 | backend |
storage | 10.0.0.144 | MySQL, Redis, NFS |
lb01 | 10.0.0.145 | keepalived + load balancer |
lb02 | 10.0.0.146 | keepalived + load balancer |
Backend: built with Maven, runs on JDK/JRE (1.8+)
Frontend: built with Node.js, served by Nginx
1. Storage setup
1.1 Deploy MySQL
1.1.1 Configure the MySQL 5.7 yum repository
Write the repo file on storage:
cat > /etc/yum.repos.d/mysql5.7.repo << 'EOF'
[mysql-connectors-community]
name=MySQL Connectors Community
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mysql/yum/mysql-connectors-community-el7-$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
[mysql-tools-community]
name=MySQL Tools Community
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mysql/yum/mysql-tools-community-el7-$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
[mysql-5.7-community]
name=MySQL 5.7 Community Server
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mysql/yum/mysql-5.7-community-el7-$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
EOF
1.1.2 Install MySQL 5.7
yum makecache
yum install -y mysql-community-server
1.1.3 Initialize MySQL
# start MySQL and enable it at boot
systemctl enable mysqld
systemctl start mysqld
# grab the temporary root password generated on first start
grep -i 'tempor.*password' /var/log/mysqld.log
# run the MySQL secure-installation wizard
[root@storage ~]# mysql_secure_installation
Securing the MySQL server deployment.
Enter password for user root: # enter the temporary password obtained above
The existing password for the user account root has expired. Please set a new password.
New password:
Re-enter new password:
The 'validate_password' plugin is installed on the server.
The subsequent steps will run with the existing configuration
of the plugin.
Using existing password for root.
Estimated strength of the password: 100 # password strength is already 100
# press y to set a new password, or any other key to keep the one just set
Change the password for root ? ((Press y|Y for Yes, any other key for No) :
... skipping. # (another key was pressed, so the password change is skipped)
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.
# remove anonymous users?
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.
# disallow remote root login?
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.
# remove the test database?
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
- Dropping test database...
Success.
# dropping privileges on the test database
- Removing privileges on test database...
Success.
Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.
# reload the privilege tables now?
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.
All done!
1.1.4 Create the database and grant a user
[root@storage ~]# mysql -u root -p
Enter password:
# run the following statements
create database ruoyi charset utf8mb4 collate utf8mb4_general_ci;
grant all on ruoyi.* to 'ruoyi'@'10.0.0.%' identified by 'Huawei@123';
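Optionally, verify the remote grant from one of the backend hosts before going further (this assumes a MySQL/MariaDB client is installed there; the command is just a sanity check, not part of the original steps):
# e.g. on backend01
mysql -h 10.0.0.144 -u ruoyi -p'Huawei@123' -e 'show databases;'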
1.1.5 Import the database
Clone the project and import the two SQL files from its sql directory.
# install git
yum install -y git
# clone the project
git clone https://gitee.com/y_project/RuoYi-Vue.git
# import
cd RuoYi-Vue/sql/
mysql -u root -p ruoyi < ry_20231130.sql
mysql -u root -p ruoyi < quartz.sql
# verify
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| ruoyi |
| sys |
+--------------------+
5 rows in set (0.00 sec)
mysql> select user,host from mysql.user;
+---------------+------------+
| user | host |
+---------------+------------+
| ruoyi | 10.0.0.% |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+---------------+------------+
4 rows in set (0.00 sec)
mysql> use ruoyi;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+--------------------------+
| Tables_in_ruoyi |
+--------------------------+
| QRTZ_BLOB_TRIGGERS |
| QRTZ_CALENDARS |
| QRTZ_CRON_TRIGGERS |
| QRTZ_FIRED_TRIGGERS |
| QRTZ_JOB_DETAILS |
| QRTZ_LOCKS |
| QRTZ_PAUSED_TRIGGER_GRPS |
| QRTZ_SCHEDULER_STATE |
| QRTZ_SIMPLE_TRIGGERS |
| QRTZ_SIMPROP_TRIGGERS |
| QRTZ_TRIGGERS |
| gen_table |
| gen_table_column |
| sys_config |
| sys_dept |
| sys_dict_data |
| sys_dict_type |
| sys_job |
| sys_job_log |
| sys_logininfor |
| sys_menu |
| sys_notice |
| sys_oper_log |
| sys_post |
| sys_role |
| sys_role_dept |
| sys_role_menu |
| sys_user |
| sys_user_post |
| sys_user_role |
+--------------------------+
30 rows in set (0.00 sec)
1.2 Deploy Redis
On storage:
yum install -y redis
Change the Redis listen address and set a password:
# change the listen address
sed -i '61s/127.0.0.1/10.0.0.144/' /etc/redis.conf
# set a password
sed -i 's/# requirepass foobared/requirepass Huawei@123/g' /etc/redis.conf
Start Redis and enable it at boot:
systemctl start redis
systemctl enable redis
Check the service:
[root@storage ~]# ps -ef |grep redis
redis 12592 1 0 17:11 ? 00:00:00 /usr/bin/redis-server 10.0.0.144:6379
root 12614 1756 0 17:11 pts/0 00:00:00 grep --color=auto redis
Test connectivity:
[root@storage ~]# redis-cli -h 10.0.0.144
10.0.0.144:6379> AUTH Huawei@123
OK
10.0.0.144:6379> KEYS *
(empty list or set)
10.0.0.144:6379>
1.3 Deploy NFS
1.3.1 Install and start NFS
yum install -y nfs-utils rpcbind
systemctl start nfs
systemctl enable nfs
1.3.2 Edit the NFS exports and reload
# edit the exports file
# /nfs_logs gets no_root_squash because RuoYi writes its logs under /var/log, where a squashed (non-root) user would not be allowed to write
cat > /etc/exports << EOF
/nfs_data 10.0.0.0/24(rw)
/nfs_logs 10.0.0.0/24(rw,no_root_squash)
EOF
# reload the configuration
systemctl reload nfs
1.3.3 Create the export directories and fix ownership
mkdir -p /nfs_data
mkdir -p /nfs_logs/{backend01,backend02}
chown nfsnobody.nfsnobody /nfs_data
chown nfsnobody.nfsnobody /nfs_logs
# nfsnobody is a special system account used by NFS: when the server has no more specific mapping rule for a remote user, it maps that user to nfsnobody.
[root@storage ~]# showmount -e
Export list for storage:
/nfs_logs 10.0.0.0/24
/nfs_data 10.0.0.0/24
1.3.4 Mount the shared storage on the backend hosts (both backend01 and backend02)
Create the mount points on each backend host (the data directory and the log directory):
mkdir -p /data/ruoyi_data
mkdir -p /var/log/ruoyi
chmod -R 777 /var/log/ruoyi # if the runtime image is a JRE image running as a non-root user, it may otherwise lack write permission (not needed with the JDK image)
Mount the shares:
[root@backend01 ~]# yum install -y nfs-utils rpcbind
[root@backend01 ~]# mount -t nfs 10.0.0.144:/nfs_data /data/ruoyi_data
[root@backend01 ~]# mount -t nfs 10.0.0.144:/nfs_logs/backend01 /var/log/ruoyi # on backend02, mount /nfs_logs/backend02 instead
[root@backend01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 894M 0 894M 0% /dev
tmpfs 910M 0 910M 0% /dev/shm
tmpfs 910M 11M 900M 2% /run
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/mapper/centos-root 36G 4.5G 31G 13% /
/dev/nvme0n1p1 1014M 185M 830M 19% /boot
tmpfs 182M 12K 182M 1% /run/user/42
tmpfs 182M 0 182M 0% /run/user/0
10.0.0.144:/nfs_data 36G 4.4G 31G 13% /data/ruoyi_data
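These mount commands do not survive a reboot. If you want the mounts to come back automatically, one option (not part of the original steps, example entries only) is to add them to fstab:
# on backend01 (use /nfs_logs/backend02 on backend02)
cat >> /etc/fstab << 'EOF'
10.0.0.144:/nfs_data            /data/ruoyi_data  nfs  defaults,_netdev  0 0
10.0.0.144:/nfs_logs/backend01  /var/log/ruoyi    nfs  defaults,_netdev  0 0
EOF
mount -a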
Test a write:
# on backend01
echo test > /data/ruoyi_data/test.txt
# check on storage
[root@storage ~]# ll /nfs_data/
total 4
-rw-r--r-- 1 nfsnobody nfsnobody 5 Sep 9 11:02 test.txt
2. High-availability setup
2.1 Configure keepalived
# install keepalived on lb01 and lb02
yum install -y keepalived
# start the service and enable it at boot
systemctl start keepalived
systemctl enable keepalived
2.1.1 Edit the keepalived configuration files
On lb01:
[root@lb01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived-01
}
vrrp_instance lb_test {
state MASTER
nopreempt
interface eth0
virtual_router_id 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
10.0.0.150/24 dev eth0 label eth0:1
}
}
On lb02:
[root@lb02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived-02
}
vrrp_instance lb_test {
state BACKUP
nopreempt
interface eth0
virtual_router_id 10
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
10.0.0.150/24 dev eth0 label eth0:1
}
}
Reload the service:
systemctl reload keepalived
Tip: if you want to test failover, simulate the failure by disconnecting the VM's virtual network adapter; if you just down the interface, re-ifupping the NIC will not bring the VIP back.
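To see which node currently holds the VIP, check the interface on each LB node (the eth0:1 label and the 10.0.0.150 address come from the configuration above):
ip addr show eth0 | grep 10.0.0.150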
2.1.2 Have keepalived monitor Nginx
Write a script that checks the Nginx port/process; if Nginx is down, the script stops keepalived on that node so that the VIP fails over.
# run all of the following on both lb01 and lb02
# create a directory for the script
mkdir -p /server/scripts
# write the check script
cat > /server/scripts/check_nginx.sh << 'EOF'
#!/bin/bash
# count the listening sockets owned by nginx
count=$(ss -lntup | grep nginx | wc -l)
# if nginx has no listeners, stop keepalived so the VIP moves to the other node
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
EOF
# make it executable
chmod +x /server/scripts/check_nginx.sh
Edit the keepalived configuration. Example:
# the vrrp_script block goes above the vrrp_instance block
vrrp_script check_nginx {                  # check_nginx is the name keepalived uses to refer to this script
    script /server/scripts/check_nginx.sh  # path to the script; it must be executable
    interval 1                             # how often to run it, default 1 second
    timeout 30                             # script timeout, mainly relevant for curl/wget style checks
    weight 1                               # weight (priority adjustment); with a single script it can be ignored, it matters when several check scripts are combined
}
# track_script references the check; it goes inside the vrrp_instance block
track_script {
    check_nginx
}
lb01 keepalived configuration:
[root@lb01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived-01
}
vrrp_script check_nginx {
script /server/scripts/check_nginx.sh
interval 1
timeout 30
weight 1
}
vrrp_instance lb {
state MASTER
nopreempt
interface eth0
virtual_router_id 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
10.0.0.150/24 dev eth0 label eth0:1
}
track_script {
check_nginx
}
}
lb02 keepalived configuration:
[root@lb02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived-02
}
vrrp_script check_nginx {
script /server/scripts/check_nginx.sh
interval 1
timeout 30
weight 1
}
vrrp_instance lb {
state BACKUP
nopreempt
interface eth0
virtual_router_id 10
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
10.0.0.150/24 dev eth0 label eth0:1
}
track_script {
check_nginx
}
}
Reload the configuration:
# on lb01 and lb02
systemctl reload keepalived
2.2 Layer-7 load balancing for the frontend
On lb01 and lb02 (Nginx must be installed), write the following vhost configuration:
cat > /etc/nginx/conf.d/ruoyi.zzb.com.conf << 'EOF'
upstream front_pools {
    server 10.0.0.140:80;
    server 10.0.0.141:80;
}
server {
    listen 80;
    server_name ruoyi.zzb.com;
    location / {
        # static assets
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://front_pools/;
    }
    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/;
    }
}
EOF
2.3 Layer-4 load balancing for the backend
Install the stream module for Nginx:
yum install -y nginx-mod-stream
Tip: the stream module has been bundled with the Nginx core since version 1.9.13; the CentOS nginx package ships it as the separate dynamic module nginx-mod-stream, which is why it is installed here.
In nginx.conf, add the following block at the same level as the http block; here it is simply appended to the file:
cat >> /etc/nginx/nginx.conf << EOF
stream {
    upstream backend_pools {
        server 10.0.0.142:8080;
        server 10.0.0.143:8080;
    }
    server {
        listen 8080;
        proxy_pass backend_pools;
    }
}
EOF
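Before reloading, it is worth validating the combined configuration:
nginx -t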
Reload the layer-7 and layer-4 configuration:
# on lb01 and lb02
systemctl reload nginx
ss -lntup | grep 8080
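A quick way to exercise the check script from 2.1.2 (optional, not part of the original steps):
# on lb01: stop nginx; within about a second check_nginx.sh should stop keepalived
systemctl stop nginx
# on lb02: the VIP should now be present
ip addr show eth0 | grep 10.0.0.150
# on lb01: recover (start nginx first, then keepalived)
systemctl start nginx
systemctl start keepalived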
3. Backend setup
Clone the code:
git clone https://gitee.com/y_project/RuoYi-Vue /data/build
3.1 Edit the backend configuration files
# tree -L 1 /data/build/ruoyi-admin/src/main/resources/
/data/build/ruoyi-admin/src/main/resources/
├── application-druid.yml # datasource configuration
├── application.yml # application configuration
├── banner.txt
├── i18n
├── logback.xml # logging configuration
├── META-INF
└── mybatis
3.1.1 Edit the logging configuration
[root@backend01 build]# vim /data/build/ruoyi-admin/src/main/resources/logback.xml
Change the log directory, and switch the log output encoding to UTF-8 by adding <charset>UTF-8</charset> after every <pattern>${log.pattern}</pattern> in this file.
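For reference, the relevant edits look roughly like this (a sketch: the log.path property name and the encoder layout follow the stock RuoYi logback.xml, so double-check against your copy):
<!-- point the log directory at the NFS-backed path mounted earlier -->
<property name="log.path" value="/var/log/ruoyi" />
<!-- inside each appender, add the charset right after the pattern -->
<encoder>
    <pattern>${log.pattern}</pattern>
    <charset>UTF-8</charset>
</encoder>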
3.1.2 Edit the datasource configuration
Edit this file and point the master datasource at the MySQL instance on storage (10.0.0.144), using the ruoyi database and the ruoyi user created earlier.
vim /data/build/ruoyi-admin/src/main/resources/application-druid.yml
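A sketch of the resulting datasource section (the key layout follows the stock application-druid.yml; keep the JDBC URL parameters from the original file if yours differ):
spring:
  datasource:
    druid:
      master:
        url: jdbc:mysql://10.0.0.144:3306/ruoyi?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: ruoyi
        password: Huawei@123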
3.1.3 Edit the application configuration
Change the file storage directory
vim /data/build/ruoyi-admin/src/main/resources/application.yml
Change the Redis connection settings
Change the backend port (optional; the default is fine)
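Roughly, the keys involved are the following (again a sketch following the stock application.yml; names are worth double-checking against your copy):
ruoyi:
  profile: /data/ruoyi_data   # upload/file storage path -> the NFS-backed directory
server:
  port: 8080                  # backend port; keep 8080 unless the LB config is changed as well
spring:
  redis:
    host: 10.0.0.144
    port: 6379
    password: Huawei@123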
3.2 Build the image (the base images must be prepared in advance)
[root@backend01 data]# vim Dockerfile-backend # note: paths in the Dockerfile must be relative to the build context (absolute host paths will not work)
# Stage 1: builder
FROM maven-jdk-8:3.8.3 AS builder
# set the working directory
WORKDIR /app
# copy the project source into the container
COPY build/ .
# run the Maven build
RUN mvn clean package -DskipTests
# Stage 2: runtime
FROM jdk1.8:latest
# set the working directory
WORKDIR /app
# copy the jar produced in the build stage
COPY --from=builder /app/ruoyi-admin/target/ruoyi-admin.jar /app/ruoyi.jar
# port exposed by the application
EXPOSE 8080
# run the jar
CMD ["java", "-jar", "ruoyi.jar"]
Tips: when running on a JRE-only image (as a non-root user), the log directory (/var/log/ruoyi) must be made writable first, otherwise the application cannot write its logs (not needed with the JDK image).
[root@backend01 data]# docker build -t ruoyi-backend:v1.0 -f Dockerfile-backend .
Push the image to Harbor, then pull and deploy it on backend02.
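For example, assuming the same registry address used in the Kubernetes section below (10.0.0.138:5000) and that both hosts already trust / are logged in to it:
docker tag ruoyi-backend:v1.0 10.0.0.138:5000/ruoyi-vue/ruoyi-backend:v1.0
docker push 10.0.0.138:5000/ruoyi-vue/ruoyi-backend:v1.0
# on backend02
docker pull 10.0.0.138:5000/ruoyi-vue/ruoyi-backend:v1.0
docker tag 10.0.0.138:5000/ruoyi-vue/ruoyi-backend:v1.0 ruoyi-backend:v1.0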
3.3 Start the backend
[root@backend01 data]# docker run -d --name ruoyi-backend -v /data/ruoyi_data:/data/ruoyi_data -v /var/log/ruoyi:/var/log/ruoyi -p 8080:8080 --restart=always ruoyi-backend:v1.0
[root@backend02 data]# docker run -d --name ruoyi-backend -v /data/ruoyi_data:/data/ruoyi_data -v /var/log/ruoyi:/var/log/ruoyi -p 8080:8080 --restart=always ruoyi-backend:v1.0
[root@backend01 data]# docker logs ruoyi-backend
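Once the log shows the application has started, a quick check that the published port answers (any HTTP response, even a 404, means the jar is up):
ss -lntup | grep 8080
curl -I http://127.0.0.1:8080/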
4. Frontend setup
4.1 Configure the frontend Nginx
Write the vhost file on frontend01 and frontend02:
# vim ruoyi.zzb.com.conf
server {
    listen 80;
    server_name ruoyi.zzb.com;
    location / {
        # static assets
        root /usr/share/nginx/html/;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
4.2 Deploy the frontend
Clone the code:
git clone https://gitee.com/y_project/RuoYi-Vue /data/build
4.3 Build the image (the base images must be prepared in advance)
vim Dockerfile-frontend
# Stage 1: build
FROM node:lts-alpine3.19 AS build-stage
WORKDIR /app
# copy the ruoyi-ui source (including package.json) into the container
COPY build/ruoyi-ui/ .
# switch the npm registry mirror, install the dependencies and run the production build
RUN npm config set registry https://registry.npmmirror.com && \
    npm install && \
    npm run build:prod
# Stage 2: production
FROM nginx:latest AS production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html/
# ship the frontend Nginx vhost into the image
COPY ruoyi.zzb.com.conf /etc/nginx/conf.d/ruoyi.zzb.com.conf
# keep nginx in the foreground when the container starts
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
[root@frontend01 data]# docker build -t ruoyi-frontend:v1.0 -f Dockerfile-frontend .
4.4 Start the frontend
docker run -d --name ruoyi-frontend -p 80:80 ruoyi-frontend:v1.0
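A quick check on each frontend host (the container publishes port 80, so nginx should serve the built index.html):
curl -I http://127.0.0.1/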
5. End-to-end cluster testing
Capture traffic with Wireshark while clicking through a few panels in the system, and check whether the LB node spreads the static-asset requests across both frontend hosts. The capture clearly shows the requests being balanced (10.0.0.140 and 10.0.0.141 are the frontend IPs).
On the LB nodes, the layer-7 Nginx matches the backend API requests and proxies them to localhost:8080 on the same node; that port is the node's own layer-4 (stream) listener, which then forwards the connections to the backend pool.
After uploading an avatar in the UI, the file shows up on the NFS share as expected.
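To test end to end from a client machine, point the domain at the VIP and hit both the static site and the API prefix (the /prod-api/captchaImage path assumes the stock RuoYi captcha endpoint):
echo '10.0.0.150 ruoyi.zzb.com' >> /etc/hosts
curl -I http://ruoyi.zzb.com/
curl -I http://ruoyi.zzb.com/prod-api/captchaImage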
6. Quick reference
Backend:
/data/ruoyi_data: RuoYi data (upload) directory
/var/log/ruoyi: RuoYi log directory
Frontend:
/etc/nginx/: Nginx configuration directory inside the container
/usr/share/nginx/html: Nginx static-file directory inside the container
Deploying on k8s (with physical LB nodes)
Host | IP | Role |
---|---|---|
k8s master nodes (an HA cluster needs at least two) | 10.0.0.103 (VIP 10.0.0.236) | manage the nodes |
k8s node01-02 | 10.0.0.106-107 | frontend + backend pods |
storage | 10.0.0.144 | MySQL, Redis, NFS |
lb01 | 10.0.0.145 (VIP 10.0.0.150) | keepalived + load balancer |
lb02 | 10.0.0.146 (VIP 10.0.0.150) | keepalived + load balancer |
Tips: use the images built and pushed earlier.
Add a hosts record on the client pointing the domain at the LB VIP.
This walkthrough does not mount the backend log directory; to add it, create another PV and PVC and extend the backend Deployment (spec.template.spec.volumes and the container's volumeMounts).
1. Configure persistent storage
PV
#cat pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-ruoyi
  nfs:
    path: /nfs_data
    server: 10.0.0.144
PVC
#cat pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs-ruoyi # must match the storageClassName defined in the PV
  volumeMode: Filesystem # same as the PV
  accessModes: # same as the PV
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi # must not exceed the PV capacity
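Apply both manifests and check that the claim binds:
kubectl apply -f pv-nfs.yaml -f pvc-nfs.yaml
kubectl get pv,pvc    # the PVC should show STATUS Bound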
2. Backend configuration
# backend
[root@k8s-master01 ruoyi]# cat backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ruoyi-backend
  name: ruoyi-backend # name of the Deployment
spec:
  replicas: 2 # number of Pod replicas
  selector: # how the Deployment finds the Pods it manages; must match the template labels
    matchLabels:
      app: ruoyi-backend
  template:
    metadata:
      labels:
        app: ruoyi-backend # label attached to the Pods
    spec:
      volumes:
        - name: nfs-storage # volume name, referenced by volumeMounts below
          persistentVolumeClaim:
            claimName: nfs-pvc # name of the PVC
      containers:
        - name: ruoyi-backend
          image: 10.0.0.138:5000/ruoyi-vue/ruoyi-backend:v1.0 # image used by this Pod
          ports:
            - containerPort: 8080 # port the container serves traffic on
          volumeMounts: # mount the PVC into the container
            - name: nfs-storage
              mountPath: /data/ruoyi_data # adjust the mount point as needed
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-backend
spec:
  selector:
    app: ruoyi-backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 33000 # port of the Service inside the cluster
      targetPort: 8080 # port inside the container
      name: http
      nodePort: 30088 # NodePort; must fall within the NodePort range
3. Frontend configuration
[root@k8s-master01 ruoyi]# cat frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ruoyi-frontend
  name: ruoyi-frontend # name of the Deployment
spec:
  replicas: 2 # number of Pod replicas
  selector: # how the Deployment finds the Pods it manages; must match the template labels
    matchLabels:
      app: ruoyi-frontend
  template:
    metadata:
      labels:
        app: ruoyi-frontend # label attached to the Pods
    spec:
      containers:
        - name: ruoyi-frontend
          image: 10.0.0.138:5000/ruoyi-vue/ruoyi-frontend:v1.0 # image used by this Pod
          ports:
            - containerPort: 80 # port the container serves traffic on
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-frontend
spec:
  selector:
    app: ruoyi-frontend
  type: NodePort
  ports:
    - protocol: TCP
      port: 32000 # port of the Service inside the cluster
      targetPort: 80 # port inside the container
      name: http
      nodePort: 30080 # NodePort; must fall within the NodePort range
Check the NodePorts exposed by the frontend and backend Services:
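For example, assuming the two manifests above are in the current directory:
kubectl apply -f backend-deployment.yaml -f frontend-deployment.yaml
kubectl get deploy,svc    # the Services should expose NodePorts 30088 (backend) and 30080 (frontend)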
Then update the layer-4 and layer-7 load-balancing configuration on lb01 and lb02:
[root@lb01 ~]# vim /etc/nginx/nginx.conf
stream {
    upstream backend_pools {
        server 10.0.0.236:30088;
    }
    server {
        listen 8080;
        proxy_pass backend_pools;
    }
}
[root@lb01 ~]# cat /etc/nginx/conf.d/ruoyi.zzb.com.conf
upstream front_pools {
    server 10.0.0.236:30080;
}
server {
    listen 80;
    server_name ruoyi.zzb.com;
    location / {
        # static assets
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://front_pools/;
    }
    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/;
    }
}
Start the frontend, the backend and the PV/PVC, and the deployment is complete.
Deploying on k8s (Ingress)
This variant drops the physical LB nodes from the previous setup and uses the Kubernetes ingress-nginx controller instead. If you still want to use the VIP, label the master nodes with ingress=true and remove their taints so the controller can be scheduled there. Also change the Service type in the frontend and backend Deployment manifests from NodePort to ClusterIP (the Ingress manifests below reference Service ports 80 and 8080, so adjust the Service port fields to match).
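For example (a sketch: the node name k8s-master01 is assumed, and the taint key differs between Kubernetes versions):
kubectl label node k8s-master01 ingress=true
# remove the master taint; on newer clusters the key is node-role.kubernetes.io/control-plane
kubectl taint node k8s-master01 node-role.kubernetes.io/master-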
1. Install the ingress controller
[root@k8s-master01 Ingress]# cat deploy-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-nginx-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
spec:
hostNetwork: true
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.cn-beijing.aliyuncs.com/dotbalo/ingress-nginx-controller:v1.7.1
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
ingress: "true"
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen:v20230312
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen:v20230312
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
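Apply the manifest and wait for the controller to come up:
kubectl apply -f deploy-ingress.yaml
kubectl -n ingress-nginx get pods -o wide    # one controller pod per node labelled ingress=true (hostNetwork)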
2. Configure the Ingress resources
# cat ruoyi-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
spec:
  ingressClassName: nginx # name of the IngressClass; must match your ingress controller
  rules:
    - host: ruoyi.zzb.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: ruoyi-frontend
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: ruoyi.zzb.com
      http:
        paths:
          - path: '/prod-api(/|$)(.*)'
            pathType: ImplementationSpecific
            backend:
              service:
                name: ruoyi-backend
                port:
                  number: 8080
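Apply and test; the client's hosts entry should point ruoyi.zzb.com at a node running the controller (for example the labelled master, or the VIP if keepalived is kept):
kubectl apply -f ruoyi-ingress.yaml
kubectl get ingress
curl -I http://ruoyi.zzb.com/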