
Upgrading Kubernetes with kubeadm: 1.23.17 -> 1.24.17

Posted: 2024-08-22 14:48:34
Tags: staticpods, upgrade, 17, k8s, 1.23, kubeadm, kube

Check the current version

[root@k8s-master31 ~]# kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                        CONTAINER-RUNTIME
k8s-master31   Ready    control-plane,master   47h   v1.23.17   10.0.0.31     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   docker://26.1.4
k8s-node34     Ready    <none>                 47h   v1.23.17   10.0.0.34     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   docker://26.1.4
k8s-node35     Ready    <none>                 47h   v1.23.17   10.0.0.35     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   docker://26.1.4

Operations on the master node

Upgrade kubeadm

[root@k8s-master31 ~]# yum install -y kubeadm-1.24.17-0 --disableexcludes=kubernetes
[root@k8s-master31 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.17", GitCommit:"22a9682c8fe855c321be75c5faacde343f909b04", GitTreeState:"clean", BuildDate:"2023-08-23T23:43:11Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master31 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0822 10:35:30.450616   51429 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
I0822 10:35:38.878235   51429 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.17
[upgrade/versions] Latest version in the v1.23 series: v1.23.17
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     3 x v1.23.17   v1.24.17
Upgrade to the latest stable version:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.23.17   v1.24.17
kube-controller-manager   v1.23.17   v1.24.17
kube-scheduler            v1.23.17   v1.24.17
kube-proxy                v1.23.17   v1.24.17
CoreDNS                   v1.8.6     v1.8.6
etcd                      3.5.6-0    3.5.6-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.24.17

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
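kubeadm only supports upgrading one minor release per hop, which is why the plan targets v1.24 rather than anything newer. That constraint can be sketched as a quick pre-flight check (version strings hard-coded here for illustration):

```shell
# Minimal sketch: confirm the target is at most one minor version ahead
# of the running cluster before attempting the upgrade.
current="v1.23.17"
target="v1.24.17"

cur_minor=$(echo "$current" | cut -d. -f2)
tgt_minor=$(echo "$target" | cut -d. -f2)

if [ $((tgt_minor - cur_minor)) -le 1 ]; then
  echo "OK: ${current} -> ${target} is a supported one-minor hop"
else
  echo "ERROR: skip-level upgrade ${current} -> ${target} is not supported" >&2
  exit 1
fi
```

Skipping a minor version (e.g. 1.23 -> 1.25 directly) is not supported; each minor release must be stepped through in turn.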

Change the node's container runtime

Note: Docker 24 is installed on these nodes, and containerd is installed alongside it by default.
[root@k8s-master31 ~]# kubectl edit nodes k8s-master31
apiVersion: v1
kind: Node
metadata:
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"csi.tigera.io":"k8s-master31"}'
    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
# Up to and including K8s 1.23 the CRI socket was /var/run/dockershim.sock
# Change it to /var/run/containerd/containerd.sock

Configure containerd and set the default cgroup driver to systemd

[root@k8s-master31 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-master31 ~]# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
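The effect of that sed one-liner can be verified on a minimal copy of the relevant stanza (the generated config.toml is much larger; this sketch only touches a file under /tmp so nothing on the host changes):

```shell
# Sketch: apply the same substitution to a sample stanza and confirm
# the cgroup driver flag flips from false to true.
cat > /tmp/config-snippet.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /tmp/config-snippet.toml
grep 'SystemdCgroup' /tmp/config-snippet.toml
```

With systemd as the init system, kubelet and containerd must agree on the cgroup driver, otherwise pods fail to start after the switch.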

[root@k8s-master31 ~]# vim /var/lib/kubelet/kubeadm-flags.env
# Remove --network-plugin=cni
# Add --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"

systemctl daemon-reload
# restart containerd
systemctl restart containerd
# restart kubelet
systemctl restart kubelet

In earlier Kubernetes releases, the --network-plugin flag told the kubelet which network plugin to use (cni, kubenet, and so on). Starting with v1.24, however, dockershim (including kubenet) has been removed entirely, and many dockershim-related flags are no longer supported; a leftover flag like this is the likely cause if the kubelet refuses to start after the upgrade.
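The manual kubeadm-flags.env edit above can also be scripted. A sketch, working on a copy under /tmp so the host is untouched (on a real node, point `f` at /var/lib/kubelet/kubeadm-flags.env and skip the heredoc):

```shell
# Sketch: rewrite kubeadm-flags.env non-interactively.
# The sample contents mirror a typical pre-1.24 flags file.
f=/tmp/kubeadm-flags.env
cat > "$f" <<'EOF'
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
EOF

# drop the dockershim-era flag
sed -i 's#--network-plugin=cni ##' "$f"
# append the containerd runtime flags just before the closing quote
sed -i 's#"$# --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"#' "$f"
cat "$f"
```

Restart the kubelet afterwards, as shown above, for the new flags to take effect.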

Configure how crictl connects to the container runtime.

[root@k8s-master31 ~]# cat >/etc/crictl.yaml<<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# reload
systemctl daemon-reload
systemctl restart containerd
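A mismatch between runtime-endpoint and image-endpoint is a common source of crictl errors, so it is worth sanity-checking the file after writing it. This sketch works on a copy under /tmp; the real file is /etc/crictl.yaml:

```shell
# Sketch: confirm both crictl endpoints point at the same containerd socket.
f=/tmp/crictl.yaml
cat > "$f" <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# both endpoint lines should reference the containerd socket (count = 2)
grep -c 'unix:///run/containerd/containerd.sock' "$f"
```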

Perform the upgrade

[root@k8s-master31 ~]# kubeadm upgrade apply v1.24.17
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0822 10:54:49.815166   15201 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.17"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.17" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3701995785"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.17". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade Calico

# Check the current cluster configuration
[root@k8s-master31 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml

# Download the Tigera Calico operator and custom resource definitions.
# Install Calico by creating the necessary custom resources. See the installation reference for the configuration options available in this manifest.
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml
Reference: https://docs.tigera.io/archive/v3.24/getting-started/kubernetes/quickstart
Change the IP pool in custom-resources.yaml to match the cluster configuration above
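The IP pool edit can be done with sed instead of by hand. This sketch assumes the manifest's stock 192.168.0.0/16 default and a cluster pod CIDR of 172.244.0.0/16, inferred from the pod IPs seen later in this post; confirm the real value in the kubeadm-config ConfigMap before applying. It operates on a sample snippet under /tmp:

```shell
# Sketch: swap the default Calico pod CIDR for this cluster's.
# 172.244.0.0/16 is an inferred value; verify it against kubeadm-config first.
cat > /tmp/custom-resources-snippet.yaml <<'EOF'
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
EOF

sed -i 's#cidr: 192.168.0.0/16#cidr: 172.244.0.0/16#' /tmp/custom-resources-snippet.yaml
grep 'cidr:' /tmp/custom-resources-snippet.yaml
```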

Upgrade the worker nodes

Drain the worker node from the master

[root@k8s-master31 ~]# kubectl drain k8s-node34 --ignore-daemonsets --delete-emptydir-data
node/k8s-node34 cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-5v6tj, calico-system/csi-node-driver-jhkkt, kube-system/kube-proxy-r5fz4
evicting pod tigera-operator/tigera-operator-6c49dc8ddf-99nr7
evicting pod calico-system/calico-kube-controllers-68995875fb-bpgfz
evicting pod calico-apiserver/calico-apiserver-86c46fdb85-hmglw
evicting pod calico-system/calico-typha-88d5b6455-4d9df
evicting pod kube-system/upgrade-health-check-w2jkr
pod/calico-typha-88d5b6455-4d9df evicted
pod/calico-apiserver-86c46fdb85-hmglw evicted
pod/tigera-operator-6c49dc8ddf-99nr7 evicted
pod/calico-kube-controllers-68995875fb-bpgfz evicted
pod/upgrade-health-check-w2jkr evicted

Operations on the worker node

[root@k8s-node34 ~]# yum install -y kubeadm-1.24.17-0 --disableexcludes=kubernetes
[root@k8s-node34 ~]# vim /var/lib/kubelet/kubeadm-flags.env
[root@k8s-node34 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-node34 ~]# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
[root@k8s-node34 ~]# vim /etc/containerd/config.toml
[root@k8s-node34 ~]# cat >/etc/crictl.yaml<<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@k8s-node34 ~]# systemctl daemon-reload
# restart containerd
systemctl restart containerd
# restart kubelet
systemctl restart kubelet
[root@k8s-node34 ~]# yum install -y kubelet-1.24.17-0 kubectl-1.24.17-0 --disableexcludes=kubernetes
Last metadata expiration check: 0:24:58 ago on Thu 22 Aug 2024 14:02:20.
Dependencies resolved.
=========================================================================================================================================
 Package                          Architecture                    Version                             Repository                    Size
=========================================================================================================================================
Upgrading:
 kubectl                          x86_64                          1.24.17-0                           k8s                           10 M
 kubelet                          x86_64                          1.24.17-0                           k8s                           21 M

Transaction Summary
=========================================================================================================================================
Upgrade  2 Packages

Total download size: 31 M
Downloading Packages:
(1/2): c3dc5ffa817d2c69bdd77494b5b9240568c4eb0d06b7b1bf3546bdab971741f5-kubectl-1.24.17-0.x86_64.rpm     405 kB/s |  10 MB     00:25
(2/2): f46e0356e279308a525195d1ae939268faaea772a119cb752480be2b998bec54-kubelet-1.24.17-0.x86_64.rpm     401 kB/s |  21 MB     00:53
-----------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                    595 kB/s |  31 MB     00:53
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                 1/1
  Running scriptlet: kubelet-1.24.17-0.x86_64                                                                                        1/1
  Upgrading        : kubelet-1.24.17-0.x86_64                                                                                        1/4
  Upgrading        : kubectl-1.24.17-0.x86_64                                                                                        2/4
  Cleanup          : kubectl-1.23.17-0.x86_64                                                                                        3/4
  Cleanup          : kubelet-1.23.17-0.x86_64                                                                                        4/4
  Running scriptlet: kubelet-1.23.17-0.x86_64                                                                                        4/4
  Verifying        : kubectl-1.24.17-0.x86_64                                                                                        1/4
  Verifying        : kubectl-1.23.17-0.x86_64                                                                                        2/4
  Verifying        : kubelet-1.24.17-0.x86_64                                                                                        3/4
  Verifying        : kubelet-1.23.17-0.x86_64                                                                                        4/4

Upgraded:
  kubectl-1.24.17-0.x86_64                                            kubelet-1.24.17-0.x86_64

Complete!
[root@k8s-node34 ~]# echo "KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd" >/etc/sysconfig/kubelet
[root@k8s-node34 ~]# systemctl restart kubelet

Operations on the master node

Note: per the kubeadm documentation, `kubeadm upgrade node` should also be run on each worker after its kubeadm package is upgraded; here it is shown on the control-plane node, where it additionally refreshes the static-pod control plane.

[root@k8s-master31 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.24.17"...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2793172026"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Check the cluster status

Note: the workers below still show SchedulingDisabled because they were drained earlier. Once each node checks out, run `kubectl uncordon k8s-node34` (and likewise for k8s-node35) to make them schedulable again.

[root@k8s-master31 ~]# kubectl get nodes -o wide
NAME           STATUS                     ROLES           AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                        CONTAINER-RUNTIME
k8s-master31   Ready                      control-plane   2d3h   v1.24.17   10.0.0.31     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   containerd://1.6.33
k8s-node34     Ready,SchedulingDisabled   <none>          2d3h   v1.24.17   10.0.0.34     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   containerd://1.6.33
k8s-node35     Ready,SchedulingDisabled   <none>          2d3h   v1.24.17   10.0.0.35     <none>        openEuler 22.03 (LTS-SP1)   5.10.0-136.12.0.86.oe2203sp1.x86_64   containerd://1.6.33

Test the cluster

# Create a temporary Nginx Pod
[root@k8s-master31 ~]# kubectl run temp-nginx --image=nginx --restart=Never
pod/temp-nginx created

# Create a Service to expose the Nginx Pod
[root@k8s-master31 ~]# kubectl expose pod temp-nginx --port=80 --target-port=80 --name=temp-nginx-svc
service/temp-nginx-svc exposed

[root@k8s-master31 ~]# kubectl get svc,pod -o wide
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/kubernetes       ClusterIP   172.96.0.1      <none>        443/TCP   2d3h   <none>
service/temp-nginx-svc   ClusterIP   172.101.81.80   <none>        80/TCP    31s    run=temp-nginx

NAME             READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES
pod/temp-nginx   1/1     Running   0          34s   172.244.221.136   k8s-master31   <none>           <none>

[root@k8s-master31 ~]# curl 172.244.221.136
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node34 ~]# curl 172.101.81.80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# Delete the Pod and Service
[root@k8s-master31 ~]# kubectl delete pod temp-nginx
pod "temp-nginx" deleted
[root@k8s-master31 ~]# kubectl delete svc temp-nginx-svc
service "temp-nginx-svc" deleted

From: https://www.cnblogs.com/boradviews/p/18373868
