
sealer: build a custom k8s image and deploy a highly available cluster


sealer lets you build custom k8s images: if you want to bake a dashboard or the helm package manager into the k8s image, you can use sealer to do it directly.

The k8s HA cluster deployed by sealer comes with built-in load balancing. sealer's cluster high availability relies on the lightweight load balancer lvscare. Compared with other load balancers, lvscare is tiny, only a few hundred lines of code, and it only maintains ipvs rules rather than forwarding traffic itself, which makes it very stable. On each node it watches the apiservers directly: if an apiserver goes down, the corresponding rule is removed, and it is added back automatically once the apiserver recovers. In effect it acts as a dedicated local load balancer for the apiserver.

Deployment works the same way as with sealos: you specify the master nodes and the worker (node) nodes.
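
Once the cluster is up, you can see lvscare at work on a worker node: it runs as a kube-lvscare pod in kube-system (visible in the pod listing later in this post) and maintains an ipvs virtual server in front of all the apiservers. A minimal check, assuming ipvsadm is installed on the node; the virtual IP sealer uses may differ between environments:

# On a worker node, list the ipvs rules that lvscare maintains.
# The virtual server (a cluster-internal VIP on port 6443) should show
# every master's apiserver as a real server behind it.
ipvsadm -Ln | grep -A 3 6443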

1. Download and install sealer

sealer is a single Go binary: download the archive and extract it straight into a bin directory. It can also be downloaded from the project's releases page.
wget -c https://sealer.oss-cn-beijing.aliyuncs.com/sealer-latest.tar.gz && \
    tar -xvf sealer-latest.tar.gz -C /usr/bin
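
A quick sanity check after extracting the binary (the output varies with the release you downloaded; if the subcommand differs in your build, sealer --help lists what is available):

# Confirm the binary is on PATH and executable
sealer version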

2. Create and edit a Kubefile (building a custom k8s image requires familiarity with Dockerfile-style image instructions)

touch kubefile

FROM registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.22.5  # pick the k8s version you need
COPY helm /usr/bin                                                  # the helm binary has already been extracted and moved to /root/
COPY kube-prometheus-0.11.0 .                                       # your version may differ, and so will the extracted directory name
COPY loki-stack-2.1.2.tgz .                                         # the loki chart can be downloaded with helm
CMD kubectl apply -f kube-prometheus-0.11.0/manifests/setup
CMD kubectl apply -f kube-prometheus-0.11.0/manifests
CMD helm install loki loki-stack-2.1.2.tgz -n monitoring
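
The Kubefile above assumes the helm binary, the extracted kube-prometheus release, and the loki-stack chart already sit next to it in the build context. As a rough sketch of how these artifacts might be fetched beforehand (the versions and URLs below are examples, adjust them to what you actually use):

# helm binary: extract the archive and keep only the helm executable
wget https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
tar -xzf helm-v3.8.2-linux-amd64.tar.gz && mv linux-amd64/helm .

# kube-prometheus manifests (extracts to kube-prometheus-0.11.0/)
wget -O kube-prometheus-0.11.0.tar.gz \
    https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.11.0.tar.gz
tar -xzf kube-prometheus-0.11.0.tar.gz

# loki-stack chart, pulled with helm
helm repo add grafana https://grafana.github.io/helm-charts
helm pull grafana/loki-stack --version 2.1.2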

3. Build

sealer build -t k8s:v1.22.5 .   # name the built image however you like
2022-06-29 23:02:41 [INFO] [executor.go:123] start to check the middleware file
2022-06-29 23:02:41 [INFO] [executor.go:63] run build layer: COPY helm /usr/bin
2022-06-29 23:02:42 [INFO] [executor.go:63] run build layer: COPY kube-prometheus-0.11.0 .
2022-06-29 23:02:42 [INFO] [executor.go:63] run build layer: COPY loki-stack-2.1.2.tgz .
2022-06-29 23:02:42 [INFO] [executor.go:95] exec all build instructs success
2022-06-29 23:02:42 [WARN] [executor.go:112] no rootfs diff content found
2022-06-29 23:02:42 [INFO] [build.go:100] build image amd64 k8s:v1.22.5 success

4. List the images and deploy

[root@master3 ~]# sealer images 
+---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+
|        IMAGE NAME         |                             IMAGE ID                             | ARCH  | VARIANT |       CREATE        |   SIZE    |
+---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+
| k8s:v1.22.5               | ef293898df6f5a9a01bd5bc5708820ef9ff25acfe56ea20cfe3a45a725f59bb5 | amd64 |         | 2022-06-29 23:02:42 | 1004.76MB |
| kubernetes:v1.22.5        | 46f8c423be130a508116f41cda013502094804525c1274bc84296b674fe17618 | amd64 |         | 2022-06-29 23:02:42 | 956.60MB  |
+---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+


sealer run k8s:v1.22.5 --masters 192.168.200.3,192.168.200.4,192.168.200.5 \
    --nodes 192.168.200.6 \
    --user root \
    --passwd admin
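
Besides the initial sealer run, sealer can also scale the cluster afterwards. The commands below are only a sketch: 192.168.200.7 is a placeholder for a new machine, and flag names can differ between sealer releases, so check sealer --help for your version.

# Add another worker to the running cluster
sealer join --nodes 192.168.200.7

# Remove it again
sealer delete --nodes 192.168.200.7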

5. Check node status

[root@master3 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master3   Ready    master   40h   v1.22.5
master4   Ready    master   40h   v1.22.5
master5   Ready    master   40h   v1.22.5
node6     Ready    <none>   40h   v1.22.5

Check the status of all pods

[root@master3 ~]# kubectl get po --all-namespaces -o wide 
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-57447598b7-46kbs          1/1     Running   5          40h   100.84.137.67    master5   <none>           <none>
calico-apiserver   calico-apiserver-57447598b7-hlws5          1/1     Running   2          40h   100.68.136.4     master3   <none>           <none>
calico-system      calico-kube-controllers-69dfd59986-tb496   1/1     Running   6          40h   100.125.38.209   node6     <none>           <none>
calico-system      calico-node-69dld                          1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
calico-system      calico-node-kdjzs                          1/1     Running   4          40h   192.168.200.3    master3   <none>           <none>
calico-system      calico-node-prktp                          1/1     Running   1          40h   192.168.200.5    master5   <none>           <none>
calico-system      calico-node-tm285                          1/1     Running   3          40h   192.168.200.4    master4   <none>           <none>
calico-system      calico-typha-779b5cfd4c-x4wnz              1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
calico-system      calico-typha-779b5cfd4c-zftxs              1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
kube-system        coredns-55bcc669d7-pvj45                   1/1     Running   2          40h   100.68.136.3     master3   <none>           <none>
kube-system        coredns-55bcc669d7-xkwvm                   1/1     Running   1          40h   100.84.137.68    master5   <none>           <none>
kube-system        etcd-master3                               1/1     Running   2          40h   192.168.200.3    master3   <none>           <none>
kube-system        etcd-master4                               1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
kube-system        etcd-master5                               1/1     Running   2          40h   192.168.200.5    master5   <none>           <none>
kube-system        kube-apiserver-master3                     1/1     Running   3          40h   192.168.200.3    master3   <none>           <none>
kube-system        kube-apiserver-master4                     1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
kube-system        kube-apiserver-master5                     1/1     Running   3          40h   192.168.200.5    master5   <none>           <none>
kube-system        kube-controller-manager-master3            1/1     Running   3          40h   192.168.200.3    master3   <none>           <none>
kube-system        kube-controller-manager-master4            1/1     Running   9          40h   192.168.200.4    master4   <none>           <none>
kube-system        kube-controller-manager-master5            1/1     Running   8          40h   192.168.200.5    master5   <none>           <none>
kube-system        kube-lvscare-node6                         1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
kube-system        kube-proxy-99cjb                           1/1     Running   1          40h   192.168.200.3    master3   <none>           <none>
kube-system        kube-proxy-lmdn6                           1/1     Running   1          40h   192.168.200.4    master4   <none>           <none>
kube-system        kube-proxy-ns9c5                           1/1     Running   1          40h   192.168.200.5    master5   <none>           <none>
kube-system        kube-proxy-xf6fx                           1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
kube-system        kube-scheduler-master3                     1/1     Running   4          40h   192.168.200.3    master3   <none>           <none>
kube-system        kube-scheduler-master4                     1/1     Running   5          40h   192.168.200.4    master4   <none>           <none>
kube-system        kube-scheduler-master5                     1/1     Running   7          40h   192.168.200.5    master5   <none>           <none>
monitoring         alertmanager-main-0                        2/2     Running   2          40h   100.125.38.210   node6     <none>           <none>
monitoring         alertmanager-main-1                        0/2     Pending   0          40h   <none>           <none>    <none>           <none>
monitoring         alertmanager-main-2                        0/2     Pending   0          40h   <none>           <none>    <none>           <none>
monitoring         blackbox-exporter-5c545d55d6-c8997         3/3     Running   3          40h   100.125.38.203   node6     <none>           <none>
monitoring         grafana-785db9984-xhrwx                    1/1     Running   1          40h   100.125.38.204   node6     <none>           <none>
monitoring         kube-state-metrics-54bd6b479c-jvt76        3/3     Running   3          40h   100.125.38.202   node6     <none>           <none>
monitoring         node-exporter-5hl54                        2/2     Running   2          40h   192.168.200.4    master4   <none>           <none>
monitoring         node-exporter-89jbp                        2/2     Running   2          40h   192.168.200.3    master3   <none>           <none>
monitoring         node-exporter-mqm4n                        2/2     Running   2          40h   192.168.200.6    node6     <none>           <none>
monitoring         node-exporter-mx6qr                        2/2     Running   2          40h   192.168.200.5    master5   <none>           <none>
monitoring         prometheus-adapter-7dbf69cc-65hp9          1/1     Running   1          40h   100.125.38.205   node6     <none>           <none>
monitoring         prometheus-adapter-7dbf69cc-xnjv4          1/1     Running   1          40h   100.125.38.207   node6     <none>           <none>
monitoring         prometheus-k8s-0                           2/2     Running   2          40h   100.125.38.208   node6     <none>           <none>
monitoring         prometheus-k8s-1                           0/2     Pending   0          40h   <none>           <none>    <none>           <none>
monitoring         prometheus-operator-54dd69bbf6-h5szm       2/2     Running   2          40h   100.125.38.206   node6     <none>           <none>
tigera-operator    tigera-operator-7cdb76dd8b-hfhrt           1/1     Running   9          40h   192.168.200.6    node6     <none>           <none>
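
With kube-prometheus applied by the image's CMD layers, the simplest way to reach the dashboards is a port-forward from one of the masters. The service names below follow the kube-prometheus defaults; adjust them if yours differ:

# Grafana at http://localhost:3000 (default login is usually admin/admin)
kubectl -n monitoring port-forward svc/grafana 3000:3000

# Prometheus at http://localhost:9090
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090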

From: https://blog.51cto.com/u_14620403/8868177
