
Installing Kubernetes 1.26


1. Environment Preparation

Role           IP              Hostname  Components                                                                               Spec
Control plane  192.168.10.10   master    apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico, containerd  2 cores / 4 GB
Worker         192.168.10.11   node1     kubelet-1.26, kube-proxy, docker, calico, coredns, containerd                           2 cores / 4 GB
Worker         192.168.10.12   node2     kubelet-1.26, kube-proxy, docker, calico, coredns, containerd                           2 cores / 4 GB

1.1 Basic environment configuration

# Run on both the control-plane and worker nodes
# 1. Set the hostname
hostnamectl set-hostname master

# 2. Add host entries to /etc/hosts
192.168.10.10 master
192.168.10.11 node1
192.168.10.12 node2

# 3. Set up passwordless SSH to the worker nodes
ssh-keygen -t rsa
ssh-copy-id node1
ssh-copy-id node2

# 4. Disable swap
swapoff -a  # temporary only; see the fstab tweak below to make it persistent
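To keep swap disabled across reboots, comment out the swap entry in /etc/fstab (a minimal sketch; adjust if your fstab layout differs):

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every line that mentions swap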

# 5. Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
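Optionally verify that the module is loaded and the parameters took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward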

# 6. Disable the firewall
systemctl stop firewalld ; systemctl disable firewalld

# 7. Disable SELinux; after editing the config file, reboot for the change to take effect
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
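To also disable SELinux for the current session without waiting for a reboot:

setenforce 0   # takes effect immediately; the config change above makes it permanent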

# 8. Configure the Aliyun yum repositories (EPEL and Docker CE)
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast

# 9. Configure the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 10. Synchronize time and schedule a periodic sync
yum install ntpdate -y
ntpdate time1.aliyun.com
# cron entry for hourly sync (see below for installing it):
* */1 * * * /usr/sbin/ntpdate time1.aliyun.com
systemctl restart crond
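One non-interactive way to install the cron entry above (a sketch; you can also just add the line with crontab -e):

(crontab -l 2>/dev/null; echo "* */1 * * * /usr/sbin/ntpdate time1.aliyun.com") | crontab -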

1.2 Install base packages

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

1.3 Install containerd

# 1. Install the containerd service
yum -y install containerd

# 2. Generate the default containerd configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# 3. Edit the configuration file
vim /etc/containerd/config.toml
SystemdCgroup = true   # change false to true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"   # if unsure of the pause version, check the output of "kubeadm config images list --config=kubeadm.yaml" later and adjust
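If you prefer to make these two edits non-interactively, a sed sketch (same file path and image tag as above):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml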

# 4. Enable containerd at boot and start it
systemctl enable containerd --now

# 5. Create /etc/crictl.yaml
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd

# 6. Configure a registry mirror
# Edit /etc/containerd/config.toml and set:
config_path = "/etc/containerd/certs.d"

mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"   # upstream registry used as fallback

[host."https://pft7f97f.mirror.aliyuncs.com"]
  capabilities = ["pull"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]

[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull"]

systemctl restart containerd
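A quick way to confirm that crictl and the mirror are wired up correctly (the busybox image is just an example):

crictl pull docker.io/library/busybox:latest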

1.4 Install Docker (used as a helper for building images)

# 1. Install Docker and enable it at boot
yum install docker-ce -y
systemctl enable docker --now 

# 2. Configure registry mirrors
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors":["https://pft7f97f.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker
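To confirm that Docker picked up the mirrors:

docker info | grep -A 4 "Registry Mirrors"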

2. Installing Kubernetes

2.1 Install the Kubernetes packages

# 1. Install the Kubernetes packages (on both master and worker nodes)
yum install -y kubelet-1.26.7 kubeadm-1.26.7 kubectl-1.26.7
systemctl enable kubelet

Note: what each package does
kubeadm: the tool used to bootstrap (initialize) the Kubernetes cluster
kubelet: installed on every node; it is what actually starts Pods. With a kubeadm-built cluster, both control-plane and worker components run as Pods, so any node that runs Pods needs the kubelet
kubectl: the CLI used to deploy and manage applications, inspect resources, and create, delete, and update components
(A quick way to confirm the installed versions follows below.)
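Version check after installation:

kubeadm version -o short
kubelet --version
kubectl version --client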

2.2 Generate the kubeadm init configuration

# Set up a single control-plane cluster
# 1. Set the container runtime endpoint for crictl (master and nodes)
crictl config runtime-endpoint unix:///run/containerd/containerd.sock

# 2. Generate a default init configuration (on the master)
kubeadm config print init-defaults > kubeadm.yaml

# 3. Adjust the configuration as needed
[root@master ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # control-plane node IP and port
  advertiseAddress: 192.168.10.10
  bindPort: 6443
nodeRegistration:
  # use the containerd container runtime
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Aliyun image registry and Kubernetes version
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.7
networking:
  dnsDomain: cluster.local
  # Pod and Service CIDRs
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# added: enable IPVS mode for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
# added: use the systemd cgroup driver for the kubelet
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
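The configuration above switches kube-proxy to IPVS mode, which needs the IPVS kernel modules. A minimal sketch to load them now and at boot (on the CentOS 7 kernel the conntrack module is nf_conntrack_ipv4; on newer kernels use nf_conntrack):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack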

2.3 Initialize the cluster

List the images that will be pulled:

[root@master ~]# kubeadm config images list --config=kubeadm.yaml 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Pull the images:

[root@master ~]# kubeadm config images pull --config=kubeadm.yaml 
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Initialize the control-plane node:

[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.26.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.10.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.512119 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:c88d0ee03c6f2bf28a387899713d0f965f4742f5e4e96cb836faba4611ace553

Alternatively, run kubeadm init directly with flags instead of a config file:
kubeadm init --kubernetes-version=1.26.7 --apiserver-advertise-address=192.168.10.10 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket /run/containerd/containerd.sock --ignore-preflight-errors=SystemVerification

On the control-plane node, set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.4 Join the worker nodes

# Join a worker node to the cluster:
[root@node1 ~]# kubeadm join 192.168.10.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c88d0ee03c6f2bf28a387899713d0f965f4742f5e4e96cb836faba4611ace553
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# If the token expires later, generate a new join command on the master:
kubeadm token create --print-join-command
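If only the discovery hash is needed (for example to rebuild a join command by hand), it can also be derived from the cluster CA on the master using the standard openssl pipeline:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'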
# Images present on node1 after joining:
[root@node1 ~]# crictl images
IMAGE                                                            TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/pause                    3.9                 e6f1816883972       322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.26.7             1e7eac3bc5c0b       21.8MB

Check the nodes:

# Run on the control-plane node:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   14m     v1.26.7
node1    NotReady   <none>          3m13s   v1.26.7

# On the master, label node1 as a worker node
[root@master ~]# kubectl label nodes node1 node-role.kubernetes.io/work=work
node/node1 labeled

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   18m     v1.26.7
node1    NotReady   work            7m33s   v1.26.7

3. Installing the Calico network plugin

Calico's supported Kubernetes versions: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements

Calico manifest download instructions: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises

Download: curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico-typha.yaml -o calico.yaml

# Edit calico.yaml. IP_AUTODETECTION_METHOD controls how the node IP is detected; by default the IP of the
# first network interface is used. On nodes with multiple NICs, a regular expression can pick the right one,
# e.g. "interface=eth.*" selects interfaces whose names start with "eth".
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"

[root@master home]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
poddisruptionbudget.policy/calico-typha created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
service/calico-typha created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
deployment.apps/calico-typha created

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   65m   v1.26.7
node1    Ready    work            54m   v1.26.7
[root@master home]# crictl images
IMAGE                                                                         TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                                          v3.26.1             9dee260ef7f59       93.4MB
docker.io/calico/node                                                         v3.26.1             8065b798a4d67       86.6MB
registry.aliyuncs.com/google_containers/pause                                 3.9                 e6f1816883972       322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.9.3              5185b96f0becf       14.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.6-0             fce326961ae2d       103MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.26.7             6ac727c486d08       36.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.26.7             17314033c0a0b       32.9MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.26.7             1e7eac3bc5c0b       21.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.26.7             c1902187a39f8       18MB

[root@master home]# kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-949d58b75-269j4   1/1     Running   0          9m54s   10.244.166.129   node1    <none>           <none>
calico-node-6xcvq                         1/1     Running   0          9m54s   192.168.10.10    master   <none>           <none>
calico-node-s2tqg                         1/1     Running   0          9m54s   192.168.10.11    node1    <none>           <none>
calico-typha-7575dd9f6f-gxm6l             1/1     Running   0          9m54s   192.168.10.11    node1    <none>           <none>
coredns-567c556887-4wh8v                  1/1     Running   0          3h      10.244.166.131   node1    <none>           <none>
coredns-567c556887-7j2sq                  1/1     Running   0          3h      10.244.166.130   node1    <none>           <none>
etcd-master                               1/1     Running   0          3h      192.168.10.10    master   <none>           <none>
kube-apiserver-master                     1/1     Running   0          3h      192.168.10.10    master   <none>           <none>
kube-controller-manager-master            1/1     Running   0          3h      192.168.10.10    master   <none>           <none>
kube-proxy-hfdjj                          1/1     Running   0          3h      192.168.10.10    master   <none>           <none>
kube-proxy-kspmr                          1/1     Running   0          169m    192.168.10.11    node1    <none>           <none>
kube-scheduler-master                     1/1     Running   0          3h      192.168.10.10    master   <none>           <none>

4. Testing cluster network access

# Import the busybox image into containerd's k8s.io namespace on the node
ctr -n=k8s.io images import busybox.tar.gz

# Run on the master node
[root@master home]# kubectl run busybox --image docker.io/library/busybox:latest --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=34.621 ms
64 bytes from 110.242.68.66: seq=1 ttl=127 time=32.764 ms

/ # nslookup  kubernetes.default.svc.cluster.local 
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
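As an optional extra check of Pod-to-Service networking after exiting the busybox shell, a sketch only (the nginx-test deployment and service names are made up for this test and are not part of the setup above):

kubectl create deployment nginx-test --image=nginx:alpine
kubectl expose deployment nginx-test --port=80
kubectl run curl-test --image=docker.io/library/busybox:latest --image-pull-policy=IfNotPresent --restart=Never --rm -it -- wget -qO- http://nginx-test.default.svc.cluster.local
kubectl delete deployment nginx-test && kubectl delete service nginx-test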

From: https://www.cnblogs.com/yangmeichong/p/17651413.html
