
Installing K8S on Ubuntu 22.04


1. Environment

Software versions:

Service             Version
------------------  ------------------------
Operating system    Ubuntu Server 22.04 LTS
Container runtime   containerd.io 1.6.8-1
k8s                 1.24.5
Network plugin      calico 3.24.1

Node layout:

Role            IP              Hostname    Components
--------------  --------------  ----------  ------------------------------------------------------------------------------------
Control plane   192.168.33.181  u-master1   apiserver, controller-manager, scheduler, etcd, kubelet, kube-proxy, calico, containerd
Control plane   192.168.33.182  u-master2   apiserver, controller-manager, scheduler, etcd, kubelet, kube-proxy, calico, containerd
Control plane   192.168.33.183  u-master3   apiserver, controller-manager, scheduler, etcd, kubelet, kube-proxy, calico, containerd
Worker          192.168.33.191  u-node1     kubelet, kube-proxy, calico, coredns, containerd
Worker          192.168.33.192  u-node2     kubelet, kube-proxy, calico, coredns, containerd
Worker          192.168.33.193  u-node3     kubelet, kube-proxy, calico, coredns, containerd
Load balancer   192.168.33.171  u-ha1       keepalived, haproxy
Load balancer   192.168.33.172  u-ha2       keepalived, haproxy
VIP             192.168.33.189



2. Deploying k8s with kubeadm

2.1 Deploy a highly available entry point for the k8s API

2.1.1 Install haproxy

Use haproxy to load-balance the kube-apiserver endpoints.

# Kernel parameter: allow binding to non-local addresses (so haproxy can bind the VIP on the backup node)
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_nonlocal_bind = 1
EOF

# Apply
sysctl -p

# Install haproxy
apt-get update && apt -y install haproxy

# Configure haproxy: append the following sections
cat >> /etc/haproxy/haproxy.cfg <<EOF
listen stats
    mode http
    bind 0.0.0.0:8888
    stats enable
    log global
    stats uri /status
    stats auth haadmin:fgAgh734dsf0

listen kubernetes-api-6443
    bind 192.168.33.189:6443
    mode tcp
    server master1 192.168.33.181:6443 check inter 3s fall 3 rise 3
    server master2 192.168.33.182:6443 check inter 3s fall 3 rise 3
    server master3 192.168.33.183:6443 check inter 3s fall 3 rise 3
EOF

systemctl restart haproxy

2.1.2 Install keepalived

Install keepalived to make haproxy highly available.

# Install keepalived
apt update && apt -y install keepalived

cat >> /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id u-ha1                  # router_id; use u-ha2 on ha2
    script_user root                 # user that runs the health-check script
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"   # health-check script defined below
    interval 1
    weight -30
    fall 3
    rise 2
    timeout 2
}
vrrp_instance VI_1 {
    state MASTER                     # BACKUP on ha2
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 66             # virtual router ID; must be the same on ha1 and ha2
    priority 100                     # 80 on ha2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass GPmH8Ql8           # authentication password; must be the same on ha1 and ha2
    }
    virtual_ipaddress {
        192.168.33.189/24 dev ens33 label ens33:1   # the VIP; must be the same on ha1 and ha2
    }
    track_script {
        check_haproxy                # run the script defined above
    }
}
EOF

cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
date >> \$LOGFILE

counter=\$(ps -C haproxy --no-heading | wc -l)

if [ "\$counter" = "0" ]; then
    echo "failed: check haproxy status!" >> \$LOGFILE
    systemctl restart haproxy
    sleep 2
    counter=\$(ps -C haproxy --no-heading | wc -l)
    if [ "\$counter" = "0" ]; then
        echo "failed after 2s: check haproxy status, stop keepalived" >> \$LOGFILE
        systemctl stop keepalived
    fi
fi
EOF

chmod a+x /etc/keepalived/check_haproxy.sh

systemctl restart keepalived
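A quick sanity check (assuming the ens33 interface and the VIP from the config above): the VIP should be bound on whichever node is currently MASTER, and the keepalived log shows state transitions.

# On the MASTER node the VIP should appear as ens33:1
ip addr show ens33 | grep 192.168.33.189

# Watch keepalived for MASTER/BACKUP transitions
journalctl -u keepalived -f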

2.1.3 Test access

Open the stats page in a browser to verify; the username and password are the ones set in haproxy.cfg.

[Screenshots: haproxy stats page showing the kubernetes-api-6443 backends]
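If no browser is at hand, the same check works with curl against one of the ha nodes (u-ha1 here), using the stats credentials from haproxy.cfg:

# Expect HTML containing the kubernetes-api-6443 backend
curl -s -u haadmin:fgAgh734dsf0 http://192.168.33.171:8888/status | grep kubernetes-api-6443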

2.2 k8s node initialization

2.2.1 Set the hostname and hosts entries

# Set the hostname on every node (use the matching name on each)
hostnamectl hostname u-master1 && bash

# Add hosts entries (append, so the existing localhost entries are kept)
cat >> /etc/hosts <<EOF
192.168.33.181 u-master1
192.168.33.182 u-master2
192.168.33.183 u-master3
192.168.33.191 u-node1
192.168.33.192 u-node2
192.168.33.193 u-node3
EOF

2.2.2 Configure SSH key authentication

Set up passwordless SSH key authentication to make copying files between nodes easier.

# Generate an ssh key pair
ssh-keygen

# Copy the public key to every other node
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node-ip>
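To push the key to all cluster nodes in one go, a small loop such as the following can be used (node list per the table above; adjust as needed):

# Copy the public key to each node in turn
for ip in 192.168.33.181 192.168.33.182 192.168.33.183 \
          192.168.33.191 192.168.33.192 192.168.33.193; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done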

2.2.3 Disable swap

swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab

# or
systemctl disable --now swap.img.swap
systemctl mask swap.target
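A quick optional check that no swap remains active:

# Both should show no active swap
swapon --show
free -h | grep -i swap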

2.2.4 Time synchronization

# Use the chronyd service (package name: chrony) to keep node clocks in sync
apt-get -y install chrony
chronyc sources -v

# Set the timezone to UTC+8
timedatectl set-timezone Asia/Shanghai

2.2.5 Disable the firewall

# Disable the default firewall (ufw)
ufw disable
ufw status

2.2.6 Kernel parameter tuning

For iptables on the Linux nodes to see bridged traffic correctly, net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration.

cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Load the modules manually
modprobe overlay && modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters without rebooting
sysctl --system
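Confirm the values took effect:

# All three should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward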

2.2.7 Enable ipvs

Kubernetes Services support two proxy modes: one based on iptables (rule chains) and one based on ipvs (hash tables). ipvs performs better than iptables, but using it requires the ipvs kernel modules to be loaded manually.

# Install ipset and ipvsadm:
apt install -y ipset ipvsadm

# Configure the modules to load at boot (modules-load.d expects bare module names)
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF


# Load the modules now
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Alternatively, add the ipvs-related modules to /etc/modules so they load at boot
cat >> /etc/modules <<EOF
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack
EOF
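Confirm the modules are actually loaded:

# Expect ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack in the output
lsmod | grep -e ip_vs -e nf_conntrack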

2.3 Install containerd on the k8s nodes

2.3.1 Add the Aliyun containerd repository

# Add the repository key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -

# Add the Aliyun mirror repository
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Confirm it was added
cat /etc/apt/sources.list.d/archive_uri-https_mirrors_aliyun_com_docker-ce_linux_ubuntu-jammy.list

# Refresh the package lists
sudo apt update -y

2.3.2 Install and configure containerd

# Install
apt install -y containerd.io

# Back up any existing config and generate the default containerd config
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
containerd config default | tee /etc/containerd/config.toml
systemctl daemon-reload && systemctl restart containerd.service
systemctl status containerd

# Point sandbox_image at the Aliyun google_containers mirror
sed -i "s#k8s.gcr.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.8#g" /etc/containerd/config.toml

# Switch to the systemd cgroup driver
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml

# Add a registry mirror endpoint for docker.io
sed -i '/registry.mirrors]/a\ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' /etc/containerd/config.toml
sed -i '/registry.mirrors."docker.io"]/a\ \ \ \ \ \ \ \ \ \ endpoint = ["https://ul2pzi84.mirror.aliyuncs.com"]' /etc/containerd/config.toml

# Reload and restart containerd
systemctl daemon-reload && systemctl restart containerd
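A quick check that the edits are in place and containerd is healthy:

# Verify the sandbox image, SystemdCgroup and mirror settings, and the service state
grep -E 'sandbox_image|SystemdCgroup|endpoint' /etc/containerd/config.toml
systemctl is-active containerd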

2.4 Install the k8s components

Install kubeadm, kubelet, and kubectl.

# Configure the Aliyun mirror repository
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

# List available versions
apt-cache madison kubeadm|head
# Sample output
kubeadm | 1.25.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.25.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.25.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

# Install the pinned version
apt install -y kubeadm=1.24.5-00 kubelet=1.24.5-00 kubectl=1.24.5-00
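Optionally, hold the pinned packages so a routine apt upgrade does not move them off 1.24.5:

# Prevent unattended upgrades of the pinned packages
apt-mark hold kubeadm kubelet kubectl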

# Configure crictl
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
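A quick sanity check that crictl can reach containerd through the socket configured above:

# Both commands should respond without connection errors
crictl version
crictl ps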

2.5 Pull the images

# List the required images, using the Aliyun mirror
kubeadm config images list \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.24.5

# Pull the images for the pinned version
kubeadm config images pull \
--kubernetes-version=v1.24.5 \
--image-repository registry.aliyuncs.com/google_containers

# List the pulled images
crictl images

2.6 Initialize the Kubernetes cluster

Deploy k8s using a configuration file.

2.6.1 Prepare the configuration file

# Generate the default config as a starting point for editing
kubeadm config print init-defaults > kubeadm.yaml

Edit the configuration file kubeadm.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: tupbjm.avx20bcrz2zd2h58     # customizable; must match ([a-z0-9]{6}).([a-z0-9]{16})
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.33.181   # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: u-master1                    # this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:                          # control-plane node IPs and the VIP
  - 192.168.33.181
  - 192.168.33.182
  - 192.168.33.183
  - 192.168.33.189
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # domestic mirror
kind: ClusterConfiguration
kubernetesVersion: 1.24.5                    # pinned version
controlPlaneEndpoint: "192.168.33.189:6443"  # the HA entry point (VIP)
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16                   # add the pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# use ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
# set the cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
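Before the real init, the file can be exercised with a dry check; a minimal sketch (assuming the file is saved as /root/kubeadm.yaml, as used in 2.6.2):

# Resolve the images from the config (verifies the file parses and repo/version are consistent)
kubeadm config images list --config /root/kubeadm.yaml

# Run only the preflight checks against this config
kubeadm init phase preflight --config /root/kubeadm.yaml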

2.6.2 Install the first control-plane node

Initialize:

kubeadm init \
--config /root/kubeadm.yaml \
--ignore-preflight-errors=SystemVerification \
--upload-certs # upload the control-plane certificates to the kubeadm-certs Secret

The following output indicates success:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.33.189:6443 --token tupbjm.avx20bcrz2zd2h58 \
--discovery-token-ca-cert-hash sha256:a762edd05d31a0a58bd4884f1959922a3577375ebd3936cff401dc6c71a44b96 \
--control-plane --certificate-key 6e3224ed304753f6ff52f49b35b8cff1bd79039bffb75064522298303364c542

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.33.189:6443 --token tupbjm.avx20bcrz2zd2h58 \
--discovery-token-ca-cert-hash sha256:a762edd05d31a0a58bd4884f1959922a3577375ebd3936cff401dc6c71a44b96

As the output indicates, to use kubectl as a regular user on the master node, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
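With the kubeconfig in place, take a first look at the cluster (the node stays NotReady until the network plugin is installed in 2.6.4):

kubectl get nodes
kubectl get pods -n kube-system -o wide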

2.6.3 Join the worker nodes

Run the join command on each worker node:

kubeadm join 192.168.33.189:6443 --token tupbjm.avx20bcrz2zd2h58 \
--discovery-token-ca-cert-hash sha256:a762edd05d31a0a58bd4884f1959922a3577375ebd3936cff401dc6c71a44b96
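If the token from the init output has expired (ttl: 24h0m0s in kubeadm.yaml), a fresh worker join command can be generated on u-master1:

# Prints a ready-to-run "kubeadm join ..." command for worker nodes
kubeadm token create --print-join-command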

2.6.4 Install the network plugin

# Use calico
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml -o calico-3-24-1.yaml
kubectl apply -f calico-3-24-1.yaml
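The manifest creates a calico-node DaemonSet and a calico-kube-controllers Deployment in kube-system; readiness can be watched roughly like this:

# Wait for the calico pods to become Ready, after which the nodes should turn Ready too
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers
kubectl get nodes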

2.6.5 Additional control-plane nodes

Run the control-plane join command on u-master2 and u-master3:

kubeadm join 192.168.33.189:6443 --token tupbjm.avx20bcrz2zd2h58 \
--discovery-token-ca-cert-hash sha256:a762edd05d31a0a58bd4884f1959922a3577375ebd3936cff401dc6c71a44b96 \
--control-plane --certificate-key 6e3224ed304753f6ff52f49b35b8cff1bd79039bffb75064522298303364c542
root@u-master1:/# kubectl get node
NAME        STATUS   ROLES           AGE   VERSION
u-master1   Ready    control-plane   62m   v1.24.5
u-master2   Ready    control-plane   60m   v1.24.5
u-master3   Ready    control-plane   59m   v1.24.5
u-node1     Ready    <none>          13m   v1.24.5
u-node2     Ready    <none>          12m   v1.24.5
u-node3     Ready    <none>          12m   v1.24.5

2.6.6 etcd high availability

On u-master1 and u-master2, edit the "- --initial-cluster" parameter in /etc/kubernetes/manifests/etcd.yaml so that it lists all three members, then restart kubelet.

- --initial-cluster=u-master2=https://192.168.33.182:2380,u-master3=https://192.168.33.183:2380,u-master1=https://192.168.33.181:2380

List the etcd members:

etcdctl \
--endpoints=https://192.168.33.181:2379,https://192.168.33.182:2379,https://192.168.33.183:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
member list
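kubeadm does not install etcdctl on the host; if it is missing, the same command can be run inside the etcd static pod (named etcd-<hostname>), for example:

# Run etcdctl from within the etcd static pod on u-master1
kubectl -n kube-system exec etcd-u-master1 -- etcdctl \
  --endpoints=https://192.168.33.181:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list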
Backup & restore:

# Take a snapshot on u-master1
etcdctl \
--endpoints=https://192.168.33.181:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save etcd_snap_save.db

# Restore on u-master1
## Move the manifest aside and delete the data
mv /etc/kubernetes/manifests/etcd.yaml /opt/
rm -rf /var/lib/etcd/

ETCDCTL_API=3 etcdctl snapshot restore etcd_snap_save.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--data-dir=/var/lib/etcd/ \
--endpoints=https://127.0.0.1:2379,https://192.168.33.181:2379 \
--initial-cluster=u-master2=https://192.168.33.182:2380,u-master3=https://192.168.33.183:2380,u-master1=https://192.168.33.181:2380 \
--name=u-master1 \
--initial-advertise-peer-urls=https://192.168.33.181:2380

#### Put the manifest back
mv /opt/etcd.yaml /etc/kubernetes/manifests/

# Restore on u-master2 (copy etcd_snap_save.db to this node first)
## Move the manifest aside and delete the data
mv /etc/kubernetes/manifests/etcd.yaml /opt/
rm -rf /var/lib/etcd/

ETCDCTL_API=3 etcdctl snapshot restore etcd_snap_save.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--data-dir=/var/lib/etcd/ \
--endpoints=https://127.0.0.1:2379,https://192.168.33.182:2379 \
--initial-cluster=u-master2=https://192.168.33.182:2380,u-master3=https://192.168.33.183:2380,u-master1=https://192.168.33.181:2380 \
--name=u-master2 \
--initial-advertise-peer-urls=https://192.168.33.182:2380

#### Put the manifest back
mv /opt/etcd.yaml /etc/kubernetes/manifests/

# Restore on u-master3 (copy etcd_snap_save.db to this node first)
## Move the manifest aside and delete the data
mv /etc/kubernetes/manifests/etcd.yaml /opt/
rm -rf /var/lib/etcd/

ETCDCTL_API=3 etcdctl snapshot restore etcd_snap_save.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--data-dir=/var/lib/etcd/ \
--endpoints=https://127.0.0.1:2379,https://192.168.33.183:2379 \
--initial-cluster=u-master2=https://192.168.33.182:2380,u-master3=https://192.168.33.183:2380,u-master1=https://192.168.33.181:2380 \
--name=u-master3 \
--initial-advertise-peer-urls=https://192.168.33.183:2380

#### Put the manifest back
mv /opt/etcd.yaml /etc/kubernetes/manifests/

Other commands:

# Check endpoint status
ETCDCTL_API=3 etcdctl --write-out=table endpoint status \
--endpoints=https://192.168.33.181:2379,https://192.168.33.182:2379,https://192.168.33.183:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key

# Check etcd endpoint health
ETCDCTL_API=3 etcdctl --write-out=table endpoint health \
--endpoints=https://192.168.33.181:2379,https://192.168.33.182:2379,https://192.168.33.183:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key

2.6.7 Test

Start a busybox container to test DNS resolution and network access:

root@u-master1:~# kubectl run busybox -it --image=busybox:1.28 --image-pull-policy='IfNotPresent' --restart=Never --rm -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # ping kubernetes.default.svc.cluster.local -c 3
PING kubernetes.default.svc.cluster.local (10.96.0.1): 56 data bytes
64 bytes from 10.96.0.1: seq=0 ttl=64 time=0.032 ms
64 bytes from 10.96.0.1: seq=1 ttl=64 time=0.086 ms
64 bytes from 10.96.0.1: seq=2 ttl=64 time=0.114 ms

--- kubernetes.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.032/0.077/0.114 ms
/ # nslookup www.baidu.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: www.baidu.com
Address 1: 180.101.49.11
Address 2: 180.101.49.12
/ # ping www.baidu.com -c 3
PING www.baidu.com (180.101.49.11): 56 data bytes
64 bytes from 180.101.49.11: seq=0 ttl=127 time=28.027 ms
64 bytes from 180.101.49.11: seq=1 ttl=127 time=27.512 ms
64 bytes from 180.101.49.11: seq=2 ttl=127 time=27.656 ms

--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 27.512/27.731/28.027 ms
/ #

From: https://blog.51cto.com/belbert/5872146
