# Setting Up an EdgeMark Test Environment from Scratch
KubeEdge provides EdgeMark, a tool similar to KubeMark for simulating large-scale clusters. Note that EdgeMark can currently only simulate edgecore, not edgemesh, so for anything network-related you are still better off setting up real virtual machines.
## Environment Configuration
The simulated network environment is built with VirtualBox. Each VM has two network adapters: adapter 1 uses NAT to reach the Internet, and adapter 2 is on a host-only network (192.168.56.0/24) that forms the LAN.
Software versions: due to compatibility constraints we use Kubernetes 1.23 and KubeEdge 1.13.1.
| Hostname | IP | OS |
| --------------- | ------------ | ----------- |
| master-KubeEdge | 192.168.56.2 | Ubuntu20.04 |
| master-K8s | 192.168.56.3 | Ubuntu20.04 |
Pods in the master-K8s cluster need to be able to reach master-KubeEdge directly, so some network configuration would seem to be required (in practice it is not, probably because KubeEdge's built-in cloud-edge communication tunnel gets through on its own).
We use the flannel network plugin inside the master-KubeEdge cluster, while the external master-K8s cluster keeps its own CNI and can reach the host network IP 192.168.56.2 directly.
**Strangely enough, I have not figured out why, but it just works.**
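Once both clusters are up (see the following sections), you can sanity-check this reachability claim from master-K8s with something like the following. This is a minimal sketch; the busybox image and the pod name are arbitrary placeholders, not part of the original setup:

```bash
# From the master-K8s host: plain reachability of the KubeEdge master
ping -c 3 192.168.56.2

# From inside the master-K8s cluster: run a throwaway pod and ping the
# KubeEdge master's host-only address to verify pod-to-host connectivity
kubectl run nettest --rm -it --image=busybox --restart=Never -- ping -c 3 192.168.56.2
```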
## Initial System Configuration
Set the hostname on all hosts
```bash
sudo hostnamectl set-hostname master
reboot
```
Disable the firewall on all hosts
```bash
sudo systemctl stop ufw
sudo systemctl disable ufw
```
Disable swap on all hosts
```bash
sudo vi /etc/fstab
# comment out the swap line
sudo swapon -a   # enable all swap
sudo swapoff -a  # disable all swap
sudo swapon -s   # check swap status
```
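If you prefer not to edit /etc/fstab by hand, a sed one-liner does the same job. This is a small sketch of my own, assuming the swap entry in fstab actually contains the word "swap":

```bash
# Comment out any fstab line that mounts swap, keep a backup, then turn swap off now
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
sudo swapoff -a
```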
Set up time synchronization on all hosts
```bash
sudo apt install -y ntpdate
sudo ntpdate time.windows.com
sudo timedatectl set-timezone Asia/Shanghai
```
Add hosts entries on all nodes
```bash
sudo vi /etc/hosts
# add the following line
185.199.108.133 raw.githubusercontent.com
```
Enable IPv4 forwarding
```bash
sudo vi /etc/sysctl.conf
# uncomment (or add) this line in /etc/sysctl.conf:
#   net.ipv4.ip_forward = 1
sudo sysctl -p /etc/sysctl.conf
```
## Installing Docker
We install the docker.io package from Ubuntu's repositories, which is the least hassle.
Remember: Docker has to be installed on both nodes.
```bash
sudo apt install docker.io
```
The official Docker registry can be slow to reach, so a domestic Docker Hub mirror can be configured as an accelerator.
```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://knjsrl1b.mirror.aliyuncs.com","https://docker.hub.com"]
}
EOF
# Aliyun mirror accelerator: https://knjsrl1b.mirror.aliyuncs.com
# USTC mirror accelerator:   https://docker.mirrors.ustc.edu.cn
# On the cloud node, also add "exec-opts": ["native.cgroupdriver=systemd"],
# On edge nodes the default cgroupfs is fine, which matches KubeEdge
sudo systemctl daemon-reload
sudo systemctl restart docker
```
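After restarting Docker you can confirm that the mirror and the cgroup driver were actually picked up. This is just a read-only check, nothing KubeEdge-specific:

```bash
# Both values come straight from the daemon's effective configuration
sudo docker info | grep -A 3 "Registry Mirrors"
sudo docker info | grep "Cgroup Driver"
```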
## Installing Kubernetes on the Hosts
For compatibility we install Kubernetes 1.23.17.
Following Aliyun's instructions, install the kubelet, kubeadm and kubectl components from the Aliyun mirror on both hosts.
```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo vi /etc/apt/sources.list.d/kubernetes.list
# put the following line in the file:
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
sudo apt update
# you can run `apt list kubelet -a` first to list available versions, then pin one
sudo apt install -y kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00
```
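It is also common practice (optional, not part of the original steps) to pin these packages so an unattended upgrade cannot move the cluster onto a different version:

```bash
# Hold kubelet/kubeadm/kubectl at 1.23.17 until you deliberately upgrade
sudo apt-mark hold kubelet kubeadm kubectl
```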
On the cloud master hosts, create the Kubernetes cluster with kubeadm. We again use the Aliyun image repository to speed things up; kubeadm installs the Kubernetes version that matches its own.
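If you want to front-load the image downloads, the control-plane images can be pulled ahead of time. This is optional; the version and repository here simply mirror the init command below:

```bash
# Pre-pull the control-plane images from the Aliyun mirror
sudo kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.17
```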
For master-KubeEdge, initialize as follows:
```bash
sudo kubeadm init \
  --apiserver-advertise-address=192.168.56.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
When it finishes, it prints a number of follow-up commands for us to run:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.2:6443 --token iwmho3.xxbh31t7134rc2zi \
	--discovery-token-ca-cert-hash sha256:ac80680b9eec7d102751d55a99a07e6c9a7b2022abb68a75fbcb6b88cbb3978c
```
For master-K8s, initialize as follows:
```bash
sudo kubeadm init \
  --apiserver-advertise-address=192.168.56.3 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
Again, when it finishes it prints the follow-up commands:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.3:6443 --token aekzd4.fp0e0y3q67gwhp11 \
	--discovery-token-ca-cert-hash sha256:0e2a61b06cee8ed5e4067b3b0d49adce92a04432b53a1935fb5d6daab8a1d700
```
Following the prompts, run these once as the regular user and once as root, so that kubectl can talk to the local kube-apiserver:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Next install the CNI network plugin. If the download is too slow, look up the IP address of raw.githubusercontent.com (e.g. with an IP-lookup site) and add it to the hosts file as above.
```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
vi kube-flannel.yml
# (or, after deployment: kubectl edit -n kube-flannel daemonset.apps/kube-flannel-ds)
# For master-KubeEdge, extend the affinity section so flannel is not scheduled
# onto edge nodes, by adding one more key:
#   - key: node-role.kubernetes.io/edge
#     operator: DoesNotExist
kubectl apply -f kube-flannel.yml
```
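For reference, after the edit the nodeAffinity of the kube-flannel-ds DaemonSet looks roughly like this. This is a sketch based on the upstream kube-flannel.yml current at the time; the exact field layout may differ slightly between flannel versions:

```yaml
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  # added: keep flannel off KubeEdge edge nodes
                  - key: node-role.kubernetes.io/edge
                    operator: DoesNotExist
```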
Run kubectl get pods -n kube-flannel; output like the following means the network plugin is working:
```
...
kube-flannel-ds-hgn9l   1/1   Running   0   44m
...
```
To let the master also act as a worker node and run user pods:
```bash
kubectl taint node master node-role.kubernetes.io/master-
```
To keep the master from running user pods:
```bash
kubectl taint node master node-role.kubernetes.io/master=:NoSchedule
```
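A quick way to check which of the two states the node is currently in (read-only, nothing is changed):

```bash
# Shows the taints currently applied to the master node
kubectl describe node master | grep Taints
```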
After a short while, run kubectl get nodes on the master host; output like this means the node is up:
```
kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   13m   v1.22.15
```
## Installing KubeEdge
Like Kubernetes, KubeEdge provides the keadm tool for quickly setting up a KubeEdge cluster. Download keadm 1.13.1 ahead of time from the KubeEdge GitHub releases page; a download manager helps with speed.
```bash
wget https://github.com/kubeedge/kubeedge/releases/download/v1.13.1/keadm-v1.13.1-linux-amd64.tar.gz
```
Install keadm on the master-KubeEdge node:
```bash
tar -xvf keadm-v1.13.1-linux-amd64.tar.gz
sudo mv keadm-v1.13.1-linux-amd64/keadm/keadm /usr/bin/
```
Use keadm to install cloudcore, the KubeEdge cloud-side component.
If the download is slow, the cloudcore image can be pulled in advance:
```bash
sudo docker pull kubeedge/cloudcore:v1.13.1
```
```bash
sudo keadm init --advertise-address=192.168.56.2 --profile version=v1.13.1
# these flags are no longer used: --set cloudcore-tag=v1.13.1 --kubeedge-version=1.13.1
```
```
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
NAME: cloudcore
LAST DEPLOYED: Thu Nov 3 11:05:24 2022
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1
```
--advertise-address=xxx.xx.xx.xx should be replaced with the cloud host's externally reachable address, and --profile version=v1.13.1 pins the KubeEdge version to install; if it is omitted, keadm downloads the latest release.
Note that this command pulls the cloudcore container image from the registry.
We can see that the cloudcore Pod and Service are running; cloudcore listens on local ports 10000-10004.
```
kubectl get pod,svc -n kubeedge
NAME                             READY   STATUS    RESTARTS   AGE
pod/cloudcore-5768d46f8d-fqdcn   1/1     Running   0          78s

NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                             AGE
service/cloudcore   ClusterIP   10.99.61.17   <none>        10000/TCP,10001/TCP,10002/TCP,10003/TCP,10004/TCP   78s
```
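If anything looks off, the cloudcore logs are the first place to check (read-only, safe to run at any time):

```bash
# Tail the cloudcore deployment's logs in the kubeedge namespace
kubectl -n kubeedge logs deploy/cloudcore --tail=20
```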
Get the token that edge devices use to join:
```bash
sudo keadm gettoken
56f840b308cdb7675acbf25e77eab230dde06513162692ff62e49fa30093fda6.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTMyODAyNjF9.QTd-Uz6gcnlLP4t7bljCEOlSy3Ywnp3nsX6_Bwd1vuo
```
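The same token also lives in the tokensecret Secret in the kubeedge namespace (the Secret and its tokendata key are the ones exported for EdgeMark later on), so it can equally be read with kubectl:

```bash
# Decode the edge-join token straight from the Secret
kubectl get secret tokensecret -n kubeedge -o jsonpath='{.data.tokendata}' | base64 -d
```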
## Building EdgeMark
To work around slow alpine package downloads, edit kubeedge/build/edgemark/Dockerfile to use a domestic mirror. The final file looks like this:
```dockerfile
ARG BUILD_FROM=golang:1.17.13-alpine3.16

FROM ${BUILD_FROM} AS builder

ARG GO_LDFLAGS

COPY . /go/src/github.com/kubeedge/kubeedge

# add this line
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories

RUN apk --no-cache update && \
    apk --no-cache upgrade && \
    apk --no-cache add build-base linux-headers sqlite-dev binutils-gold && \
    CGO_ENABLED=1 GO111MODULE=off go build -v -o /usr/local/bin/edgemark -ldflags="${GO_LDFLAGS} -w -s -extldflags -static" \
    github.com/kubeedge/kubeedge/edge/cmd/edgemark

FROM alpine:3.16

COPY --from=builder /usr/local/bin/edgemark /usr/local/bin/edgemark

ENTRYPOINT ["edgemark"]
```
The download and build must be done as root, otherwise git errors out.
First install the required packages:
```bash
apt install git build-essential docker.io jq
```
Remember to switch Docker to a domestic mirror as above.
Clone the latest repository:
```bash
git clone https://github.com/kubeedge/kubeedge.git
cd kubeedge
```
Build:
```bash
# build cloudcore
make all WHAT=cloudcore
# build edgecore
make all WHAT=edgecore
# build edgemark
make all WHAT=edgemark
# build the edgemark container image
make image WHAT=edgemark
```
Build the container image:
```bash
make image WHAT=edgemark
```
Delete the stray space in the middle of line 229 of kubeedge/hack/lib/golang.sh; after that the build succeeds. The fixed line reads:
```bash
goldflags="${GOLDFLAGS=-s -w -buildid=}$(kubeedge::version::ldflags)"
```
make verify generates the Dockerfile packaging files under the build folder:
```bash
make verify
```
make test runs tests against the built binaries:
```bash
make test
```
Install the ginkgo test framework:
```bash
go install github.com/onsi/ginkgo/v2/ginkgo
go get github.com/onsi/gomega/...
```
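go install drops the binary into $(go env GOPATH)/bin, which may not be on PATH yet. A small convenience step, not part of the original write-up:

```bash
# Make the freshly installed ginkgo binary findable
export PATH=$PATH:$(go env GOPATH)/bin
ginkgo version
```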
Integration tests:
```bash
make integrationtest
```
## Deploying Hollow Nodes
Following the official KubeEdge guide, run the following on master-KubeEdge to export the tokensecret:
```bash
# save the tokensecret used to access cloudcore
kubectl get secret -nkubeedge tokensecret -oyaml > tokensecret.yaml
# rewrite the namespace inside tokensecret.yaml
sed -i "s|namespace: .*|namespace: edgemark|g" tokensecret.yaml
# alternatively, delete the namespace line and use the secret from the default namespace
```
Copy tokensecret.yaml to the external master-K8s:
```bash
scp tokensecret.yaml [email protected]:~
```
Then, on the external master-K8s:
```bash
kubectl create ns edgemark
kubectl create -f tokensecret.yaml
```
Create the hollow nodes from a yaml file. The official example can be used as a starting point; ours is modified as follows:
```bash
vi hollow-edge-node_template.yaml
```
with the following content:
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hollow-edge-node
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hollow-edge-node
  template:
    metadata:
      labels:
        app: hollow-edge-node
    spec:
      containers:
      - name: hollow-edgecore
        image: kubeedge/edgemark:v1.13.1
        command:
        - edgemark
        args:
        - --token=$(TOKEN)
        - --name=$(NODE_NAME)
        - --http-server=https://192.168.56.2:10002
        - --websocket-server=192.168.56.2:10000
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: tokensecret
              key: tokendata
        resources:
          requests:
            cpu: 20m
            memory: 50M
        securityContext:
          privileged: true
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
```
In the official template:
- The parameter {{numreplicas}} is the number of hollow nodes in the edgemark cluster.
- The parameter {{server}} is the address cloudcore exposes, where the edge nodes join.
- The placeholders {{server}}, {{numreplicas}}, {{edgemark_image_registry}} and {{edgemark_image_tag}} all need to be filled in (see the sketch after this list).
- Your real cluster must have enough resources to run {{numreplicas}} hollow-node pods.
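If you start from the upstream template instead of the hand-edited yaml above, filling the placeholders can be done in one pass, roughly like this. This is a sketch; the template file name and placeholder spellings are taken from the list above and should be checked against the official guide:

```bash
# Substitute the template placeholders with the values used in this setup
sed -e "s|{{numreplicas}}|10|g" \
    -e "s|{{server}}|192.168.56.2|g" \
    -e "s|{{edgemark_image_registry}}|kubeedge|g" \
    -e "s|{{edgemark_image_tag}}|v1.13.1|g" \
    hollow-edge-node_template.yaml > hollow-edge-node.yaml
```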
Finally, create the hollow-node pods:
```bash
kubectl apply -f hollow-edge-node_template.yaml
```
You can see all the pods are running:
```
kubectl get pod -A
NAMESPACE   NAME                                READY   STATUS    RESTARTS   AGE
default     hollow-edge-node-5599d47745-492q5   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-7gfkk   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-954cs   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-cdpx9   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-cw57s   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-kwkfr   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-mhlp8   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-sk7kh   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-ttj6m   1/1     Running   0          9s
default     hollow-edge-node-5599d47745-vkw2f   1/1     Running   0          9s
```
Then, on master-KubeEdge, the simulated nodes are visible:
```
kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hollow-edge-node-5599d47745-492q5 Ready agent,edge 10s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.39 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-7gfkk Ready agent,edge 11s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.36 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-954cs Ready agent,edge 11s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.37 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-cdpx9 Ready agent,edge 10s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.42 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-cw57s Ready agent,edge 12s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.34 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-kwkfr Ready agent,edge 10s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.40 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-mhlp8 Ready agent,edge 10s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.41 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-sk7kh Ready agent,edge 11s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.35 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-ttj6m Ready agent,edge 11s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.38 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
hollow-edge-node-5599d47745-vkw2f Ready agent,edge 10s v1.23.15-kubeedge-v1.13.1-dirty 10.244.0.43 <none> Debian GNU/Linux 7 (wheezy) 3.16.0-0.bpo.4-amd64 fakeRuntime://0.1.0
master Ready control-plane,master 25h v1.23.17 10.0.2.15 <none> Ubuntu 20.04.5 LTS 5.4.0-125-generic docker://20.10.25
```
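As a final sanity check, the hollow nodes can be counted by the edge-node role label KubeEdge assigns (visible in the ROLES column above); this only reads state:

```bash
# Should print 10 for this deployment
kubectl get node -l node-role.kubernetes.io/edge --no-headers | wc -l
```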
All done!!