
2-2. Kubernetes Installation


Kubernetes installation:
master, etcd:
node:
Prerequisites: hostname-based communication between nodes;
      time synchronization;
      firewalld and iptables.service disabled
      OS: CentOS 7.3 with the extras repository
Steps (manual deployment):
    etcd cluster: master node only;
    flannel: all nodes in the cluster;
    configure the k8s master (master node only): kubernetes-master
            services to start: kube-apiserver, kube-scheduler, kube-controller-manager
    configure the k8s node: kubernetes-node
            set up and start the docker service first;
            k8s services to start: kube-proxy, kubelet
kubeadm (the approach used below):
1. Install kubelet, kubeadm and docker on the master and all nodes
2. On the master: kubeadm init
3. On the nodes: kubeadm join

https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

Disable firewalld and SELinux
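A minimal sketch of these prerequisite steps, to be run on every node (assuming CentOS 7; the swapoff step is optional if you keep --ignore-preflight-errors=Swap later):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0                                                             #takes effect immediately
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    #persists across reboots
# swapoff -a                                                               #optional: disable swap outright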

1. Configure the yum repositories:
Docker yum repo:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#Aliyun CentOS base repo:
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    yum clean all
    yum makecache
#Docker yum repo (manual alternative)
    cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

Kubernetes yum repo:
cat >> /etc/yum.repos.d/k8s.repo <<EOF
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
EOF

To enable GPG checking instead, set in the repo file:
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
and import the key:
# wget  https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
# rpm --import yum-key.gpg
Other yum repos:
wget http://mirrors.aliyun.com/repo/Centos-7.repo

2. Operations on the master:
# yum install -y docker-ce  kubelet  kubeadm  kubectl

# rpm -ql docker-ce
/usr/bin/docker-init
/usr/bin/docker-proxy
/usr/bin/dockerd-ce
/usr/lib/systemd/system/docker.service
/usr/lib/systemd/system/docker.socket
/var/lib/docker-engine/distribution_based_engine-ce.json

# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service

# rpm -ql kubeadm
/usr/bin/kubeadm
/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# rpm -ql kubectl
/usr/bin/kubectl


# vi /usr/lib/systemd/system/docker.service    #add a proxy in the [Service] section if needed
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.31.0/24"

# systemctl daemon-reload
# systemctl start docker.service
# cat /proc/sys/net/bridge/bridge-nf-call-iptables   #must be 1; see the note below if it is not
1
# systemctl enable docker.service
# systemctl enable kubelet

# ss -tnl
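If /proc/sys/net/bridge/bridge-nf-call-iptables does not exist or reads 0, load the br_netfilter module and enable the sysctls before running kubeadm init (a minimal sketch):
# modprobe br_netfilter
# cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system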


# vi  /etc/sysconfig/kubelet    #let kubelet tolerate enabled swap
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# kubeadm init --help
# kubeadm init --kubernetes-version=stable-1.11 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12  --ignore-preflight-errors=Swap
# kubeadm init --kubernetes-version=stable-1 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12  --ignore-preflight-errors=Swap
--kubernetes-version=stable-1.11    #specify the Kubernetes version
--pod-network-cidr=10.244.0.0/16    #Pod network CIDR
--service-cidr=10.96.0.0/12         #Service network CIDR
--ignore-preflight-errors=Swap      #ignore the swap preflight check

Initialization fails because dl.k8s.io cannot be reached from inside China, so the required images have to be pulled in advance:
could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Method 1: use a proxy server outside the firewall. Configure Docker to use it by editing /etc/sysconfig/docker and adding:
    HTTP_PROXY=http://proxy_ip:port
    http_proxy=$HTTP_PROXY
   then restart Docker: systemctl restart docker
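On hosts where Docker ignores /etc/sysconfig/docker, the same proxy can be set with a systemd drop-in (a sketch; replace proxy_ip:port with your proxy):
# mkdir -p /etc/systemd/system/docker.service.d
# cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://proxy_ip:port" "HTTPS_PROXY=http://proxy_ip:port" "NO_PROXY=127.0.0.0/8,192.168.31.0/24"
EOF
# systemctl daemon-reload && systemctl restart docker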


# docker image ls
# kubeadm config images pull
# kubeadm config images list   #images required for initialization
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Method 2: relay the images through docker.io/mirrorgooglecontainers (https://hub.docker.com/u/mirrorgooglecontainers):
# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#g' |sh -x    #pull the required images
# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x  #retag the images to k8s.gcr.io
# docker images |grep mirrorgooglecontainers |awk '{print "docker rmi ", $1":"$2}' |sh -x     #remove the mirrorgooglecontainers images
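The same pull/retag/cleanup can be written as a readable loop; a sketch that relies only on the list printed by kubeadm config images list (coredns is excluded here and handled separately below):
#!/bin/bash
# pull each required image from the mirrorgooglecontainers mirror,
# retag it back to k8s.gcr.io, then drop the intermediate tag
for img in $(kubeadm config images list | grep -v coredns); do
    mirror=$(echo "$img" | sed 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#')
    docker pull "$mirror"
    docker tag  "$mirror" "$img"
    docker rmi  "$mirror"
done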

# docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.0
# docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.0
# docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.0
# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.14.0
# docker pull docker.io/mirrorgooglecontainers/pause:3.1
# docker pull docker.io/mirrorgooglecontainers/etcd:3.3.10


# docker tag mirrorgooglecontainers/kube-apiserver:v1.14.0  k8s.gcr.io/kube-apiserver:v1.14.0
# docker tag mirrorgooglecontainers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0
# docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0
# docker tag mirrorgooglecontainers/kube-scheduler:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0
# docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

coredns is not included under docker.io/mirrorgooglecontainers, so it has to be pulled from the official coredns repository and retagged manually.
# docker pull coredns/coredns:1.3.1
# docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
# docker rmi coredns/coredns:1.3.1

Master initialization:
# kubeadm init --kubernetes-version=stable-1 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12  --ignore-preflight-errors=Swap
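After a successful init, kubeadm prints instructions for pointing kubectl at the new cluster; the usual steps (copy them from your own init output if they differ) are:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config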
# kubectl get -h
# kubectl get cs    #kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Or:
# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU
Because the flannel network plugin will be installed later, --pod-network-cidr=10.244.0.0/16 must be added here; 10.244.0.0/16 is the subnet flannel uses by default, and the value depends on which network plugin you plan to install.
For a custom configuration, first run kubeadm config print init-defaults > kubeadm.conf, edit the file, then pass its path with --config /root/kubeadm.conf (see the sketch below).
The Kubernetes version is selected with --kubernetes-version; if it is omitted, the latest version information is fetched from Google's site, because its default value is stable-1.
Because this lab runs in a VM with only one CPU, --ignore-preflight-errors=NumCPU is added; do not add it if you have enough CPUs.
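A sketch of the config-file workflow (field names are from the kubeadm v1beta1 API used by 1.14 and may differ in other releases):
# kubeadm config print init-defaults > /root/kubeadm.conf
# vi /root/kubeadm.conf        #e.g. adjust kubernetesVersion and networking.podSubnet (10.244.0.0/16 for flannel)
# kubeadm init --config /root/kubeadm.conf --ignore-preflight-errors=Swap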


Initialization parameter reference:
--apiserver-advertise-address string
The IP address the API server will advertise it is listening on. If set to `0.0.0.0`, the default network interface's address is used.

--apiserver-bind-port int32     default: 6443
The port the API server binds to.

--apiserver-cert-extra-sans stringSlice
Optional extra Subject Alternative Names (SANs) for the API server's serving certificate. Can be IP addresses or DNS names.

--cert-dir string     default: "/etc/kubernetes/pki"
The path where certificates are stored.

--config string
Path to a kubeadm configuration file. Warning: the configuration file feature is experimental.

--cri-socket string     default: "/var/run/dockershim.sock"
The CRI socket to connect to.

--dry-run
Do not apply any changes; only print what would be done.

--feature-gates string
A set of key=value pairs that toggle various features. Options include:
Auditing=true|false (currently ALPHA - default=false)
CoreDNS=true|false (default=true)

-h, --help
Help for the init command.

--ignore-preflight-errors stringSlice
A list of preflight checks whose errors are shown as warnings rather than errors, e.g. 'IsPrivilegedUser,Swap'. The value 'all' ignores errors from every check.

--kubernetes-version string     default: "stable-1"
Choose a specific Kubernetes version for the control plane.

--node-name string
Specify the node name.

--pod-network-cidr string
The range of IP addresses for the pod network. If set, the control plane automatically allocates CIDRs for every node.

--service-cidr string     default: "10.96.0.0/12"
Use an alternative range of IP addresses for service virtual IPs.

--service-dns-domain string     default: "cluster.local"
Use an alternative domain for services, e.g. "myorg.internal".

--skip-token-print
Skip printing the default bootstrap token generated by `kubeadm init`.

--token string
The token used to establish bidirectional trust between the nodes and the master. The format is [a-z0-9]{6}\.[a-z0-9]{16} - example: abcdef.0123456789abcdef

--token-ttl duration     default: 24h0m0s
The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires.
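For example, to pin the API server to a specific interface and name the node explicitly (a sketch using this lab's master address 192.168.31.11; adjust to your environment):
# kubeadm init --apiserver-advertise-address=192.168.31.11 \
        --node-name=vm1.cluster.com \
        --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 \
        --ignore-preflight-errors=Swap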

-----------------------

Deploy the Pod network plugin: flannel
With flannel chosen as the network plugin:
    edit /etc/sysctl.conf and add the following
    net.ipv4.ip_forward=1
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    then apply the changes immediately:
    sysctl -p

Project page: https://github.com/coreos/flannel
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
By default flannel binds to the host's first network interface. If the host has multiple NICs, specify the interface explicitly by modifying the following part of kube-flannel.yml:
vim kube-flannel.yml 
 containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33              #added: bind flannel to this NIC

# kubectl apply -f kube-flannel.yml 
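To confirm the flannel DaemonSet pods come up on every node (the app=flannel label is what the upstream manifest uses; check it against your copy of kube-flannel.yml):
# kubectl get pods -n kube-system -l app=flannel -o wide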

Check the status of the components:
# kubectl get cs  
# kubectl  get  componentstatus 

# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
vm1.cluster.com   Ready    master   13m   v1.14.0

# kubectl get pod
No resources found.
# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-4sr5b                   1/1     Running   0          14m
coredns-fb8b8dccf-rmj7h                   1/1     Running   0          14m
etcd-vm1.cluster.com                      1/1     Running   0          13m
kube-apiserver-vm1.cluster.com            1/1     Running   0          13m
kube-controller-manager-vm1.cluster.com   1/1     Running   0          13m
kube-flannel-ds-amd64-rnght               1/1     Running   0          2m30s
kube-proxy-mxjwr                          1/1     Running   0          14m
kube-scheduler-vm1.cluster.com            1/1     Running   0          13m

# kubectl get ns  #namespaces
NAME              STATUS   AGE
default           Active   16m
kube-node-lease   Active   16m
kube-public       Active   16m
kube-system       Active   16m


3. Operations on the node(s):
# yum install -y docker-ce  kubelet  kubeadm
# vi /usr/lib/systemd/system/docker.service    #add the same proxy settings in the [Service] section as on the master
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.31.0/24"
# vi  /etc/sysconfig/kubelet    #let kubelet tolerate enabled swap
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

# systemctl start docker 
# systemctl enable docker 
# systemctl enable kubelet
Note: kubelet does not need to be started here; it is started automatically during the join. If you start it now you will see the error below, which can be ignored. Check the log with tail -f /var/log/messages.
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory

kubeadm join 192.168.31.11:6443 --token rquyna.2jykkhlqq7zr306v \
    --discovery-token-ca-cert-hash sha256:f7d07c0ba9ce136a0fb5d3a623146c51e17dfe49d69273474dc4ac902415dc79 --ignore-preflight-errors=Swap
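The token and CA certificate hash in this command come from the kubeadm init output on the master. If the token has already expired (default TTL 24h), print a fresh join command on the master:
# kubeadm token create --print-join-command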


Images required on the node:
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
quay.io/coreos/flannel:v0.11.0-amd64    (the network plugin image; flannel is used here)

# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.14.0
# docker pull docker.io/mirrorgooglecontainers/pause:3.1

# docker tag mirrorgooglecontainers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0
# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

# docker pull coredns/coredns:1.3.1
# docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
# docker rmi coredns/coredns:1.3.1


Images present after the pulls (listing taken on the master, which also holds the control-plane images):
# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.0             5cd54e388aba        10 days ago         82.1MB
k8s.gcr.io/kube-scheduler            v1.14.0             00638a24688b        10 days ago         81.6MB
k8s.gcr.io/kube-apiserver            v1.14.0             ecf910f40d6e        10 days ago         210MB
k8s.gcr.io/kube-controller-manager   v1.14.0             b95b1efa0436        10 days ago         158MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        2 months ago        52.6MB
coredns/coredns                      1.3.1               eb516548c180        2 months ago        40.3MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        2 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        4 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        15 months ago       742kB


Check on the master:
# kubectl get nodes
NAME              STATUS     ROLES    AGE     VERSION
vm1.cluster.com   Ready      master   36m     v1.14.0
vm2.cluster.com   NotReady   <none>   4m36s   v1.14.0

To remove a node from the cluster:
# kubectl delete node  vm2.cluster.com
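Deleting the node object alone leaves state behind on the host; a more complete removal (a sketch using the kubeadm 1.14 flag names) drains the node first and then resets it locally:
# kubectl drain vm2.cluster.com --ignore-daemonsets --delete-local-data --force
# kubectl delete node vm2.cluster.com
# kubeadm reset        #run this on vm2.cluster.com itself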

# kubectl get pods -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-4sr5b                   1/1     Running             0          36m
coredns-fb8b8dccf-rmj7h                   1/1     Running             0          36m
etcd-vm1.cluster.com                      1/1     Running             0          35m
kube-apiserver-vm1.cluster.com            1/1     Running             0          35m
kube-controller-manager-vm1.cluster.com   1/1     Running             0          35m
kube-flannel-ds-amd64-rnght               1/1     Running             0          24m
kube-flannel-ds-amd64-sng8b               0/1     Init:0/1            0          4m42s
kube-proxy-hptk5                          0/1     ContainerCreating   0          4m42s
kube-proxy-mxjwr                          1/1     Running             0          36m
kube-scheduler-vm1.cluster.com            1/1     Running             0          35m

# kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS              RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-4sr5b                   1/1     Running             0          36m     10.244.0.3      vm1.cluster.com   <none>           <none>
coredns-fb8b8dccf-rmj7h                   1/1     Running             0          36m     10.244.0.2      vm1.cluster.com   <none>           <none>
etcd-vm1.cluster.com                      1/1     Running             0          35m     192.168.31.11   vm1.cluster.com   <none>           <none>
kube-apiserver-vm1.cluster.com            1/1     Running             0          35m     192.168.31.11   vm1.cluster.com   <none>           <none>
kube-controller-manager-vm1.cluster.com   1/1     Running             0          35m     192.168.31.11   vm1.cluster.com   <none>           <none>
kube-flannel-ds-amd64-rnght               1/1     Running             0          25m     192.168.31.11   vm1.cluster.com   <none>           <none>
kube-flannel-ds-amd64-sng8b               0/1     Init:0/1            0          4m55s   192.168.31.22   vm2.cluster.com   <none>           <none>
kube-proxy-hptk5                          0/1     ContainerCreating   0          4m55s   192.168.31.22   vm2.cluster.com   <none>           <none>
kube-proxy-mxjwr                          1/1     Running             0          36m     192.168.31.11   vm1.cluster.com   <none>           <none>
kube-scheduler-vm1.cluster.com            1/1     Running             0          36m     192.168.31.11   vm1.cluster.com   <none>           <none>


pod, service, replicaset, deployment, statefulset, daemonset, job, cronjob, node

deployment, job: controllers that manage pods


# kubectl version
# kubectl cluster-info
Kubernetes master is running at https://192.168.31.11:6443
KubeDNS is running at https://192.168.31.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deploy created

# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/1     1            0           69s


# kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name]
[--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]
# kubectl expose deployment nginx-deploy  --name=nginx --port=80  --target-port=80  --protocol=TCP
# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   5h50m
nginx        ClusterIP   10.108.178.106   <none>        80/TCP    9s

# curl 10.108.178.106

# kubectl describe service nginx
# kubectl edit svc nginx  #edit this service

# kubectl scale --replicas=3 deployment nginx  #scale out to 3 pods

# kubectl describe pods nginx
# kubectl set image deployment nginx nginx=nginx:1.15-alpine  #update the image version
# kubectl rollout status deployment nginx    #watch the rolling-update progress
# kubectl rollout undo deployment nginx    #roll back, to the previous revision by default
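To roll back to a specific revision rather than the previous one, inspect the revision history first (a sketch; the revision number is illustrative):
# kubectl rollout history deployment nginx
# kubectl rollout undo deployment nginx --to-revision=2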


To access the service from outside the cluster, change the Service type:
# kubectl edit svc nginx 
spec:
  clusterIP: 10.108.178.106
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-deploy
  sessionAffinity: None
  type: ClusterIP    --->change to NodePort


# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6h39m
nginx        NodePort    10.108.178.106   <none>        80:30020/TCP   48m

Access it in a browser via any node IP plus the NodePort, e.g. http://192.168.31.11:30020 (the cluster IP 10.108.178.106 is not routable from outside the cluster).
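Instead of editing interactively, the type can also be switched with a one-line patch (a sketch assuming the service is named nginx):
# kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'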


# kubectl run myapp --image=ikubernetes/myapp:v1  --replicas=2
# kubectl expose deployment myapp --name=myapp  --port=80

#watch pods in real time
# kubectl get pod -w 

#scale the number of replicas up or down:
#kubectl scale --replicas=2 deployment myapp
#kubectl get pod 

#upgrade the image
#kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
#kubectl rollout status deployment myapp
#kubectl describe pod myapp-xxxx

#roll back
#kubectl rollout  undo deployment myapp
#kubectl describe pod myapp-xxxx


#view the iptables rules generated by kube-proxy
#iptables -vnL
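kube-proxy in iptables mode programs its service rules into the nat table; a sketch of inspecting them (the chain names assume the default iptables proxy mode):
# iptables -t nat -vnL KUBE-SERVICES | head -20
# iptables -t nat -vnL KUBE-NODEPORTS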

 

From: https://www.cnblogs.com/skyzy/p/16890979.html
