
Deploying ELK + Filebeat + Kafka (KRaft mode) on k8s, Part 1: ES cluster and Kibana

Posted: 2022-09-29 14:55:57

Preface:

I only got around to writing up these notes long after the actual deployment, so please forgive any omissions. I ran into quite a few pitfalls along the way, some of which I still haven't figured out; pointers from anyone more experienced are welcome.

I. Environment preparation

k8s-master01 3.127.10.209
k8s-master02 3.127.10.95
k8s-master03 3.127.10.66
k8s-node01 3.127.10.233
k8s-node02 3.127.33.173
harbor 3.127.33.174
internal DNS server 3.127.33.173

1. Deploy NFS on each k8s node

The shared directory is /home/k8s/elasticsearch/storage
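As a rough sketch of the server-side setup (assuming a yum-based host; the export options and the subnet CIDR below are examples, adjust them to your environment):

```shell
# Install the NFS server and export the shared directory (example values)
yum install -y nfs-utils rpcbind
mkdir -p /home/k8s/elasticsearch/storage
# Allow the cluster subnet to mount read-write; adjust the CIDR to your network
echo '/home/k8s/elasticsearch/storage 3.127.0.0/16(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now rpcbind nfs-server
exportfs -arv
# On each k8s node, only the client utilities are needed for the kubelet to mount NFS
yum install -y nfs-utils
```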

 

2. Install the provisioner

# cat rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 3.127.33.174:8443/kubernetes/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 3.127.10.95
            - name: NFS_PATH
              value: /home/k8s/elasticsearch/storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: 3.127.10.95
            path: /home/k8s/elasticsearch/storage

# cat es-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "true"
reclaimPolicy: Retain
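After applying the three manifests (`kubectl apply -f rbac.yaml -f deployment.yaml -f es-storageclass.yaml`), it is worth confirming that dynamic provisioning actually works before deploying ES. A throwaway claim like the following (the name `test-claim` is just an example) should reach the `Bound` state shortly after being applied, and a matching subdirectory should appear under the NFS export:

```yaml
# test-claim.yaml - a minimal PVC to smoke-test the new StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Delete the claim once it binds; with `archiveOnDelete: "true"` the provisioner keeps the data as an `archived-*` directory on the NFS share.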

 

 

3. Deploy the ES cluster

# cat es-cluster-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: es-svc
  namespace: elk
  labels:
    app: es-cluster-svc
spec:
  selector:
    app: es
  type: ClusterIP
  clusterIP: None
  sessionAffinity: None
  ports:
  - name: outer-port
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: cluster-port
    port: 9300
    protocol: TCP
    targetPort: 9300
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: elk
  labels:
    app: es-cluster
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  serviceName: es-svc
  selector:
    matchLabels:
      app: es
  template:
    metadata:
      labels:
        app: es
      namespace: elk
    spec:
      containers:
      - name: es-cluster
        image: 3.127.33.174:8443/elk/elasticsearch:8.1.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "16Gi"
            cpu: "200m"
        ports:
        - name: outer-port
          containerPort: 9200
          protocol: TCP
        - name: cluster-port
          containerPort: 9300
          protocol: TCP
        env:
        - name: cluster.name
          value: "es-cluster"
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
#        - name: discovery.zen.ping.unicast.hosts
        - name: discovery.seed_hosts
          value: "es-cluster-0.es-svc,es-cluster-1.es-svc,es-cluster-2.es-svc"
#        - name: discovery.zen.minimum_master_nodes
#          value: "2"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0"
        - name: ES_JAVA_OPTS
          value: "-Xms1024m -Xmx1024m"
        - name: xpack.security.enabled
          value: "false"
        volumeMounts:
        - name: es-volume
          mountPath: /usr/share/elasticsearch/data
      initContainers:
      - name: fix-permissions
        image: 3.127.33.174:8443/elk/busybox:latest
        imagePullPolicy: IfNotPresent
        # the elasticsearch uid and gid are 1000
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: es-volume
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: 3.127.33.174:8443/elk/busybox:latest
        imagePullPolicy: IfNotPresent
        command: ["sysctl","-w","vm.max_map_count=655360"]
        securityContext:
          privileged: true
      - name: increase-ulimit
        image: 3.127.33.174:8443/elk/busybox:latest
        imagePullPolicy: IfNotPresent
        command: ["sh","-c","ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: es-volume
      namespace: elk
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: "150Gi"
      storageClassName: managed-nfs-storage

 kubectl get pods -n elk -o wide

 

At this point the ES cluster is deployed and running.
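A quick way to confirm that the three nodes actually formed one cluster (pod and service names follow the StatefulSet above; since xpack.security.enabled is false, no credentials are needed):

```shell
# Query cluster health from inside one of the ES pods
kubectl -n elk exec es-cluster-0 -- curl -s 'http://localhost:9200/_cluster/health?pretty'
# "number_of_nodes" should be 3; status should be green (or yellow while shards settle)
kubectl -n elk exec es-cluster-0 -- curl -s 'http://es-svc:9200/_cat/nodes?v'
```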

4. Kibana deployment

# cat kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
  namespace: elk
  labels:
    app: kibana-svc
spec:
  selector:
    app: kibana-8.1.0
  ports:
  - name: kibana-port
    port: 5601
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-deployment
  namespace: elk
  labels:
    app: kibana-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana-8.1.0
  template:
    metadata:
      name: kibana
      labels:
        app: kibana-8.1.0
    spec:
      containers:
      - name: kibana
        image: 3.127.33.174:8443/elk/kibana:8.1.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "1000m"
          requests:
            cpu: "200m"
        ports:
        - name: kibana-web
          containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://es-svc:9200
        readinessProbe:
          initialDelaySeconds: 10
          periodSeconds: 10
          httpGet:
            port: 5601
          timeoutSeconds: 100
---
# Deploy an Ingress so Kibana can be reached via a domain name
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: elk
  labels:
    app: kibana-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: kibana-svc
      port:
        name: kibana-port
  rules:
  - host: jszw.kibana.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-svc
            port:
              name: kibana-port
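jszw.kibana.com is an internal test domain: it needs a record on the internal DNS server (3.127.33.173), or a local hosts entry, pointing at the ingress-nginx controller. A smoke test might look like this (the controller address below is a placeholder for your environment):

```shell
kubectl apply -f kibana.yaml
kubectl -n elk get pods,svc,ingress
# Replace <ingress-ip> with the address of your ingress-nginx controller;
# a 200 (or a 302 redirect to the Kibana UI) means the chain is working
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: jszw.kibana.com' http://<ingress-ip>/
```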

From: https://www.cnblogs.com/precomp/p/16735339.html
