
Memo 2024-04-04: k8s resource manifests for building out a service


Overview

A record of the results from one service build-out.

Architecture

graph TB
    C(Client) --> ig(ingress)
    ig --> np((nginx-php\nservice))
    ig --> tc((tomcat\nservice))
    np --> ng1(nginx)
    np --> ng2(nginx)
    ng2 -..-> ps((php\nservice))
    ng1 -..-> ps
    ps --> p1(PHP)
    ps --> p2(PHP)
    ps --> p3(PHP)
    tc --> t1(tomcat)
    tc --> t2(tomcat)
    tc --> t3(tomcat)
    ng1 -..- NFS
    ng2 -..- NFS
    p1 -..- NFS
    p2 -..- NFS
    p3 -..- NFS
    t1 -..- NFS
    t2 -..- NFS
    t3 -..- NFS
    style p2 stroke-dasharray: 5 5;
    style p3 stroke-dasharray: 5 5;
    style t2 stroke-dasharray: 5 5;
    style t3 stroke-dasharray: 5 5;
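Every nginx, PHP, and tomcat pod in the diagram mounts the same NFS share for the site content, so all replicas serve identical files. The manifests below assume that export already exists on 192.168.1.231; a minimal sketch of preparing it on the NFS server (the open client range and the nfs-server service name are assumptions) could look like:

    mkdir -p /var/webroot/ROOT                                    # web root plus the ROOT dir tomcat mounts
    echo '/var/webroot *(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -r
    systemctl enable --now nfs-server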

Resource manifests

---
# www.conf
# Configuration file for the php-fpm service
apiVersion: v1
kind: ConfigMap
metadata:
  name: php2listen
data:
  www.conf: |
    [www]
    user = nobody
    group = nobody
    listen = 0.0.0.0:9000
    pm = ondemand
    pm.max_children = 50
    pm.start_servers = 5
    pm.min_spare_servers = 5
    pm.max_spare_servers = 35
    pm.status_path = /status
    slowlog = /var/log/php-fpm/www-slow.log
    php_admin_value[error_log] = /var/log/php-fpm/www-error.log
    php_admin_flag[log_errors] = on
    php_value[session.save_handler] = files
    php_value[session.save_path]    = /var/lib/php/session
    php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache
# end www.conf 
# changes from the stock www.conf: deleted `listen.acl_users=` and `listen.allowed_clients=`,
# changed `listen=` to 0.0.0.0:9000 so nginx pods can reach php-fpm over the network
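# Sanity check (a sketch; assumes the webphp Deployment further down is running and its image has a shell):
#   kubectl get configmap php2listen
#   kubectl exec deploy/webphp -- head -5 /etc/php-fpm.d/www.conf   # should print the [www]/listen/pm lines above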

---
# nginx.conf
# Configuration file for nginx
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx2php
data:
  nginx.conf: |
    worker_processes auto;
    worker_cpu_affinity auto;
    worker_rlimit_nofile 4096;
    error_log /dev/stdout warn;
    events {
        use epoll;
        worker_connections  1024;
    }
    http {
        include       mime.types;
        default_type  application/octet-stream;
        sendfile        on;
        keepalive_timeout  65;
        server {
            listen       80;
            server_name  localhost;
            location / {
                root   html;
                index  index.html index.htm;
            }
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }
            location ~ \.php$ {
                root           html;
                fastcgi_pass   10.245.1.81:9000;  # address of the php-fpm service (the phpservice ClusterIP defined below)
                fastcgi_index  index.php;
                include        fastcgi.conf;
            }
        }
    }
# end nginx.conf
# changed the `location ~ \.php$ { ... }` block so PHP requests are proxied to php-fpm
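# Note: fastcgi_pass above targets the fixed ClusterIP assigned to phpservice further down.
# A hedged alternative (assumes the default namespace and working cluster DNS) is to reference
# the Service by name, which drops the hard-coded address:
#   fastcgi_pass   phpservice.default.svc.cluster.local:9000;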

---
## nginx service
kind: Service
apiVersion: v1
metadata:
  name: nginxservice
spec:
  type: NodePort
  selector: { run: nginx, app: web }     
  ports:
  - { protocol: TCP, port: 80, targetPort: 80 , nodePort: 31080} 
# nodePort fixes the node port a NodePort Service is exposed on; it must fall in 30000-32767 and normally doesn't need to be set by hand
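# Quick check once everything is applied (a sketch; index.php is just a hypothetical file on the NFS share):
#   curl http://<node-ip>:31080/
#   curl http://<node-ip>:31080/index.php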

---
## nginx deployment
kind: Deployment  
apiVersion: apps/v1 
metadata:     
  name: webnginx
spec:        
  replicas: 2  
  selector:    
    matchLabels: { run: nginx, app: web }        
  template:     
    metadata:
      labels: { run: nginx, app: web }       
    spec:
      volumes:
      - name: website              # volume name
        nfs:                       # NFS volume
          server: 192.168.1.231    # NFS server address
          path: /var/webroot       # exported directory
      - name: nginx2php 
        configMap:     
          name: nginx2php
      restartPolicy: Always
      containers:
      - name: webnginx
        image: "myos:nginx"
        volumeMounts:
        - name: website                     # volume name
          mountPath: /usr/local/nginx/html  # mount path (the NFS web root)
        - name: nginx2php
          subPath: nginx.conf
          mountPath: /usr/local/nginx/conf/nginx.conf
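# To confirm the ConfigMap replaced the default config (a sketch; assumes the nginx binary in
# myos:nginx sits under /usr/local/nginx/sbin, matching the mountPath above):
#   kubectl exec deploy/webnginx -- /usr/local/nginx/sbin/nginx -t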

---
#  php
## php service 
kind: Service
apiVersion: v1
metadata:
  name: phpservice
spec:
  type: ClusterIP
  clusterIP: 10.245.1.81
  selector: { run: php, app: web }
  ports:
  - { protocol: TCP, port: 9000, targetPort: 9000 }
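# The pinned clusterIP must fall inside the cluster's service CIDR and match fastcgi_pass in
# the nginx2php ConfigMap above. To confirm the Service actually selects the php pods (sketch):
#   kubectl get endpoints phpservice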

---
## php deployment
kind: Deployment  
apiVersion: apps/v1 
metadata:     
  name: webphp
spec:        
  replicas: 1  
  selector:    
    matchLabels: { run: php, app: web }        
  template:     
    metadata:
      labels: { run: php, app: web }       
    spec:
      restartPolicy: Always
      volumes:
      - name: website              # volume name
        nfs:                       # NFS volume
          server: 192.168.1.231    # NFS server address
          path: /var/webroot       # exported directory
      - name: php2listen
        configMap:
          name: php2listen
      containers:
      - name: phpfpm
        image: myos:php-fpm
        imagePullPolicy: Always
        volumeMounts:
        - name: website                     # volume name
          mountPath: /usr/local/nginx/html  # mount path
        - name: php2listen
          subPath: www.conf
          mountPath: /etc/php-fpm.d/www.conf
        resources:
          requests:
            cpu: 150m
            
---
## hpa for php 1~3
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v1
metadata:
  name: php-hpa
spec:
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 100
  scaleTargetRef:
    kind: Deployment
    apiVersion: apps/v1
    name: webphp
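# With requests.cpu=150m on webphp, a 100% target means the HPA adds pods once average CPU usage
# per pod exceeds roughly 150m, and scales back down when it drops. This relies on metrics-server
# being installed in the cluster (an assumption). A quick look (sketch):
#   kubectl get hpa php-hpa
#   kubectl top pods -l run=php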

---
#  tomcat
## tomcat service 
kind: Service
apiVersion: v1
metadata:
  name: tomcatservice
spec:
  type: NodePort 
  selector: { run: tomcat, app: web }
  ports:
  - { protocol: TCP, port: 8080, targetPort: 8080, nodePort: 31088 }

---
## tomcat deployment
#  tomcat:latest
kind: Deployment  
apiVersion: apps/v1 
metadata:     
  name: webtomcat
spec:        
  replicas: 1                         # start with 1; the HPA defined later adjusts the pod count dynamically
  selector:    
    matchLabels: { run: tomcat, app: web }        
  template:     
    metadata:
      labels: { run: tomcat, app: web }       
    spec:
      restartPolicy: Always
      volumes:
      - name: website              # volume name
        nfs:                       # NFS volume
          server: 192.168.1.231    # NFS server address
          path: /var/webroot/ROOT  # exported directory
      containers:
      - name: webtomcat
        image: harbor:443/k8s/tomcat:latest 
        imagePullPolicy: Always
        volumeMounts:
        - name: website                    
          mountPath: /usr/local/tomcat/webapps/ROOT 
        resources:
          requests:
            cpu: 200m                  # the request is the baseline the HPA measures utilization against

---
## hpa for tomcat 1~3
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v1
metadata:
  name: tomcat-hpa
spec:
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 100  # add pods once average usage hits 100% of the request
  scaleTargetRef:
    kind: Deployment
    apiVersion: apps/v1
    name: webtomcat
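# One rough way to watch the tomcat HPA react (a sketch; the busybox image and the wget loop are assumptions):
#   kubectl run loadgen --image=busybox --restart=Never -- \
#     /bin/sh -c 'while true; do wget -q -O- http://tomcatservice:8080/ >/dev/null; done'
#   kubectl get hpa tomcat-hpa -w
#   kubectl delete pod loadgen    # clean up afterwards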

---
#  ingress
## route .jsp requests to tomcat, everything else to nginx
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: mying
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"  # enables the regex .jsp path below (assumes the ingress-nginx controller)
spec:
  ingressClassName: nginx 
  rules:
    - host: www.test.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: nginxservice
                port:
                  number: 80
          - pathType: ImplementationSpecific
            path: "/.*\\.jsp"                 # requests for files ending in .jsp go to tomcat (Prefix cannot match on extension)
            backend:
              service:
                name: tomcatservice
                port:
                  number: 8080
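A rough end-to-end check, assuming everything above is saved into one file (the file name, the hosts entry, and index.jsp are only examples):

    kubectl apply -f web-stack.yaml
    kubectl get ingress mying                 # note the ADDRESS column
    # map www.test.com to that address in /etc/hosts, then:
    curl http://www.test.com/                 # static and PHP content served via nginxservice
    curl http://www.test.com/index.jsp        # .jsp requests handled by tomcatservice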

From: https://www.cnblogs.com/ling-2945/p/18113920
