Release Pipeline


Requirements

  • When a code PR is merged into an environment branch, the code should be built and the image pushed to the image registry automatically and efficiently

  • When a new image is pushed to the image registry, the change should be detected and the image of the service running in the k8s cluster updated

  • The continuous deployment process should be smooth and under control

  • Message notifications (e.g., Slack)

Tool selection

  • Github Webhook / Argo Workflow

  • Helm

  • Argo Event

  • Argo Workflow

  • Argo CD

  • Argo Rollout

Compared with the current Github Actions + Argo CD setup

  • Provides a good UI for the dev team

  • More configuration, fewer scripts

  • Better access management

Github Branches && Environments

 

Github Branch    Environment
development      dev
sandbox          sandbox
release          integration
main             live

 

Release Process

1. Code PR

A developer merges a PR into the development branch (the PR is merged and closed).

 

2. Argo Event

Argo Events supports many event sources, such as the Github PR event.

Argo Events listens for Github PR events on the environment branch (development).

The Github PR webhook triggers an Argo event.

Argo Events also provides many sensors and triggers (sensors are used to parameterize the input of a trigger).

The Argo event triggers an Argo workflow with parameters.

 

3. Argo Workflow

Argo Workflows is designed for scheduling workflows on Kubernetes.

In the Argo workflow template, we configure different types of tasks (application code clone, package build, application helm repo update).

All these tasks can run in parallel or sequentially depending on your design; you can also define the task execution order (dependencies).

We define our CICD pipeline tasks to run sequentially (application code clone → package build → unit test → image build and push to ECR → update service helm repo).

Argo Workflows runs a different workflow for each service, based on the input parameters passed in from the Argo Events sensor and trigger.

We can also define a task that sends a Slack message with the final release result, depending on the Argo CD release result (obtained from the Argo CD API); see the sketch below.
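
A minimal sketch of such a notification task, written as a workflow exit handler (the secret name slack-secret and its key are assumptions; {{workflow.status}} is a built-in Argo Workflows variable available in exit handlers):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cicd-with-notify-
spec:
  entrypoint: main
  onExit: notify-slack              # runs after main finishes, success or failure
  templates:
    - name: main
      container:
        image: alpine:3.17
        command: [echo, "pipeline tasks run here"]
    - name: notify-slack
      container:
        image: curlimages/curl:8.1.2
        command: [sh, -c]
        env:
          - name: SLACK_WEBHOOK_URL # assumed k8s secret holding a Slack incoming-webhook URL
            valueFrom:
              secretKeyRef:
                name: slack-secret
                key: webhook-url
        args:
          - >-
            curl -s -X POST -H 'Content-type: application/json'
            --data '{"text": "Workflow {{workflow.name}} finished: {{workflow.status}}"}'
            "$SLACK_WEBHOOK_URL"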

 

4. Argo CD

Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external CD tools that only enable push-based deployments, Argo CD can pull updated code from Git repositories and deploy it directly to Kubernetes resources. It automatically synchronizes application state to the current version of the declarative configuration.

After the Argo CD application is configured (auto-sync with the service helm repository), the Argo CD server watches the service helm repo. If the service helm repository changes in any way, the Argo CD server automatically syncs the application state to the current version of the declarative configuration.

So after the Argo workflow has changed the service helm repo, Argo CD syncs the desired state to the current state, which triggers the k8s API server to update the service state to the desired state (updating the service pods to the new image tag).

At the same time, Argo CD keeps track of the service pods' health checks: if the new version's health check fails, the old version pods are not destroyed; if the health check succeeds, the new version pods complete the rolling update.
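
The release-result task mentioned in the previous section can read this state from the Argo CD REST API; GET /api/v1/applications/<name> returns the application including its sync and health status. A minimal sketch of such a workflow step, assuming a token stored in a secret named argocd-api-token and the default in-cluster server address:

- name: check-release-result
  container:
    image: curlimages/curl:8.1.2
    command: [sh, -c]
    env:
      - name: ARGOCD_TOKEN          # assumed secret containing an Argo CD API token
        valueFrom:
          secretKeyRef:
            name: argocd-api-token
            key: token
    args:
      - >-
        curl -sk -H "Authorization: Bearer $ARGOCD_TOKEN"
        https://argocd-server.argocd.svc/api/v1/applications/vbsiv-service |
        grep -o '"health":{"status":"[A-Za-z]*"'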

 

5. Argo Rollout

Argo Rollouts can be considered an extension of the Kubernetes Deployment (Argo CD uses Deployments), which makes up for the missing functionality in the Deployment release strategy and supports canary and blue-green release policies.

We just need to change the Deployment to a Rollout to get all of the above functionality.

 

Samples

 

Argo event source (Github Webhook)

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github
spec:
  service:
    ports:
      - name: vbsiv
        port: 12000
        targetPort: 12000
  github:
    vbsiv:
      repositories:
        - owner: argoproj
          names:
            - argo-events
            - argo-workflows
      # Github application auth. Instead of using personal token `apiToken` use app PEM            
      # Github will send events to following port and endpoint
      webhook:
        # endpoint to listen to events on
        endpoint: /push
        # port to run internal HTTP server on
        port: "12000"
        # HTTP request method to allow. In this case, only POST requests are accepted
        method: POST
        # url the event-source will use to register at Github.
        # This url must be reachable from outside the cluster.
        # The name for the service is in `<event-source-name>-eventsource-svc` format.
        # You will need to create an Ingress or Openshift Route for the event-source service so that it can be reached from GitHub.
        url: http://url-that-is-reachable-from-GitHub
      # type of events to listen to.
      # following listens to everything, hence *
      # You can find more info on https://developer.github.com/v3/activity/events/types/
      events:
        - "*"

      # apiToken refers to K8s secret that stores the github api token
      # if apiToken is provided controller will create webhook on GitHub repo
      # +optional
      apiToken:
        # Name of the K8s secret that contains the access token
        name: github-access
        # Key within the K8s secret whose corresponding value (must be base64 encoded) is access token
        key: token

      # type of the connection between event-source and Github.
      # You should set it to false to avoid man-in-the-middle and other attacks.
      insecure: true
      # Determines if notifications are sent when the webhook is triggered
      active: true
      # The media type used to serialize the payloads
      contentType: json

The Github event source creates a service listening on port 12000 and exposes the webhook URL publicly so that Github can reach it. The webhook service listens for Github POST request events. When a developer creates or closes a PR on the service project, Github sends a POST request to the Argo Events webhook service (the URL exposed to Github), which publishes a Github event to the event bus.
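
As the comments in the sample note, the event-source service must be reachable from Github, typically through an Ingress. A minimal sketch (the host name is an assumption; the service name follows the <event-source-name>-eventsource-svc convention):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-eventsource
  namespace: argo-events              # assumed namespace of the event source
spec:
  rules:
    - host: argo-events.example.com   # assumed public host registered with Github
      http:
        paths:
          - path: /push               # must match the webhook endpoint above
            pathType: Prefix
            backend:
              service:
                name: github-eventsource-svc
                port:
                  number: 12000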

 

Argo event bus (the message queue used by Argo Events; the event source acts as producer, the sensor/trigger as consumer)

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      # Optional, defaults to 3. If it is < 3, set it to 3, that is the minimal requirement.
      replicas: 3
      # Optional, auth strategy, "none" or "token", defaults to "none"
      auth: token
      containerTemplate:
        resources:
          requests:
            cpu: "10m"
      metricsContainerTemplate:
        resources:
          requests:
            cpu: "10m"
      antiAffinity: false
      persistence:
        storageClassName: gp2
        accessMode: ReadWriteOnce
        volumeSize: 10Gi

 

Argo event sensor (triggers an Argo workflow)

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: github
spec:
  template:
    serviceAccountName: cicd
  dependencies:
    - name: test-dep
      eventSourceName: github
      eventName: github
      filters:
        data:
          # Name of the event that triggered the delivery: [pull_request, push, yadayadayada]
          # https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads
          - path: header.X-Github-Event
            type: string
            value:
              - pull_request
          - path: body.action
            type: string
            value:
              - opened
              - edited
              - reopened
              - synchronize
          - path: body.pull_request.state
            type: string
            value:
              - open
          - path: body.pull_request.base.ref
            type: string
            value:
              - development
  triggers:
    - template:
        name: workflow-template-cicd-vbsiv
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                name: workflow-template-cicd-vbsiv-
              spec:
                entrypoint: main
                arguments:
                  parameters:
                    - name: pr-title
                    - name: pr-number
                    - name: short-sha
                templates:
                  - name: main
                    inputs:
                      parameters:
                        - name: pr-title
                        - name: pr-number
                        - name: short-sha
                    container:
                      image: docker/whalesay:latest
                      command: [cowsay]
                      args: ["{{inputs.parameters.pr-title}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body.pull_request.title
              dest: spec.arguments.parameters.0.value
            - src:
                dependencyName: test-dep
                dataKey: body.pull_request.number
              dest: spec.arguments.parameters.1.value
            - src:
                dependencyName: test-dep
                dataTemplate: "{{ .Input.body.pull_request.head.sha | substr 0 7 }}"
              dest: spec.arguments.parameters.2.value
            # Append pull request number and short sha to dynamically assign workflow name <github-21500-2c065a>
            - src:
                dependencyName: test-dep
                dataTemplate: "{{ .Input.body.pull_request.number }}-{{ .Input.body.pull_request.head.sha | substr 0 7 }}"
              dest: metadata.name
              operation: append
      retryStrategy:
        steps: 3

The event sensor consumes events with the given event name (github) from the event bus. From the Github POST request data the sensor builds the parameters used by the trigger, then runs the trigger, which creates an Argo workflow from the embedded resource.
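
For illustration, this is roughly the Workflow the trigger creates after parameter injection, using the PR number and short sha from the comment above (the PR title is hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: workflow-template-cicd-vbsiv-21500-2c065a1   # PR number and short sha appended
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: pr-title
        value: "Fix login redirect"   # from body.pull_request.title (hypothetical)
      - name: pr-number
        value: "21500"                # from body.pull_request.number
      - name: short-sha
        value: "2c065a1"              # first 7 characters of the head sha
  # templates: identical to those embedded in the trigger source above (omitted here)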

 

Argo Workflow Template (service vbsiv)

apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-cicd-vbsiv # the Argo event trigger references the workflow by this name
  generateName: workflow-template-cicd-vbsiv-
  namespace: argocd
spec:
  entrypoint: main                   # argo workflow entrypoint
  arguments:                         # parameters used by the template tasks; values are sent from the argo sensor
      parameters:
        - name: repo
          value: 'https://github.com/Appen/sid_verification_backend_api.git'
        - name: branch
          value: integration
        - name: registry
          value: 411719562396.dkr.ecr.us-east-1.amazonaws.com/datacollect-vbsiv
        - name: namespace
          value: argocd
        - name: servicerepopath
          value: ''
        - name: deploynamespace
          value: datacollect
        - name: servicename
          value: vbsiv
        - name: argoproject
          value: datacollect
        - name: helmrepo
          value: 'https://github.com/Appen-International/datacollect-helm.git'
        - name: helmbranch
          value: integration
        - name: helmrepopath
          value: app
        - name: helmservicename
          value: vbsiv
  serviceAccountName: datacollect
  volumeClaimTemplates:
      - metadata:
          name: work
          creationTimestamp: null
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
        status: {}

  templates:
    - name: main
      inputs: {}
      outputs: {}
      metadata: {}
      dag:
        tasks:
          - name: clone
            template: clone
            arguments:
              parameters:
                - name: repo
                  value: '{{workflow.parameters.repo}}'
                - name: branch
                  value: '{{workflow.parameters.branch}}'
          - name: build
            template: build
            arguments:
              parameters:
                - name: registry
                  value: '{{workflow.parameters.registry}}'
                - name: servicerepopath
                  value: '{{workflow.parameters.servicerepopath}}'
                - name: servicename
                  value: '{{workflow.parameters.servicename}}'
                - name: repo
                  value: '{{workflow.parameters.repo}}'
            dependencies:
              - clone
          - name: updateHelmRepo
            template: updateHelmRepo
            arguments:
              parameters:
                - name: helmrepo
                  value: '{{workflow.parameters.helmrepo}}'
                - name: helmbranch
                  value: '{{workflow.parameters.helmbranch}}'
                - name: helmrepopath
                  value: '{{workflow.parameters.helmrepopath}}'
                - name: servicename
                  value: '{{workflow.parameters.servicename}}'
                - name: helmservicename
                  value: '{{workflow.parameters.helmservicename}}'
                - name: argoproject
                  value: '{{workflow.parameters.argoproject}}'
                - name: servicerepopath
                  value: '{{workflow.parameters.servicerepopath}}'
            dependencies:
              - build
    - name: clone
      inputs:
        parameters:
          - name: repo
          - name: branch
      outputs: {}
      metadata: {}
      script:
        name: ''
        image: 'alpine/git:v2.30.1'
        command:
          - sh
        workingDir: /work
        env:
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        source: >
          authurl=`echo '{{inputs.parameters.repo}}' |awk -F '//' '{print $2}'`
          && \

          fullpath="https://${GITHUB_TOKEN}:x-oauth-basic@${authurl}" && \

          echo && \

          echo "Repo url: '{{inputs.parameters.repo}}'" && \

          echo "Repo: $authurl" && \

          echo "Branch: '{{inputs.parameters.branch}}'" && \

          echo && \

          echo "git config:" && \

          git config --global user.email "[email protected]" && \

          git config --global user.name "Xudong Geng" && \

          git config --global credential.helper store && \

          echo "clone project:" && \

          git clone --branch '{{inputs.parameters.branch}}' --single-branch
          $fullpath && \

          ls -la ./*
    - name: build
      inputs:
        parameters:
          - name: registry
          - name: servicerepopath
          - name: servicename
          - name: repo
      outputs:
        parameters:
          - name: revision
            valueFrom:
              path: /mainctrfs/work/revision.txt
            globalName: servicerevision
      metadata: {}
      script:
        name: ''
        image: 'docker:20.10.21-git'
        command:
          - sh
        workingDir: /work/
        env:
          - name: GITHUB_USER
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_USER
                optional: false
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
          - name: DOCKER_HOST
            value: 127.0.0.1
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        securityContext:
          privileged: true
        source: >
          apk add alpine-sdk git libffi-dev openssh openssl-dev py3-pip
          python3-dev && \

          pip3 install awscli && \


          reponame=`echo '{{inputs.parameters.repo}}' |awk -F '/' '{print $NF}'
          |awk -F '\.git' '{print $1}'`

          cd "$reponame" && \


          if [ "{{inputs.parameters.servicerepopath}}" != "" ]

          then
            cd {{inputs.parameters.servicerepopath}} 
          fi && \


          TAG=`git rev-parse HEAD` && \

          ecrregion=`echo '{{inputs.parameters.registry}}' |awk -F '.' '{print
          $4}'` && \

          ecrrepo=`echo '{{inputs.parameters.registry}}' |awk -F '/' '{print
          $1}'` && \

          mkdir -p /mainctrfs/work/ && \

          echo "$TAG" > /mainctrfs/work/revision.txt && \


          echo "=======================INFO=================================" &&
          \

          echo "Revision: $TAG" && \

          echo -n "AWS-cli version:" && aws --version && \

          echo "Service repo path: '{{inputs.parameters.servicerepopath}}'" && \

          echo -n "Service work directory:" && pwd && \

          echo "Directory content:" && ls
          /work/'{{inputs.parameters.servicerepopath}}' && \

          echo -n "Tag info: " && cat /mainctrfs/work/revision.txt && \

          echo "ECR repo: $ecrrepo" && \

          echo "ECR region: $ecrregion" && \

          echo "=======================END==================================" &&
          \


          echo && \

          aws ecr get-login-password --region "$ecrregion" | docker login
          --username AWS --password-stdin "$ecrrepo" && \

          docker build -t '{{inputs.parameters.registry}}':"$TAG" -f Dockerfile
          . && \

          docker push '{{inputs.parameters.registry}}':"$TAG"
      sidecars:
        - name: dind
          image: 'docker:19.03.13-dind'
          command:
            - dockerd-entrypoint.sh
          env:
            - name: DOCKER_TLS_CERTDIR
          resources: {}
          securityContext:
            privileged: true
          mirrorVolumeMounts: true
    - name: updateHelmRepo
      inputs:
        parameters:
          - name: helmrepo
          - name: helmbranch
          - name: argoproject
          - name: helmrepopath
          - name: servicename
          - name: helmservicename
          - name: servicerepopath
          - name: servicerevision
            value: '{{workflow.outputs.parameters.servicerevision}}'
      outputs: {}
      metadata: {}
      script:
        name: ''
        image: 'docker:20.10.21-git'
        command:
          - sh
        workingDir: /work/
        env:
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        source: >
          apk add alpine-sdk git libffi-dev openssh openssl-dev py3-pip
          python3-dev && \

          pip3 install awscli && \


          helmauthurl=`echo '{{inputs.parameters.helmrepo}}' | awk -F '//'
          '{print $2}'` && \

          helmfullpath="https://${GITHUB_TOKEN}:x-oauth-basic@${helmauthurl}" &&
          \

          revision='{{inputs.parameters.servicerevision}}' && \

          reponame=`echo '{{inputs.parameters.helmrepo}}' | awk -F '/' '{print
          $NF}' | awk -F '\.git' '{print $1}'` && \


          for i in $(seq 1 40) ;do echo -n "#";done && echo && \

          echo -n "Current dir:" && pwd && \

          echo "Current files:" && ls ./*  && \

          echo "Helm repo: $helmauthurl" && \

          echo "Helm branch: '{{inputs.parameters.helmbranch}}'" && \

          echo "Revision: $revision" && \

          echo "Repo name: $reponame" && \

          for i in $(seq 1 40) ;do echo -n "#";done && echo && \


          git config --global user.email "[email protected]" && \

          git config --global user.name "Xudong Geng" && \

          git config --global credential.helper store && \

          git clone --branch '{{inputs.parameters.helmbranch}}' --single-branch
          $helmfullpath && \


          cd $reponame && \

          if test -z '{{inputs.parameters.helmrepopath}}'

          then
            valuespath={{inputs.parameters.helmservicename}}
          else
            valuespath={{inputs.parameters.helmrepopath}}/{{inputs.parameters.helmservicename}}
          fi


          for yamlfile in `ls $valuespath/values*.yaml`

          do
            sed -i "s@tag: \"\([0-9a-zA-Z_]\{1,\}\)\"@tag: \""$revision"\"@g" $yamlfile
          done && \

          git add . && git commit -a -m "update service helm repo revision." &&
          \

          git push origin HEAD

The Argo workflow template creates an Argo workflow made up of several sequential tasks (clone the application repository → build the package and image and push to ECR → update the helm repo).
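
For reference, a sketch of the kind of values file the updateHelmRepo task rewrites (the layout is an assumption; only the tag line has to match the sed pattern above):

# app/vbsiv/values.yaml (hypothetical layout)
image:
  repository: 411719562396.dkr.ecr.us-east-1.amazonaws.com/datacollect-vbsiv
  tag: "0a1b2c3d4e5f"                 # sed replaces this value with the new git revision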

 

Argo CD Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vbsiv-service
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: datacollect
  source:
    repoURL: https://github.com/Appen-International/datacollect-helm
    path: app/vbsiv
    targetRevision: development
  destination:
    server: https://kubernetes.default.svc
    namespace: datacollect
  syncPolicy:
    automated: {}

Argo CD watches the service repository (https://github.com/Appen-International/datacollect-helm) at path “app/vbsiv“ on branch “development“. Any update to the service helm repo causes Argo CD (a k8s controller) to move the service state to the desired state (the new version). We only need to update the service image tag, and the service is rolled out with the new tag in the destination namespace of the k8s cluster.
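
The empty automated block above enables auto-sync only. Argo CD also offers the prune and selfHeal options, which a team may choose to enable (a sketch; enabling them is a policy decision):

syncPolicy:
  automated:
    prune: true      # delete cluster resources that were removed from the helm repo
    selfHeal: true   # revert manual changes in the cluster back to the declared state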

 

Argo Rollout

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  labels:
    app: vbsiv
  name: vbsiv
spec:
  replicas: 5
  selector:
    matchLabels:
      app: vbsiv
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {duration: 10m}
      - setWeight: 60
      - pause: {duration: 10m}
      - setWeight: 80
      - pause: {duration: 10m}
  template:
    metadata:
      labels:
        app: vbsiv
    spec:
      containers:
      - image: vbsiv:v1
        name: vbsiv-service

Argo Rollouts supports rolling updates as well as richer release strategies. The sample above configures a Rollout for a canary scenario.

The release is divided into eight steps. A pause step with no duration halts the rollout and keeps it halted until it is resumed by an external event, such as an automated tool or a user manually promoting it (e.g., kubectl argo rollouts promote vbsiv).

The first step sets the weight to 20%. Since there are five replicas, a 20% weight means only one replica is upgraded, followed by 40%, 60%, 80%, and so on.

After step 1, one replica runs the new version. Because we have not yet promoted past the step-2 pause, the Service does not explicitly split traffic; with one pod out of five on the new version, approximately 20% of the traffic reaches it.

If canaryService and stableService are specified in .spec.strategy, traffic is split explicitly after the upgrade: the canaryService forwards only new-version traffic, while the stableService forwards only old-version traffic. This is done by changing each Service's selector; the upgrade automatically adds a pod-template hash to the two Services. If you promote past this step, the new version receives 40%, and so on. Thus, by defining a canary policy, we can use Rollouts to release our services progressively.
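
A minimal sketch of that traffic-splitting configuration (the two Service names are assumptions; each would be an ordinary Service whose selector matches app: vbsiv):

spec:
  strategy:
    canary:
      canaryService: vbsiv-canary   # Argo Rollouts points this Service at new-version pods only
      stableService: vbsiv-stable   # and this one at old-version pods only
      steps:
        - setWeight: 20
        - pause: {}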
