Some scenarios need an output to pass a directory or a file on to the next step.
Argo provides two mechanisms for this:
the parameter approach (parameters), and the artifact approach (artifacts).
Each suits a different use case: a parameter reads the contents of a text file and passes that value to the next step, while an artifact can pass the file itself, or a whole directory.
The parameter approach (parameters)
Parameters are simple to configure; see the following example:
# Output parameters provide a way to use the contents of a file,
# as a parameter value in a workflow. In that regard, they are
# similar in concept to script templates, with the difference being
# that the output parameter values are obtained via file contents
# instead of stdout (as with script templates). Secondly, there can
# be multiple 'output.parameters.xxx' in a single template, versus
# a single 'output.result' from a script template.
#
# In this example, the 'whalesay' template produces an output
# parameter named 'hello-param', taken from the file contents of
# /tmp/hello_world.txt. This parameter is passed to a subsequent
# step as an input parameter to the template, 'print-message'.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-parameter-
spec:
  entrypoint: output-parameter
  templates:
  - name: output-parameter
    steps:
    - - name: generate-parameter
        template: whalesay
    - - name: consume-parameter
        template: print-message
        arguments:
          parameters:
          - name: message
            value: "{{steps.generate-parameter.outputs.parameters.hello-param}}"

  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      parameters:
      - name: hello-param
        valueFrom:
          path: /tmp/hello_world.txt

  - name: print-message
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
Differences between STEPS mode and DAG mode
Note that in steps mode, values are referenced as:
{{steps.generate-parameter.outputs.parameters.hello-param}}
DAG templates instead use the tasks prefix to reference other steps, for example:
{{tasks.generate-artifact.outputs.artifacts.hello-art}}
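For reference, here is a minimal DAG-template sketch using the tasks prefix. The names are illustrative, assuming a generate template that outputs an artifact named hello-art and a consume template that takes an input artifact named in-artifact:
- name: main
  dag:
    tasks:
    - name: generate-artifact
      template: generate
    - name: consume-artifact
      template: consume
      # in a DAG, ordering is declared via dependencies rather than step groups
      dependencies: [generate-artifact]
      arguments:
        artifacts:
        - name: in-artifact
          from: "{{tasks.generate-artifact.outputs.artifacts.hello-art}}"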
The artifact approach (artifacts)
Example:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-artifacts-
spec:
  entrypoint: output-artifacts
  templates:
  - name: output-artifacts
    steps:
    - - name: generate-artifacts
        template: generate
    - - name: consume-artifacts
        template: consume
        arguments:
          artifacts:
          - name: in-artifact
            # reference the artifact by the step name, not the template name
            from: "{{steps.generate-artifacts.outputs.artifacts.out-artifact}}"

  - name: generate
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt

  - name: consume
    inputs:
      artifacts:
      - name: in-artifact
        path: /tmp/input.txt
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo 'input artifact contents:' && cat /tmp/input.txt"]
Possible problem
controller is not configured with a default archive location
Cause
The artifact approach needs somewhere to stage files in transit, so Argo must be configured with a storage backend.
(The origin of this error can be found in the Argo source code.)
At the time of writing, Argo supports three storage types:
AWS S3, GCS (Google Cloud Storage), and MinIO.
Solution
Configure the S3 storage backend at the point of use
Add the following under outputs:
s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt
  accessKeySecret:
    name: my-aws-s3-credentials
    key: accessKey
  secretKeySecret:
    name: my-aws-s3-credentials
    key: secretKey
If the S3 credentials are already configured in the environment (for example via an IAM role), the accessKeySecret and secretKeySecret entries are not needed, as follows:
s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt
The complete example:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-artifacts-
spec:
  entrypoint: output-artifacts
  templates:
  - name: output-artifacts
    steps:
    - - name: generate-artifacts
        template: generate
    - - name: consume-artifacts
        template: consume
        arguments:
          artifacts:
          - name: in-artifact
            from: "{{steps.generate-artifacts.outputs.artifacts.out-artifact}}"

  - name: generate
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt
        s3:
          endpoint: s3.amazonaws.com
          bucket: my-aws-bucket-name
          key: path/in/bucket/my-input-artifact.txt

  - name: consume
    inputs:
      artifacts:
      - name: in-artifact
        path: /tmp/input.txt
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo 'input artifact contents:' && cat /tmp/input.txt"]
Configure the S3 storage backend globally in the config file
Adding the S3 config at every point of use is redundant; Argo provides a central configuration instead.
Edit it with:
kubectl edit configmap workflow-controller-configmap -n argo
Add the following content:
data:
  config: |
    artifactRepository:
      s3:
        bucket: my-aws-bucket-name
        keyPrefix: prefix/in/bucket   # optional
        endpoint: s3.amazonaws.com    # AWS => s3.amazonaws.com; GCS => storage.googleapis.com
        insecure: true                # omit for S3/GCS; needed when minio runs without TLS
        accessKeySecret:
          name: my-aws-s3-credentials
          key: accessKey
        secretKeySecret:
          name: my-aws-s3-credentials
          key: secretKey
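Once this default artifact repository is in place, the per-step s3 block from the earlier example is no longer needed; an output artifact can then be declared with just a name and a path, e.g.:
outputs:
  artifacts:
  - name: out-artifact
    path: /tmp/hello_world.txt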
在配置文件中统一配置minio存储引擎
minio存储引擎是argo自带的存储引擎。可以很方便的安装。
使用minio之前需要先安装,步骤如下:
官网步骤参考:
Install an Artifact Repository
需要翻墙
brew install kubernetes-helm # mac
helm init
helm install stable/minio --name argo-artifacts --set service.type=LoadBalancer --set persistence.enabled=false
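To verify the install, check the release and its objects (the service name argo-artifacts-minio here is an assumption based on the chart's naming; it matches the endpoint used in the config below):
helm list
kubectl get service argo-artifacts-minio
kubectl get pods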
Edit the config:
kubectl edit configmap workflow-controller-configmap -n argo
Add the following content:
data:
  config: |
    artifactRepository:
      s3:
        bucket: my-bucket
        endpoint: argo-artifacts-minio.default:9000
        insecure: true
        # accessKeySecret and secretKeySecret are secret selectors.
        # It references the k8s secret named 'argo-artifacts-minio'
        # which was created during the minio helm install. The keys,
        # 'accesskey' and 'secretkey', inside that secret are where the
        # actual minio credentials are stored.
        accessKeySecret:
          name: argo-artifacts
          key: accesskey
        secretKeySecret:
          name: argo-artifacts
          key: secretkey
Note: the name and key under accessKeySecret and secretKeySecret must be set according to your own environment,
not the values shown in the official docs:
AccessKey: AKIAIOSFODNN7EXAMPLE
SecretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
To find the right values in your environment:
kubectl get secret
Output:
NAME                  TYPE                                  DATA      AGE
argo-artifacts        Opaque                                2         4h
default-token-2cvxb   kubernetes.io/service-account-token   3         61d
# so argo-artifacts is the name we need
kubectl get secret/argo-artifacts -o wide
kubectl describe secret/argo-artifacts
Output:
Name:         argo-artifacts
Namespace:    default
Labels:       app=minio
              chart=minio-1.9.1
              heritage=Tiller
              release=argo-artifacts
Annotations:  <none>

Type:  Opaque

Data
====
accesskey:  20 bytes
secretkey:  40 bytes

So the secret does indeed contain the two credential entries.
Seeing the actual values takes one more step: either decode them directly with kubectl (see the sketch just below), or create a pod that mounts the secret, as described next.
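A direct decode, assuming base64 is available locally:
kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode
kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode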
Create a pod that accesses the secret data through a volume
Here is a config file that can be used to create the pod:
vi secret-pod.yaml
Contents:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/secret-volume
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: argo-artifacts
The secretName here must match the name obtained from kubectl get secret above.
1. Create the pod:
kubectl create -f secret-pod.yaml
2. Verify the pod is running:
kubectl get pod secret-test-pod
Output:
NAME              READY     STATUS    RESTARTS   AGE
secret-test-pod   1/1       Running   0          10s
3. Open a shell in the pod's container:
kubectl exec -it secret-test-pod -- /bin/bash
4. The secret data is exposed to the container under /etc/secret-volume via the volume mount. Change into that directory:
root@secret-test-pod:/# cd /etc/secret-volume
5. List the files in /etc/secret-volume:
root@secret-test-pod:/etc/secret-volume# ls
The output shows two files, one for each piece of secret data:
accesskey secretkey
Print their contents:
cat accesskey
AKIAIOSFODNN7EXAMPLE
cat secretkey
wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
It turns out the helm-installed argo-artifacts secret holds exactly the same default credentials as the official docs.
However, these defaults fail authentication
(there is currently a bug: the default credentials do not work).
Possible problem:
secret 'argo-artifacts-minio-user' does not have the key 'AKIAIOSFODNN7EXAMPLE'
Cause
The default credentials fail validation.
Solution (awaiting an official response):
Minio - Default Artifact does not work
Possible problem: failed to save outputs: secrets "my-aws-s3-credentials" not found
This can happen when the key and value configured (in the configmap, or per step) do not match the AWS IAM role credentials of the environment.
Solution: drop the explicit credential settings and specify only the bucket and the object key, which falls back to the IAM role's permissions by default, as follows:
outputs:
  artifacts:
  - name: first-output
    path: /data/test/first_result.snp
    s3:
      endpoint: s3.cn-northwest-2.amazonaws.com.cn
      bucket: my-test-env
      key: tmp
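This works because, when no secret selectors are set, the executor's S3 client falls back to the ambient AWS credential chain, typically the node's IAM instance profile; the workflow pods must therefore run on nodes whose role grants access to the bucket.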
Possible problem: the output file is garbled when used in the next pod
Cause: by default, Argo compresses files before caching them to S3.
Solution: mark the artifact as not to be archived:
archive:
  none: {}
As follows:
outputs:
  artifacts:
  - name: first-output
    path: /data/test/first_result.snp
    archive:
      none: {}
    s3:
      endpoint: s3.cn-northwest-2.amazonaws.com.cn
      bucket: my-test-env
      key: tmp