1. First install prometheus-operator and the Ollama LLM service (not covered here).
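As a rough sketch of the Ollama side (assuming Ollama runs on a Linux host that the cluster can reach; the model tag matches the one configured below), the setup might look like this:
# Minimal sketch, assumption: Ollama on a host outside the cluster
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
# Bind to all interfaces so cluster pods can reach port 11434 (default is loopback only)
OLLAMA_HOST=0.0.0.0:11434 ollama serve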
2. Install k8sgpt-operator
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
# Or, to also create a Prometheus ServiceMonitor and a Grafana dashboard, install with:
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace --set serviceMonitor.enabled=true --set grafanaDashboard.enabled=true
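An optional check (not part of the original steps) to confirm the operator pod is running before continuing:
kubectl get pods -n k8sgpt-operator-system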
3. Verify that the k8sgpt custom resources were created successfully
kubectl api-resources | grep -i gpt
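The CRDs can also be listed directly (optional alternative check):
kubectl get crd | grep -i k8sgpt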
4. Configure the K8sGPT resource; baseUrl must point to an address where the Ollama service is reachable (the Ollama server's IP).
kubectl apply -n k8sgpt-operator-system -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-ollama
spec:
  ai:
    enabled: true
    model: llama3.1:8b
    backend: localai
    baseUrl: http://127.0.0.1:11434/v1
  noCache: false
  filters:
    - Pod
    - Ingress
  repository: ghcr.io/k8sgpt-ai/k8sgpt
  version: v0.3.40
EOF
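Note that http://127.0.0.1:11434 only resolves to the k8sgpt pod itself; when Ollama runs on a separate host, one option is a selector-less Service plus a matching Endpoints object pointing at that host. A sketch with a hypothetical host IP 192.168.1.10 (baseUrl would then become http://ollama:11434/v1):
kubectl apply -n k8sgpt-operator-system -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  ports:
    - port: 11434
      targetPort: 11434
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ollama
subsets:
  - addresses:
      - ip: 192.168.1.10   # hypothetical Ollama host IP, replace with the real one
    ports:
      - port: 11434
EOF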
5. List the Result objects
kubectl get result -n k8sgpt-operator-system
6. View the K8sGPT scan and analysis results
kubectl get result -n k8sgpt-operator-system -o jsonpath='{.items[*].spec}' | jq .
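To print only the AI-generated explanation text (assuming the Result spec exposes it in a details field, as in current k8sgpt-operator versions):
kubectl get result -n k8sgpt-operator-system -o jsonpath='{.items[*].spec.details}'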