
Using Loki to Monitor Container Logs on Kubernetes 1.24

I. Overview

1. Loki Architecture

2. How Loki Works

II. Using Loki to Monitor Container Logs

1. Install Loki

2. Install Promtail

3. Install Grafana

4. Verification

I. Overview

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system developed by Grafana Labs and implemented in Go. It is designed to be very cost-effective and easy to operate, because it does not index the contents of the logs; instead, it attaches a set of labels to each log stream. The Loki project was inspired by Prometheus.

The official description says it all: "Like Prometheus, but for logs."

1. Loki Architecture

  • Loki: the main server, responsible for storing logs and processing queries.
  • Promtail: the agent that collects logs and forwards them to Loki.
  • Grafana: provides visualization, querying, and alerting through a web UI.


[Figure: Loki architecture diagram]

2. How Loki Works

Promtail collects the logs and sends them to the Distributor component. The Distributor validates each incoming log stream and forwards the validated logs, in parallel batches, to the Ingester component. The Ingester builds the incoming log streams into chunks, compresses them, and flushes them to the configured backend storage.


[Figure: Loki log ingestion and query flow]

The Querier component receives HTTP query requests and forwards them to the Ingesters, which return the data still held in Ingester memory. If no matching data is found in Ingester memory, the Querier queries the backend storage directly (with built-in deduplication).
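
The read path can be exercised directly against Loki's HTTP API, which is a useful way to see this flow in action. A minimal sketch, assuming the loki Service created later in this post is port-forwarded to localhost (the namespace label is produced by the Promtail relabel rules below):

# Forward the Loki port locally (the Service is created in section II)
kubectl port-forward -n loki svc/loki 3100:3100 &

# List the label names Loki currently knows about
curl -s "http://localhost:3100/loki/api/v1/labels"

# Fetch recent entries for one namespace (the LogQL query is URL-encoded)
curl -s -G "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={namespace="kube-system"}' \
  --data-urlencode 'limit=10'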

II. Using Loki to Monitor Container Logs

1. Install Loki

1) Create the RBAC objects

[root@k8s-master01 ~]# cat <<END > loki-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: loki
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: loki
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: loki
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: [loki]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: loki
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
  namespace: loki
END
[root@k8s-master01 ~]# kubectl create -f loki-rbac.yaml
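
A quick sanity check that the namespace, service account, and bindings exist:

[root@k8s-master01 ~]# kubectl get ns loki
[root@k8s-master01 ~]# kubectl get sa,role,rolebinding -n loki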

2) Create the ConfigMap

[root@k8s-master01 ~]# cat <<END > loki-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-config
  namespace: loki
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: "2022-10-21"
        store: boltdb-shipper
        object_store: filesystem
        schema: v11
        index:
          prefix: index_
          period: 24h
    server:
      http_listen_port: 3100
    storage_config:
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h         
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: true
      retention_period: 48h
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: filesystem
END
[root@k8s-master01 ~]# kubectl create -f loki-configmap.yaml

3) Create the StorageClass, PVC, and Deployment

[root@k8s-master01 ~]# cat <<END > loki-data-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: loki-nfs-sc
provisioner: fuseim.pri/ifs  # must match the provisioner name of the NFS client provisioner running in the cluster
END

[root@k8s-master01 ~]# kubectl create -f loki-data-sc.yaml
[root@k8s-master01 ~]# cat <<END > loki-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki-pvc
  namespace: loki
spec:
  storageClassName: "loki-nfs-sc"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
END
[root@k8s-master01 ~]# kubectl create -f loki-pvc.yaml
[root@k8s-master01 ~]# cat <<END > loki-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loki
  namespace: loki
  labels:
    app: loki
    release: loki
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: loki
      release: loki
#  serviceName: loki-headless
  template:
    metadata:
      labels:
        app: loki
        release: loki
    spec:
      containers:
      - name: loki
        image: 10.94.99.109:8000/loki/loki:2.3.0  # private registry mirror; the upstream image is grafana/loki:2.3.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3100
          name: http-metrics
          protocol: TCP
        args:
          - -config.file=/etc/loki/loki.yaml
        volumeMounts:
        - name: loki-config
          mountPath: /etc/loki
        - name: storage
          mountPath: /data
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      serviceAccountName: loki
      volumes:
      - name: loki-config
        configMap:
          defaultMode: 493
          name: loki-config
      - name: storage
        persistentVolumeClaim:
          claimName: loki-pvc
END
[root@k8s-master01 ~]# kubectl create -f loki-deployment.yaml
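
Before moving on, it is worth confirming that the PVC bound and that Loki answers its readiness endpoint (a sketch; /ready should eventually print "ready" once the ingester ring is up):

[root@k8s-master01 ~]# kubectl get pvc -n loki
[root@k8s-master01 ~]# kubectl get pods -n loki
[root@k8s-master01 ~]# kubectl port-forward -n loki deploy/loki 3100:3100 &
[root@k8s-master01 ~]# curl -s http://localhost:3100/ready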

2. Install Promtail

1) Create the RBAC objects

[root@k8s-master01 ~]# cat <<END > promtail-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-promtail
  labels:
    app: promtail
  namespace: loki
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: promtail
  name: promtail-clusterrole
rules:
- apiGroups: [""]
  resources: ["nodes","nodes/proxy","services","endpoints","pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail-clusterrolebinding
  labels:
    app: promtail
subjects:
  - kind: ServiceAccount
    name: loki-promtail
    namespace: loki
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
END
[root@k8s-master01 ~]# kubectl create -f promtail-rbac.yaml
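
Cluster-wide permissions are easy to get wrong, so verify them by impersonating the new service account; anything other than "yes" means the ClusterRoleBinding is not in effect:

[root@k8s-master01 ~]# kubectl auth can-i list pods --as=system:serviceaccount:loki:loki-promtail
yes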

2) Create the ConfigMap

For the full Promtail configuration reference, see the official documentation.

[root@k8s-master01 ~]# cat <<"END" > promtail-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: loki
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:   # how Promtail connects to the Loki instance
      backoff_config:  # how to retry requests to Loki when one fails
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576  # maximum batch size, in bytes, of logs to send to Loki
      batchwait: 1s  # maximum time to wait before sending a batch, even if it is not full
      external_labels: {}  # static labels added to every log line sent to Loki
      timeout: 10s   # maximum time to wait for the server to respond to a request
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
END
[root@k8s-master01 ~]# kubectl create -f promtail-configmap.yaml
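
Two notes on this configuration. First, the docker: {} pipeline stage parses Docker's JSON log format; on a Kubernetes 1.24 cluster running containerd the files are in CRI format, so you will likely need cri: {} instead (the Helm section below makes exactly this change). Second, once the DaemonSet from the next step is running, Promtail's own web UI shows which files each scrape job matched (a sketch, using the app=promtail label from these manifests):

[root@k8s-master01 ~]# POD=$(kubectl get pods -n loki -l app=promtail -o name | head -n1)
[root@k8s-master01 ~]# kubectl port-forward -n loki "$POD" 3101:3101 &
[root@k8s-master01 ~]# curl -s http://localhost:3101/targets | head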

3) Create the DaemonSet

[root@k8s-master01 ~]# cat <<END > promtail-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: loki
  labels:
    app: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: loki-promtail
      containers:
        - name: promtail
          image: 10.94.99.109:8000/loki/promtail:2.3.0
          imagePullPolicy: IfNotPresent
          args:
          - -config.file=/etc/promtail/promtail.yaml
          - -client.url=http://loki.loki.svc.cluster.local:3100/loki/api/v1/push
          env:
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          volumeMounts:
          - mountPath: /etc/promtail
            name: config
          - mountPath: /run/promtail
            name: run
          - mountPath: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io  # note: the default container storage directory was changed on my nodes
            name: docker
            readOnly: true
          - mountPath: /var/log/pods
            name: pods
            readOnly: true
          ports:
          - containerPort: 3101
            name: http-metrics
            protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsUser: 0
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      tolerations:   # kubeadm 1.24 taints control-plane nodes with control-plane, not master
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      volumes:
        - name: config
          configMap:
            name: loki-promtail
        - name: run
          hostPath:
            path: /run/promtail
            type: ""
        - name: docker
          hostPath:
            path: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io  # note: the default container storage directory was changed on my nodes
        - name: pods
          hostPath:
            path: /var/log/pods
END
[root@k8s-master01 ~]# kubectl create -f promtail-daemonset.yaml
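
Each node should now run one Promtail pod, and its logs should show file discovery at work (lines like msg="Adding target" are a good sign; a sketch):

[root@k8s-master01 ~]# kubectl get ds,pods -n loki -l app=promtail -o wide
[root@k8s-master01 ~]# kubectl logs -n loki -l app=promtail --tail=20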

[root@k8s-master01 ~]# cat <<END > loki-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: loki
  labels:
    app: loki
    release: loki
spec:
  ports:
  - name: http-metrics
    port: 3100
    protocol: TCP
#    targetPort: 3100
#    nodePort: 32001
  #type: NodePort
  selector:
    app: loki
    release: loki
---
#apiVersion: v1
#kind: Service
#metadata:
#  name: loki-headless
#  namespace: loki
#  labels:
#    app: loki
#    release: loki
#spec:
#  clusterIP: None
#  publishNotReadyAddresses: true
#  ports:
#  - name: http-metrics
#    port: 3100
#    protocol: TCP
#    targetPort: http-metrics
#    nodePort: 32002
#    type: NodePort
#  selector:
#    app: loki
#    release: loki
END
[root@k8s-master01 ~]# kubectl create -f loki-service.yaml
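
Promtail reaches Loki through this Service's cluster DNS name, so an in-cluster test is the most faithful check (a sketch; curlimages/curl is just a convenient throwaway image):

[root@k8s-master01 ~]# kubectl run curl-test -n loki --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s http://loki.loki.svc.cluster.local:3100/ready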

4) Key Promtail settings

volumeMounts:
- mountPath: /var/lib/docker/containers
  name: docker
  readOnly: true
- mountPath: /var/log/pods
  name: pods
  readOnly: true
volumes:
- name: docker
  hostPath:
    path: /var/lib/docker/containers
- name: pods
  hostPath:
    path: /var/log/pods

Note that the hostPath and the mountPath must be identical (that is, the path inside the Promtail container must match the container log path on the host). Promtail discovers a container's log files through the Kubernetes API, which returns paths as they exist on the node, and it then opens those same paths inside its own filesystem. If the two differ, Promtail cannot read the logs and the httpGet readiness check will fail.
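
You can see on a node why the paths must line up. With Docker, the per-container entries under /var/log/pods are symlinks into /var/lib/docker/containers, so both sides of the link must resolve at the same paths inside the Promtail container; with containerd the files typically live directly under /var/log/pods. A quick look:

ls -l /var/log/pods/
# entries look like <namespace>_<pod>_<uid>/<container>/0.log
readlink -f /var/log/pods/*/*/*.log | head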

3. Install Grafana

If you already have a Grafana instance, there is no need to deploy another one.

In Grafana: Add data source → choose the Loki type → set the URL to http://loki.loki.svc.cluster.local:3100 → Save & test.
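
If you would rather configure this declaratively, Grafana can load the data source from a provisioning file instead. A minimal sketch, placed in Grafana's standard provisioning directory (the file name loki.yaml is arbitrary):

# /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
- name: Loki
  type: loki
  access: proxy
  url: http://loki.loki.svc.cluster.local:3100
  isDefault: true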

helm show values grafana/loki-stack > ./loki-stack.yaml

Open loki-stack.yaml to see the chart's installation options. Remove the components that default to false (fluent-bit, prometheus, filebeat, logstash, and so on); since we need Grafana, set grafana.enabled to true. The resulting YAML is:

loki:
  enabled: true
  isDefault: true

promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push

grafana:
  enabled: true
  sidecar:
    datasources:
      enabled: true
      maxLines: 1000
  image:
    tag: 8.3.5

P.S. This step is optional; you can simply pass --set grafana.enabled=true at install time instead.

# Add the repo
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Install the chart
helm upgrade --install loki --namespace=loki-stack  grafana/loki-stack --values ./loki-stack.yaml
# or
helm upgrade --install loki-stack grafana/loki-stack -n loki-stack --create-namespace --set grafana.enabled=true

Because Promtail is configured by default to parse Docker-format logs, while this cluster runs containerd, the Promtail ConfigMap must be switched to the CRI (containerd) log format:

kubectl get -n loki-stack configmaps/loki-stack-promtail -o yaml | sed -E 's|- docker: \{\}|- cri: {}|g' | kubectl apply -n loki-stack -f -

# Promtail must be redeployed after the ConfigMap change
kubectl rollout restart -n loki-stack daemonsets/loki-stack-promtail

Once deployed, the stack starts collecting Kubernetes container logs automatically.
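
A quick status check (assuming the release and namespace names from the second install command above):

helm list -n loki-stack
kubectl get pods -n loki-stack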

4. Verification

Next, open the Grafana UI to check the result. First, retrieve the Grafana admin password:

$ kubectl get secret --namespace loki-stack loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Then forward the Grafana port so the web UI can be reached. By default, port-forward binds to localhost; add --address as needed, depending on where kubectl is running.

$ kubectl port-forward --namespace loki-stack service/loki-stack-grafana 3000:80
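
Log in as admin with the password retrieved above, open Explore, select the Loki data source, and try a few LogQL queries. These label names are the ones produced by the Promtail scrape configs; adjust to your cluster:

{namespace="kube-system"}
{namespace="loki-stack"} |= "error"
rate({namespace="kube-system"}[5m])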

