Rancher fails to start

Environment information:
Rancher Server setup
Rancher version: 2.7.9
Installation option (Docker install/Helm Chart): Helm Chart
k3s version: v1.26.9+k3s1
Online or air-gapped deployment: air-gapped
**Host OS:** three Ubuntu 22.04.6 machines (two servers, one agent), all with firewalls disabled

Problem description:
Rancher is unreachable

Additional context/logs:

Logs
**Check pod status in kube-system**

kubectl get pods -n kube-system
E0526 09:09:41.645390  340032 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:09:41.659942  340032 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:09:41.661778  340032 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:09:41.664870  340032 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME                                            READY   STATUS              RESTARTS          AGE
svclb-jdchain-blance-228aaa0b-6l6gs             1/1     Running             2 (2d22h ago)     424d
svclb-anythingllm-loadbalancer-727b8af7-2wb8q   1/1     Running             2 (2d22h ago)     97d
traefik-57c84cf78d-cc6br                        0/1     ContainerCreating   1                 2d19h
local-path-provisioner-76d776f6f9-bkbjc         1/1     Running             0                 2d8h
svclb-ollama-loadbalancer-4863a12a-gp5km        1/1     Running             0                 99d
svclb-dify-bl-2451c7fe-d7752                    1/1     Running             0                 83d
svclb-pgsql-bl-5a504e6f-8vmmn                   1/1     Running             0                 83d
svclb-dify-api-bl-787d9c8c-6pths                1/1     Running             0                 74d
svclb-dify-weaviate-bl-fbd122d6-t2jxh           1/1     Running             0                 10d
svclb-liangsyh-loadbalancer-a7dfcd05-j2qbz      1/1     Running             0                 485d
svclb-traefik-ca7181d8-b624f                    2/2     Running             0                 503d
svclb-activemq-loadbalancer-14fb1ed9-5rvcr      2/2     Running             0                 500d
svclb-dify-weaviate-bl-fbd122d6-2km7n           1/1     Running             0                 2d19h
svclb-anythingllm-loadbalancer-727b8af7-5hb74   1/1     Running             0                 2d19h
svclb-traefik-ca7181d8-5v2p5                    2/2     Running             0                 2d19h
svclb-dify-bl-2451c7fe-g856l                    1/1     Running             0                 2d19h
svclb-dify-api-bl-787d9c8c-n7252                1/1     Running             0                 2d19h
svclb-jdchain-blance-228aaa0b-74trl             1/1     Running             0                 2d19h
svclb-activemq-loadbalancer-14fb1ed9-v49bs      2/2     Running             0                 2d19h
svclb-ollama-loadbalancer-4863a12a-2tpxm        1/1     Running             0                 2d19h
svclb-liangsyh-loadbalancer-a7dfcd05-grcgk      1/1     Running             0                 2d19h
svclb-pgsql-bl-5a504e6f-pfnsp                   1/1     Running             0                 2d19h
coredns-59b4f5bbd5-bgp48                        1/1     Running             0                 2d18h
metrics-server-68cf49699b-4ff67                 1/1     Running             0                 61m
helm-install-traefik-p56rq                      0/1     CrashLoopBackOff    680 (4m49s ago)   2d17h
helm-install-traefik-crd-p9r6c                  0/1     CrashLoopBackOff    677 (2m20s ago)   2d17h


**Check pod status in cattle-system**

kubectl get pods -n cattle-system
E0526 09:38:21.699386  340344 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:38:26.701568  340344 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:38:31.705988  340344 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0526 09:38:36.709516  340344 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME                               READY   STATUS              RESTARTS          AGE
rancher-6b6b974475-wbn9s           0/1     Error               19                2d21h
rancher-6b6b974475-txjcx           0/1     Error               15                2d19h
rancher-6b6b974475-khq6f           0/1     ContainerCreating   15 (2d19h ago)    2d19h
helm-operation-l8nf8               0/2     Error               1                 2d19h
rancher-webhook-7dc5857799-lpsrp  

**Check the metrics-server-68cf49699b-4ff67 configuration**


**k3s logs**

 webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:15:46 k3s-master k3s[339476]: I0526 09:15:46.043446  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:15:46 k3s-master k3s[339476]: time="2025-05-26T09:15:46+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:46 k3s-master kernel: [242917.195132] nfs: server 10.43.152.27 not responding, timed out
May 26 09:15:47 k3s-master k3s[339476]: time="2025-05-26T09:15:47+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:48 k3s-master k3s[339476]: time="2025-05-26T09:15:48+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:49 k3s-master k3s[339476]: time="2025-05-26T09:15:49+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:50 k3s-master k3s[339476]: time="2025-05-26T09:15:50+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:50 k3s-master k3s[339476]: E0526 09:15:50.885567  339476 available_controller.go:456] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 26 09:15:50 k3s-master k3s[339476]: I0526 09:15:50.932502  339476 trace.go:236] Trace[799792701]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:40.556) (total time: 130375ms):
May 26 09:15:50 k3s-master k3s[339476]: Trace[799792701]: [2m10.375578839s] [2m10.375578839s] END
May 26 09:15:50 k3s-master k3s[339476]: I0526 09:15:50.932580  339476 trace.go:236] Trace[1669590953]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:40.556) (total time: 130376ms):
May 26 09:15:50 k3s-master k3s[339476]: Trace[1669590953]: [2m10.376027304s] [2m10.376027304s] END
May 26 09:15:50 k3s-master k3s[339476]: I0526 09:15:50.932870  339476 trace.go:236] Trace[857053381]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:40.556) (total time: 130376ms):
May 26 09:15:50 k3s-master k3s[339476]: Trace[857053381]: [2m10.376346863s] [2m10.376346863s] END
May 26 09:15:50 k3s-master k3s[339476]: I0526 09:15:50.933066  339476 trace.go:236] Trace[1068626721]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:40.556) (total time: 130376ms):
May 26 09:15:50 k3s-master k3s[339476]: Trace[1068626721]: [2m10.376527304s] [2m10.376527304s] END
May 26 09:15:50 k3s-master k3s[339476]: I0526 09:15:50.933313  339476 trace.go:236] Trace[402534906]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:40.556) (total time: 130376ms):
May 26 09:15:50 k3s-master k3s[339476]: Trace[402534906]: [2m10.376484982s] [2m10.376484982s] END
May 26 09:15:51 k3s-master k3s[339476]: W0526 09:15:51.031974  339476 dispatcher.go:204] Failed calling webhook, failing closed rancher.cattle.io.secrets: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:15:51 k3s-master k3s[339476]: I0526 09:15:51.032877  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:15:51 k3s-master k3s[339476]: time="2025-05-26T09:15:51+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:52 k3s-master k3s[339476]: time="2025-05-26T09:15:52+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:53 k3s-master k3s[339476]: time="2025-05-26T09:15:53+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:54 k3s-master k3s[339476]: time="2025-05-26T09:15:54+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:55 k3s-master k3s[339476]: I0526 09:15:55.030204  339476 trace.go:236] Trace[125564393]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:45.557) (total time: 129472ms):
May 26 09:15:55 k3s-master k3s[339476]: Trace[125564393]: [2m9.472497374s] [2m9.472497374s] END
May 26 09:15:55 k3s-master k3s[339476]: I0526 09:15:55.030912  339476 trace.go:236] Trace[670599592]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:45.557) (total time: 129473ms):
May 26 09:15:55 k3s-master k3s[339476]: Trace[670599592]: [2m9.473164003s] [2m9.473164003s] END
May 26 09:15:55 k3s-master k3s[339476]: I0526 09:15:55.030913  339476 trace.go:236] Trace[1614112011]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:45.557) (total time: 129473ms):
May 26 09:15:55 k3s-master k3s[339476]: Trace[1614112011]: [2m9.473314051s] [2m9.473314051s] END
May 26 09:15:55 k3s-master k3s[339476]: I0526 09:15:55.031634  339476 trace.go:236] Trace[925978820]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:45.557) (total time: 129473ms):
May 26 09:15:55 k3s-master k3s[339476]: Trace[925978820]: [2m9.473780137s] [2m9.473780137s] END
May 26 09:15:55 k3s-master k3s[339476]: I0526 09:15:55.032114  339476 trace.go:236] Trace[69834777]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:45.557) (total time: 129474ms):
May 26 09:15:55 k3s-master k3s[339476]: Trace[69834777]: [2m9.474213985s] [2m9.474213985s] END
May 26 09:15:55 k3s-master k3s[339476]: time="2025-05-26T09:15:55+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:55 k3s-master kernel: [242926.154788] nfs: server 10.43.152.27 not responding, timed out
May 26 09:15:55 k3s-master k3s[339476]: E0526 09:15:55.890562  339476 available_controller.go:456] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 26 09:15:56 k3s-master k3s[339476]: W0526 09:15:55.999998  339476 dispatcher.go:204] Failed calling webhook, failing closed rancher.cattle.io.secrets: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:15:56 k3s-master k3s[339476]: I0526 09:15:56.000509  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:15:56 k3s-master k3s[339476]: time="2025-05-26T09:15:56+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:57 k3s-master k3s[339476]: time="2025-05-26T09:15:57+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:58 k3s-master k3s[339476]: time="2025-05-26T09:15:58+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:15:59 k3s-master k3s[339476]: time="2025-05-26T09:15:59+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:00 k3s-master k3s[339476]: time="2025-05-26T09:16:00+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:00 k3s-master k3s[339476]: E0526 09:16:00.899846  339476 available_controller.go:456] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 26 09:16:00 k3s-master k3s[339476]: W0526 09:16:00.998960  339476 dispatcher.go:204] Failed calling webhook, failing closed rancher.cattle.io.secrets: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:16:00 k3s-master k3s[339476]: I0526 09:16:00.999417  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:16:01 k3s-master k3s[339476]: I0526 09:16:01.172713  339476 trace.go:236] Trace[810147232]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:50.572) (total time: 130600ms):
May 26 09:16:01 k3s-master k3s[339476]: Trace[810147232]: [2m10.600071481s] [2m10.600071481s] END
May 26 09:16:01 k3s-master k3s[339476]: I0526 09:16:01.172714  339476 trace.go:236] Trace[1129002975]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:50.572) (total time: 130600ms):
May 26 09:16:01 k3s-master k3s[339476]: Trace[1129002975]: [2m10.600043694s] [2m10.600043694s] END
May 26 09:16:01 k3s-master k3s[339476]: I0526 09:16:01.172799  339476 trace.go:236] Trace[2006425504]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:50.572) (total time: 130600ms):
May 26 09:16:01 k3s-master k3s[339476]: Trace[2006425504]: [2m10.600152504s] [2m10.600152504s] END
May 26 09:16:01 k3s-master k3s[339476]: I0526 09:16:01.172880  339476 trace.go:236] Trace[913528700]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:50.572) (total time: 130600ms):
May 26 09:16:01 k3s-master k3s[339476]: Trace[913528700]: [2m10.60008374s] [2m10.60008374s] END
May 26 09:16:01 k3s-master k3s[339476]: I0526 09:16:01.173095  339476 trace.go:236] Trace[1703270175]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:50.572) (total time: 130600ms):
May 26 09:16:01 k3s-master k3s[339476]: Trace[1703270175]: [2m10.600300707s] [2m10.600300707s] END
May 26 09:16:01 k3s-master k3s[339476]: time="2025-05-26T09:16:01+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:02 k3s-master k3s[339476]: E0526 09:16:02.592590  339476 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
May 26 09:16:02 k3s-master k3s[339476]: I0526 09:16:02.592619  339476 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
May 26 09:16:02 k3s-master k3s[339476]: E0526 09:16:02.593186  339476 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: request timed out, Header: map[]
May 26 09:16:02 k3s-master k3s[339476]: I0526 09:16:02.593659  339476 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
May 26 09:16:02 k3s-master k3s[339476]: time="2025-05-26T09:16:02+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:03 k3s-master k3s[339476]: I0526 09:16:03.220483  339476 trace.go:236] Trace[205072606]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:52.437) (total time: 130782ms):
May 26 09:16:03 k3s-master k3s[339476]: Trace[205072606]: [2m10.782558039s] [2m10.782558039s] END
May 26 09:16:03 k3s-master k3s[339476]: time="2025-05-26T09:16:03+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:04 k3s-master k3s[339476]: time="2025-05-26T09:16:04+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:05 k3s-master k3s[339476]: I0526 09:16:05.268494  339476 trace.go:236] Trace[1174083779]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:55.573) (total time: 129695ms):
May 26 09:16:05 k3s-master k3s[339476]: Trace[1174083779]: [2m9.695219514s] [2m9.695219514s] END
May 26 09:16:05 k3s-master k3s[339476]: I0526 09:16:05.268494  339476 trace.go:236] Trace[1430380764]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:55.573) (total time: 129695ms):
May 26 09:16:05 k3s-master k3s[339476]: Trace[1430380764]: [2m9.695315913s] [2m9.695315913s] END
May 26 09:16:05 k3s-master k3s[339476]: I0526 09:16:05.268494  339476 trace.go:236] Trace[1392557590]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:55.573) (total time: 129695ms):
May 26 09:16:05 k3s-master k3s[339476]: Trace[1392557590]: [2m9.695409489s] [2m9.695409489s] END
May 26 09:16:05 k3s-master k3s[339476]: I0526 09:16:05.268666  339476 trace.go:236] Trace[1043470432]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:55.573) (total time: 129695ms):
May 26 09:16:05 k3s-master k3s[339476]: Trace[1043470432]: [2m9.695650993s] [2m9.695650993s] END
May 26 09:16:05 k3s-master k3s[339476]: I0526 09:16:05.268674  339476 trace.go:236] Trace[985886152]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:13:55.573) (total time: 129695ms):
May 26 09:16:05 k3s-master k3s[339476]: Trace[985886152]: [2m9.695586254s] [2m9.695586254s] END
May 26 09:16:05 k3s-master k3s[339476]: time="2025-05-26T09:16:05+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:05 k3s-master k3s[339476]: E0526 09:16:05.908519  339476 available_controller.go:456] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
May 26 09:16:05 k3s-master k3s[339476]: W0526 09:16:05.999833  339476 dispatcher.go:204] Failed calling webhook, failing closed rancher.cattle.io.secrets: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:16:06 k3s-master k3s[339476]: I0526 09:16:06.000627  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:16:06 k3s-master kernel: [242936.650538] nfs: server 10.43.152.27 not responding, timed out
May 26 09:16:06 k3s-master k3s[339476]: time="2025-05-26T09:16:06+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:07 k3s-master k3s[339476]: time="2025-05-26T09:16:07+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:08 k3s-master k3s[339476]: time="2025-05-26T09:16:08+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:09 k3s-master k3s[339476]: time="2025-05-26T09:16:09+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:10 k3s-master k3s[339476]: time="2025-05-26T09:16:10+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"
May 26 09:16:10 k3s-master k3s[339476]: E0526 09:16:10.922372  339476 available_controller.go:456] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.1.170:10250/apis/metrics.k8s.io/v1beta1": context deadline exceeded
May 26 09:16:11 k3s-master k3s[339476]: W0526 09:16:11.002940  339476 dispatcher.go:204] Failed calling webhook, failing closed rancher.cattle.io.secrets: failed calling webhook "rancher.cattle.io.secrets": failed to call webhook: Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s": no endpoints available for service "rancher-webhook"
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.003683  339476 event.go:294] "Event occurred" object="k3s-master" fieldPath="" kind="Node" apiVersion="" type="Warning" reason="NodePasswordValidationFailed" message="Deferred node password secret validation failed: Internal error occurred: failed calling webhook \"rancher.cattle.io.secrets\": failed to call webhook: Post \"https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation/secrets?timeout=15s\": no endpoints available for service \"rancher-webhook\""
May 26 09:16:11 k3s-master k3s[339476]: W0526 09:16:11.289723  339476 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412618  339476 trace.go:236] Trace[720014838]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.095) (total time: 130316ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[720014838]: [2m10.316753496s] [2m10.316753496s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412689  339476 trace.go:236] Trace[1374902485]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.093) (total time: 130318ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[1374902485]: [2m10.318899026s] [2m10.318899026s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412743  339476 trace.go:236] Trace[1800124716]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.095) (total time: 130317ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[1800124716]: [2m10.317486738s] [2m10.317486738s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412796  339476 trace.go:236] Trace[1810129219]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.590) (total time: 129822ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[1810129219]: [2m9.822236709s] [2m9.822236709s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412886  339476 trace.go:236] Trace[1538231606]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:00.586) (total time: 130826ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[1538231606]: [2m10.826033877s] [2m10.826033877s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.412967  339476 trace.go:236] Trace[632524332]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.202) (total time: 130210ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[632524332]: [2m10.210458243s] [2m10.210458243s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413052  339476 trace.go:236] Trace[329317791]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:00.586) (total time: 130826ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[329317791]: [2m10.826179463s] [2m10.826179463s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413086  339476 trace.go:236] Trace[1516428921]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:00.586) (total time: 130826ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[1516428921]: [2m10.82610492s] [2m10.82610492s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413128  339476 trace.go:236] Trace[642445179]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:00.586) (total time: 130826ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[642445179]: [2m10.826181738s] [2m10.826181738s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413208  339476 trace.go:236] Trace[38365685]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:00.586) (total time: 130826ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[38365685]: [2m10.826600969s] [2m10.826600969s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413257  339476 trace.go:236] Trace[638874405]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.114) (total time: 130298ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[638874405]: [2m10.298309375s] [2m10.298309375s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413279  339476 trace.go:236] Trace[652034886]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.101) (total time: 130312ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[652034886]: [2m10.312019742s] [2m10.312019742s] END
May 26 09:16:11 k3s-master k3s[339476]: I0526 09:16:11.413399  339476 trace.go:236] Trace[432243606]: "Proxy via http_connect protocol over tcp" address:10.42.1.170:10250 (26-May-2025 09:14:01.099) (total time: 130313ms):
May 26 09:16:11 k3s-master k3s[339476]: Trace[432243606]: [2m10.313699843s] [2m10.313699843s] END
May 26 09:16:11 k3s-master k3s[339476]: time="2025-05-26T09:16:11+08:00" level=info msg="Waiting for control-plane node k3s-master startup: nodes \"k3s-master\" not found"


Start by troubleshooting the Traefik errors; make sure Traefik comes up successfully before investigating Rancher. You can use the following commands:

kubectl describe pod traefik-57c84cf78d-cc6br -n kube-system
kubectl logs traefik-57c84cf78d-cc6br -n kube-system
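Beyond Traefik, the k3s log above points at two other things worth checking: the rancher-webhook Service has no endpoints, and the kernel reports an unresponsive NFS server at 10.43.152.27 (a ClusterIP). A sketch of additional checks, assuming the default cattle-system namespace and the pod names from the output above:

```shell
# "no endpoints available for service rancher-webhook" means this list is empty
# until a ready rancher-webhook pod exists:
kubectl get endpoints rancher-webhook -n cattle-system

# Find which Service the unresponsive NFS address 10.43.152.27 belongs to,
# and whether any PersistentVolume depends on it:
kubectl get svc -A | grep 10.43.152.27
kubectl get pv -o wide

# Inspect why the Rancher pods keep erroring (use --previous to see the
# logs of the crashed container):
kubectl describe pod rancher-6b6b974475-wbn9s -n cattle-system
kubectl logs rancher-6b6b974475-wbn9s -n cattle-system --previous
```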

:bulb: If you are running Rancher in production and would like more professional, timely technical support, you may also want to look into our commercial subscription service. Click the chat (:speech_balloon:) icon at the top right of the forum and send me a private message for details; our Chinese-language support team is at your service :blush:
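One more note on the k3s log: the API server is failing closed on the `rancher.cattle.io.secrets` webhook while rancher-webhook has no endpoints, which can deadlock node registration (the node can't register because the webhook is down, and the webhook pod can't run because the node isn't registered). As a last-resort sketch for a test cluster only, you can temporarily remove Rancher's admission webhook configurations; rancher-webhook recreates them once it is running again. The `rancher.cattle.io` names below are the usual ones, but verify them against the list output first:

```shell
# List Rancher's admission webhook configurations:
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep rancher

# Temporarily remove them so the API server stops failing closed
# (rancher-webhook recreates these when it starts):
kubectl delete mutatingwebhookconfiguration rancher.cattle.io
kubectl delete validatingwebhookconfiguration rancher.cattle.io
```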

Since this is a test environment, I later deleted the entire cluster and reinstalled k3s. metrics-server now logs the following errors:
I0530 02:26:56.498718 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0530 02:26:56.499112 1 secure_serving.go:267] Serving securely on [::]:10250
I0530 02:26:56.499138 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0530 02:26:56.499174 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0530 02:26:56.598376 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0530 02:26:56.598475 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0530 02:26:56.598484 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0530 02:26:56.646320 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:26:57.266337 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:26:57.649189 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:26:59.267256 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:01.267261 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:03.267763 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:05.267548 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:07.268040 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:09.266372 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:27:11.267207 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0530 02:33:37.167549 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: the server is currently unable to handle the request (get configmaps)
E0530 02:33:37.291417 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.Node: the server is currently unable to handle the request (get nodes)
E0530 02:33:38.368690 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: the server is currently unable to handle the request (get configmaps)
W0530 02:33:38.435646 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.Node: apiserver not ready
E0530 02:33:38.435666 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: apiserver not ready
W0530 02:33:38.552644 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:38.552668 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:39.528753 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.PartialObjectMetadata: the server is currently unable to handle the request
W0530 02:33:39.715704 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:39.715729 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:39.763661 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: the server is currently unable to handle the request (get configmaps)
W0530 02:33:40.724990 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.PartialObjectMetadata: apiserver not ready
E0530 02:33:40.725012 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: apiserver not ready
W0530 02:33:40.842728 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:40.842755 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:41.502625 1 scraper.go:140] "Failed to scrape node" err="Get \"https://172.18.11.247:10250/metrics/resource\": dial tcp 172.18.11.247:10250: connect: connection refused" node="k3s-master"
W0530 02:33:41.603593 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.Node: apiserver not ready
E0530 02:33:41.603615 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: apiserver not ready
W0530 02:33:41.629420 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:41.629440 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
W0530 02:33:42.800177 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:42.800200 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
W0530 02:33:43.432121 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.PartialObjectMetadata: apiserver not ready
E0530 02:33:43.432142 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: apiserver not ready
W0530 02:33:43.581099 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: apiserver not ready
E0530 02:33:43.581123 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.17/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: apiserver not ready
I0530 02:33:48.420493 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:33:48.866288 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:33:50.421035 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:33:52.421704 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:33:54.420761 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0530 02:33:56.421157 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
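The metrics-server log above shows two distinct symptoms: it cannot list/watch resources ("apiserver not ready") and it cannot scrape the kubelet on k3s-master ("dial tcp 172.18.11.247:10250: connect: connection refused"). A minimal diagnostic sketch, to be run on k3s-master (IP and port taken from the log lines above; each command is guarded so nothing hard-fails if a tool is missing):

```shell
#!/bin/sh
# Diagnostic sketch for the symptoms in the metrics-server log above.
# Assumes it runs on the k3s server node (k3s-master); adjust paths/IPs as needed.

if command -v kubectl >/dev/null 2>&1; then
  # "apiserver not ready" -> ask the apiserver which readiness checks fail
  kubectl get --raw '/readyz?verbose' || echo "apiserver readyz failed"
fi

if command -v ss >/dev/null 2>&1; then
  # "connection refused" on 10250 -> is the kubelet port listening at all?
  ss -tlnp | grep 10250 || echo "nothing listening on 10250"
fi

if command -v systemctl >/dev/null 2>&1; then
  # a restarting k3s service would explain both symptoms at once
  systemctl status k3s --no-pager || true
fi
echo "diagnostics done"
```

If `/readyz?verbose` fails or the k3s unit shows restarts, the metrics-server errors are downstream effects; in that case the k3s journal around the restart is the more useful log to capture.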

The k3s logs show:
May 30 13:33:46 k3s-master systemd[1177]: data-k3s-k3s-containerd-io.containerd.grpc.v1.cri-sandboxes-c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f-shm.mount: Succeeded.
May 30 13:33:46 k3s-master systemd[1]: data-k3s-k3s-containerd-io.containerd.grpc.v1.cri-sandboxes-c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f-shm.mount: Succeeded.
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.695857 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:46 k3s-master systemd[1]: cri-containerd-c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f.scope: Succeeded.
May 30 13:33:46 k3s-master systemd[1]: data-k3s-k3s-containerd-io.containerd.runtime.v2.task-k8s.io-c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f-rootfs.mount: Succeeded.
May 30 13:33:46 k3s-master systemd[1177]: data-k3s-k3s-containerd-io.containerd.runtime.v2.task-k8s.io-c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f-rootfs.mount: Succeeded.
May 30 13:33:46 k3s-master kernel: [ 167.100434] cni0: port 1(vethf247339a) entered disabled state
May 30 13:33:46 k3s-master kernel: [ 167.103374] device vethf247339a left promiscuous mode
May 30 13:33:46 k3s-master kernel: [ 167.103377] cni0: port 1(vethf247339a) entered disabled state
May 30 13:33:46 k3s-master systemd-networkd[887]: vethf247339a: Link DOWN
May 30 13:33:46 k3s-master systemd-networkd[887]: vethf247339a: Lost carrier
May 30 13:33:46 k3s-master systemd-networkd[887]: rtnl: received neighbor for link '6' we don't know about, ignoring.
May 30 13:33:46 k3s-master systemd-networkd[887]: rtnl: received neighbor for link '6' we don't know about, ignoring.
May 30 13:33:46 k3s-master systemd[1177]: run-netns-cni\x2d99762788\x2d7640\x2d8156\x2d3a5c\x2d4b324baa1f10.mount: Succeeded.
May 30 13:33:46 k3s-master systemd[1]: run-netns-cni\x2d99762788\x2d7640\x2d8156\x2d3a5c\x2d4b324baa1f10.mount: Succeeded.
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.919355 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.924097 1682 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/28ab06aa-cd4d-4312-a466-a577db6f04a4-content\") pod \"28ab06aa-cd4d-4312-a466-a577db6f04a4\" (UID: \"28ab06aa-cd4d-4312-a466-a577db6f04a4\") "
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.924296 1682 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"values\" (UniqueName: \"kubernetes.io/secret/28ab06aa-cd4d-4312-a466-a577db6f04a4-values\") pod \"28ab06aa-cd4d-4312-a466-a577db6f04a4\" (UID: \"28ab06aa-cd4d-4312-a466-a577db6f04a4\") "
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.924437 1682 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4dtc\" (UniqueName: \"kubernetes.io/projected/28ab06aa-cd4d-4312-a466-a577db6f04a4-kube-api-access-p4dtc\") pod \"28ab06aa-cd4d-4312-a466-a577db6f04a4\" (UID: \"28ab06aa-cd4d-4312-a466-a577db6f04a4\") "
May 30 13:33:46 k3s-master k3s[1682]: W0530 13:33:46.924307 1682 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/28ab06aa-cd4d-4312-a466-a577db6f04a4/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.924805 1682 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ab06aa-cd4d-4312-a466-a577db6f04a4-content" (OuterVolumeSpecName: "content") pod "28ab06aa-cd4d-4312-a466-a577db6f04a4" (UID: "28ab06aa-cd4d-4312-a466-a577db6f04a4"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 30 13:33:46 k3s-master systemd[1177]: data-k3s-kubelet-pods-28ab06aa\x2dcd4d\x2d4312\x2da466\x2da577db6f04a4-volumes-kubernetes.io\x7esecret-values.mount: Succeeded.
May 30 13:33:46 k3s-master systemd[1]: data-k3s-kubelet-pods-28ab06aa\x2dcd4d\x2d4312\x2da466\x2da577db6f04a4-volumes-kubernetes.io\x7esecret-values.mount: Succeeded.
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.927666 1682 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ab06aa-cd4d-4312-a466-a577db6f04a4-values" (OuterVolumeSpecName: "values") pod "28ab06aa-cd4d-4312-a466-a577db6f04a4" (UID: "28ab06aa-cd4d-4312-a466-a577db6f04a4"). InnerVolumeSpecName "values". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 30 13:33:46 k3s-master k3s[1682]: I0530 13:33:46.929313 1682 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ab06aa-cd4d-4312-a466-a577db6f04a4-kube-api-access-p4dtc" (OuterVolumeSpecName: "kube-api-access-p4dtc") pod "28ab06aa-cd4d-4312-a466-a577db6f04a4" (UID: "28ab06aa-cd4d-4312-a466-a577db6f04a4"). InnerVolumeSpecName "kube-api-access-p4dtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 30 13:33:46 k3s-master systemd[1177]: data-k3s-kubelet-pods-28ab06aa\x2dcd4d\x2d4312\x2da466\x2da577db6f04a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4dtc.mount: Succeeded.
May 30 13:33:46 k3s-master systemd[1]: data-k3s-kubelet-pods-28ab06aa\x2dcd4d\x2d4312\x2da466\x2da577db6f04a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4dtc.mount: Succeeded.
May 30 13:33:47 k3s-master k3s[1682]: I0530 13:33:47.025574 1682 reconciler_common.go:295] "Volume detached for volume \"values\" (UniqueName: \"kubernetes.io/secret/28ab06aa-cd4d-4312-a466-a577db6f04a4-values\") on node \"k3s-master\" DevicePath \"\""
May 30 13:33:47 k3s-master k3s[1682]: I0530 13:33:47.025821 1682 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-p4dtc\" (UniqueName: \"kubernetes.io/projected/28ab06aa-cd4d-4312-a466-a577db6f04a4-kube-api-access-p4dtc\") on node \"k3s-master\" DevicePath \"\""
May 30 13:33:47 k3s-master k3s[1682]: I0530 13:33:47.025956 1682 reconciler_common.go:295] "Volume detached for volume \"content\" (UniqueName: \"kubernetes.io/configmap/28ab06aa-cd4d-4312-a466-a577db6f04a4-content\") on node \"k3s-master\" DevicePath \"\""
May 30 13:33:47 k3s-master k3s[1682]: I0530 13:33:47.682836 1682 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96bc9af7813c64b32926962c8c9efb304bd9be94e15a943937837e7942c3d9f"
May 30 13:33:47 k3s-master systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod28ab06aa_cd4d_4312_a466_a577db6f04a4.slice.
May 30 13:33:47 k3s-master systemd[1]: kubepods-besteffort-pod28ab06aa_cd4d_4312_a466_a577db6f04a4.slice: Consumed 1.756s CPU time.
May 30 13:33:47 k3s-master k3s[1682]: I0530 13:33:47.694191 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:48 k3s-master k3s[1682]: I0530 13:33:48.697967 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:48 k3s-master k3s[1682]: I0530 13:33:48.705250 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:48 k3s-master k3s[1682]: I0530 13:33:48.708182 1682 job_controller.go:514] enqueueing job kube-system/helm-install-traefik
May 30 13:33:48 k3s-master k3s[1682]: I0530 13:33:48.708274 1682 event.go:294] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.313082 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for tlsstores.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.313947 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for middlewares.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.314231 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for tlsoptions.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.314489 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingressrouteudps.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.314712 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serverstransports.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.314940 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for middlewaretcps.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.315159 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for traefikservices.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.315372 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingressroutes.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.315585 1682 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingressroutetcps.traefik.containo.us
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.315841 1682 shared_informer.go:270] Waiting for caches to sync for resource quota
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.417086 1682 shared_informer.go:277] Caches are synced for resource quota
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.750403 1682 shared_informer.go:270] Waiting for caches to sync for garbage collector
May 30 13:34:06 k3s-master k3s[1682]: I0530 13:34:06.750465 1682 shared_informer.go:277] Caches are synced for garbage collector
May 30 13:34:24 k3s-master k3s[1682]: E0530 13:34:24.447259 1682 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e666644e1742941b3ffa3748ebe08db080e66933f2528e4501feaae55323a49\": not found" containerID="8e666644e1742941b3ffa3748ebe08db080e66933f2528e4501feaae55323a49"
May 30 13:34:24 k3s-master k3s[1682]: I0530 13:34:24.447293 1682 kuberuntime_gc.go:362] "Error getting ContainerStatus for containerID" containerID="8e666644e1742941b3ffa3748ebe08db080e66933f2528e4501feaae55323a49" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e666644e1742941b3ffa3748ebe08db080e66933f2528e4501feaae55323a49\": not found"
May 30 13:34:24 k3s-master k3s[1682]: E0530 13:34:24.448268 1682 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e01d9b8a7c07c63cca4dfe382a1fa5d5cb3abd62cce5eeda5e383ef391e6d3d\": not found" containerID="4e01d9b8a7c07c63cca4dfe382a1fa5d5cb3abd62cce5eeda5e383ef391e6d3d"
May 30 13:34:24 k3s-master k3s[1682]: I0530 13:34:24.448606 1682 kuberuntime_gc.go:362] "Error getting ContainerStatus for containerID" containerID="4e01d9b8a7c07c63cca4dfe382a1fa5d5cb3abd62cce5eeda5e383ef391e6d3d" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e01d9b8a7c07c63cca4dfe382a1fa5d5cb3abd62cce5eeda5e383ef391e6d3d\": not found"
May 30 13:34:24 k3s-master k3s[1682]: E0530 13:34:24.449192 1682 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5513f2a905ba74ff8f47aa2c35a62be96300fcf3897e4a67041d5ce4cd234aa2\": not found" containerID="5513f2a905ba74ff8f47aa2c35a62be96300fcf3897e4a67041d5ce4cd234aa2"
May 30 13:34:24 k3s-master k3s[1682]: I0530 13:34:24.449219 1682 kuberuntime_gc.go:362] "Error getting ContainerStatus for containerID" containerID="5513f2a905ba74ff8f47aa2c35a62be96300fcf3897e4a67041d5ce4cd234aa2" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5513f2a905ba74ff8f47aa2c35a62be96300fcf3897e4a67041d5ce4cd234aa2\": not found"
May 30 13:34:24 k3s-master k3s[1682]: E0530 13:34:24.449574 1682 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ea05c6ed6ca763aa01642cb810050df3d06bfdd9ca32b04d7bcd9c5b7b2008c\": not found" containerID="3ea05c6ed6ca763aa01642cb810050df3d06bfdd9ca32b04d7bcd9c5b7b2008c"
May 30 13:34:24 k3s-master k3s[1682]: I0530 13:34:24.449599 1682 kuberuntime_gc.go:362] "Error getting ContainerStatus for containerID" containerID="3ea05c6ed6ca763aa01642cb810050df3d06bfdd9ca32b04d7bcd9c5b7b2008c" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ea05c6ed6ca763aa01642cb810050df3d06bfdd9ca32b04d7bcd9c5b7b2008c\": not found"
May 30 13:36:14 k3s-master dbus-daemon[904]: [system] Activating via systemd: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.11' (uid=0 pid=917 comm="/usr/lib/snapd/snapd " label="unconfined")
May 30 13:36:14 k3s-master systemd[1]: Starting Time & Date Service...
May 30 13:36:14 k3s-master dbus-daemon[904]: [system] Successfully activated service 'org.freedesktop.timedate1'
May 30 13:36:14 k3s-master systemd[1]: Started Time & Date Service.
May 30 13:36:21 k3s-master snapd[917]: storehelpers.go:916: cannot refresh: snap has no updates available: "core20", "lxd", "snapd"
May 30 13:36:44 k3s-master systemd[1]: systemd-timedated.service: Succeeded.
May 30 13:39:48 k3s-master k3s[1682]: I0530 13:39:48.046279 1682 trace.go:236] Trace[2058330633]: "Get" accept:application/json, /,audit-id:0605e3c2-6341-44c1-ac99-a439d1b3c1f4,client:127.0.0.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/traefik-57c84cf78d-tkdkk/log,user-agent:kubectl/v1.26.9+k3s1 (linux/amd64) kubernetes/4e21728,verb:GET (30-May-2025 13:39:46.353) (total time: 1692ms):
May 30 13:39:48 k3s-master k3s[1682]: Trace[2058330633]: ---"Writing http response done" 1691ms (13:39:48.046)
May 30 13:39:48 k3s-master k3s[1682]: Trace[2058330633]: [1.692864526s] [1.692864526s] END
May 30 13:40:23 k3s-master k3s[1682]: I0530 13:40:23.480609 1682 trace.go:236] Trace[835957571]: "Get" accept:application/json, /,audit-id:0369ff25-8aa0-4749-a530-0a99f59f051e,client:127.0.0.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-68cf49699b-6lf9c/log,user-agent:kubectl/v1.26.9+k3s1 (linux/amd64) kubernetes/4e21728,verb:GET (30-May-2025 13:40:00.670) (total time: 22810ms):
May 30 13:40:23 k3s-master k3s[1682]: Trace[835957571]: ---"Writing http response done" 22808ms (13:40:23.480)
May 30 13:40:23 k3s-master k3s[1682]: Trace[835957571]: [22.81003241s] [22.81003241s] END
May 30 13:43:18 k3s-master k3s[1682]: {"level":"info","ts":"2025-05-30T13:43:18.420884+0800","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1723}
May 30 13:43:18 k3s-master k3s[1682]: {"level":"info","ts":"2025-05-30T13:43:18.436526+0800","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1723,"took":"14.421645ms","hash":3835480941}
May 30 13:43:18 k3s-master k3s[1682]: {"level":"info","ts":"2025-05-30T13:43:18.436689+0800","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3835480941,"revision":1723,"compact-revision":-1}
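Separately, the pod listing at the top shows traefik-57c84cf78d-cc6br stuck in ContainerCreating for over two days; for an offline install the pod events usually name the blocker (image not present locally, CNI sandbox failure, etc.). A hedged sketch of the usual checks (pod name taken from the output above, guarded so nothing hard-fails if a tool is absent):

```shell
#!/bin/sh
# Checks for a pod stuck in ContainerCreating (name from `kubectl get pods` above;
# adjust namespace/pod name for your cluster). Not a fix, just triage.

if command -v kubectl >/dev/null 2>&1; then
  # the Events section at the bottom of describe normally names the blocker
  kubectl -n kube-system describe pod traefik-57c84cf78d-cc6br | tail -30 || true
  # recent namespace events, newest last
  kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -20 || true
fi

if command -v crictl >/dev/null 2>&1; then
  # air-gapped install: is the traefik image actually on the node?
  crictl images | grep -i traefik || echo "traefik image not found locally"
fi
echo "triage done"
```

On k3s, `crictl` may need to be invoked as `k3s crictl`. If the events show an image pull failing, re-check that the offline image tarballs (or private registry mirror) include the traefik image for this k3s version.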