Rancher: node registration fails when creating a new k8s cluster based on v1.28.9+rke2r1 (Oracle Linux 8.9)

Rancher Server Setup

  • Rancher version: v2.8.3
  • Installation option (Docker install/Helm Chart): Helm Chart
    • If Helm Chart, Local cluster type (RKE1, RKE2, k3s, EKS, etc.) and version: k3s, v1.28.9+k3s1
  • Online or air-gapped deployment: online

Downstream Cluster Information

  • Kubernetes version: v1.28.9+rke2r1
  • Cluster Type (Local/Downstream): Downstream
    • If Downstream, what type of cluster? (Custom/Imported/Hosted, etc.): Custom

User Information

  • What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom): admin
    • If custom, what permissions are granted: admin

Host operating system:
Oracle Linux Server release 8.9
Describe the bug:
Creating the new cluster never succeeds.

To reproduce:
All default values were used (see the sketch below for the general registration flow).
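
For context, a custom RKE2 cluster is created in the Rancher UI and the generated registration command is then run on each node. The command below is only a sketch of its general shape (the server URL, token and checksum are placeholders, not the values actually used; the role flags depend on what is selected in the UI):

curl -fL https://<rancher-server>/system-agent-install.sh | sudo sh -s - \
  --server https://<rancher-server> --label 'cattle.io/os=linux' \
  --token <registration-token> --ca-checksum <ca-checksum> \
  --etcd --controlplane --worker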

Result:
The downstream cluster never reaches the Active state in the Rancher UI.

Expected result:
The custom cluster registers with Rancher and becomes Active.

Screenshots:

Additional context:

Logs

Master:

[root@rancher-k8s-m01 ~]# journalctl -eu rancher-system-agent -f
-- Logs begin at Sat 2024-05-18 22:13:05 CST. --
May 18 22:35:23 rancher-k8s-m01 systemd[1]: Started Rancher System Agent.
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting remote watch of plans"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting /v1, Kind=Secret controller"

[root@rancher-k8s-m01 ~]# journalctl -xefu rancher-system-agent.service
-- Logs begin at Sat 2024-05-18 22:13:05 CST. --
May 18 22:35:23 rancher-k8s-m01 systemd[1]: Started Rancher System Agent.
-- Subject: Unit rancher-system-agent.service has finished start-up
-- Defined-By: systemd
-- Support: https://support.oracle.com
-- 
-- Unit rancher-system-agent.service has finished starting up.
-- 
-- The start-up result is done.
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting remote watch of plans"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting /v1, Kind=Secret controller"
[root@rancher-k8s-m01 ~]# systemctl status rancher-system-agent.service -l
● rancher-system-agent.service - Rancher System Agent
   Loaded: loaded (/etc/systemd/system/rancher-system-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-05-18 22:35:23 CST; 11h ago
     Docs: https://www.rancher.com
 Main PID: 1807 (rancher-system-)
    Tasks: 12 (limit: 102046)
   Memory: 36.8M
   CGroup: /system.slice/rancher-system-agent.service
           └─1807 /usr/local/bin/rancher-system-agent sentinel

May 18 22:35:23 rancher-k8s-m01 systemd[1]: Started Rancher System Agent.
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting remote watch of plans"
May 18 22:35:23 rancher-k8s-m01 rancher-system-agent[1807]: time="2024-05-18T22:35:23+08:00" level=info msg="Starting /v1, Kind=Secret controller"

Worker:

[root@rancher-k8s-w01 ~]# journalctl -eu rancher-system-agent -f
-- Logs begin at Sat 2024-05-18 22:15:51 CST. --
May 18 22:36:55 rancher-k8s-w01 systemd[1]: Started Rancher System Agent.
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Starting remote watch of plans"
May 18 22:36:56 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:56+08:00" level=info msg="Starting /v1, Kind=Secret controller"
^Z
[1]+  Stopped                 journalctl -eu rancher-system-agent -f
[root@rancher-k8s-w01 ~]# journalctl -xefu rancher-system-agent.service
-- Logs begin at Sat 2024-05-18 22:15:51 CST. --
May 18 22:36:55 rancher-k8s-w01 systemd[1]: Started Rancher System Agent.
-- Subject: Unit rancher-system-agent.service has finished start-up
-- Defined-By: systemd
-- Support: https://support.oracle.com
-- 
-- Unit rancher-system-agent.service has finished starting up.
-- 
-- The start-up result is done.
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Starting remote watch of plans"
May 18 22:36:56 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:56+08:00" level=info msg="Starting /v1, Kind=Secret controller"
^Z
[2]+  Stopped                 journalctl -xefu rancher-system-agent.service
[root@rancher-k8s-w01 ~]# systemctl status rancher-system-agent.service -l
● rancher-system-agent.service - Rancher System Agent
   Loaded: loaded (/etc/systemd/system/rancher-system-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-05-18 22:36:55 CST; 11h ago
     Docs: https://www.rancher.com
 Main PID: 1807 (rancher-system-)
    Tasks: 13 (limit: 102046)
   Memory: 36.2M
   CGroup: /system.slice/rancher-system-agent.service
           └─1807 /usr/local/bin/rancher-system-agent sentinel

May 18 22:36:55 rancher-k8s-w01 systemd[1]: Started Rancher System Agent.
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Rancher System Agent version v0.3.6 (41c07d0) is starting"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Using directory /var/lib/rancher/agent/work for work"
May 18 22:36:55 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:55+08:00" level=info msg="Starting remote watch of plans"
May 18 22:36:56 rancher-k8s-w01 rancher-system-agent[1807]: time="2024-05-18T22:36:56+08:00" level=info msg="Starting /v1, Kind=Secret controller"

Rancher:

[root@k3s1-lb ~]# kubectl logs -f -l app=rancher -n cattle-system
W0519 02:19:36.105141      34 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:19:36.105196      34 transport.go:301] Unable to cancel request for *client.addQuery
2024/05/19 02:20:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:22:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:24:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
W0519 02:24:07.982224      34 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:24:12.266114      34 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:24:35.911570      34 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:24:35.922906      34 transport.go:301] Unable to cancel request for *client.addQuery
2024/05/19 02:26:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
W0519 02:22:52.108637      33 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:22:52.111345      33 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:22:52.111708      33 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:22:52.113049      33 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:22:52.114068      33 transport.go:301] Unable to cancel request for *client.addQuery
2024/05/19 02:08:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:10:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:12:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:14:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:16:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:18:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:20:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
W0519 02:22:52.114835      33 transport.go:301] Unable to cancel request for *client.addQuery
W0519 02:23:55.437655      33 warnings.go:80] cluster.x-k8s.io/v1alpha3 Machine is deprecated; use cluster.x-k8s.io/v1beta1 Machine
2024/05/19 02:24:39 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
W0519 02:25:12.856881      33 warnings.go:80] cluster.x-k8s.io/v1alpha3 MachineHealthCheck is deprecated; use cluster.x-k8s.io/v1beta1 MachineHealthCheck
W0519 02:25:36.873816      33 warnings.go:80] cluster.x-k8s.io/v1alpha3 Cluster is deprecated; use cluster.x-k8s.io/v1beta1 Cluster
2024/05/19 02:22:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:24:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:26:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:26:39 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
W0519 02:27:47.944821      33 warnings.go:80] cluster.x-k8s.io/v1alpha3 MachineDeployment is deprecated; use cluster.x-k8s.io/v1beta1 MachineDeployment
W0519 02:28:00.862572      33 warnings.go:80] cluster.x-k8s.io/v1alpha3 MachineSet is deprecated; use cluster.x-k8s.io/v1beta1 MachineSet
2024/05/19 02:28:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing
2024/05/19 02:28:00 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-wxhvwlnf: ClusterUnavailable 503: cluster not found, requeuing

Take a look at the logs of the cluster-agent pod in the downstream cluster.
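
For example, on a downstream server node, using the node-local RKE2 kubeconfig (a sketch assuming the default RKE2 install paths):

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl -n cattle-system get pods
/var/lib/rancher/rke2/bin/kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=50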

The downstream cluster is not in the Ready state yet.

[root@worknode rancnher-k8s-rke2]# kubectl get node
E0519 21:35:34.059829  209782 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:35:34.099082  209782 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:35:34.111562  209782 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:35:34.148558  209782 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:35:34.160393  209782 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding

[root@worknode rancnher-k8s-rke2]# kubectl get pod -A
E0519 21:36:04.092624  209917 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:36:04.105236  209917 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:36:04.118785  209917 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:36:04.130655  209917 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
E0519 21:36:04.142132  209917 memcache.go:265] couldn't get current server API group list: an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
[root@worknode rancnher-k8s-rke2]# 

Then refer to the RKE2 commands to check the rke2-related logs.
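
For reference, the usual entry points (the unit is rke2-server on control-plane/etcd nodes and rke2-agent on worker-only nodes; paths assume a default install):

systemctl status rke2-server                               # rke2-agent on worker nodes
journalctl -u rke2-server -f                               # journalctl -u rke2-agent -f on worker nodes
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log
tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log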

[root@rancher-k8s-m01 ~]# systemctl status rke2-server
Unit rke2-server.service could not be found.

[root@rancher-k8s-m01 ~]# ls -al  /etc/rancher/rke2/rke2.yaml
ls: cannot access '/etc/rancher/rke2/rke2.yaml': No such file or directory

[root@rancher-k8s-m01 ~]# journalctl -f -u rke2-server
-- Logs begin at Sat 2024-05-18 22:13:05 CST. --

[root@rancher-k8s-m01 ~]# tree /var/lib/rancher/rke2/agent/
/var/lib/rancher/rke2/agent/ [error opening dir]

0 directories, 0 files
[root@rancher-k8s-m03 ~]# systemctl status rke2-server
Unit rke2-server.service could not be found.

[root@rancher-k8s-m03 ~]# ls -al  /etc/rancher/rke2/rke2.yaml
ls: cannot access '/etc/rancher/rke2/rke2.yaml': No such file or directory

[root@rancher-k8s-m03 ~]# journalctl -f -u rke2-server
-- Logs begin at Sat 2024-05-18 22:14:54 CST. --

[root@rancher-k8s-m03 ~]# tree /var/lib/rancher/rke2/agent/
/var/lib/rancher/rke2/agent/ [error opening dir]

0 directories, 0 files
[root@rancher-k8s-m02 ~]# systemctl status rke2-server
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
   Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-05-18 22:42:28 CST; 1 day 11h ago
     Docs: https://github.com/rancher/rke2#readme
  Process: 1943 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 1941 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 1931 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
 Main PID: 1945 (rke2)
    Tasks: 225
   Memory: 5.4G
   CGroup: /system.slice/rke2-server.service
           ├─1945 /usr/local/bin/rke2 server
           ├─1986 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
           ├─2051 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd ->
           ├─2099 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 51944ab036a0039cc112aa86dc0ff967850a8eca452de5674ff7ac1325e054ce -address /run/k3s/containerd/containerd.sock
           ├─2188 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id fc95b1905823423a733a90f6e68e5a0b9afeaa62b97dd1c3a91cf89e9cf0294e -address /run/k3s/containerd/containerd.sock
           ├─2277 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2bf718354ba4a9da8429b712e5fc8f105dd6464bdfa355753e1235350e269076 -address /run/k3s/containerd/containerd.sock
           ├─2289 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3eb40360ee551de3513d8e9a9bcc00fcd57d3ea12d8344e3acdfaa1dc007ff67 -address /run/k3s/containerd/containerd.sock
           ├─2445 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0521c27d237080f1f7246ff80bde90b26f9856325618735e227cb7685e45a2c8 -address /run/k3s/containerd/containerd.sock
           ├─2529 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 63cc94323924556aa57ab969637cb72d431f6e5efc372480502948c70591adb8 -address /run/k3s/containerd/containerd.sock
           ├─3991 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0d330b8b0a9ff03882a9ee809d9fac75affc2f50fe55f4609c699b7c9a52bc95 -address /run/k3s/containerd/containerd.sock
           ├─4327 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id ecd52551884a1050df4487cf642b07f0c08d586e52fe81f3de49f6f7a99ca796 -address /run/k3s/containerd/containerd.sock
           ├─5165 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 77c55fd22eecde60994fe0caa26c5a746067915b3e10234c4de964c2bec2ef0e -address /run/k3s/containerd/containerd.sock
           ├─6871 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id ce4df40280e9c83cf20c8ef129c1861c07c4ec9456c65ba3341e7b8d13e0ca7c -address /run/k3s/containerd/containerd.sock
           ├─7025 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3c24bc59edd66a5954cf1bdc99c25300ebe1a11974541752e8cc934b61bafc87 -address /run/k3s/containerd/containerd.sock
           ├─7237 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 66852685b717537d0d006839dd1ff80853ef744b542a511557f7dfd29919dd92 -address /run/k3s/containerd/containerd.sock
           └─7369 /var/lib/rancher/rke2/data/v1.28.9-rke2r1-fc5244538cf2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 710c48ac0614ed07b819915c50da1fbf19538dd5091c0c8accc0bb230c4cb6c8 -address /run/k3s/containerd/containerd.sock

May 20 10:00:00 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:00.873908+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.119826+0800","logger":"etcd-client.client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.133393+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"12 MB","took":"now"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.133654+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-k8s-m02-1>
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Saving snapshot metadata to /var/lib/rancher/rke2/server/db/.metadata/etcd-snapshot-rancher-k8s-m02-1716170401"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Applying snapshot retention=5 to local snapshots with prefix etcd-snapshot in /var/lib/rancher/rke2/server/db/snapshots"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Removing local snapshot /var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-k8s-m02-1716084003"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Reconciling ETCDSnapshotFile resources"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Deleting ETCDSnapshotFile for etcd-snapshot-rancher-k8s-m02-1716084003"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Reconciliation of ETCDSnapshotFile resources complete"
[root@rancher-k8s-m02 ~]# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
[root@rancher-k8s-m02 ~]# /var/lib/rancher/rke2/bin/kubectl get nodes
NAME              STATUS   ROLES                       AGE   VERSION
rancher-k8s-m02   Ready    control-plane,etcd,master   35h   v1.28.9+rke2r1
[root@rancher-k8s-m02 ~]# /var/lib/rancher/rke2/bin/crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
aa734c02f9d6a       3e4fd05c0c1c0       36 hours ago        Running             calico-kube-controllers    0                   710c48ac0614e       calico-kube-controllers-55cf96477-tjt9t
d5c59185e78ed       00df8b41cfd2e       36 hours ago        Running             coredns                    0                   ce4df40280e9c       rke2-coredns-rke2-coredns-84b9cb946c-l2wbr
c4783578fafcf       cd00dc5289588       36 hours ago        Running             autoscaler                 0                   3c24bc59edd66       rke2-coredns-rke2-coredns-autoscaler-b49765765-df5js
509a77b664552       5c6ffd2b2a1d0       36 hours ago        Running             calico-node                0                   ecd52551884a1       calico-node-z42f7
cb73147b76556       b542f80277bc5       36 hours ago        Running             calico-typha               0                   77c55fd22eecd       calico-typha-55486c7598-vztxb
d014349e8e466       ac4b460566ae9       36 hours ago        Running             tigera-operator            0                   0d330b8b0a9ff       tigera-operator-795545875-rmklj
750ab1237289f       8fae8e1e0c868       36 hours ago        Running             kube-proxy                 0                   63cc943239245       kube-proxy-rancher-k8s-m02
3b2bdc80f0f20       3525a3daa55c9       36 hours ago        Running             cloud-controller-manager   0                   0521c27d23708       cloud-controller-manager-rancher-k8s-m02
d60380f5eb503       8fae8e1e0c868       36 hours ago        Running             kube-controller-manager    0                   3eb40360ee551       kube-controller-manager-rancher-k8s-m02
a942de505129c       8fae8e1e0c868       36 hours ago        Running             kube-scheduler             0                   2bf718354ba4a       kube-scheduler-rancher-k8s-m02
76821fc02ee0f       8fae8e1e0c868       36 hours ago        Running             kube-apiserver             0                   fc95b19058234       kube-apiserver-rancher-k8s-m02
1048ffd3c32c6       7893f7425a52a       36 hours ago        Running             etcd                       0                   51944ab036a00       etcd-rancher-k8s-m02
[root@rancher-k8s-m02 ~]# journalctl -f -u rke2-server
-- Logs begin at Sat 2024-05-18 22:14:05 CST. --
May 20 10:00:00 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:00.873908+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.119826+0800","logger":"etcd-client.client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.133393+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"12 MB","took":"now"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: {"level":"info","ts":"2024-05-20T10:00:01.133654+0800","logger":"etcd-client","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-k8s-m02-1716170401"}
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Saving snapshot metadata to /var/lib/rancher/rke2/server/db/.metadata/etcd-snapshot-rancher-k8s-m02-1716170401"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Applying snapshot retention=5 to local snapshots with prefix etcd-snapshot in /var/lib/rancher/rke2/server/db/snapshots"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Removing local snapshot /var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-k8s-m02-1716084003"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Reconciling ETCDSnapshotFile resources"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Deleting ETCDSnapshotFile for etcd-snapshot-rancher-k8s-m02-1716084003"
May 20 10:00:01 rancher-k8s-m02 rke2[1945]: time="2024-05-20T10:00:01+08:00" level=info msg="Reconciliation of ETCDSnapshotFile resources complete"

[root@rancher-k8s-m02 ~]# tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log
time="2024-05-20T10:22:14.939364726+08:00" level=info msg="RemoveContainer for \"4bd33d5e5b7b72c179aa4113de92aa897dd1f9ea8e7ace27ed0a982fbc01b4d3\" returns successfully"
time="2024-05-20T10:27:17.015949619+08:00" level=info msg="CreateContainer within sandbox \"66852685b717537d0d006839dd1ff80853ef744b542a511557f7dfd29919dd92\" for container &ContainerMetadata{Name:cluster-register,Attempt:422,}"
time="2024-05-20T10:27:17.050847667+08:00" level=info msg="CreateContainer within sandbox \"66852685b717537d0d006839dd1ff80853ef744b542a511557f7dfd29919dd92\" for &ContainerMetadata{Name:cluster-register,Attempt:422,} returns container id \"1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41\""
time="2024-05-20T10:27:17.052234328+08:00" level=info msg="StartContainer for \"1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41\""
time="2024-05-20T10:27:17.468192328+08:00" level=info msg="StartContainer for \"1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41\" returns successfully"
time="2024-05-20T10:27:17.792791289+08:00" level=info msg="shim disconnected" id=1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41 namespace=k8s.io
time="2024-05-20T10:27:17.792927597+08:00" level=warning msg="cleaning up after shim disconnected" id=1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41 namespace=k8s.io
time="2024-05-20T10:27:17.792953517+08:00" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2024-05-20T10:27:18.158088481+08:00" level=info msg="RemoveContainer for \"86347f12583cec5fe0a1fa926d28f70d19118e60a2efe7c6c0dcba9f7f8a9ac8\""
time="2024-05-20T10:27:18.175221474+08:00" level=info msg="RemoveContainer for \"86347f12583cec5fe0a1fa926d28f70d19118e60a2efe7c6c0dcba9f7f8a9ac8\" returns successfully"
[root@rancher-k8s-m02 ~]# tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log
I0520 10:30:23.009335    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:30:23.011425    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"
I0520 10:30:34.009035    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:30:34.010357    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"
I0520 10:30:46.009223    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:30:46.010027    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"
I0520 10:31:01.008646    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:31:01.009409    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"
I0520 10:31:13.008384    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:31:13.009449    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"
I0520 10:31:28.010242    2051 scope.go:117] "RemoveContainer" containerID="1c680df4859cb9427df859d2a0f950fd14de65f77d54b761c34f4df81b6a9a41"
E0520 10:31:28.011895    2051 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-857d946c9b-ffccw_cattle-system(830c86c0-06e2-486d-92e5-7d03f5fa02fe)\"" pod="cattle-system/cattle-cluster-agent-857d946c9b-ffccw" podUID="830c86c0-06e2-486d-92e5-7d03f5fa02fe"

[root@rancher-k8s-m02 ~]# /var/lib/rancher/rke2/bin/kubectl get pod -A
NAMESPACE         NAME                                                   READY   STATUS             RESTARTS        AGE
calico-system     calico-kube-controllers-55cf96477-tjt9t                1/1     Running            0               35h
calico-system     calico-node-z42f7                                      1/1     Running            0               35h
calico-system     calico-typha-55486c7598-vztxb                          1/1     Running            0               35h
cattle-system     cattle-cluster-agent-857d946c9b-ffccw                  0/1     CrashLoopBackOff   423 (40s ago)   35h
kube-system       cloud-controller-manager-rancher-k8s-m02               1/1     Running            0               35h
kube-system       etcd-rancher-k8s-m02                                   1/1     Running            0               35h
kube-system       helm-install-rke2-calico-crd-d8r8v                     0/1     Completed          0               35h
kube-system       helm-install-rke2-calico-z978n                         0/1     Completed          1               35h
kube-system       helm-install-rke2-coredns-fjsdr                        0/1     Completed          0               35h
kube-system       helm-install-rke2-ingress-nginx-rqxbb                  0/1     Pending            0               35h
kube-system       helm-install-rke2-metrics-server-5xg8n                 0/1     Pending            0               35h
kube-system       helm-install-rke2-snapshot-controller-6rgn2            0/1     Pending            0               35h
kube-system       helm-install-rke2-snapshot-controller-crd-pbl6v        0/1     Pending            0               35h
kube-system       helm-install-rke2-snapshot-validation-webhook-fgdqw    0/1     Pending            0               35h
kube-system       kube-apiserver-rancher-k8s-m02                         1/1     Running            0               35h
kube-system       kube-controller-manager-rancher-k8s-m02                1/1     Running            0               35h
kube-system       kube-proxy-rancher-k8s-m02                             1/1     Running            0               35h
kube-system       kube-scheduler-rancher-k8s-m02                         1/1     Running            0               35h
kube-system       rke2-coredns-rke2-coredns-84b9cb946c-l2wbr             1/1     Running            0               35h
kube-system       rke2-coredns-rke2-coredns-autoscaler-b49765765-df5js   1/1     Running            0               35h
tigera-operator   tigera-operator-795545875-rmklj                        1/1     Running            0               35h

[root@rancher-k8s-m02 ~]# /var/lib/rancher/rke2/bin/kubectl -n cattle-system logs -f cattle-cluster-agent-857d946c9b-ffccw 
INFO: Environment: CATTLE_ADDRESS=10.42.252.195 CATTLE_CA_CHECKSUM=5898fcb618f12fd4bc2468a9c68c41f6003c1819caefa43577c2f401417c1d61 CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://10.43.117.88:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://10.43.117.88:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=10.43.117.88 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://10.43.117.88:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=10.43.117.88 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=10.43.117.88 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES=embedded-cluster-api=false,fleet=false,monitoringv1=false,multi-cluster-management=false,multi-cluster-management-agent=true,provisioningv2=false,rke2=false CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=9da93727-da23-44a8-9e4e-197e73c7f41b CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-857d946c9b-ffccw CATTLE_RANCHER_WEBHOOK_VERSION= CATTLE_SERVER=https://rancher.dxc1s.com CATTLE_SERVER_VERSION=v2.8.3
INFO: Using resolv.conf: search cattle-system.svc.cluster.local svc.cluster.local cluster.local nameserver 10.43.0.10 options ndots:5
ERROR: https://rancher.dxc1s.com/ping is not accessible (Could not resolve host: rancher.dxc1s.com)

In the end, the root cause was that the Rancher server's domain name could not be resolved: the nodes were relying on /etc/hosts entries for it. After switching to DNS resolution for the domain, the problem was solved.
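
This matches the symptoms above: rancher-system-agent runs directly on the host, so it could reach the Rancher URL through the node's /etc/hosts entry, while the cattle-cluster-agent pod resolves names through the cluster DNS (CoreDNS), which by default does not see entries added to the node's /etc/hosts, hence "Could not resolve host: rancher.dxc1s.com". A quick way to check both paths (a sketch; the busybox image/tag is only an illustration):

# On the node: resolves via /etc/hosts, so this can succeed even when pods fail
curl -ks https://rancher.dxc1s.com/ping
# Inside the cluster network: resolves via CoreDNS, the same path cattle-cluster-agent uses
/var/lib/rancher/rke2/bin/kubectl -n cattle-system run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup rancher.dxc1s.com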
