Rancher fails to restart, repeatedly running "k3s server --cluster-init --cluster-reset"

Rancher is deployed standalone with Docker; the startup script is:
docker run -d --restart=unless-stopped \
  -p 10080:80 -p 10443:443 \
  --privileged \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY="docker.m.daocloud.io" \
  -e CATTLE_SERVER_URL="https://10.16.228.11:10443" \
  --name rancher \
  -v /opt/data/rancher_data:/var/lib/rancher \
  docker.m.daocloud.io/rancher/rancher:v2.13.3

At first everything worked: I logged in and set up the admin account and password. Then I wanted to check whether the container would come back up after a restart, so I ran:
docker stop rancher
docker start rancher

After that it would no longer start. Here are the logs:
docker logs -f rancher
2026/02/26 08:57:14 [INFO] Rancher version v2.13.3 (0ce54ba2a45d79e7a51ce6b35ccfd353413ab352) is starting
2026/02/26 08:57:14 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLogLevel:0 AuditLogEnabled:false Features: ClusterRegistry: AggregationRegistrationTimeout:5m0s}
2026/02/26 08:57:14 [INFO] Listening on /tmp/log.sock
2026/02/26 08:57:14 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:16 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:18 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:20 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:22 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:24 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:26 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2026/02/26 08:57:30 [FATAL] k3s exited with: exit status 1
Restoring git repositories:

  • /var/lib/rancher-data/local-catalogs/v2/rancher-charts/4b40cac650031b74776e87c1a726b0484d0877c3ec137da0872547ff9b73a721/.git
    Your branch is up to date with 'origin/release-v2.13'.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-rke2-charts/675f1b63a0a83905972dcab2794479ed599a6f41b86cd6193d69472d0fa889c9/.git
    Your branch is up to date with 'origin/main'.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-partner-charts/8f17acdce9bffd6e05a58a3798840e408c4ea71783381ecd2e9af30baad65974/.git
    Your branch is up to date with 'origin/main'.
    /var/lib/rancher
INFO: Running k3s server --cluster-init --cluster-reset
ERROR:
time="2026-02-26T08:57:31Z" level=info msg="Starting k3s v1.34.1+k3s1 (24fc436e)"
time="2026-02-26T08:57:31Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2026-02-26T08:57:31Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:31 +0000 UTC"
time="2026-02-26T08:57:31Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:31 +0000 UTC"
time="2026-02-26T08:57:31Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:31 +0000 UTC"
time="2026-02-26T08:57:31Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:31 +0000 UTC"
time="2026-02-26T08:57:31Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
time="2026-02-26T08:57:31Z" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-02d6d602dfad:02d6d602dfad listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.88.0.37:10.88.0.37 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=A9F5F12B36BE944AD14360DD2789110C6B8B0F36]"
time="2026-02-26T08:57:32Z" level=info msg="Updated load balancer k3s-agent-load-balancer default server: 127.0.0.1:6443"
time="2026-02-26T08:57:32Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [default: 127.0.0.1:6443]"
time="2026-02-26T08:57:32Z" level=warning msg="Failed to get apiserver address from etcd: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""
time="2026-02-26T08:57:33Z" level=info msg="Password verified locally for node 02d6d602dfad"
time="2026-02-26T08:57:33Z" level=info msg="certificate CN=02d6d602dfad signed by CN=k3s-server-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:33 +0000 UTC"
time="2026-02-26T08:57:33Z" level=info msg="certificate CN=system:node:02d6d602dfad,O=system:nodes signed by CN=k3s-client-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:33 +0000 UTC"
time="2026-02-26T08:57:33Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:33 +0000 UTC"
time="2026-02-26T08:57:34Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1772094497: notBefore=2026-02-26 08:28:17 +0000 UTC notAfter=2027-02-26 08:57:34 +0000 UTC"
time="2026-02-26T08:57:34Z" level=fatal msg="Error: starting kubernetes: failed to start cluster: start managed database: Managed etcd cluster membership was previously reset, please remove the cluster-reset flag and start k3s normally. If you need to perform another cluster reset, you must first manually delete the file at /var/lib/rancher/k3s/server/db/reset-flag"
Restoring git repositories:
  • /var/lib/rancher-data/local-catalogs/v2/rancher-charts/4b40cac650031b74776e87c1a726b0484d0877c3ec137da0872547ff9b73a721/.git
    Your branch is up to date with 'origin/release-v2.13'.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-rke2-charts/675f1b63a0a83905972dcab2794479ed599a6f41b86cd6193d69472d0fa889c9/.git
    Your branch is up to date with 'origin/main'.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-partner-charts/8f17acdce9bffd6e05a58a3798840e408c4ea71783381ecd2e9af30baad65974/.git
    Your branch is up to date with 'origin/main'.
    /var/lib/rancher
INFO: Running k3s server --cluster-init --cluster-reset
2026/02/26 08:57:48 [INFO] Rancher version v2.13.3 (0ce54ba2a45d79e7a51ce6b35ccfd353413ab352) is starting
2026/02/26 08:57:48 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLogLevel:0 AuditLogEnabled:false Features: ClusterRegistry: AggregationRegistrationTimeout:5m0s}
2026/02/26 08:57:48 [INFO] Listening on /tmp/log.sock

I tried deleting /var/lib/rancher/k3s/server/db/reset-flag and restarting, but it made no difference.
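
Since /var/lib/rancher is bind-mounted to /opt/data/rancher_data, I removed the flag from the host while the container was stopped, roughly like this (paths follow the bind mount above):

docker stop rancher
# container path /var/lib/rancher/k3s/server/db/reset-flag maps to this host path
rm -f /opt/data/rancher_data/k3s/server/db/reset-flag
docker start rancher

Even after this, the container loops through the same cluster-reset sequence.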

Attaching the relevant error portion of k3s.log.


I still can't figure out what's causing this; hoping someone here can help 😂

Please post the k3s.log output from right before Rancher crashes: keep watching k3s.log and wait for the rancher server to crash.
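
For example, something like this, assuming the Docker install writes the embedded k3s log to /var/lib/rancher/k3s.log inside the container (which maps to the bind mount above):

# follow the embedded k3s log until the rancher server crashes
docker exec -it rancher tail -f /var/lib/rancher/k3s.log
# or read it from the host through the bind mount
tail -f /opt/data/rancher_data/k3s.log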

time="2026-02-27T03:13:01Z" level=info msg="Starting k3s v1.34.1+k3s1 (24fc436e)"
time="2026-02-27T03:13:01Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2026-02-27T03:13:01Z" level=info msg="Starting temporary etcd to reconcile with datastore"
{“level”:“info”,“ts”:“2026-02-27T03:13:01.884688Z”,“caller”:“embed/etcd.go:132”,“msg”:“configuring socket options”,“reuse-address”:true,“reuse-port”:true}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.884775Z”,“caller”:“embed/etcd.go:138”,“msg”:“configuring peer listeners”,“listen-peer-urls”:[“http://127.0.0.1:2400”]}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.885106Z”,“caller”:“embed/etcd.go:146”,“msg”:“configuring client listeners”,“listen-client-urls”:[“http://127.0.0.1:2399”]}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.885259Z”,“caller”:“embed/etcd.go:323”,“msg”:“starting an etcd server”,“etcd-version”:“3.6.4”,“git-sha”:“HEAD”,“go-version”:“go1.24.6”,“go-os”:“linux”,“go-arch”:“amd64”,“max-cpu-set”:80,“max-cpu-available”:80,“member-initialized”:true,“name”:“96515ca796bf-f0affc01”,“data-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp”,“wal-dir”:“”,“wal-dir-dedicated”:“”,“member-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member”,“force-new-cluster”:true,“heartbeat-interval”:“500ms”,“election-timeout”:“5s”,“initial-election-tick-advance”:true,“snapshot-count”:10000,“max-wals”:0,“max-snapshots”:0,“snapshot-catchup-entries”:5000,“initial-advertise-peer-urls”:[“http://127.0.0.1:2400”],“listen-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”],“listen-client-urls”:[“http://127.0.0.1:2399”],“listen-metrics-urls”:,“experimental-local-address”:“”,“cors”:[““],“host-whitelist”:[””],“initial-cluster”:“”,“initial-cluster-state”:“new”,“initial-cluster-token”:“”,“quota-backend-bytes”:2147483648,“max-request-bytes”:1572864,“max-concurrent-streams”:4294967295,“pre-vote”:true,“feature-gates”:“InitialCorruptCheck=true”,“initial-corrupt-check”:false,“corrupt-check-time-interval”:“0s”,“compact-check-time-interval”:“1m0s”,“auto-compaction-mode”:“periodic”,“auto-compaction-retention”:“0s”,“auto-compaction-interval”:“0s”,“discovery-url”:“”,“discovery-proxy”:“”,“discovery-token”:“”,“discovery-endpoints”:“”,“discovery-dial-timeout”:“2s”,“discovery-request-timeout”:“5s”,“discovery-keepalive-time”:“2s”,“discovery-keepalive-timeout”:“6s”,“discovery-insecure-transport”:true,“discovery-insecure-skip-tls-verify”:false,“discovery-cert”:“”,“discovery-key”:“”,“discovery-cacert”:“”,“discovery-user”:“”,“downgrade-check-interval”:“5s”,“max-learners”:1,“v2-deprecation”:“write-only”}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.885665Z”,“logger”:“bbolt”,“caller”:“backend/backend.go:203”,“msg”:“Opening db file (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000c81a40}”}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.889009Z”,“logger”:“bbolt”,“caller”:“bbolt@v1.4.2/db.go:321”,“msg”:“Opening bbolt db (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) successfully”}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.889062Z”,“caller”:“storage/backend.go:80”,“msg”:“opened backend db”,“path”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db”,“took”:“3.527115ms”}
{“level”:“info”,“ts”:“2026-02-27T03:13:01.889098Z”,“caller”:“etcdserver/bootstrap.go:220”,“msg”:“restore consistentIndex”,“index”:391962}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.164429Z”,“caller”:“etcdserver/bootstrap.go:413”,“msg”:“recovered v2 store from snapshot”,“snapshot-index”:390046,“snapshot-size”:“7.1 kB”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.164569Z”,“caller”:“storage/backend.go:108”,“msg”:“Skipping snapshot backend”,“consistent-index”:391962,“snapshot-index”:390046}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.164610Z”,“caller”:“etcdserver/bootstrap.go:232”,“msg”:“recovered v3 backend”,“backend-size-bytes”:10260480,“backend-size”:“10 MB”,“backend-size-in-use-bytes”:10248192,“backend-size-in-use”:“10 MB”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.164830Z”,“caller”:“etcdserver/bootstrap.go:90”,“msg”:“Bootstrapping WAL from snapshot”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.455625Z”,“caller”:“etcdserver/bootstrap.go:591”,“msg”:“forcing restart member”,“cluster-id”:“ffa3ef52f8ea6d01”,“local-member-id”:“51be9e926333dcd0”,“wal-commit-index”:391962,“commit-index”:391962}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.455829Z”,“caller”:“etcdserver/bootstrap.go:94”,“msg”:“bootstrapping cluster”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456043Z”,“caller”:“etcdserver/bootstrap.go:101”,“msg”:“bootstrapping storage”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456471Z”,“caller”:“api/capability.go:76”,“msg”:“enabled capabilities for version”,“cluster-version”:“3.6”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456505Z”,“caller”:“membership/cluster.go:297”,“msg”:“recovered/added member from store”,“cluster-id”:“ffa3ef52f8ea6d01”,“local-member-id”:“51be9e926333dcd0”,“recovered-remote-peer-id”:“51be9e926333dcd0”,“recovered-remote-peer-urls”:[“https://10.88.0.39:2380”],“recovered-remote-peer-is-learner”:false}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456534Z”,“caller”:“membership/cluster.go:307”,“msg”:“set cluster version from store”,“cluster-version”:“3.6”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456566Z”,“caller”:“etcdserver/bootstrap.go:109”,“msg”:“bootstrapping raft”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456680Z”,“caller”:“etcdserver/server.go:312”,“msg”:“bootstrap successfully”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456818Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1981”,“msg”:“51be9e926333dcd0 switched to configuration voters=(5890319714213944528)”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456905Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:897”,“msg”:“51be9e926333dcd0 became follower at term 3”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.456938Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:493”,“msg”:“newRaft 51be9e926333dcd0 [peers: [51be9e926333dcd0], term: 3, commit: 391962, applied: 390046, lastindex: 391962, lastterm: 3]”}
{“level”:“warn”,“ts”:“2026-02-27T03:13:03.457879Z”,“caller”:“auth/store.go:1135”,“msg”:“simple token is not cryptographically signed”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.458217Z”,“caller”:“mvcc/kvstore.go:334”,“msg”:“restored last compact revision”,“meta-bucket-name-key”:“finishedCompactRev”,“restored-compact-revision”:374762}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.479901Z”,“caller”:“mvcc/kvstore.go:408”,“msg”:“kvstore restored”,“current-rev”:377989}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.481069Z”,“caller”:“storage/quota.go:93”,“msg”:“enabled backend quota with default value”,“quota-name”:“v3-applier”,“quota-size-bytes”:2147483648,“quota-size”:“2.1 GB”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.481145Z”,“caller”:“etcdserver/corrupt.go:91”,“msg”:“starting initial corruption check”,“local-member-id”:“51be9e926333dcd0”,“timeout”:“15s”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.485766Z”,“caller”:“etcdserver/corrupt.go:172”,“msg”:“initial corruption checking passed; no corruption”,“local-member-id”:“51be9e926333dcd0”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.485853Z”,“caller”:“etcdserver/server.go:589”,“msg”:“starting etcd server”,“local-member-id”:“51be9e926333dcd0”,“local-server-version”:“3.6.4”,“cluster-id”:“ffa3ef52f8ea6d01”,“cluster-version”:“3.6”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.486135Z”,“caller”:“etcdserver/server.go:483”,“msg”:“started as single-node; fast-forwarding election ticks”,“local-member-id”:“51be9e926333dcd0”,“forward-ticks”:9,“forward-duration”:“4.5s”,“election-ticks”:10,“election-timeout”:“5s”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.486369Z”,“caller”:“embed/etcd.go:292”,“msg”:“now serving peer/client/metrics”,“local-member-id”:“51be9e926333dcd0”,“initial-advertise-peer-urls”:[“http://127.0.0.1:2400”],“listen-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”],“listen-client-urls”:[“http://127.0.0.1:2399”],“listen-metrics-urls”:}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.486260Z”,“caller”:“embed/etcd.go:640”,“msg”:“serving peer traffic”,“address”:“127.0.0.1:2400”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.486469Z”,“caller”:“embed/etcd.go:611”,“msg”:“cmux::serve”,“address”:“127.0.0.1:2400”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.503434Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1981”,“msg”:“51be9e926333dcd0 switched to configuration voters=(5890319714213944528)”}
{“level”:“info”,“ts”:“2026-02-27T03:13:03.503616Z”,“caller”:“membership/cluster.go:650”,“msg”:“ignored already updated member”,“cluster-id”:“ffa3ef52f8ea6d01”,“local-member-id”:“51be9e926333dcd0”,“updated-remote-peer-id”:“51be9e926333dcd0”,“updated-remote-peer-urls”:[“https://10.88.0.39:2380”],“updated-remote-peer-is-learner”:false}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.957913Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:988”,“msg”:“51be9e926333dcd0 is starting a new election at term 3”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.957997Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:930”,“msg”:“51be9e926333dcd0 became pre-candidate at term 3”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.958092Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1077”,“msg”:“51be9e926333dcd0 received MsgPreVoteResp from 51be9e926333dcd0 at term 3”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.958124Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1693”,“msg”:“51be9e926333dcd0 has received 1 MsgPreVoteResp votes and 0 vote rejections”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.958161Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:912”,“msg”:“51be9e926333dcd0 became candidate at term 4”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995353Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1077”,“msg”:“51be9e926333dcd0 received MsgVoteResp from 51be9e926333dcd0 at term 4”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995397Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:1693”,“msg”:“51be9e926333dcd0 has received 1 MsgVoteResp votes and 0 vote rejections”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995417Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:970”,“msg”:“51be9e926333dcd0 became leader at term 4”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995428Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/node.go:370”,“msg”:“raft.node: 51be9e926333dcd0 elected leader 51be9e926333dcd0 at term 4”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995761Z”,“caller”:“etcdserver/server.go:1806”,“msg”:“published local member to cluster through raft”,“local-member-id”:“51be9e926333dcd0”,“local-member-attributes”:“{Name:96515ca796bf-f0affc01 ClientURLs:[http://127.0.0.1:2399]}”,“cluster-id”:“ffa3ef52f8ea6d01”,“publish-timeout”:“15s”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995804Z”,“caller”:“embed/serve.go:138”,“msg”:“ready to serve client requests”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.995802Z”,“caller”:“embed/serve.go:138”,“msg”:“ready to serve client requests”}
{“level”:“warn”,“ts”:“2026-02-27T03:13:06.996897Z”,“caller”:“v3rpc/grpc.go:52”,“msg”:“etcdserver: failed to register grpc metrics”,“error”:“descriptor Desc{fqName: "grpc_server_msg_sent_total", help: "Total number of gRPC stream messages sent by the server.", constLabels: {}, variableLabels: {grpc_type,grpc_service,grpc_method}} already exists with the same fully-qualified name and const label values”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.996929Z”,“caller”:“embed/serve.go:220”,“msg”:“serving client traffic insecurely; this is strongly discouraged!”,“traffic”:“http”,“address”:“127.0.0.1:2402”}
{“level”:“info”,“ts”:“2026-02-27T03:13:06.997276Z”,“caller”:“v3rpc/health.go:63”,“msg”:“grpc service status changed”,“service”:“”,“status”:“SERVING”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.002140Z”,“caller”:“embed/serve.go:220”,“msg”:“serving client traffic insecurely; this is strongly discouraged!”,“traffic”:“grpc”,“address”:“127.0.0.1:2399”}
time="2026-02-27T03:13:07Z" level=info msg="Connected to etcd v3.6.4 - datastore using 10248192 of 10260480 bytes"
time="2026-02-27T03:13:07Z" level=info msg="Defragmenting etcd database"
{“level”:“info”,“ts”:“2026-02-27T03:13:07.010548Z”,“caller”:“v3rpc/maintenance.go:110”,“msg”:“starting defragment”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.011747Z”,“caller”:“backend/backend.go:522”,“msg”:“defragmenting”,“path”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db”,“current-db-size-bytes”:10260480,“current-db-size”:“10 MB”,“current-db-size-in-use-bytes”:10248192,“current-db-size-in-use”:“10 MB”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.212254Z”,“logger”:“bbolt”,“caller”:“backend/backend.go:574”,“msg”:“Opening db file (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000c81a40}”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.220311Z”,“logger”:“bbolt”,“caller”:“bbolt@v1.4.2/db.go:321”,“msg”:“Opening bbolt db (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) successfully”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.220371Z”,“caller”:“backend/backend.go:592”,“msg”:“finished defragmenting directory”,“path”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db”,“current-db-size-bytes-diff”:-8192,“current-db-size-bytes”:10252288,“current-db-size”:“10 MB”,“current-db-size-in-use-bytes-diff”:-4096,“current-db-size-in-use-bytes”:10244096,“current-db-size-in-use”:“10 MB”,“took”:“209.587651ms”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.220404Z”,“caller”:“v3rpc/maintenance.go:118”,“msg”:“finished defragment”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.220419Z”,“caller”:“v3rpc/health.go:63”,“msg”:“grpc service status changed”,“service”:“”,“status”:“SERVING”}
time="2026-02-27T03:13:07Z" level=info msg="Datastore using 10244096 of 10252288 bytes after defragment"
time="2026-02-27T03:13:07Z" level=info msg="etcd temporary data store connection OK"
time="2026-02-27T03:13:07Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2026-02-27T03:13:07Z" level=info msg="stopping etcd"
{“level”:“info”,“ts”:“2026-02-27T03:13:07.234677Z”,“caller”:“embed/etcd.go:426”,“msg”:“closing etcd server”,“name”:“96515ca796bf-f0affc01”,“data-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp”,“advertise-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”]}
{“level”:“error”,“ts”:“2026-02-27T03:13:07.235483Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“http: Server closed”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/serve.go:90”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.235596Z”,“caller”:“etcdserver/server.go:1281”,“msg”:“skipped leadership transfer for single voting member cluster”,“local-member-id”:“51be9e926333dcd0”,“current-leader-member-id”:“51be9e926333dcd0”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.235723Z”,“caller”:“etcdserver/server.go:2321”,“msg”:“server has stopped; stopping cluster version’s monitor”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.235732Z”,“caller”:“etcdserver/server.go:2344”,“msg”:“server has stopped; stopping storage version’s monitor”}
{“level”:“error”,“ts”:“2026-02-27T03:13:07.235755Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“accept tcp 127.0.0.1:2402: use of closed network connection”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:906”}
time="2026-02-27T03:13:07Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:07 +0000 UTC"
time="2026-02-27T03:13:07Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:07 +0000 UTC"
time="2026-02-27T03:13:07Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:07 +0000 UTC"
{“level”:“info”,“ts”:“2026-02-27T03:13:07.245690Z”,“caller”:“embed/etcd.go:621”,“msg”:“stopping serving peer traffic”,“address”:“127.0.0.1:2400”}
{“level”:“error”,“ts”:“2026-02-27T03:13:07.245901Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“accept tcp 127.0.0.1:2400: use of closed network connection”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:906”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.245969Z”,“caller”:“embed/etcd.go:626”,“msg”:“stopped serving peer traffic”,“address”:“127.0.0.1:2400”}
{“level”:“info”,“ts”:“2026-02-27T03:13:07.245991Z”,“caller”:“embed/etcd.go:428”,“msg”:“closed etcd server”,“name”:“96515ca796bf-f0affc01”,“data-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp”,“advertise-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”]}
time="2026-02-27T03:13:07Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:07 +0000 UTC"
time="2026-02-27T03:13:07Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
time="2026-02-27T03:13:07Z" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.88.0.39:10.88.0.39 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-local-node:local-node listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=41E49611D01D03BE0650BC7B6EF701146B7AA183]"
time="2026-02-27T03:13:08Z" level=info msg="Password verified locally for node local-node"
time="2026-02-27T03:13:08Z" level=info msg="certificate CN=local-node signed by CN=k3s-server-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:08 +0000 UTC"
time="2026-02-27T03:13:09Z" level=info msg="certificate CN=system:node:local-node,O=system:nodes signed by CN=k3s-client-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:09 +0000 UTC"
time="2026-02-27T03:13:09Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:09 +0000 UTC"
time="2026-02-27T03:13:09Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1772097151: notBefore=2026-02-26 09:12:31 +0000 UTC notAfter=2027-02-27 03:13:09 +0000 UTC"
time="2026-02-27T03:13:09Z" level=info msg="Module overlay was already loaded"
time="2026-02-27T03:13:09Z" level=info msg="Module nf_conntrack was already loaded"
time="2026-02-27T03:13:09Z" level=info msg="Module br_netfilter was already loaded"
time="2026-02-27T03:13:09Z" level=info msg="Module iptable_nat was already loaded"
time="2026-02-27T03:13:09Z" level=info msg="Module iptable_filter was already loaded"
time="2026-02-27T03:13:09Z" level=warning msg="Failed to load kernel module nft-expr-counter with modprobe"
time="2026-02-27T03:13:09Z" level=warning msg="Failed to load kernel module nfnetlink-subsys-11 with modprobe"
time="2026-02-27T03:13:09Z" level=warning msg="Failed to load kernel module nft-chain-2-nat with modprobe"
time="2026-02-27T03:13:09Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2026-02-27T03:13:09Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2026-02-27T03:13:09Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2026-02-27T03:13:09Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml"
time="2026-02-27T03:13:09Z" level=info msg="Polling for API server readiness: GET /readyz failed: the server is currently unable to handle the request"
time="2026-02-27T03:13:10Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
time="2026-02-27T03:13:11Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
time="2026-02-27T03:13:12Z" level=info msg="containerd is now running"
time="2026-02-27T03:13:12Z" level=info msg="Importing images from /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar"
time="2026-02-27T03:13:14Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2026-02-27T03:13:14Z" level=info msg="Creating k3s-cert-monitor event broadcaster"
time="2026-02-27T03:13:14Z" level=info msg="Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=10.88.0.39 --node-labels= --read-only-port=0 --runtime-cgroups=/k3s"
time="2026-02-27T03:13:14Z" level=info msg="Handling backend connection request [local-node]"
time="2026-02-27T03:13:14Z" level=info msg="Connected to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2026-02-27T03:13:14Z" level=info msg="Remotedialer connected to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2026-02-27T03:13:14Z" level=info msg="Starting etcd for existing cluster member"
time="2026-02-27T03:13:14Z" level=info msg=start
time="2026-02-27T03:13:14Z" level=info msg="schedule, now=2026-02-27T03:13:14Z, entry=1, next=2026-02-27T12:00:00Z"
time="2026-02-27T03:13:14Z" level=info msg="Failed to test etcd connection: failed to get etcd status: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""

Those are the logs leading up to the crash; please take a look.


After a restart, a k3s-cluster-reset.log file is created first. Is that normal?
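
For reference, this is how I'm checking which log files get created after a restart (assuming they all land directly under /var/lib/rancher):

docker exec rancher sh -c 'ls -l /var/lib/rancher/*.log'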

I found the following error logs:

I0227 05:52:53.314493 277 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0227 05:52:53.428025 277 handler_proxy.go:99] no RequestInfo found in the context
E0227 05:52:53.428116 277 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1.ext.cattle.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0227 05:52:53.428118 277 handler_proxy.go:99] no RequestInfo found in the context
I0227 05:52:53.428145 277 controller.go:126] OpenAPI AggregationController: action for item v1.ext.cattle.io: Rate Limited Requeue.
E0227 05:52:53.428251 277 controller.go:102] "Unhandled Error" err=<
	loading OpenAPI spec for "v1.ext.cattle.io" failed with: failed to download v1.ext.cattle.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
 > logger="UnhandledError"
I0227 05:52:53.429213 277 controller.go:109] OpenAPI AggregationController: action for item v1.ext.cattle.io: Rate Limited Requeue.
time="2026-02-27T05:52:53Z" level=error msg="Sending HTTP/1.1 502 response to 127.0.0.1:33838: dial tcp 10.42.0.9:6666: connect: no route to host"
W0227 05:52:53.740961 277 lease.go:265] Resetting endpoints for master service "kubernetes" to [10.88.0.38 10.88.0.39 10.88.0.40 10.88.0.41 10.88.0.42]
time="2026-02-27T05:52:55Z" level=info msg="Kube API server is now running"
time="2026-02-27T05:52:55Z" level=info msg="k3s is up and running"
time="2026-02-27T05:52:55Z" level=info msg="Waiting for untainted node"
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
time=“2026-02-27T05:52:55Z” level=info msg=“Waiting for cloud-controller-manager privileges to become available”
time=“2026-02-27T05:52:55Z” level=info msg=“Creating k3s-supervisor event broadcaster”
I0227 05:52:55.170880 277 shared_informer.go:349] “Waiting for caches to sync” controller=“node informer cache”
I0227 05:52:55.171912 277 event.go:389] “Event occurred” object=“local-node” fieldPath=“” kind=“Node” apiVersion=“” type=“Normal” reason=“CertificateExpirationOK” message=“Node and Certificate Authority certificates managed by k3s are OK”
time=“2026-02-27T05:52:55Z” level=warning msg=“Failed to list nodes with etcd role: runtime core not ready”
I0227 05:52:55.180705 277 server.go:525] “Kubelet version” kubeletVersion=“v1.34.1+k3s1”
I0227 05:52:55.180768 277 server.go:527] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
I0227 05:52:55.180843 277 watchdog_linux.go:95] “Systemd watchdog is not enabled”
I0227 05:52:55.180861 277 watchdog_linux.go:137] “Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started.”
time=“2026-02-27T05:52:55Z” level=info msg=“Annotations and labels have already set on node: local-node”
time=“2026-02-27T05:52:55Z” level=info msg=“Starting flannel with backend vxlan”
I0227 05:52:55.187732 277 dynamic_cafile_content.go:161] “Starting controller” name=“client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt”
I0227 05:52:55.195757 277 server.go:1419] “Using cgroup driver setting received from the CRI runtime” cgroupDriver=“cgroupfs”
time=“2026-02-27T05:52:55Z” level=info msg=“Flannel found PodCIDR assigned for node local-node”
time=“2026-02-27T05:52:55Z” level=info msg=“The interface eth0 with ipv4 address 10.88.0.42 will be used by flannel”
I0227 05:52:55.211467 277 kube.go:139] Waiting 10m0s for node controller to sync
I0227 05:52:55.211534 277 kube.go:469] Starting kube subnet manager
I0227 05:52:55.215984 277 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
I0227 05:52:55.271037 277 shared_informer.go:356] “Caches are synced” controller=“node informer cache”
I0227 05:52:55.271087 277 server.go:219] “Successfully retrieved NodeIPs” NodeIPs=[“10.88.0.38”]
E0227 05:52:55.271213 277 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0227 05:52:55.278611 277 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Perhaps ip6tables or your kernel needs to be upgraded.
 >
I0227 05:52:55.278781 277 server.go:267] “kube-proxy running in single-stack mode” ipFamily=“IPv4”
I0227 05:52:55.278841 277 server_linux.go:132] “Using iptables Proxier”
W0227 05:52:55.283625 277 info.go:53] Couldn’t collect info from any of the files in “/etc/machine-id,/var/lib/dbus/machine-id”
I0227 05:52:55.283963 277 server.go:777] “–cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /”
I0227 05:52:55.284021 277 server.go:838] “NoSwap is set due to memorySwapBehavior not specified” memorySwapBehavior=“” FailSwapOn=false
I0227 05:52:55.284366 277 container_manager_linux.go:270] “Container manager verified user specified cgroup-root exists” cgroupRoot=
I0227 05:52:55.284400 277 container_manager_linux.go:275] “Creating Container Manager object based on Node Config” nodeConfig={“NodeName”:“local-node”,“RuntimeCgroupsName”:“/k3s”,“SystemCgroupsName”:“”,“KubeletCgroupsName”:“/k3s”,“KubeletOOMScoreAdj”:-999,“ContainerRuntime”:“”,“CgroupsPerQOS”:true,“CgroupRoot”:“/”,“CgroupDriver”:“cgroupfs”,“KubeletRootDir”:“/var/lib/kubelet”,“ProtectKernelDefaults”:false,“KubeReservedCgroupName”:“”,“SystemReservedCgroupName”:“”,“ReservedSystemCPUs”:{},“EnforceNodeAllocatable”:{“pods”:{}},“KubeReserved”:null,“SystemReserved”:null,“HardEvictionThresholds”:[{“Signal”:“imagefs.available”,“Operator”:“LessThan”,“Value”:{“Quantity”:null,“Percentage”:0.05},“GracePeriod”:0,“MinReclaim”:null},{“Signal”:“nodefs.available”,“Operator”:“LessThan”,“Value”:{“Quantity”:null,“Percentage”:0.05},“GracePeriod”:0,“MinReclaim”:null}],“QOSReserved”:{},“CPUManagerPolicy”:“none”,“CPUManagerPolicyOptions”:null,“TopologyManagerScope”:“container”,“CPUManagerReconcilePeriod”:10000000000,“MemoryManagerPolicy”:“None”,“MemoryManagerReservedMemory”:null,“PodPidsLimit”:-1,“EnforceCPULimits”:true,“CPUCFSQuotaPeriod”:100000000,“TopologyManagerPolicy”:“none”,“TopologyManagerPolicyOptions”:null,“CgroupVersion”:2}
I0227 05:52:55.284831 277 topology_manager.go:138] “Creating topology manager with none policy”
I0227 05:52:55.284868 277 container_manager_linux.go:306] “Creating device plugin manager”
I0227 05:52:55.284924 277 container_manager_linux.go:315] “Creating Dynamic Resource Allocation (DRA) manager”
I0227 05:52:55.286148 277 proxier.go:242] “Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (–iptables-localhost-nodeports) or set nodePortAddresses (–nodeport-addresses) to filter loopback addresses”
I0227 05:52:55.286652 277 state_mem.go:36] “Initialized new in-memory state store”
I0227 05:52:55.286881 277 server.go:527] “Version info” version=“v1.34.1+k3s1”
I0227 05:52:55.286918 277 server.go:529] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
I0227 05:52:55.287559 277 kubelet.go:475] “Attempting to sync node with API server”
I0227 05:52:55.287589 277 kubelet.go:376] “Adding static pod path” path=“/var/lib/rancher/k3s/agent/pod-manifests”
I0227 05:52:55.287615 277 kubelet.go:387] “Adding apiserver pod source”
I0227 05:52:55.287648 277 apiserver.go:42] “Waiting for node sync before watching apiserver pods”
I0227 05:52:55.290156 277 kuberuntime_manager.go:291] “Container runtime initialized” containerRuntime=“containerd” version=“v2.1.4-k3s2” apiVersion=“v1”
I0227 05:52:55.291140 277 kubelet.go:940] “Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled”
I0227 05:52:55.291255 277 kubelet.go:964] “Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled”
I0227 05:52:55.292858 277 server.go:1257] “Started kubelet”
I0227 05:52:55.292945 277 server.go:180] “Starting to listen” address=“0.0.0.0” port=10250
I0227 05:52:55.293002 277 ratelimit.go:56] “Setting rate limiting for endpoint” service=“podresources” qps=100 burstTokens=10
I0227 05:52:55.293218 277 server_v1.go:49] “podresources” method=“list” useActivePods=true
I0227 05:52:55.293533 277 config.go:200] “Starting service config controller”
I0227 05:52:55.293567 277 shared_informer.go:349] “Waiting for caches to sync” controller=“service config”
I0227 05:52:55.293654 277 server.go:249] “Starting to serve the podresources API” endpoint=“unix:/var/lib/kubelet/pod-resources/kubelet.sock”
I0227 05:52:55.293698 277 config.go:403] “Starting serviceCIDR config controller”
I0227 05:52:55.293739 277 shared_informer.go:349] “Waiting for caches to sync” controller=“serviceCIDR config”
I0227 05:52:55.293767 277 config.go:106] “Starting endpoint slice config controller”
I0227 05:52:55.293793 277 shared_informer.go:349] “Waiting for caches to sync” controller=“endpoint slice config”
I0227 05:52:55.293856 277 fs_resource_analyzer.go:67] “Starting FS ResourceAnalyzer”
I0227 05:52:55.293946 277 dynamic_serving_content.go:135] “Starting controller” name=“kubelet-server-cert-files::/var/lib/rancher/k3s/agent/serving-kubelet.crt::/var/lib/rancher/k3s/agent/serving-kubelet.key”
I0227 05:52:55.294033 277 volume_manager.go:313] “Starting Kubelet Volume Manager”
I0227 05:52:55.294061 277 config.go:309] “Starting node config controller”
I0227 05:52:55.294084 277 shared_informer.go:349] “Waiting for caches to sync” controller=“node config”
I0227 05:52:55.294104 277 shared_informer.go:356] “Caches are synced” controller=“node config”
I0227 05:52:55.294336 277 desired_state_of_world_populator.go:146] “Desired state populator starts to run”
I0227 05:52:55.294371 277 reconciler.go:29] “Reconciler: start to sync state”
I0227 05:52:55.296134 277 factory.go:223] Registration of the systemd container factory successfully
I0227 05:52:55.296285 277 factory.go:221] Registration of the crio container factory failed: Get “http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info”: dial unix /var/run/crio/crio.sock: connect: no such file or directory
I0227 05:52:55.298248 277 factory.go:223] Registration of the containerd container factory successfully
I0227 05:52:55.298938 277 server.go:310] “Adding debug handlers to kubelet server”
E0227 05:52:55.302672 277 kubelet.go:1615] “Image garbage collection failed once. Stats initialization may not have completed yet” err=“invalid capacity 0 on image filesystem”
I0227 05:52:55.308105 277 cpu_manager.go:221] “Starting CPU manager” policy=“none”
I0227 05:52:55.308135 277 cpu_manager.go:222] “Reconciling” reconcilePeriod=“10s”
I0227 05:52:55.308180 277 state_mem.go:36] “Initialized new in-memory state store”
I0227 05:52:55.308538 277 state_mem.go:88] “Updated default CPUSet” cpuSet=“”
I0227 05:52:55.308615 277 state_mem.go:96] “Updated CPUSet assignments” assignments={}
I0227 05:52:55.308663 277 policy_none.go:49] “None policy: Start”
I0227 05:52:55.308698 277 memory_manager.go:187] “Starting memorymanager” policy=“None”
I0227 05:52:55.308763 277 state_mem.go:36] “Initializing new in-memory state store”
I0227 05:52:55.309077 277 state_mem.go:77] “Updated machine memory state”
I0227 05:52:55.309098 277 policy_none.go:47] “Start”
I0227 05:52:55.309070 277 kubelet_network_linux.go:54] “Initialized iptables rules.” protocol=“IPv4”
I0227 05:52:55.309136 277 status_manager.go:244] “Starting to sync pod status with apiserver”
I0227 05:52:55.309223 277 kubelet.go:2427] “Starting kubelet main sync loop”
E0227 05:52:55.309389 277 kubelet.go:2451] “Skipping pod synchronization” err=“[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]”
time="2026-02-27T05:52:55Z" level=fatal msg="Failed to start networking: unable to initialize network policy controller: error getting node subnet: failed to find interface with specified node ip"
time="2026-02-27T05:53:18Z" level=info msg="Starting k3s v1.34.1+k3s1 (24fc436e)"
time="2026-02-27T05:53:18Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2026-02-27T05:53:18Z" level=info msg="Starting temporary etcd to reconcile with datastore"

The main issue seems to be this line; I don't know whether it's related to a wrong node IP, and it has me fairly stuck:
Failed to start networking: unable to initialize network policy controller: error getting node subnet: failed to find interface with specified node ip
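
One thing I notice in the logs is that the container's address changes across restarts (10.88.0.37, 10.88.0.39 and 10.88.0.42 all appear), while k3s seems to look for an interface matching the node IP it recorded earlier. As a sketch, I'm considering pinning the container to a fixed address on a user-defined Docker network so the recorded node IP stays valid (the subnet and address below are arbitrary; I haven't confirmed this resolves the loop):

# one-time: create a user-defined network with a known subnet
docker network create --subnet=172.30.0.0/16 rancher-net

# run the container with a static address on that network
docker run -d --restart=unless-stopped \
  --network rancher-net --ip 172.30.0.10 \
  -p 10080:80 -p 10443:443 \
  --privileged \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY="docker.m.daocloud.io" \
  -e CATTLE_SERVER_URL="https://10.16.228.11:10443" \
  --name rancher \
  -v /opt/data/rancher_data:/var/lib/rancher \
  docker.m.daocloud.io/rancher/rancher:v2.13.3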

What operating system are you using?

ubuntu-24.04.3-live-server-amd64