How can the k3s embedded in Rancher be cleaned out completely? (Rancher 2.5.16)

I have a single-node install of Rancher 2.5.16, and the k3s embedded in Rancher is broken. I exported Rancher's data, redeployed a new Rancher 2.5.16 container and mounted the previous Rancher data, but k3s is still broken. How can I clean k3s out completely? If I switch to a different machine there is no problem.
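The redeploy was roughly along these lines (a sketch of a standard single-node docker run with the data directory bind-mounted from the host; the host path /opt/rancher and the container name rancher are placeholders):

docker stop rancher && docker rm rancher   # the host data directory is kept
docker run -d --name rancher --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.5.16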
The errors are as follows:
2024/07/12 05:07:04 [INFO] Rancher version v2.5.16 (3f4e21bca) is starting
2024/07/12 05:07:04 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Agent:false Features: ClusterRegistry:}
2024/07/12 05:07:04 [INFO] Listening on /tmp/log.sock
2024/07/12 05:07:04 [INFO] Running etcd --data-dir=management-state/etcd --heartbeat-interval=500 --election-timeout=5000
2024-07-12 05:07:04.224043 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-arm64.tar.gz
2024-07-12 05:07:04.224088 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
2024-07-12 05:07:04.224094 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2024-07-12 05:07:04.224100 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2024-07-12 05:07:04.224154 I | etcdmain: etcd Version: 3.4.3
2024-07-12 05:07:04.224164 I | etcdmain: Git SHA: 3cf2f69b5
2024-07-12 05:07:04.224168 I | etcdmain: Go Version: go1.12.12
2024-07-12 05:07:04.224395 I | etcdmain: Go OS/Arch: linux/amd64
2024-07-12 05:07:04.224402 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2024-07-12 05:07:04.227793 N | etcdmain: the server is already initialized as member before, starting as etcd member…
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2024-07-12 05:07:04.234958 I | embed: name = default
2024-07-12 05:07:04.234974 I | embed: data dir = management-state/etcd
2024-07-12 05:07:04.234980 I | embed: member dir = management-state/etcd/member
2024-07-12 05:07:04.234984 I | embed: heartbeat = 500ms
2024-07-12 05:07:04.234988 I | embed: election = 5000ms
2024-07-12 05:07:04.234993 I | embed: snapshot count = 100000
2024-07-12 05:07:04.235009 I | embed: advertise client URLs = http://localhost:2379
2024-07-12 05:07:04.235017 I | embed: initial advertise peer URLs = http://localhost:2380
2024-07-12 05:07:04.235025 I | embed: initial cluster =
2024-07-12 05:07:04.470146 I | etcdserver: recovered store from snapshot at index 147001477
2024-07-12 05:07:04.471789 I | mvcc: restore compact to 137683435
2024-07-12 05:07:05.506566 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 147076882
raft2024/07/12 05:07:05 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
raft2024/07/12 05:07:05 INFO: 8e9e05c52164694d became follower at term 744
raft2024/07/12 05:07:05 INFO: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 744, commit: 147076882, applied: 147001477, lastindex: 147076882, lastterm: 744]
2024-07-12 05:07:05.512574 I | etcdserver/api: enabled capabilities for version 3.4
2024-07-12 05:07:05.512604 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2024-07-12 05:07:05.512615 I | etcdserver/membership: set the cluster version to 3.4 from store
2024-07-12 05:07:05.518231 I | mvcc: restore compact to 137683435
2024-07-12 05:07:05.560596 W | auth: simple token is not cryptographically signed
2024-07-12 05:07:05.568211 I | etcdserver: starting server… [version: 3.4.3, cluster version: 3.4]
2024-07-12 05:07:05.569653 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2024-07-12 05:07:05.570887 I | embed: listening for peers on 127.0.0.1:2380
raft2024/07/12 05:07:09 INFO: 8e9e05c52164694d is starting a new election at term 744
raft2024/07/12 05:07:09 INFO: 8e9e05c52164694d became candidate at term 745
raft2024/07/12 05:07:09 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 745
raft2024/07/12 05:07:09 INFO: 8e9e05c52164694d became leader at term 745
raft2024/07/12 05:07:09 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 745
2024-07-12 05:07:09.520333 I | embed: ready to serve client requests
2024-07-12 05:07:09.520480 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2024-07-12 05:07:09.521156 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2024/07/12 05:07:09 [INFO] Waiting for server to become available: Get “https://127.0.0.1:6443/version?timeout=15m0s”: dial tcp 127.0.0.1:6443: connect: connection refused
2024/07/12 05:07:11 [INFO] Waiting for server to become available: the server is currently unable to handle the request
W0712 05:07:13.609220 7 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0712 05:07:13.721729 7 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0712 05:07:13.775139 7 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
exit status 255
2024/07/12 05:07:20 [FATAL] k3s exited with: exit status 255

The K3s cluster embedded in a Rancher started with docker run cannot be cleaned up or replaced. You should look at the k3s.log inside the rancher container and see why k3s failed to start.
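For example, something along these lines (the container name rancher and the log path are assumptions; if the path differs, locate the file inside the container first):

docker exec rancher sh -c 'find / -name k3s.log 2>/dev/null'   # locate the log
docker exec rancher tail -n 200 /var/lib/rancher/k3s.log       # assumed path
docker cp rancher:/var/lib/rancher/k3s.log ./k3s.log           # or copy it out for inspection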

It's all errors like these:
imitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time=“2024-07-12T06:52:56.265956934Z” level=info msg=“Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0”
I0712 06:52:56.266350 33 registry.go:173] Registering SelectorSpread plugin
I0712 06:52:56.266368 33 registry.go:173] Registering SelectorSpread plugin
time=“2024-07-12T06:52:56.266746272Z” level=info msg=“Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true”
time=“2024-07-12T06:52:56.268613116Z” level=info msg=“Node token is available at /var/lib/rancher/k3s/server/token”
time=“2024-07-12T06:52:56.268643574Z” level=info msg=“To join node to cluster: k3s agent -s https://172.18.0.3:6443 -t ${NODE_TOKEN}”
time=“2024-07-12T06:52:56.270172607Z” level=info msg=“Wrote kubeconfig /etc/rancher/k3s/k3s.yaml”
time=“2024-07-12T06:52:56.270205183Z” level=info msg=“Run: k3s kubectl”
time=“2024-07-12T06:52:56.270644999Z” level=info msg=“Waiting for API server to become available”
time=“2024-07-12T06:52:56.290470438Z” level=info msg=“Cluster-Http-Server 2024/07/12 06:52:56 http: TLS handshake error from 127.0.0.1:36448: remote error: tls: bad certificate”
time=“2024-07-12T06:52:56.294954272Z” level=info msg=“Cluster-Http-Server 2024/07/12 06:52:56 http: TLS handshake error from 127.0.0.1:36458: remote error: tls: bad certificate”
time=“2024-07-12T06:52:56.306647723Z” level=info msg=“certificate CN=local-node signed by CN=k3s-server-ca@1689214732: notBefore=2023-07-13 02:18:52 +0000 UTC notAfter=2025-07-12 06:52:56 +0000 UTC”
time=“2024-07-12T06:52:56.309737972Z” level=info msg=“certificate CN=system:node:local-node,O=system:nodes signed by CN=k3s-client-ca@1689214732: notBefore=2023-07-13 02:18:52 +0000 UTC notAfter=2025-07-12 06:52:56 +0000 UTC”
time=“2024-07-12T06:52:56.315550470Z” level=info msg=“Module overlay was already loaded”
time=“2024-07-12T06:52:56.315576462Z” level=info msg=“Module nf_conntrack was already loaded”
time=“2024-07-12T06:52:56.315591325Z” level=info msg=“Module br_netfilter was already loaded”
time=“2024-07-12T06:52:56.315605256Z” level=info msg=“Module iptable_nat was already loaded”
time=“2024-07-12T06:52:56.316961312Z” level=info msg=“Set sysctl ‘net/ipv6/conf/all/forwarding’ to 1”
time=“2024-07-12T06:52:56.317042150Z” level=info msg=“Set sysctl ‘net/bridge/bridge-nf-call-iptables’ to 1”
time=“2024-07-12T06:52:56.317061249Z” level=error msg=“Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory”
time=“2024-07-12T06:52:56.317082284Z” level=info msg=“Set sysctl ‘net/bridge/bridge-nf-call-ip6tables’ to 1”
time=“2024-07-12T06:52:56.317095924Z” level=error msg=“Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory”
time=“2024-07-12T06:52:56.317129622Z” level=info msg=“Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_established’ to 86400”
time=“2024-07-12T06:52:56.317179284Z” level=info msg=“Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_close_wait’ to 3600”
time=“2024-07-12T06:52:56.318231980Z” level=info msg=“Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log”
time=“2024-07-12T06:52:56.318527706Z” level=info msg=“Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd”
time=“2024-07-12T06:52:57.343602786Z” level=info msg=“Containerd is now running”
time=“2024-07-12T06:52:59.262977919Z” level=info msg=“Connecting to proxy” url=“wss://127.0.0.1:6443/v1-k3s/connect”
time=“2024-07-12T06:52:59.266290583Z” level=info msg=“Handling backend connection request [local-node]”
time=“2024-07-12T06:52:59.266919977Z” level=warning msg=“Disabling CPU quotas due to missing cpu.cfs_period_us”
time=“2024-07-12T06:52:59.266987364Z” level=info msg=“Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/2d753699589478b1821bd86b3efed6baafd0388c616e89c9d32f1842d4f31eb6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key”
time=“2024-07-12T06:52:59.268068055Z” level=info msg=“Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables”
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0712 06:52:59.272105 33 server.go:407] Version: v1.19.13+k3s1
W0712 06:52:59.284573 33 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0712 06:52:59.285181 33 proxier.go:639] Failed to read file /lib/modules/3.10.0-1062.18.1.el7.x86_64/modules.builtin with error open /lib/modules/3.10.0-1062.18.1.el7.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0712 06:52:59.286072 33 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0712 06:52:59.286733 33 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0712 06:52:59.291765 33 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0712 06:52:59.314809 33 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0712 06:52:59.318751 33 proxier.go:649] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0712 06:52:59.328575 33 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E0712 06:52:59.333552 33 node.go:125] Failed to retrieve node info: nodes “local-node” is forbidden: User “system:kube-proxy” cannot get resource “nodes” in API group “” at the cluster scope
time=“2024-07-12T06:52:59.336744850Z” level=info msg=“Node CIDR assigned for: local-node”
I0712 06:52:59.336928 33 flannel.go:92] Determining IP address of default interface
I0712 06:52:59.337183 33 flannel.go:105] Using interface with name eth0 and address 172.18.0.3
I0712 06:52:59.340005 33 kube.go:117] Waiting 10m0s for node controller to sync
I0712 06:52:59.340046 33 kube.go:300] Starting kube subnet manager
time=“2024-07-12T06:52:59.348052059Z” level=info msg=“labels have already set on node: local-node”
I0712 06:52:59.354255 33 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0712 06:52:59.354291 33 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0712 06:52:59.354311 33 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I0712 06:52:59.354372 33 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0712 06:52:59.354401 33 tlsconfig.go:240] Starting DynamicServingCertificateController
I0712 06:52:59.354705 33 customresource_discovery_controller.go:209] Starting DiscoveryController
I0712 06:52:59.354993 33 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0712 06:52:59.355008 33 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0712 06:52:59.355028 33 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0712 06:52:59.355041 33 autoregister_controller.go:141] Starting autoregister controller
I0712 06:52:59.355050 33 cache.go:32] Waiting for caches to sync for autoregister controller
I0712 06:52:59.355116 33 available_controller.go:475] Starting AvailableConditionController
I0712 06:52:59.355121 33 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0712 06:52:59.355142 33 controller.go:83] Starting OpenAPI AggregationController
I0712 06:52:59.355944 33 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0712 06:52:59.355956 33 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0712 06:52:59.358860 33 controller.go:86] Starting OpenAPI controller
I0712 06:52:59.358894 33 naming_controller.go:291] Starting NamingConditionController
I0712 06:52:59.358913 33 establishing_controller.go:76] Starting EstablishingController
I0712 06:52:59.358936 33 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0712 06:52:59.358957 33 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0712 06:52:59.358975 33 crd_finalizer.go:266] Starting CRDFinalizer
I0712 06:52:59.358999 33 crdregistration_controller.go:111] Starting crd-autoregister controller
I0712 06:52:59.359005 33 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0712 06:52:59.369385 33 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0712 06:52:59.369426 33 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
time=“2024-07-12T06:52:59.443792234Z” level=info msg=“Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --profiling=false --secure-port=0”
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0712 06:52:59.451158 33 controllermanager.go:127] Version: v1.19.13+k3s1
W0712 06:52:59.451180 33 controllermanager.go:139] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
I0712 06:52:59.451228 33 leaderelection.go:243] attempting to acquire leader lease kube-system/cloud-controller-manager…
I0712 06:52:59.455543 33 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0712 06:52:59.468497 33 cache.go:39] Caches are synced for autoregister controller
E0712 06:52:59.492539 33 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0712 06:52:59.555168 33 cache.go:39] Caches are synced for AvailableConditionController controller
I0712 06:52:59.556182 33 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0712 06:52:59.563906 33 shared_informer.go:247] Caches are synced for crd-autoregister
I0712 06:52:59.909245 33 network_policy_controller.go:149] Starting network policy controller
I0712 06:53:00.355703 33 kube.go:124] Node controller sync successful
I0712 06:53:00.355812 33 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0712 06:53:00.359951 33 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0712 06:53:00.359992 33 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0712 06:53:00.475614 33 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
I0712 06:53:00.475631 33 flannel.go:82] Running backend.
I0712 06:53:00.475642 33 vxlan_network.go:60] watching for new subnet leases
I0712 06:53:00.479081 33 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0712 06:53:00.479096 33 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0712 06:53:00.479917 33 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0712 06:53:00.480935 33 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0712 06:53:00.481767 33 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0712 06:53:00.483089 33 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0712 06:53:00.485055 33 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0712 06:53:00.486942 33 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0712 06:53:00.488076 33 node.go:136] Successfully retrieved node IP: 172.18.0.3
I0712 06:53:00.488110 33 server_others.go:143] kube-proxy node IP is an IPv4 address (172.18.0.3), assume IPv4 operation
I0712 06:53:00.495198 33 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0712 06:53:00.506554 33 server_others.go:186] Using iptables Proxier.
I0712 06:53:00.506814 33 server.go:650] Version: v1.19.13+k3s1
I0712 06:53:00.506888 33 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0712 06:53:00.506896 33 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0712 06:53:00.510009 33 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0712 06:53:00.511185 33 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0712 06:53:00.512699 33 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0712 06:53:00.516801 33 config.go:315] Starting service config controller
I0712 06:53:00.516817 33 shared_informer.go:240] Waiting for caches to sync for service config
I0712 06:53:00.516854 33 config.go:224] Starting endpoint slice config controller
I0712 06:53:00.516859 33 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0712 06:53:00.556687 33 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0712 06:53:00.616860 33 shared_informer.go:247] Caches are synced for service config
I0712 06:53:00.616924 33 shared_informer.go:247] Caches are synced for endpoint slice config
Flag --address has been deprecated, see --bind-address instead.
time=“2024-07-12T06:53:01.787547808Z” level=info msg=“Kube API server is now running”
time=“2024-07-12T06:53:01.787674723Z” level=info msg=“k3s is up and running”
I0712 06:53:02.144107 33 controllermanager.go:175] Version: v1.19.13+k3s1
I0712 06:53:02.144494 33 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
I0712 06:53:02.144536 33 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager…
I0712 06:53:02.220795 33 registry.go:173] Registering SelectorSpread plugin
I0712 06:53:02.220819 33 registry.go:173] Registering SelectorSpread plugin
W0712 06:53:02.280784 33 authorization.go:47] Authorization is disabled
W0712 06:53:02.280806 33 authentication.go:40] Authentication is disabled
I0712 06:53:02.280820 33 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
I0712 06:53:02.481603 33 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler…
time=“2024-07-12T06:53:02.637608199Z” level=info msg=“Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz”
time=“2024-07-12T06:53:02.706751740Z” level=info msg=“Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml”
time=“2024-07-12T06:53:02.709649240Z” level=info msg=“Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml”
time=“2024-07-12T06:53:02.709842570Z” level=info msg=“Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml”
I0712 06:53:03.043196 33 leaderelection.go:243] attempting to acquire leader lease kube-system/k3s…
time=“2024-07-12T06:53:03.043557617Z” level=info msg=“Starting k3s.cattle.io/v1, Kind=Addon controller”
time=“2024-07-12T06:53:03.045334680Z” level=info msg=“Starting /v1, Kind=Secret controller”
time=“2024-07-12T06:53:03.056967692Z” level=info msg=“Cluster dns configmap already exists”
I0712 06:53:03.133015 33 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io
I0712 06:53:03.191261 33 controller.go:609] quota admission added evaluator for: deployments.apps
time=“2024-07-12T06:53:04.265734256Z” level=info msg=“Stopped tunnel to 127.0.0.1:6443”
time=“2024-07-12T06:53:04.265784113Z” level=info msg=“Connecting to proxy” url=“wss://172.18.0.3:6443/v1-k3s/connect”
time=“2024-07-12T06:53:04.265948283Z” level=info msg=“Proxy done” err=“context canceled” url=“wss://127.0.0.1:6443/v1-k3s/connect”
time=“2024-07-12T06:53:04.266060486Z” level=info msg=“error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF”
time=“2024-07-12T06:53:04.271861746Z” level=info msg=“Handling backend connection request [local-node]”
I0712 06:53:04.409484 33 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0712 06:53:04.421292 33 container_manager_linux.go:281] container manager verified user specified cgroup-root exists:
I0712 06:53:04.421329 33 container_manager_linux.go:286] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map SystemReserved:map HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0712 06:53:04.422749 33 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I0712 06:53:04.422764 33 container_manager_linux.go:316] [topologymanager] Initializing Topology Manager with none policy
I0712 06:53:04.422770 33 container_manager_linux.go:321] Creating device plugin manager: true
W0712 06:53:04.430490 33 util_unix.go:103] Using “/run/k3s/containerd/containerd.sock” as endpoint is deprecated, please consider using full url format “unix:///run/k3s/containerd/containerd.sock”.
W0712 06:53:04.430598 33 util_unix.go:103] Using “/run/k3s/containerd/containerd.sock” as endpoint is deprecated, please consider using full url format “unix:///run/k3s/containerd/containerd.sock”.
I0712 06:53:04.430682 33 kubelet.go:394] Attempting to sync node with API server
I0712 06:53:04.430708 33 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I0712 06:53:04.430747 33 kubelet.go:273] Adding apiserver pod source
I0712 06:53:04.430769 33 apiserver.go:43] Waiting for node sync before watching apiserver pods
I0712 06:53:04.442226 33 kuberuntime_manager.go:214] Container runtime containerd initialized, version: v1.4.8-k3s1, apiVersion: v1alpha2
I0712 06:53:04.451642 33 server.go:1148] Started kubelet
I0712 06:53:04.451777 33 server.go:152] Starting to listen on 0.0.0.0:10250
I0712 06:53:04.454745 33 server.go:425] Adding debug handlers to kubelet server.
I0712 06:53:04.458595 33 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0712 06:53:04.460845 33 volume_manager.go:265] Starting Kubelet Volume Manager
I0712 06:53:04.461353 33 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0712 06:53:04.497197 33 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint “/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs”: unable to find data in memory cache.
E0712 06:53:04.497231 33 kubelet.go:1238] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0712 06:53:04.519513 33 status_manager.go:158] Starting to sync pod status with apiserver
I0712 06:53:04.519559 33 kubelet.go:1770] Starting kubelet main sync loop.
E0712 06:53:04.520160 33 kubelet.go:1794] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0712 06:53:04.529209 33 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
I0712 06:53:04.560420 33 cpu_manager.go:184] [cpumanager] starting with none policy
I0712 06:53:04.560436 33 cpu_manager.go:185] [cpumanager] reconciling every 10s
I0712 06:53:04.560468 33 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0712 06:53:04.560942 33 kuberuntime_manager.go:992] updating runtime config through cri with podcidr 10.42.0.0/24
I0712 06:53:04.563701 33 state_mem.go:88] [cpumanager] updated default cpuset: “”
I0712 06:53:04.563715 33 state_mem.go:96] [cpumanager] updated cpuset assignments: “map
I0712 06:53:04.563736 33 policy_none.go:43] [cpumanager] none policy: Start
I0712 06:53:04.563813 33 kubelet_network.go:77] Setting Pod CIDR: → 10.42.0.0/24
I0712 06:53:04.563971 33 kubelet_node_status.go:71] Attempting to register node local-node
E0712 06:53:04.625571 33 kubelet.go:1794] skipping pod synchronization - container runtime status check may not have completed yet
I0712 06:53:04.632372 33 kubelet_node_status.go:109] Node local-node was previously registered
I0712 06:53:04.633579 33 kubelet_node_status.go:74] Successfully registered node local-node
F0712 06:53:04.635406 33 kubelet.go:1316] Failed to start ContainerManager failed to initialize top level QOS containers: failed to create top level BestEffort QOS cgroup : mkdir /sys/fs/cgroup/memory/kubepods/besteffort: cannot allocate memory
goroutine 6198 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0021fb8c0, 0xfa, 0x212)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).output(0x70ff1e0, 0xc000000003, 0x0, 0x0, 0xc00017a7e0, 0x6da9e41, 0xa, 0x524, 0x0)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:945 +0x191
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).printf(0x70ff1e0, 0xc000000003, 0x0, 0x0, 0x46b9ded, 0x23, 0xc00e679cb0, 0x1, 0x1)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
github.com/rancher/k3s/vendor/k8s.io/klog/v2.Fatalf(...)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1456
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc0099faa80)
/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1316 +0x385
sync.(*Once).doSlow(0xc0099fb2c0, 0xc001d43dd0)
/usr/local/go/src/sync/once.go:66 +0xec
sync.(*Once).Do(…)
/usr/local/go/src/sync/once.go:57
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc0099faa80)
/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2145 +0x554
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc01291f990)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc01291f990, 0x4d30ea0, 0xc01282b500, 0x1, 0xc00007e780)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc01291f990, 0x12a05f200, 0x0, 0x1, 0xc00007e780)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc01291f990, 0x12a05f200, 0xc00007e780)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1363 +0x16a

goroutine 1 [chan receive]:
github.com/rancher/k3s/pkg/agent.run(0x4df2660, 0xc0009320c0, 0xc006432540, 0x6a, 0x0, 0x0, 0x0, 0x0, 0xc000c68c00, 0x16, …)
/go/src/github.com/rancher/k3s/pkg/agent/run.go:126 +0x2eb
github.com/rancher/k3s/pkg/agent.Run(0x4df2660, 0xc0009320c0, 0xc006432540, 0x6a, 0x0, 0x0, 0x0, 0x0, 0xc000c68c00, 0x16, …)
/go/src/github.com/rancher/k3s/pkg/agent/run.go:218 +0x438
github.com/rancher/k3s/pkg/cli/server.run(0xc000786f20, 0x71001a0, 0xc0010d4d58, 0x0, 0x0, 0xc0010d4d58, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/k3s/pkg/cli/server/server.go:347 +0x1b98
github.com/rancher/k3s/pkg/cli/server.Run(0xc000786f20, 0x0, 0x0)
/go/src/github.com/rancher/k3s/pkg/cli/server/server.go:45 +0x85
github.com/rancher/k3s/vendor/github.com/urfave/cli.HandleAction(0x3ed21c0, 0x4842738, 0xc000786f20, 0xc000786f20, 0x0)
/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/app.go:523 +0xfd
github.com/rancher/k3s/vendor/github.com/urfave/cli.Command.Run(0x463349b, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x466a5cc, 0x15, 0xc00021d1c0, …)
/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/command.go:174 +0x58e
github.com/rancher/k3s/vendor/github.com/urfave/cli.(*App).Run(0xc000762540, 0xc000070d80, 0x9, 0x9, 0x0, 0x0)
/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/app.go:276 +0x7d4
main.main()
/go/src/github.com/rancher/k3s/cmd/server/main.go:49 +0x69a

goroutine 6 [chan receive]:
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70ff1e0)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by github.com/rancher/k3s/vendor/k8s.io/klog/v2.init.0
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 182 [chan receive]:
github.com/rancher/k3s/vendor/go.etcd.io/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc0014312c0)
/go/src/github.com/rancher/k3s/vendor/go.etcd.io/etcd/pkg/logutil/merge_logger.go:173 +0x3b3
created by github.com/rancher/k3s/vendor/go.etcd.io/etcd/pkg/logutil.NewMergeLogger
/go/src/github.com/rancher/k3s/vendor/go.etcd.io/etcd/pkg/logutil/merge_logger.go:91 +0x85

goroutine 66 [chan receive]:
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x70ff020)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by github.com/rancher/k3s/vendor/k8s.io/klog.init.0
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:411 +0xd8

Looking at these lines is not useful. You need to follow the log all the way to the end and look at the final crash, i.e. the last log lines before K3s exited.
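Something like this jumps straight to that part (same container name and log path assumptions as above):

docker exec rancher sh -c 'tail -n 300 /var/lib/rancher/k3s.log'
# fatal lines from klog look like "F0712 ...", from logrus like level=fatal
docker exec rancher sh -c 'grep -nE "level=fatal|^F[0-9]{4}" /var/lib/rancher/k3s.log | tail -n 5'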

It should be this:
level=info msg=“Connecting to proxy” url=“wss://172.18.0.3:6443/v1-k3s/connect”
time=“2024-07-12T06:53:04.265948283Z” level=info msg=“Proxy done” err=“context canceled” url=“wss://127.0.0.1:6443/v1-k3s/connect”
time=“2024-07-12T06:53:04.266060486Z” level=info msg=“error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF”
time=“2024-07-12T06:53:04.271861746Z” level=info msg=“Handling backend connection request [local-node]”
I0712 06:53:04.409484 33 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0712 06:53:04.421292 33 container_manager_linux.go:281] container manager verified user specified cgroup-root exists:
I0712 06:53:04.421329 33 container_manager_linux.go:286] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map SystemReserved:map HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0712 06:53:04.422749 33 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I0712 06:53:04.422764 33 container_manager_linux.go:316] [topologymanager] Initializing Topology Manager with none policy
I0712 06:53:04.422770 33 container_manager_linux.go:321] Creating device plugin manager: true
W0712 06:53:04.430490 33 util_unix.go:103] Using “/run/k3s/containerd/containerd.sock” as endpoint is deprecated, please consider using full url format “unix:///run/k3s/containerd/containerd.sock”.
W0712 06:53:04.430598 33 util_unix.go:103] Using “/run/k3s/containerd/containerd.sock” as endpoint is deprecated, please consider using full url format “unix:///run/k3s/containerd/containerd.sock”.
I0712 06:53:04.430682 33 kubelet.go:394] Attempting to sync node with API server
I0712 06:53:04.430708 33 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I0712 06:53:04.430747 33 kubelet.go:273] Adding apiserver pod source
I0712 06:53:04.430769 33 apiserver.go:43] Waiting for node sync before watching apiserver pods
I0712 06:53:04.442226 33 kuberuntime_manager.go:214] Container runtime containerd initialized, version: v1.4.8-k3s1, apiVersion: v1alpha2
I0712 06:53:04.451642 33 server.go:1148] Started kubelet
I0712 06:53:04.451777 33 server.go:152] Starting to listen on 0.0.0.0:10250
I0712 06:53:04.454745 33 server.go:425] Adding debug handlers to kubelet server.
I0712 06:53:04.458595 33 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0712 06:53:04.460845 33 volume_manager.go:265] Starting Kubelet Volume Manager
I0712 06:53:04.461353 33 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0712 06:53:04.497197 33 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint “/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs”: unable to find data in memory cache.
E0712 06:53:04.497231 33 kubelet.go:1238] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0712 06:53:04.519513 33 status_manager.go:158] Starting to sync pod status with apiserver
I0712 06:53:04.519559 33 kubelet.go:1770] Starting kubelet main sync loop.
E0712 06:53:04.520160 33 kubelet.go:1794] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0712 06:53:04.529209 33 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
I0712 06:53:04.560420 33 cpu_manager.go:184] [cpumanager] starting with none policy
I0712 06:53:04.560436 33 cpu_manager.go:185] [cpumanager] reconciling every 10s
I0712 06:53:04.560468 33 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0712 06:53:04.560942 33 kuberuntime_manager.go:992] updating runtime config through cri with podcidr 10.42.0.0/24
I0712 06:53:04.563701 33 state_mem.go:88] [cpumanager] updated default cpuset: “”
I0712 06:53:04.563715 33 state_mem.go:96] [cpumanager] updated cpuset assignments: “map
I0712 06:53:04.563736 33 policy_none.go:43] [cpumanager] none policy: Start
I0712 06:53:04.563813 33 kubelet_network.go:77] Setting Pod CIDR: → 10.42.0.0/24
I0712 06:53:04.563971 33 kubelet_node_status.go:71] Attempting to register node local-node
E0712 06:53:04.625571 33 kubelet.go:1794] skipping pod synchronization - container runtime status check may not have completed yet
I0712 06:53:04.632372 33 kubelet_node_status.go:109] Node local-node was previously registered
I0712 06:53:04.633579 33 kubelet_node_status.go:74] Successfully registered node local-node
F0712 06:53:04.635406 33 kubelet.go:1316] Failed to start ContainerManager failed to initialize top level QOS containers: failed to create top level BestEffort QOS cgroup : mkdir /sys/fs/cgroup/memory/kubepods/besteffort: cannot allocate memory

If k3s is broken, could you give me the command to retrieve the cluster config files?

Which config files?

A reboot solved it. It was probably a kernel version incompatibility that caused the k3s pod to restart constantly; I don't know what got filled up.
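That matches the fatal line above (mkdir /sys/fs/cgroup/memory/kubepods/besteffort: cannot allocate memory). A couple of host-side checks, as a sketch:

uname -r                  # the kube-proxy warning above shows 3.10.0-1062.18.1.el7
# the 3.10 kernel's kmem accounting is known to leak memory-cgroup IDs, so
# "cannot allocate memory" can appear even when the visible cgroup count below
# looks small; a reboot (or a newer kernel) resets it
grep memory /proc/cgroups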

What I mean is: if my Rancher is broken and won't start, how do I get the downstream clusters' config files so that I can take over the downstream clusters with kubectl?