Version change v2.4.17 → v2.6.6 → v2.5.8 fails

Rancher Server Setup

  • Rancher version: v2.4.17
  • Installation option (Docker install/Helm Chart): Docker install
    • If Helm Chart, provide the Local cluster type (RKE1, RKE2, k3s, EKS, etc.) and version:
  • Online or air-gapped deployment:

docker stop zealous_blackburn   # stop the running v2.4.17 container ("docker stop v2.4.17" as originally written would fail; the container name, per --volumes-from below, is zealous_blackburn)

docker create --volumes-from zealous_blackburn --name rancher-data rancher/rancher:v2.4.17   # snapshot the old container's volumes into a data container

docker run -d --volumes-from rancher-data --restart=unless-stopped -p 8881:80 -p 443:443 --privileged rancher/rancher:v2.5.8   # start the new version against the same data
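For reference, the documented single-node upgrade flow also takes a backup of the data container before starting the new version; a minimal sketch, reusing the rancher-data container created above (the archive name is illustrative):

docker run --volumes-from rancher-data -v "$PWD:/backup" busybox tar pzcvf /backup/rancher-data-backup-v2.4.17.tar.gz /var/lib/rancher   # tarball of Rancher's state, restorable by extracting back into the volumes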

Downstream Cluster Information

  • Kubernetes version: 1.18.20
  • Cluster Type (Local/Downstream):
    • If Downstream, what type of cluster? (Custom/Imported/Hosted, etc.):

User Information

  • What is the role of the logged-in user? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom):
    • If Custom, the custom permission set:

Describe the bug:

The certificates expired once while on the 2.4 release; they were rotated by following the official documentation, and the v2.4.17 UI currently works fine.
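(A quick way to sanity-check the rotated certificates before an upgrade is to read their expiry dates on the host; a minimal sketch, assuming the zealous_blackburn container name used below and the client-ca.crt path that appears in the logs:)

docker cp zealous_blackburn:/var/lib/rancher/k3s/server/tls/client-ca.crt /tmp/client-ca.crt   # copy the internal CA cert out of the container
openssl x509 -noout -enddate -in /tmp/client-ca.crt                                            # print its notAfter date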

Just now, while attempting the upgrade to v2.5.8, it errored out.

Steps to reproduce:

Result:

{"level":"warn","ts":"2022-07-05T04:30:01.152Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
2022/07/05 04:30:01 [INFO] Waiting for k3s to start

This error loops endlessly inside the container.

Expected result:

Screenshots:

Additional context:

Logs
===== k3s.log =====
I0702 03:53:46.920202      42 server.go:652] external host was not specified, using 172.17.0.2
I0702 03:53:46.927282      42 server.go:177] Version: v1.19.13+k3s1
I0702 03:53:46.961175      42 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0702 03:53:46.961191      42 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0702 03:53:46.969284      42 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0702 03:53:46.969301      42 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0702 03:53:47.020789      42 master.go:271] Using reconciler: lease
W0702 03:53:47.651388      42 genericapiserver.go:418] Skipping API batch/v2alpha1 because it has no resources.
W0702 03:53:47.666022      42 genericapiserver.go:418] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0702 03:53:47.683303      42 genericapiserver.go:418] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0702 03:53:47.702134      42 genericapiserver.go:418] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0702 03:53:47.705698      42 genericapiserver.go:418] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0702 03:53:47.719812      42 genericapiserver.go:418] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0702 03:53:47.751773      42 genericapiserver.go:418] Skipping API apps/v1beta2 because it has no resources.
W0702 03:53:47.751800      42 genericapiserver.go:418] Skipping API apps/v1beta1 because it has no resources.
I0702 03:53:47.774377      42 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0702 03:53:47.774396      42 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2022-07-02T03:53:48.393457909Z" level=info msg="Waiting for API server to become available"
time="2022-07-02T03:53:48.397703026Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
I0702 03:53:48.398026      42 registry.go:173] Registering SelectorSpread plugin
I0702 03:53:48.398039      42 registry.go:173] Registering SelectorSpread plugin
time="2022-07-02T03:53:48.398303488Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2022-07-02T03:53:48.426221404Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2022-07-02T03:53:48.426274302Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}"
time="2022-07-02T03:53:48.443325993Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2022-07-02T03:53:48.443370201Z" level=info msg="Run: k3s kubectl"
time="2022-07-02T03:53:48.466192637Z" level=info msg="Cluster-Http-Server 2022/07/02 03:53:48 http: TLS handshake error from 127.0.0.1:59850: remote error: tls: bad certificate"
time="2022-07-02T03:53:48.473041108Z" level=info msg="Cluster-Http-Server 2022/07/02 03:53:48 http: TLS handshake error from 127.0.0.1:59856: remote error: tls: bad certificate"
time="2022-07-02T03:53:48.569450007Z" level=info msg="certificate CN=local-node signed by CN=k3s-server-ca@1593533859: notBefore=2020-06-30 16:17:39 +0000 UTC notAfter=2023-07-02 03:53:48 +0000 UTC"
time="2022-07-02T03:53:48.635462142Z" level=info msg="certificate CN=system:node:local-node,O=system:nodes signed by CN=k3s-client-ca@1593533859: notBefore=2020-06-30 16:17:39 +0000 UTC notAfter=2023-07-02 03:53:48 +0000 UTC"
time="2022-07-02T03:53:48.804068054Z" level=info msg="Module overlay was already loaded"
time="2022-07-02T03:53:48.804097083Z" level=info msg="Module nf_conntrack was already loaded"
time="2022-07-02T03:53:48.804109825Z" level=info msg="Module br_netfilter was already loaded"
time="2022-07-02T03:53:48.804120852Z" level=info msg="Module iptable_nat was already loaded"
time="2022-07-02T03:53:48.805457355Z" level=info msg="Set sysctl 'net/ipv6/conf/all/forwarding' to 1"
time="2022-07-02T03:53:48.805525962Z" level=info msg="Set sysctl 'net/bridge/bridge-nf-call-iptables' to 1"
time="2022-07-02T03:53:48.805545811Z" level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
time="2022-07-02T03:53:48.805595244Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2022-07-02T03:53:48.805645978Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2022-07-02T03:53:48.805708135Z" level=info msg="Set sysctl 'net/bridge/bridge-nf-call-ip6tables' to 1"
time="2022-07-02T03:53:48.805724005Z" level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory"
time="2022-07-02T03:53:48.868404089Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2022-07-02T03:53:48.868549336Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
I0702 03:53:49.570250      42 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0702 03:53:49.570252      42 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0702 03:53:49.585740      42 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0702 03:53:49.586162      42 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I0702 03:53:49.586216      42 autoregister_controller.go:141] Starting autoregister controller
I0702 03:53:49.586233      42 cache.go:32] Waiting for caches to sync for autoregister controller
I0702 03:53:49.586283      42 customresource_discovery_controller.go:209] Starting DiscoveryController
I0702 03:53:49.586672      42 tlsconfig.go:240] Starting DynamicServingCertificateController
I0702 03:53:49.586886      42 available_controller.go:475] Starting AvailableConditionController
I0702 03:53:49.586893      42 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0702 03:53:49.586919      42 controller.go:83] Starting OpenAPI AggregationController
I0702 03:53:49.587054      42 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0702 03:53:49.587062      42 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0702 03:53:49.587087      42 controller.go:86] Starting OpenAPI controller
I0702 03:53:49.587108      42 naming_controller.go:291] Starting NamingConditionController
I0702 03:53:49.587131      42 establishing_controller.go:76] Starting EstablishingController
I0702 03:53:49.587150      42 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0702 03:53:49.587167      42 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0702 03:53:49.587184      42 crd_finalizer.go:266] Starting CRDFinalizer
I0702 03:53:49.588400      42 crdregistration_controller.go:111] Starting crd-autoregister controller
I0702 03:53:49.588412      42 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0702 03:53:49.586672      42 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0702 03:53:49.596568      42 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0702 03:53:49.633595      42 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0702 03:53:49.633612      42 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0702 03:53:49.633645      42 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
time="2022-07-02T03:53:49.638545249Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --profiling=false --secure-port=0"
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
E0702 03:53:49.659123      42 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0702 03:53:49.686616      42 cache.go:39] Caches are synced for autoregister controller
I0702 03:53:49.687026      42 cache.go:39] Caches are synced for AvailableConditionController controller
I0702 03:53:49.687604      42 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0702 03:53:49.695905      42 shared_informer.go:247] Caches are synced for crd-autoregister 
I0702 03:53:49.733717      42 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
time="2022-07-02T03:53:49.870405675Z" level=info msg="Containerd is now running"
I0702 03:53:50.558787      42 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0702 03:53:50.558817      42 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0702 03:53:50.590189      42 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
time="2022-07-02T03:53:50.825148697Z" level=info msg="Connecting to proxy" url="wss://172.17.0.2:6443/v1-k3s/connect"
time="2022-07-02T03:53:50.827195358Z" level=info msg="Handling backend connection request [local-node]"
time="2022-07-02T03:53:50.827695895Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2022-07-02T03:53:50.827756628Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/2d753699589478b1821bd86b3efed6baafd0388c616e89c9d32f1842d4f31eb6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2022-07-02T03:53:50.828434869Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0702 03:53:50.837060      42 server.go:407] Version: v1.19.13+k3s1
W0702 03:53:50.838836      42 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0702 03:53:50.839498      42 proxier.go:639] Failed to read file /lib/modules/3.10.0-1062.9.1.el7.x86_64/modules.builtin with error open /lib/modules/3.10.0-1062.9.1.el7.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0702 03:53:50.853512      42 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0702 03:53:50.864043      42 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0702 03:53:50.875734      42 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0702 03:53:50.885842      42 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0702 03:53:50.897654      42 proxier.go:649] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Flag --address has been deprecated, see --bind-address instead.
time="2022-07-02T03:53:51.645149242Z" level=info msg="Kube API server is now running"
time="2022-07-02T03:53:51.645218813Z" level=info msg="k3s is up and running"
time="2022-07-02T03:53:53.028157129Z" level=info msg="Node CIDR assigned for: local-node"
I0702 03:53:53.028294      42 flannel.go:92] Determining IP address of default interface
I0702 03:53:53.028554      42 flannel.go:105] Using interface with name eth0 and address 172.17.0.2
time="2022-07-02T03:53:53.030053736Z" level=info msg="labels have already set on node: local-node"
I0702 03:53:53.893155      42 controllermanager.go:127] Version: v1.19.13+k3s1
W0702 03:53:53.893176      42 controllermanager.go:139] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
I0702 03:53:53.893224      42 leaderelection.go:243] attempting to acquire leader lease  kube-system/cloud-controller-manager...
I0702 03:53:55.955570      42 kube.go:117] Waiting 10m0s for node controller to sync
I0702 03:53:55.955611      42 kube.go:300] Starting kube subnet manager
I0702 03:53:55.991602      42 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0702 03:53:55.991643      42 server_others.go:143] kube-proxy node IP is an IPv4 address (172.17.0.2), assume IPv4 operation
I0702 03:53:56.013209      42 server_others.go:186] Using iptables Proxier.
I0702 03:53:56.013437      42 server.go:650] Version: v1.19.13+k3s1
I0702 03:53:56.014076      42 config.go:315] Starting service config controller
I0702 03:53:56.014090      42 shared_informer.go:240] Waiting for caches to sync for service config
I0702 03:53:56.014613      42 config.go:224] Starting endpoint slice config controller
I0702 03:53:56.014624      42 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0702 03:53:56.114251      42 shared_informer.go:247] Caches are synced for service config 
I0702 03:53:56.114700      42 shared_informer.go:247] Caches are synced for endpoint slice config 
I0702 03:53:56.127839      42 network_policy_controller.go:149] Starting network policy controller
I0702 03:53:56.863251      42 controllermanager.go:175] Version: v1.19.13+k3s1
I0702 03:53:56.863551      42 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
I0702 03:53:56.863576      42 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...
I0702 03:53:56.955800      42 kube.go:124] Node controller sync successful
I0702 03:53:56.955893      42 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0702 03:53:57.006756      42 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
I0702 03:53:57.006769      42 flannel.go:82] Running backend.
I0702 03:53:57.006779      42 vxlan_network.go:60] watching for new subnet leases
I0702 03:53:57.033060      42 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0702 03:53:57.033075      42 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0702 03:53:57.038738      42 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0702 03:53:57.038755      42 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0702 03:53:57.046770      42 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0702 03:53:57.052349      42 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0702 03:53:57.060378      42 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0702 03:53:57.065806      42 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0702 03:53:57.073382      42 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0702 03:53:57.085742      42 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0702 03:53:57.091762      42 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0702 03:53:57.111157      42 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0702 03:53:57.136309      42 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0702 03:53:57.160033      42 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0702 03:53:57.882433      42 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
I0702 03:53:58.046104      42 registry.go:173] Registering SelectorSpread plugin
I0702 03:53:58.046121      42 registry.go:173] Registering SelectorSpread plugin
W0702 03:53:58.047534      42 authorization.go:47] Authorization is disabled
W0702 03:53:58.047546      42 authentication.go:40] Authentication is disabled
I0702 03:53:58.047558      42 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
I0702 03:53:58.148513      42 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-scheduler...



====== Container logs ========
2022/07/05 04:29:57 [INFO] Rancher version v2.5.8 (cf16ca13d) is starting
2022/07/05 04:29:57 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Agent:false Features: ClusterRegistry:}
2022/07/05 04:29:57 [INFO] Listening on /tmp/log.sock
2022/07/05 04:29:57 [INFO] Running etcd --data-dir=management-state/etcd --heartbeat-interval=500 --election-timeout=5000
2022-07-05 04:29:57.057725 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-arm64.tar.gz
2022-07-05 04:29:57.057764 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
2022-07-05 04:29:57.057768 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2022-07-05 04:29:57.057772 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-05 04:29:57.057813 I | etcdmain: etcd Version: 3.4.3
2022-07-05 04:29:57.057820 I | etcdmain: Git SHA: 3cf2f69b5
2022-07-05 04:29:57.057823 I | etcdmain: Go Version: go1.12.12
2022-07-05 04:29:57.057827 I | etcdmain: Go OS/Arch: linux/amd64
2022-07-05 04:29:57.057831 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2022-07-05 04:29:57.063599 W | etcdmain: found invalid file/dir config under data dir management-state/etcd (Ignore this if you are upgrading etcd)
2022-07-05 04:29:57.063612 W | etcdmain: found invalid file/dir name under data dir management-state/etcd (Ignore this if you are upgrading etcd)
2022-07-05 04:29:57.063622 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-05 04:29:57.087083 I | embed: name = default
2022-07-05 04:29:57.087099 I | embed: data dir = management-state/etcd
2022-07-05 04:29:57.087104 I | embed: member dir = management-state/etcd/member
2022-07-05 04:29:57.087115 I | embed: heartbeat = 500ms
2022-07-05 04:29:57.087117 I | embed: election = 5000ms
2022-07-05 04:29:57.087121 I | embed: snapshot count = 100000
2022-07-05 04:29:57.087136 I | embed: advertise client URLs = http://localhost:2379
2022-07-05 04:29:57.087141 I | embed: initial advertise peer URLs = http://localhost:2380
2022-07-05 04:29:57.087147 I | embed: initial cluster = 
2022-07-05 04:29:57.662193 I | etcdserver: recovered store from snapshot at index 285302857
2022-07-05 04:29:57.665947 I | mvcc: restore compact to 271810580
2022-07-05 04:29:58.089823 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 285384768
raft2022/07/05 04:29:58 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
raft2022/07/05 04:29:58 INFO: 8e9e05c52164694d became follower at term 710
raft2022/07/05 04:29:58 INFO: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 710, commit: 285384768, applied: 285302857, lastindex: 285384768, lastterm: 710]
2022-07-05 04:29:58.095676 I | etcdserver/api: enabled capabilities for version 3.4
2022-07-05 04:29:58.095693 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2022-07-05 04:29:58.095699 I | etcdserver/membership: set the cluster version to 3.4 from store
2022-07-05 04:29:58.112931 I | mvcc: restore compact to 271810580
2022-07-05 04:29:58.126724 W | auth: simple token is not cryptographically signed
2022-07-05 04:29:58.133578 I | etcdserver: starting server... [version: 3.4.3, cluster version: 3.4]
2022-07-05 04:29:58.134094 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2022-07-05 04:29:58.136290 I | embed: listening for peers on 127.0.0.1:2380
raft2022/07/05 04:29:59 INFO: 8e9e05c52164694d is starting a new election at term 710
raft2022/07/05 04:29:59 INFO: 8e9e05c52164694d became candidate at term 711
raft2022/07/05 04:29:59 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 711
raft2022/07/05 04:29:59 INFO: 8e9e05c52164694d became leader at term 711
raft2022/07/05 04:29:59 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 711
2022-07-05 04:29:59.606038 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2022-07-05 04:29:59.606189 I | embed: ready to serve client requests
2022-07-05 04:29:59.607032 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2022/07/05 04:29:59 [INFO] Waiting for k3s to start
time="2022-07-05T04:29:59.830402260Z" level=info msg="Starting k3s v1.19.8+k3s1 (95fc76b2)"
time="2022-07-05T04:29:59.916201733Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
{"level":"info","ts":"2022-07-05T04:30:00.140Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.0.2:2380"]}
{"level":"info","ts":"2022-07-05T04:30:00.140Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-05T04:30:00.150Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.0.2:2379"]}
{"level":"info","ts":"2022-07-05T04:30:00.151Z","caller":"embed/etcd.go:363","msg":"closing etcd server","name":"default","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
{"level":"info","ts":"2022-07-05T04:30:00.151Z","caller":"embed/etcd.go:367","msg":"closed etcd server","name":"default","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
{"level":"warn","ts":"2022-07-05T04:30:00.151Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: read tcp 127.0.0.1:42902->127.0.0.1:2379: read: connection reset by peer\". Reconnecting..."}
time="2022-07-05T04:30:00.153188869Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
2022/07/05 04:30:00 [INFO] Waiting for k3s to start
{"level":"warn","ts":"2022-07-05T04:30:01.115Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-05T04:30:01.152Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
2022/07/05 04:30:01 [INFO] Waiting for k3s to start
2022/07/05 04:30:02 [INFO] Waiting for k3s to start

Looking at k3s-cluster-reset.log inside the container shows the following error:
root@21b4511dc5e2:/var/lib/rancher# tail -f k3s-cluster-reset.log
{"level":"info","ts":"2022-07-02T18:15:01.123Z","caller":"raft/raft.go:713","msg":"8e9e05c52164694d became candidate at term 699"}
{"level":"info","ts":"2022-07-02T18:15:01.123Z","caller":"raft/raft.go:824","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 699"}
{"level":"info","ts":"2022-07-02T18:15:01.123Z","caller":"raft/raft.go:765","msg":"8e9e05c52164694d became leader at term 699"}
{"level":"info","ts":"2022-07-02T18:15:01.123Z","caller":"raft/node.go:325","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 699"}
{"level":"info","ts":"2022-07-02T18:15:01.123Z","caller":"etcdserver/server.go:2039","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://localhost:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"15s"}
{"level":"info","ts":"2022-07-02T18:15:01.124Z","caller":"embed/serve.go:139","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2399"}
time="2022-07-02T18:15:01.129755151Z" level=warning msg="bootstrap data encrypted with empty string, deleting and resaving with token"
time="2022-07-02T18:15:01.138036769Z" level=info msg="created bootstrap key /bootstrap/4218acb560cf"
time="2022-07-02T18:15:01.151638128Z" level=info msg="Migrating bootstrap data to new format"
time="2022-07-02T18:15:01.166955577Z" level=fatal msg="/var/lib/rancher/k3s/server/tls/client-ca.key newer than datastore and could cause cluster outage. Remove the file from disk and restart to be recreated from datastore."

How do I update my certificates, then? I tried deleting the file, but that didn't help either; after a restart it comes back.
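(For what it's worth, the fatal message above literally asks for that file to be removed and the server restarted so it can be recreated from the datastore; a minimal sketch of that attempt, with an illustrative container name, which matches what was tried here and did not resolve the loop:)

docker exec <rancher-container> rm -f /var/lib/rancher/k3s/server/tls/client-ca.key   # delete the on-disk key the fatal message complains about
docker restart <rancher-container>                                                    # on restart, k3s recreates it from the datastore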

k3s-cluster-reset.log? As I recall, that file is only generated on 2.6. Did you purely upgrade to 2.5.8, with no other operations?

The 2.5.8 release simply writes the k3s logs to stdout.

Yes, I only upgraded to 2.5.8, but there are two log files under /var/lib/rancher: k3s.log and k3s-cluster-reset.log. I normally read k3s.log, as you and ksd advised, but yesterday k3s.log had nothing useful, so I looked at the other file.
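(Since 2.5.8 also streams the k3s logs to stdout, both sources can be checked from the host; a rough sketch, container name illustrative:)

docker logs -f <rancher-container>                                                 # k3s output is mixed into the container's stdout on 2.5.8
docker exec <rancher-container> ls -l /var/lib/rancher/                            # lists k3s.log and k3s-cluster-reset.log
docker exec <rancher-container> tail -n 50 /var/lib/rancher/k3s-cluster-reset.log  # the file holding the fatal error above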

I've never upgraded to 2.6; I never even pulled its image.

I just asked a colleague: he said he once tried upgrading to rancher:stable, without success. Could that version have produced the file?
Let's set that question aside for now.

But the upgrade fails with that authentication handshake error; how do I resolve it?

rancher:stable is a floating tag, not a fixed version; it means the latest stable release, which is currently 2.6.
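(That is one reason to pin an explicit tag rather than a floating one; a minimal sketch, where the version tag is illustrative:)

docker pull rancher/rancher:v2.6.6                                                 # pin a concrete release instead of the floating "stable"
docker image inspect --format '{{index .RepoDigests 0}}' rancher/rancher:stable    # after pulling "stable", shows which digest it currently resolves to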

The jump from 2.4 to 2.6 was too large, and then you cycled back to 2.5, so many problems are now tangled together.

Is there still any way to upgrade to 2.5 now?

Because you are using the Single Docker install mode, the K3s version embedded in the Rancher server container changes along with the Rancher version.

When you upgraded Rancher to 2.6, its embedded K3s was updated, and all certificates were refreshed into the format that K3s expects.

Switching back to 2.5 afterwards amounts to downgrading the embedded K3s and etcd, which is very hard to do compatibly; very little software can handle a version downgrade at all.

The only approach I can think of is to upgrade straight to the 2.6 stable release.
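(Concretely, that forward upgrade reuses the same data-container pattern as at the top of this thread; a sketch with illustrative names, pinning an explicit 2.6 tag rather than stable:)

docker stop <rancher-2.5-container>                                                                                             # stop the failing 2.5.x container
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 8881:80 -p 443:443 --privileged rancher/rancher:v2.6.6   # start 2.6 against the same data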

Single Docker mode is designed for quick-start use in the first place; it is not suited to a long-running, maintained environment.


OK, upgrading to 2.6 works for me too; I'll try it tonight.

After upgrading to stable, the rancher container produced no logs at all and kept restarting.
Upgrading to v2.6.0 gives the same error as upgrading to 2.5.x. The error is as follows:

9a0ae8598edf:/var/lib/rancher # tail k3s.log
{"level":"warn","ts":"2022-07-06T14:39:40.573Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:40.804Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:42.869Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:43.713Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:46.387Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:47.283Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: EOF\". Reconnecting..."}
{"level":"warn","ts":"2022-07-06T14:39:48.052Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: authentication handshake failed: EOF\""}
time="2022-07-06T14:39:48.052616970Z" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
{"level":"warn","ts":"2022-07-06T14:39:48.085Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: authentication handshake failed: EOF\""}
time="2022-07-06T14:39:48.085846913Z" level=info msg="Failed to test data store connection: context deadline exceeded"