Rancher 2.6.6: adding a node fails

Rancher Server Setup

  • Rancher version: 2.6.6
  • Installation option (Docker install/Helm Chart): RKE1
  • Online or air-gapped deployment: online

Downstream Cluster Information

  • Kubernetes version: v1.21.13-rancher1-1
  • Cluster Type (Local/Downstream):
    • If Downstream, what type of cluster? (Custom/Imported or Hosted, etc.): local

Host Operating System:

CentOS 7.8

Describe the bug:
There is a cluster on Alibaba Cloud that is already in use, and we now want to add a worker node hosted on UCloud.
The firewalls on both sides are open and the two environments can reach each other, but the worker node never registers with the cluster.

After adding the node, Rancher keeps showing: Waiting to register with Kubernetes
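
For context, below is a minimal reachability check of the kind that can be run from the UCloud worker before registering it. It is only a sketch; RANCHER_SERVER and CONTROL_PLANE_IP are placeholder names, not values taken from this report.

# Minimal TCP reachability probe (Python 3, standard library only).
# RANCHER_SERVER and CONTROL_PLANE_IP are placeholders -- substitute the real
# Rancher server hostname and a control-plane node address.
import socket

TARGETS = [
    ("RANCHER_SERVER", 443),     # Rancher UI/API endpoint (placeholder)
    ("CONTROL_PLANE_IP", 6443),  # kube-apiserver on a control-plane node (placeholder)
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port} is reachable")
    except OSError as exc:
        print(f"FAIL {host}:{port} -> {exc}")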

Screenshots:

Additional context:

Logs

kubelet error log:

E1205 06:26:36.535328    2725 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ucloud-k8s-node-1.172dcf7d9d453bf1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ucloud-k8s-node-1", UID:"ucloud-k8s-node-1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ucloud-k8s-node-1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ucloud-k8s-node-1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0db7f49ce5975f1, ext:6778575214, loc:(*time.Location)(0x750fc20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0db7f4cfb59fbb5, ext:19533584178, loc:(*time.Location)(0x750fc20)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://127.0.0.1:6443/api/v1/namespaces/default/events/ucloud-k8s-node-1.172dcf7d9d453bf1": EOF'(may retry after sleeping)
E1205 06:26:36.606656    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:36.707672    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:36.808659    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:36.909299    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.009804    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.110349    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.210870    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.311882    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.412511    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.513515    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
I1205 06:26:37.561401    2725 trace.go:205] Trace[1996720711]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:26:27.551) (total time: 10009ms):
Trace[1996720711]: [10.009784298s] [10.009784298s] END
E1205 06:26:37.561418    2725 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
E1205 06:26:37.613955    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.714441    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.815382    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:37.915892    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.016402    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.117021    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.217231    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.317856    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.418301    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.519005    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
E1205 06:26:38.619898    2725 kubelet.go:2291] "Error getting node" err="node \"ucloud-k8s-node-1\" not found"
I1205 06:26:38.685237    2725 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
I1205 06:26:38.708618    2725 kubelet_node_status.go:554] "Recording event message for node" node="ucloud-k8s-node-1" event="NodeHasSufficientMemory"
I1205 06:26:38.708655    2725 kubelet_node_status.go:554] "Recording event message for node" node="ucloud-k8s-node-1" event="NodeHasNoDiskPressure"
I1205 06:26:38.708666    2725 kubelet_node_status.go:554] "Recording event message for node" node="ucloud-k8s-node-1" event="NodeHasSufficientPID"
I1205 06:26:38.708687    2725 kubelet_node_status.go:71] "Attempting to register node" node="ucloud-k8s-node-1"
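
The repeated EOF on the Patch call to https://127.0.0.1:6443 above is the kubelet talking to the node-local nginx-proxy (the endpoint RKE1 workers use to reach the kube-apiserver) and the proxy closing the connection. A minimal sketch of a request that exercises the same path from the worker, assuming the standard 127.0.0.1:6443 endpoint (/healthz is only an example path):

# Exercise the kubelet's API path through the node-local nginx-proxy.
# Assumes the usual RKE1 worker layout where 127.0.0.1:6443 is nginx-proxy.
import ssl
import urllib.error
import urllib.request

ctx = ssl._create_unverified_context()  # skip certificate verification for this probe
url = "https://127.0.0.1:6443/healthz"  # lightweight endpoint, chosen only as an example

try:
    with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
        print("HTTP", resp.status, resp.read().decode())
except urllib.error.HTTPError as exc:
    print("HTTP error:", exc.code)    # a 401/403 would still prove a live apiserver behind the proxy
except Exception as exc:
    print("Connection failed:", exc)  # an EOF / reset here matches the kubelet log above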



agent log:

time="2022-12-05T05:46:02Z" level=info msg="{\"status\":\"Pull complete\",\"progressDetail\":{},\"id\":\"e0e32a624e9e\"}"
time="2022-12-05T05:46:02Z" level=info msg="{\"status\":\"Digest: sha256:62ffa8e1e1dfc088b40c12438edafc1d34f92f00e5fa6c0342156298136f4d2b\"}"
time="2022-12-05T05:46:02Z" level=info msg="{\"status\":\"Status: Downloaded newer image for rancher/hyperkube:v1.21.13-rancher1\"}"
time="2022-12-05T05:46:10Z" level=info msg="Option requestedHostname=ucloud-k8s-node-1"
time="2022-12-05T05:46:10Z" level=info msg="Option dockerInfo={NVHT:AWK2:DQEH:44FU:EGGA:NWH2:AHRV:KCA7:MPHW:D3D2:5JQ3:LEWW 6 5 0 1 3 overlay2 [[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] [] {[local] [bridge host ipvlan macvlan null overlay] [] [awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} true true true true true true true true true true true true false 59 true 62 2022-12-05T13:46:10.372237867+08:00 json-file cgroupfs 1 0 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core) 7 linux x86_64 https://index.docker.io/v1/ 0xc000fe1c00 2 8167714816 [] /var/lib/docker    ucloud-k8s-node-1 [] false 20.10.21   map[io.containerd.runc.v2:{runc [] <nil>} io.containerd.runtime.v1.linux:{runc [] <nil>} runc:{runc [] <nil>}] runc {  inactive false  [] 0 0 <nil> []} false  docker-init {770bd0108c32f3fb5c73ae1264f7e503fe7b2661 770bd0108c32f3fb5c73ae1264f7e503fe7b2661} {v1.1.4-0-g5fd4c4d v1.1.4-0-g5fd4c4d} {de40ad0 de40ad0} [name=seccomp,profile=default]  [] []}"
time="2022-12-05T05:46:10Z" level=info msg="Option customConfig=map[address:10.23.192.244 internalAddress: label:map[] roles:[worker] taints:[]]"
time="2022-12-05T05:46:10Z" level=info msg="Option etcd=false"
time="2022-12-05T05:46:10Z" level=info msg="Option controlPlane=false"
time="2022-12-05T05:46:10Z" level=info msg="Option worker=true"
time="2022-12-05T05:46:10Z" level=info msg="Option dockerInfo={NVHT:AWK2:DQEH:44FU:EGGA:NWH2:AHRV:KCA7:MPHW:D3D2:5JQ3:LEWW 6 5 0 1 3 overlay2 [[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] [] {[local] [bridge host ipvlan macvlan null overlay] [] [awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} true true true true true true true true true true true true false 59 true 62 2022-12-05T13:46:10.500005219+08:00 json-file cgroupfs 1 0 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core) 7 linux x86_64 https://index.docker.io/v1/ 0xc00046c3f0 2 8167714816 [] /var/lib/docker    ucloud-k8s-node-1 [] false 20.10.21   map[io.containerd.runc.v2:{runc [] <nil>} io.containerd.runtime.v1.linux:{runc [] <nil>} runc:{runc [] <nil>}] runc {  inactive false  [] 0 0 <nil> []} false  docker-init {770bd0108c32f3fb5c73ae1264f7e503fe7b2661 770bd0108c32f3fb5c73ae1264f7e503fe7b2661} {v1.1.4-0-g5fd4c4d v1.1.4-0-g5fd4c4d} {de40ad0 de40ad0} [name=seccomp,profile=default]  [] []}"
time="2022-12-05T05:46:10Z" level=info msg="Option customConfig=map[address:10.23.192.244 internalAddress: label:map[] roles:[worker] taints:[]]"
time="2022-12-05T05:46:10Z" level=info msg="Option etcd=false"
time="2022-12-05T05:46:10Z" level=info msg="Option controlPlane=false"
time="2022-12-05T05:46:10Z" level=info msg="Option worker=true"
time="2022-12-05T05:46:10Z" level=info msg="Option requestedHostname=ucloud-k8s-node-1"
time="2022-12-05T05:46:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:48:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:50:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:52:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:54:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:56:10Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T05:58:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:00:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:02:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:04:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:06:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:08:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:10:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:12:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:14:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:16:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:18:11Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:20:12Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:22:12Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:24:12Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:26:12Z" level=info msg="Plan monitor checking 120 seconds"
time="2022-12-05T06:28:12Z" level=info msg="Plan monitor checking 120 seconds"


kube-proxy log:

I1205 06:26:13.191134    2594 trace.go:205] Trace[208375511]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:26:03.180) (total time: 10010ms):
Trace[208375511]: [10.010918568s] [10.010918568s] END
E1205 06:26:13.191157    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
I1205 06:26:45.094112    2594 trace.go:205] Trace[1751843228]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:26:31.080) (total time: 14013ms):
Trace[1751843228]: [14.013582455s] [14.013582455s] END
E1205 06:26:45.094133    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: an error on the server ("") has prevented the request from succeeding (get endpointslices.discovery.k8s.io)
I1205 06:27:16.537910    2594 trace.go:205] Trace[663680769]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:27:06.527) (total time: 10009ms):
Trace[663680769]: [10.009936076s] [10.009936076s] END
E1205 06:27:16.537932    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
I1205 06:27:37.546970    2594 trace.go:205] Trace[1099172752]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:27:27.535) (total time: 10011ms):
Trace[1099172752]: [10.011795085s] [10.011795085s] END
E1205 06:27:37.546995    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: an error on the server ("") has prevented the request from succeeding (get endpointslices.discovery.k8s.io)
I1205 06:28:08.681843    2594 trace.go:205] Trace[169825981]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:27:58.670) (total time: 10011ms):
Trace[169825981]: [10.011506703s] [10.011506703s] END
E1205 06:28:08.681867    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
I1205 06:28:34.151320    2594 trace.go:205] Trace[1410168929]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:28:24.139) (total time: 10011ms):
Trace[1410168929]: [10.011741069s] [10.011741069s] END
E1205 06:28:34.151342    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: an error on the server ("") has prevented the request from succeeding (get endpointslices.discovery.k8s.io)
I1205 06:29:18.375901    2594 trace.go:205] Trace[1398075604]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:29:08.365) (total time: 10010ms):
Trace[1398075604]: [10.010592132s] [10.010592132s] END
E1205 06:29:18.375923    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
I1205 06:29:43.981050    2594 trace.go:205] Trace[1859755932]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:29:33.968) (total time: 10012ms):
Trace[1859755932]: [10.01214932s] [10.01214932s] END
E1205 06:29:43.981074    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: an error on the server ("") has prevented the request from succeeding (get endpointslices.discovery.k8s.io)
I1205 06:30:09.684305    2594 trace.go:205] Trace[395613517]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:29:59.673) (total time: 10010ms):
Trace[395613517]: [10.010633085s] [10.010633085s] END
E1205 06:30:09.684328    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)


I1205 06:30:48.973962    2594 trace.go:205] Trace[690036373]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2022 06:30:38.963) (total time: 10010ms):
Trace[690036373]: [10.010200102s] [10.010200102s] END
E1205 06:30:48.973985    2594 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: an error on the server ("") has prevented the request from succeeding (get endpointslices.discovery.k8s.io)


nginx-proxy log:

2022/12/05 06:32:03 [error] 13#13: *8448 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:03 [error] 13#13: *8449 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:03 [error] 13#13: *8450 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:04 [error] 13#13: *8451 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:04 [error] 13#13: *8452 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:04 [error] 13#13: *8453 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:04 [error] 13#13: *8454 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:05 [error] 13#13: *8455 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:05 [error] 13#13: *8456 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:05 [error] 13#13: *8457 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:05 [error] 13#13: *8458 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:06 [error] 13#13: *8461 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:06 [error] 13#13: *8462 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:06 [error] 13#13: *8463 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:06 [error] 13#13: *8464 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:07 [error] 13#13: *8465 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:07 [error] 13#13: *8466 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:07 [error] 13#13: *8467 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:08 [error] 13#13: *8459 upstream timed out (110: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "172.16.20.104:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:08 [warn] 13#13: *8459 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "172.16.20.104:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:08 [error] 13#13: *8469 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:08 [error] 13#13: *8470 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:09 [error] 13#13: *8471 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:09 [error] 13#13: *8472 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:10 [error] 13#13: *8459 upstream timed out (110: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "172.16.20.102:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:10 [warn] 13#13: *8459 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "172.16.20.102:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:10 [error] 13#13: *8473 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:11 [error] 13#13: *8474 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:11 [error] 13#13: *8475 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:12 [error] 13#13: *8476 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:12 [error] 13#13: *8477 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:13 [error] 13#13: *8478 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/12/05 06:32:13 [error] 13#13: *8479 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:6443, upstream: "kube_apiserver", bytes from/to client:0/0, bytes from/to upstream:0/0
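
The upstreams nginx-proxy is failing against above are 172.16.20.104:6443 and 172.16.20.102:6443, which look like private Alibaba Cloud addresses. If those addresses are not routable from the UCloud network, nginx-proxy would report exactly this "no live upstreams" error, and the kubelet and kube-proxy failures above would follow. A minimal probe of those two addresses (taken from the log), run from the UCloud worker:

# Probe the control-plane upstreams listed in the nginx-proxy error log above.
# Run this from the UCloud worker node.
import socket

UPSTREAMS = ["172.16.20.104", "172.16.20.102"]
PORT = 6443

for ip in UPSTREAMS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5)
    rc = sock.connect_ex((ip, PORT))  # 0 means the TCP connection succeeded
    status = "reachable" if rc == 0 else f"unreachable (errno {rc})"
    print(f"{ip}:{PORT} -> {status}")
    sock.close()

If both time out from UCloud, that would explain why the node never registers even though the rancher-agent itself connects to Rancher without errors.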