Dual-master HA with external datastore: pods on a NotReady node are not automatically destroyed

Environmental Info:
K3s Version: v1.26.15+k3s1

k3s version v1.26.15+k3s1 (13297236)
go version go1.21.8

Node(s) CPU architecture, OS, and Version:

Linux master-1 5.10.59-rt52-00009-g2a31fc69e3-dirty #2 SMP PREEMPT_RT Fri Mar 21 17:10:04 +08 2025 aarch64 GNU/Linux
Cluster Configuration:

2 servers: 6 CPU, 16 GB memory, 60 GB disk
4 agents: 8 CPU, 60 GB memory, 60 GB disk

Describe the bug:

Two master nodes sit behind Nginx + keepalived for load-balanced HA, with an external PostgreSQL datastore. When any one node goes NotReady, the pods that were running on it are recreated on other nodes, but the original pods stay in Terminating and are never destroyed automatically. `kubectl describe pod` shows:
Events:
  Type     Reason        Age  From             Message
  ----     ------        ---  ----             -------
  Warning  NodeNotReady  13m  node-controller  Node is not ready
kubectl logs -f:
Error from server: Get "https://192.168.55.190:10250/containerLogs/default/nginx-deployment-cluster-7b64955fd9-gc4j9/nginx-cluster?follow=true": proxy error from 127.0.0.1:6443 while dialing 192.168.55.190:10250, code 502: 502 Bad Gateway
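The 502 above comes from the apiserver proxying the log request to the kubelet on the NotReady node (192.168.55.190:10250), which is no longer reachable. A minimal check of this, assuming shell access to a healthy node and that `curl` is installed:

```shell
# Probe the kubelet port on the NotReady node directly; if the node is
# down, this fails the same way the apiserver's proxied request does,
# which is why `kubectl logs` returns 502 Bad Gateway.
curl -k --connect-timeout 5 https://192.168.55.190:10250/healthz \
  || echo "kubelet on 192.168.55.190 unreachable"
```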
Steps To Reproduce:

  • Installed K3s:
    PostgreSQL IP address: 192.168.55.210
    Nginx VIP address: 192.168.55.211

master-1 start command:
nohup /usr/local/bin/k3s server \
  --docker \
  --kube-apiserver-arg service-node-port-range=30000-42767 \
  --advertise-address=192.168.55.100 \
  --advertise-port 8443 \
  --node-name master-1 \
  --node-ip 192.168.55.100 \
  --data-dir /data/k3s/rancher/k3s \
  --kubelet-arg=root-dir=/data/k3s/k3skubectl \
  --tls-san jmaster-lb \
  --tls-san 192.168.55.160 \
  --tls-san 192.168.55.100 \
  --tls-san 192.168.55.101 \
  --cluster-cidr "10.42.0.0/16" \
  --service-cidr "172.16.0.0/16" \
  --disable traefik \
  --log /data/k3s/logs/k3smaster.log \
  --write-kubeconfig /home/root/.kube/config \
  --write-kubeconfig-mode 644 \
  --datastore-endpoint="postgres://k3suser:Password@192.168.55.210:5432/k3s?sslmode=disable" &
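Before starting the second server, it can help to confirm the external datastore is reachable with the same connection string. A minimal check, assuming the `psql` client is installed on the master and the credentials above:

```shell
# Verify the external PostgreSQL datastore accepts connections from
# this host using the same URI passed to --datastore-endpoint.
psql "postgres://k3suser:Password@192.168.55.210:5432/k3s?sslmode=disable" -c 'SELECT 1;'
```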

master-2 start command:
nohup /usr/local/bin/k3s server \
  --server https://192.168.55.211:8443 \
  --token-file /data/k3s/rancher/k3s/server/token \
  --docker \
  --kube-apiserver-arg service-node-port-range=30000-42767 \
  --advertise-address=192.168.55.160 \
  --advertise-port 8443 \
  --node-name master-2 \
  --node-ip 192.168.55.160 \
  --data-dir /data/k3s/rancher/k3s \
  --kubelet-arg=root-dir=/data/k3s/k3skubectl \
  --tls-san master-lb \
  --tls-san 192.168.55.160 \
  --tls-san 192.168.55.100 \
  --cluster-cidr "10.42.0.0/16" \
  --service-cidr "172.16.0.0/16" \
  --disable traefik \
  --log /data/k3s/logs/k3smaster2.log \
  --write-kubeconfig /home/root/.kube/config \
  --write-kubeconfig-mode 644 \
  --datastore-endpoint="postgres://k3suser:Password123@192.168.55.210:5432/k3s?sslmode=disable" &

Agent (worker) start command:
nohup /usr/local/bin/k3s agent \
  --docker \
  --server https://192.168.55.211:8443 \
  --token-file /data/k3s/rancher/agent-token \
  --node-name worker-1 \
  --node-ip 192.168.55.190 \
  --data-dir=/data/k3s/k3sdata \
  --kubelet-arg=root-dir=/data/k3s/k3skubectl \
  --log /data/k3s/logs/k3sworker.log &

After startup, kubectl get nodes:
NAME       STATUS   ROLES                  AGE    VERSION
master-1   Ready    control-plane,master   2d1h   v1.26.15+k3s1
master-2   Ready    control-plane,master   2d     v1.26.15+k3s1
worker-1   Ready    <none>                 46h    v1.26.15+k3s1

kubectl get pods -A:
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   local-path-provisioner-7d94548cbf-xrxjf     1/1     Running   0          2d
kube-system   coredns-59b4f5bbd5-m7wdj                    1/1     Running   0          2d
kube-system   metrics-server-696ddd9f98-6hqr7             1/1     Running   0          2d
default       nginx-deployment-cluster-7b64955fd9-7g885   1/1     Running   0          46h
default       nginx-deployment-cluster-7b64955fd9-whd5z   1/1     Running   0          46h

Expected behavior:
After stopping any master or agent node, the pods are recreated on other nodes and the original pods are destroyed.

Actual behavior:
After stopping any master or agent node, the pods are recreated on other nodes, but the original pods are never destroyed.
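As a manual workaround for the behavior above, a pod stuck in Terminating on an unreachable node can be removed from the API server directly. Note the caveat: `--force --grace-period=0` deletes the API object without waiting for the kubelet to confirm the container actually stopped, so it should only be used when the node is known to be down (pod name taken from the logs above):

```shell
# Force-remove the stuck pod object from the API server; the kubelet on
# the NotReady node cannot acknowledge the deletion, so without --force
# the pod stays in Terminating indefinitely.
kubectl delete pod nginx-deployment-cluster-7b64955fd9-gc4j9 \
  -n default --force --grace-period=0
```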

Additional context / logs:

Logs:

Events:
  Type     Reason        Age  From             Message
  ----     ------        ---  ----             -------
  Warning  NodeNotReady  18m  node-controller  Node is not ready

Error from server: Get "https://192.168.55.190:10250/containerLogs/default/nginx-deployment-cluster-7b64955fd9-gc4j9/nginx-cluster?follow=true": proxy error from 127.0.0.1:6443 while dialing 192.168.55.190:10250, code 502: 502 Bad Gateway



The node is already NotReady, which means its kubelet can no longer report the status of any pod on that node to the apiserver; that is why the pod appears stuck in a deleting state.
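The state described above can be inspected from the API side: a pod that has been asked to delete, but whose kubelet has not confirmed termination, carries a `deletionTimestamp` in its metadata while still being listed. A minimal check (pod name assumed from the logs above):

```shell
# A non-empty deletionTimestamp with the pod still listed means the
# deletion was requested but never confirmed by the node's kubelet.
kubectl get pod nginx-deployment-cluster-7b64955fd9-gc4j9 -n default \
  -o jsonpath='{.metadata.deletionTimestamp}'
```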

The current problem is that after an agent node goes NotReady, the original pod stays in Terminating and is never destroyed automatically; its status is Terminating, not "deleting".
