RKE2-deployed Kubernetes: kube-apiserver, etcd, kubelet and other components using excessive resources

Environment information:
RKE2 version:

hts0000@rke2-node1:~$ rke2 -v
rke2 version v1.28.11+rke2r1 (6b12d7a783238b72da8450fa1b6ec587cebb79ed)
go version go1.21.11 X:boringcrypto

Node CPU architecture, operating system and version:
Three virtual machines, each with 2 vCPUs and 4 GB RAM.
Operating system: Ubuntu Server 24.04

Cluster configuration:
Three machines in total, all running as RKE2 server nodes (a 3-server topology).

Problem description:
Resource usage on the cluster nodes is high. Is this level of usage normal?
If I want to run this as a homelab environment, what is the best practice?
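As a sketch of the kind of trimming that might help in a homelab (assuming RKE2's standard /etc/rancher/rke2/config.yaml; which add-ons to disable is my assumption based on the pod list below, not a confirmed recommendation):

# /etc/rancher/rke2/config.yaml on a server node
# Running one server plus two agents (INSTALL_RKE2_TYPE="agent")
# would also avoid running etcd and kube-apiserver on every node.
write-kubeconfig-mode: "0644"
disable:
  - rke2-ingress-nginx                 # bundled ingress, if unused
  - rke2-snapshot-controller           # CSI volume snapshot support,
  - rke2-snapshot-validation-webhook   # optional in a homelab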

# Check node resource usage across the cluster
hts0000@rke2-node1:~$ kubectl top nodes
[sudo] password for hts0000:
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
rke2-node1   1958m        97%    2307Mi          59%
rke2-node2   1824m        91%    2033Mi          52%
rke2-node3   1817m        90%    2036Mi          52%

# Check pod resource usage
hts0000@rke2-node1:~$ kubectl top pods -A
NAMESPACE     NAME                                                CPU(cores)   MEMORY(bytes)
kube-system   cloud-controller-manager-rke2-node1                 22m          12Mi
kube-system   cloud-controller-manager-rke2-node2                 28m          16Mi
kube-system   cloud-controller-manager-rke2-node3                 15m          13Mi
kube-system   etcd-rke2-node1                                     220m         104Mi
kube-system   etcd-rke2-node2                                     244m         120Mi
kube-system   etcd-rke2-node3                                     278m         85Mi
kube-system   kube-apiserver-rke2-node1                           431m         362Mi
kube-system   kube-apiserver-rke2-node2                           476m         414Mi
kube-system   kube-apiserver-rke2-node3                           464m         439Mi
kube-system   kube-controller-manager-rke2-node1                  20m          18Mi
kube-system   kube-controller-manager-rke2-node2                  159m         58Mi
kube-system   kube-controller-manager-rke2-node3                  22m          18Mi
kube-system   kube-proxy-rke2-node1                               8m           64Mi
kube-system   kube-proxy-rke2-node2                               11m          65Mi
kube-system   kube-proxy-rke2-node3                               8m           64Mi
kube-system   kube-scheduler-rke2-node1                           29m          17Mi
kube-system   kube-scheduler-rke2-node2                           30m          15Mi
kube-system   kube-scheduler-rke2-node3                           23m          15Mi
kube-system   rke2-canal-ndm56                                    262m         199Mi
kube-system   rke2-canal-rkbkz                                    256m         200Mi
kube-system   rke2-canal-zwtbk                                    243m         200Mi
kube-system   rke2-coredns-rke2-coredns-559cfcb9c4-9q9cx          15m          63Mi
kube-system   rke2-metrics-server-7cdd8cf4b8-nftc6                26m          22Mi
kube-system   rke2-snapshot-controller-6965b95ffc-fvtsp           4m           39Mi
kube-system   rke2-snapshot-validation-webhook-5fd4d57cdb-8bst2   4m           8Mi

I also went through the kube-apiserver, kubelet, and etcd logs and saw nothing unusual.
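For reference, these logs can be pulled on RKE2 roughly as follows (a sketch assuming the default RKE2 paths):

# rke2-server service log
journalctl -u rke2-server -f

# on RKE2 the kubelet logs to a file rather than journald
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log

# static pod logs (kube-apiserver, etcd) via the bundled crictl
sudo /var/lib/rancher/rke2/bin/crictl \
  --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps
sudo /var/lib/rancher/rke2/bin/crictl \
  --runtime-endpoint unix:///run/k3s/containerd/containerd.sock logs <container-id>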

Which method was used to install the RKE2 cluster?

It was installed with the install script:

curl -sfL https://rancher-mirror.rancher.cn/rke2/install.sh | \
  INSTALL_RKE2_MIRROR=cn sh -
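After the script finishes, the usual quick-start follow-up on a server node (assuming the default unit and file names) is:

sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml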

After hard-rebooting all three servers, resource usage returned to normal.
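If it recurs, a gentler alternative to a hard reboot (assuming the standard RKE2 install locations) might be:

# restart the RKE2 service on each server node, one node at a time
sudo systemctl restart rke2-server

# or tear down all RKE2 processes cleanly, then start the service again
sudo /usr/local/bin/rke2-killall.sh
sudo systemctl start rke2-server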