Rancher Server Setup
- Rancher version: 2.6.4
- Installation option (Docker install/Helm Chart): Helm chart
- If Helm Chart, Kubernetes cluster type (RKE1, RKE2, k3s, EKS, etc.) and version of the Local cluster: RKE1, RKE v1.3.9, Kubernetes v1.22.7
- Online or air-gapped installation: air-gapped
Downstream Cluster Information
- Kubernetes version: 1.22.7
- Cluster Type (Local/Downstream): Downstream
- If Downstream, what type of cluster? (Custom/Imported or Hosted, etc.): custom RKE cluster
User Information
- What is the role of the logged-in user? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom): Admin
- If custom, the custom permission set:
Describe the bug:
I created a local volume with a local PV. When the pod starts, it reports that the volume backing the PV cannot be found. Following the official guidance, I updated cluster.yaml, but the same error is still reported.
The YAML files are as follows:
[rancher@fotileappmaster01 ~]$ cat pv-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /home/rancher/k8s/localpv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 10.11.111.137
[rancher@fotileappmaster01 ~]$ cat local-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
[rancher@fotileappmaster01 ~]$ cat pvc-local.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
[rancher@fotileappmaster01 ~]$ cat pod-local-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginxdemo
spec:
  hostname: nginxdemo
  volumes:
  - name: pvc-local-pv
    persistentVolumeClaim:
      claimName: pvc-local
  containers:
  - name: nginx
    image: docker.io/nginx:alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: pvc-local-pv
      mountPath: /usr/share/nginx/html
[rancher@fotileappmaster01 rke2-cluster]$ cat config.yml
nodes:
- address: 10.11.111.134            # air-gapped node IP
  internal_address: 10.11.111.134   # node internal IP
  user: rancher
  role: ["controlplane", "etcd"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
- address: 10.11.111.135            # air-gapped node IP
  internal_address: 10.11.111.135   # node internal IP
  user: rancher
  role: ["controlplane", "etcd"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
- address: 10.11.111.136            # air-gapped node IP
  internal_address: 10.11.111.136   # node internal IP
  user: rancher
  role: ["controlplane", "etcd"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
- address: 10.11.111.137            # air-gapped node IP
  internal_address: 10.11.111.137   # node internal IP
  user: rancher
  role: ["worker"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
- address: 10.11.111.138            # air-gapped node IP
  internal_address: 10.11.111.138   # node internal IP
  user: rancher
  role: ["worker"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
- address: 10.11.111.139            # air-gapped node IP
  internal_address: 10.11.111.139   # node internal IP
  user: rancher
  role: ["worker"]
  ssh_key_path: /home/rancher/.ssh/id_rsa
network:
  plugin: calico
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
private_registries:
- url: harbor.tkg.com               # private registry address
  user: admin
  password: "P@ssw0rd"
  is_default: true
services:
  kubelet:
    extra_binds:
    - "/home/rancher/k8s/localpv:/home/rancher/k8s/localpv"
    - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins:z"
To Reproduce:
./rke up --update-only --config config.yml
kubectl apply -f pv-local.yaml
kubectl apply -f local-storageclass.yaml
kubectl apply -f pvc-local.yaml
kubectl apply -f pod-local-pv.yaml
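Before the apply steps above, it may be worth a pre-flight check that the directory backing the PV already exists on the target worker node. The sketch below takes the path and node from pv-local.yaml (the `PV_PATH` variable is my own, overridable for testing); kubelet does not create this directory itself.

```shell
# Pre-flight check (sketch): a local PV is only mountable if spec.local.path
# already exists on the node named in its nodeAffinity (10.11.111.137 here).
# Run this on the worker node; PV_PATH defaults to the path from pv-local.yaml.
PV_PATH="${PV_PATH:-/home/rancher/k8s/localpv}"
if [ -d "$PV_PATH" ]; then
  echo "ok: $PV_PATH exists"
else
  echo "missing: $PV_PATH -- kubelet will report 'path ... does not exist'"
fi
```

This is only a sanity check, not part of the original reproduction steps.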
Result:
[rancher@fotileappmaster01 rke2-cluster]$ cat config.rkestate | grep extraBinds -C 5
"scheduler": {
"image": "harbor.tkg.com/rancher/hyperkube:v1.22.7-rancher1"
},
"kubelet": {
"image": "harbor.tkg.com/rancher/hyperkube:v1.22.7-rancher1",
"extraBinds": [
"/home/rancher/k8s/localpv:/home/rancher/k8s/localpv",
"/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins:z"
],
"clusterDomain": "cluster.local",
"infraContainerImage": "harbor.tkg.com/rancher/mirrored-pause:3.6",
--
"scheduler": {
"image": "harbor.tkg.com/rancher/hyperkube:v1.22.7-rancher1"
},
"kubelet": {
"image": "harbor.tkg.com/rancher/hyperkube:v1.22.7-rancher1",
"extraBinds": [
"/home/rancher/k8s/localpv:/home/rancher/k8s/localpv",
"/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins:z"
],
"clusterDomain": "cluster.local",
"infraContainerImage": "harbor.tkg.com/rancher/mirrored-pause:3.6",
[rancher@fotileappmaster01 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
pv-local   5Gi        RWO            Delete           Bound    default/pvc-local   local-storage            15m
pvdemo     1Gi        RWO,RWX        Delete           Bound    default/pvcdemo                              84m
[rancher@fotileappmaster01 ~]$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pvc-local   Bound    pv-local   5Gi        RWO            local-storage   15m
pvcdemo     Bound    pvdemo     1Gi        RWO,RWX                        84m
[rancher@fotileappmaster01 ~]$ kubectl get sc
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  15m
[rancher@fotileappmaster01 ~]$ kubectl get po
NAME                          READY   STATUS              RESTARTS        AGE
deploydemo-77b64b85bb-vf6kz   1/1     Running             0               84m
nginx-6fdfb68959-8wv6w        1/1     Running             1 (7h33m ago)   12d
nginx-6fdfb68959-kplrs        1/1     Running             1 (7h34m ago)   12d
nginx-6fdfb68959-w47c4        1/1     Running             1 (7h34m ago)   12d
nginxdemo                     0/1     ContainerCreating   0               15m
[rancher@fotileappmaster01 ~]$ kubectl describe po nginxdemo
.....
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  pvc-local-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-local
    ReadOnly:   false
  kube-api-access-qfkxv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    15m                  default-scheduler  Successfully assigned default/nginxdemo to 10.11.111.137
  Warning  FailedMount  6m44s                kubelet            Unable to attach or mount volumes: unmounted volumes=[pvc-local-pv], unattached volumes=[kube-api-access-qfkxv pvc-local-pv]: timed out waiting for the condition
  Warning  FailedMount  2m15s (x5 over 13m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[pvc-local-pv], unattached volumes=[pvc-local-pv kube-api-access-qfkxv]: timed out waiting for the condition
  Warning  FailedMount  74s (x15 over 15m)   kubelet            MountVolume.NewMounter initialization failed for volume "pv-local" : path "/home/rancher/k8s/localpv" does not exist
Expected Result:
The pod mounts the local PV and starts normally.
Screenshots:
Additional context:
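For what it's worth, the final FailedMount event ('path "/home/rancher/k8s/localpv" does not exist') is raised by kubelet's local-volume mounter when it cannot stat `spec.local.path` on the node. A minimal sketch of the suspected cause and fix, assuming the directory was simply never created on the worker node (an `extra_binds` entry only bind-mounts an existing host path into the kubelet container; it does not create it):

```shell
# Suspected fix (sketch): create the directory on every node listed in the
# PV's nodeAffinity -- here only 10.11.111.137 -- for example:
#
#     ssh rancher@10.11.111.137 'mkdir -p /home/rancher/k8s/localpv'
#
# If the directory does already exist on the host, check instead that the
# kubelet container was recreated with the new bind (e.g. `docker inspect
# kubelet` on the node and look for the localpv mount).
# Demonstrated below against a scratch root so the sketch runs anywhere:
ROOT=$(mktemp -d)                       # stand-in for the node's filesystem
mkdir -p "$ROOT/home/rancher/k8s/localpv"
ls -ld "$ROOT/home/rancher/k8s/localpv"
```

Once the path exists, kubelet retries the mount on its own, so the pod should leave ContainerCreating without being recreated.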