Traffic to a workload with multiple pods is not distributed round-robin or randomly

Rancher version v2.5.7. When a workload in the cluster has more than one pod replica, requests are not distributed round-robin or randomly across the pods: everything is pinned to a single pod and the other pods never receive any requests. I checked the Service that was generated automatically when the workload was created, and it has spec.sessionAffinity: None, so distribution should default to round-robin. Every workload in the cluster behaves this way: as soon as there is more than one pod, all requests go to one pod and none reach the others. I cannot find the cause or any similar cases, so I would appreciate some pointers.
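For context, here is a rough sketch of how the skew can be observed. The Service name, namespace, and ClusterIP are taken from the YAML further down; it assumes kubectl access and that the application writes a log line for each incoming request:

```bash
# 1) Confirm every replica is actually registered behind the Service.
kubectl -n website get endpoints s005-heartbeat -o wide

# 2) Send a batch of requests to the ClusterIP (run from a node or a pod
#    inside the cluster, since the ClusterIP is not reachable from outside).
for i in $(seq 1 20); do curl -s -o /dev/null http://10.43.232.76:3000/; done

# 3) Compare request counts in each replica's logs; in my case only one pod
#    ever showed new entries. (--prefix labels each line with the pod name.)
kubectl -n website logs --prefix --tail=20 \
  -l workload.user.cattle.io/workloadselector=deployment-website-s005-heartbeat
```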

Workload: (screenshot)

Pods: (screenshot)

Service discovery: (screenshot)
The YAML of this Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/ipAddresses: "null"
    field.cattle.io/targetDnsRecordIds: "null"
    field.cattle.io/targetWorkloadIds: '["deployment:website:s005-heartbeat"]'
    workload.cattle.io/targetWorkloadIdNoop: "true"
    workload.cattle.io/workloadPortBased: "true"
  creationTimestamp: "2022-10-31T09:40:51Z"
  labels:
    cattle.io/creator: norman
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/targetWorkloadIds: {}
          f:workload.cattle.io/targetWorkloadIdNoop: {}
          f:workload.cattle.io/workloadPortBased: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"12439c07-c37e-411d-8bde-f5e45e6934a9"}:
            .: {}
            f:apiVersion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":3000,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:workload.user.cattle.io/workloadselector: {}
        f:type: {}
    manager: rancher
    operation: Update
    time: "2022-10-31T09:40:51Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:field.cattle.io/ipAddresses: {}
          f:field.cattle.io/targetDnsRecordIds: {}
      f:spec:
        f:sessionAffinity: {}
    manager: Go-http-client
    operation: Update
    time: "2023-12-10T12:11:33Z"
  name: s005-heartbeat
  namespace: website
  ownerReferences:
  - apiVersion: apps/v1beta2
    controller: true
    kind: deployment
    name: s005-heartbeat
    uid: 12439c07-c37e-411d-8bde-f5e45e6934a9
  resourceVersion: "358515398"
  uid: 94b812ea-2fc6-44d9-9f6e-7fac93c770ad
spec:
  clusterIP: 10.43.232.76
  clusterIPs:
  - 10.43.232.76
  ports:
  - name: 3000tcp02
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    workload.user.cattle.io/workloadselector: deployment-website-s005-heartbeat
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Found the cause, so here is a summary.
In System → kube-proxy → ConfigMap config.conf, the setting was ipvs: scheduler: "sh". "sh" is the source-hashing scheduler: kube-proxy hashes on the source IP address and keeps forwarding requests from the same IP to the real server it picked the first time, which effectively gives session binding. Most likely because our application fronts the workload shown above with an nginx proxy, all requests arrived from the same source IP, so the hash pinned everything to a single backend pod. After changing the setting to "rr", requests are forwarded to the backend pods in round-robin fashion.
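For completeness, a sketch of the same change done with kubectl instead of the Rancher UI. It assumes kube-proxy runs as a DaemonSet in kube-system and reads its configuration from the config.conf key of the kube-proxy ConfigMap (kubeadm-style layout); the object names may differ depending on how your cluster was provisioned:

```bash
# Switch the IPVS scheduler from source hashing ("sh") to round robin ("rr").
kubectl -n kube-system edit configmap kube-proxy
# In the config.conf key, change:
#   ipvs:
#     scheduler: "sh"
# to:
#   ipvs:
#     scheduler: "rr"

# kube-proxy only reads this configuration at startup, so restart it for the
# change to take effect.
kubectl -n kube-system rollout restart daemonset kube-proxy
```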
Background material on how kube-proxy works: https://www.cnblogs.com/fuyuteng/p/11598768.html
ipvs scheduler introduction, ipvsadm introduction, NAT mode setup walkthrough, DR mode setup walkthrough, FireWall Mark and persistent connections - CSDN blog
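To confirm which scheduler is actually in effect, ipvsadm on any node shows the virtual server for the Service together with its scheduler. This is a sketch: kube-proxy must be running in IPVS mode and ipvsadm must be installed on the node:

```bash
# Show the IPVS virtual server for the Service's ClusterIP and port.
# The scheduler appears at the end of the "TCP <ip>:<port> ..." line:
# "sh" before the change, "rr" after.
sudo ipvsadm -Ln | grep -A 5 "10.43.232.76:3000"

# Per-real-server packet/byte counters; with "rr" they should grow evenly
# across all backend pods.
sudo ipvsadm -Ln --stats | grep -A 5 "10.43.232.76:3000"
```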
