Rancher v2.13.0: local docker run installation fails

[root@master ~]# docker run -itd --name=rancher --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.13.0 --no-cacerts
a4b790c3dbe8b7be8551d401fd31e04ec125400bd096e2d3723f5da58847ef16
[root@master ~]# docker logs -f rancher
Restoring git repositories:

  • /var/lib/rancher-data/local-catalogs/v2/rancher-charts/4b40cac650031b74776e87c1a726b0484d0877c3ec137da0872547ff9b73a721/.git
    Updating files: 100% (48591/48591), done.
    Your branch is up to date with ‘origin/release-v2.13’.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-rke2-charts/675f1b63a0a83905972dcab2794479ed599a6f41b86cd6193d69472d0fa889c9/.git
    Updating files: 100% (36759/36759), done.
    Your branch is up to date with ‘origin/main’.
    /var/lib/rancher
  • /var/lib/rancher-data/local-catalogs/v2/rancher-partner-charts/8f17acdce9bffd6e05a58a3798840e408c4ea71783381ecd2e9af30baad65974/.git
    Updating files: 100% (2327/2327), done.
    Your branch is up to date with ‘origin/main’.
    /var/lib/rancher
    2025/12/03 01:30:04 [INFO] Rancher version v2.13.0 (f94ac947f75e312f1ab9217d21b2770b48b734c8) is starting
    2025/12/03 01:30:04 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:true AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLogLevel:0 AuditLogEnabled:false Features: ClusterRegistry: AggregationRegistrationTimeout:5m0s}
    2025/12/03 01:30:04 [INFO] Listening on /tmp/log.sock
    2025/12/03 01:30:04 [INFO] Waiting for k3s to start
    2025/12/03 01:30:05 [INFO] Waiting for k3s to start
    2025/12/03 01:30:06 [INFO] Waiting for k3s to start
    2025/12/03 01:30:07 [INFO] Waiting for k3s to start
    2025/12/03 01:30:08 [INFO] Waiting for k3s to start
    2025/12/03 01:30:09 [INFO] Waiting for k3s to start
    2025/12/03 01:30:10 [INFO] Waiting for k3s to start
    2025/12/03 01:30:11 [INFO] Waiting for k3s to start
    2025/12/03 01:30:12 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:14 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:16 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:18 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:20 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:22 [INFO] Waiting for server to become available: the server is currently unable to handle the request
    2025/12/03 01:30:29 [INFO] Running in single server mode, will not peer connections
    2025/12/03 01:30:29 [INFO] Scanning NodeTemplates in namespace: cattle-global-nt, group: nodetemplates.management.cattle.io
    2025/12/03 01:30:29 [INFO] Scanning ClusterTemplates in namespace: cattle-global-data, group: clustertemplates.management.cattle.io
    2025/12/03 01:30:29 [INFO] [deferred-capi - WaitForClient] waiting for CAPI CRDs to be established…
    2025/12/03 01:30:29 [INFO] [deferred-ext] WaitForClient starting waiter for EXT api-service availability
    I1203 01:30:29.663478 43 warnings.go:110] “Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice”
    I1203 01:30:29.666151 43 warnings.go:110] “Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice”
    I1203 01:30:29.668138 43 warnings.go:110] “Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice”
    2025/12/03 01:30:29 [INFO] Applying CRD features.management.cattle.io
    2025/12/03 01:30:29 [INFO] Waiting for CRD features.management.cattle.io to become available
    2025/12/03 01:30:30 [INFO] Done waiting for CRD features.management.cattle.io to become available
    2025/12/03 01:30:30 [FATAL] k3s exited with: exit status 1
    [root@master ~]#

The last line above is the actual failure: the K3s cluster embedded inside the rancher container failed to start ([FATAL] k3s exited with: exit status 1), so we have to look at the k3s logs to continue troubleshooting.

The container started by docker run embeds a K3s cluster. To see its logs, restart the rancher container, exec into it, and follow k3s.log until the container crashes again.
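The steps above can be sketched as the following commands. The container name `rancher` matches the docker run command at the top; the exact location of k3s.log inside the container is not shown in the transcript, so the sketch locates it with find rather than assuming a path:

```shell
# Restart the container so k3s attempts to start again.
docker restart rancher

# Exec into the container before it crashes again.
docker exec -it rancher bash

# Inside the container: locate the embedded k3s log (path varies by
# image version, so search for it instead of guessing), then follow it.
find / -name k3s.log 2>/dev/null
tail -f "$(find / -name k3s.log 2>/dev/null | head -1)"
```

If the container crashes too quickly to exec in, `docker cp rancher:<path-to-k3s.log> .` against the stopped container is an alternative way to retrieve the same file.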

{“level”:“info”,“ts”:“2025-12-19T06:56:58.409511Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/raft.go:970”,“msg”:“f3a63d70aa643d76 became leader at term 12”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.409517Z”,“logger”:“raft”,“caller”:“v3@v3.6.0/node.go:370”,“msg”:“raft.node: f3a63d70aa643d76 elected leader f3a63d70aa643d76 at term 12”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.410329Z”,“caller”:“etcdserver/server.go:1806”,“msg”:“published local member to cluster through raft”,“local-member-id”:“f3a63d70aa643d76”,“local-member-attributes”:“{Name:a8845cde795d-9cfd1972 ClientURLs:[http://127.0.0.1:2399]}”,“cluster-id”:“908a6d98e7123f06”,“publish-timeout”:“15s”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.410374Z”,“caller”:“embed/serve.go:138”,“msg”:“ready to serve client requests”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.410420Z”,“caller”:“embed/serve.go:138”,“msg”:“ready to serve client requests”}
{“level”:“warn”,“ts”:“2025-12-19T06:56:58.410631Z”,“caller”:“v3rpc/grpc.go:52”,“msg”:“etcdserver: failed to register grpc metrics”,“error”:“descriptor Desc{fqName: "grpc_server_msg_sent_total", help: "Total number of gRPC stream messages sent by the server.", constLabels: {}, variableLabels: {grpc_type,grpc_service,grpc_method}} already exists with the same fully-qualified name and const label values”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.410743Z”,“caller”:“embed/serve.go:220”,“msg”:“serving client traffic insecurely; this is strongly discouraged!”,“traffic”:“http”,“address”:“127.0.0.1:2402”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.410869Z”,“caller”:“v3rpc/health.go:63”,“msg”:“grpc service status changed”,“service”:“”,“status”:“SERVING”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.414407Z”,“caller”:“embed/serve.go:220”,“msg”:“serving client traffic insecurely; this is strongly discouraged!”,“traffic”:“grpc”,“address”:“127.0.0.1:2399”}
time=“2025-12-19T06:56:58Z” level=info msg=“Connected to etcd v3.6.4 - datastore using 1531904 of 1544192 bytes”
time=“2025-12-19T06:56:58Z” level=info msg=“Defragmenting etcd database”
{“level”:“info”,“ts”:“2025-12-19T06:56:58.415760Z”,“caller”:“v3rpc/maintenance.go:110”,“msg”:“starting defragment”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.416526Z”,“caller”:“backend/backend.go:522”,“msg”:“defragmenting”,“path”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db”,“current-db-size-bytes”:1544192,“current-db-size”:“1.5 MB”,“current-db-size-in-use-bytes”:1531904,“current-db-size-in-use”:“1.5 MB”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.425952Z”,“logger”:“bbolt”,“caller”:“backend/backend.go:574”,“msg”:“Opening db file (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc0004b7ce8}”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.426363Z”,“logger”:“bbolt”,“caller”:“bbolt@v1.4.2/db.go:321”,“msg”:“Opening bbolt db (/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db) successfully”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.426479Z”,“caller”:“backend/backend.go:592”,“msg”:“finished defragmenting directory”,“path”:“/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db”,“current-db-size-bytes-diff”:-8192,“current-db-size-bytes”:1536000,“current-db-size”:“1.5 MB”,“current-db-size-in-use-bytes-diff”:-4096,“current-db-size-in-use-bytes”:1527808,“current-db-size-in-use”:“1.5 MB”,“took”:“10.672049ms”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.426495Z”,“caller”:“v3rpc/maintenance.go:118”,“msg”:“finished defragment”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.426503Z”,“caller”:“v3rpc/health.go:63”,“msg”:“grpc service status changed”,“service”:“”,“status”:“SERVING”}
time=“2025-12-19T06:56:58Z” level=info msg=“Datastore using 1527808 of 1536000 bytes after defragment”
time=“2025-12-19T06:56:58Z” level=info msg=“etcd temporary data store connection OK”
time=“2025-12-19T06:56:58Z” level=info msg=“Reconciling bootstrap data between datastore and disk”
time=“2025-12-19T06:56:58Z” level=info msg=“stopping etcd”
{“level”:“info”,“ts”:“2025-12-19T06:56:58.433646Z”,“caller”:“embed/etcd.go:426”,“msg”:“closing etcd server”,“name”:“a8845cde795d-9cfd1972”,“data-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp”,“advertise-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”]}
{“level”:“error”,“ts”:“2025-12-19T06:56:58.433766Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“http: Server closed”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/serve.go:90”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.434100Z”,“caller”:“etcdserver/server.go:1281”,“msg”:“skipped leadership transfer for single voting member cluster”,“local-member-id”:“f3a63d70aa643d76”,“current-leader-member-id”:“f3a63d70aa643d76”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.434161Z”,“caller”:“etcdserver/server.go:2321”,“msg”:“server has stopped; stopping cluster version’s monitor”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.434184Z”,“caller”:“etcdserver/server.go:2344”,“msg”:“server has stopped; stopping storage version’s monitor”}
{“level”:“error”,“ts”:“2025-12-19T06:56:58.434192Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“accept tcp 127.0.0.1:2402: use of closed network connection”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:906”}
time=“2025-12-19T06:56:58Z” level=info msg=“certificate CN=kube-apiserver signed by CN=k3s-server-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:58 +0000 UTC”
{“level”:“info”,“ts”:“2025-12-19T06:56:58.436200Z”,“caller”:“embed/etcd.go:621”,“msg”:“stopping serving peer traffic”,“address”:“127.0.0.1:2400”}
{“level”:“error”,“ts”:“2025-12-19T06:56:58.436270Z”,“caller”:“embed/etcd.go:912”,“msg”:“setting up serving from embedded etcd failed.”,“error”:“accept tcp 127.0.0.1:2400: use of closed network connection”,“stacktrace”:“go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\t/go/pkg/mod/github.com/k3s-io/etcd/server/v3@v3.6.4-k3s3/embed/etcd.go:906”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.436310Z”,“caller”:“embed/etcd.go:626”,“msg”:“stopped serving peer traffic”,“address”:“127.0.0.1:2400”}
{“level”:“info”,“ts”:“2025-12-19T06:56:58.436321Z”,“caller”:“embed/etcd.go:428”,“msg”:“closed etcd server”,“name”:“a8845cde795d-9cfd1972”,“data-dir”:“/var/lib/rancher/k3s/server/db/etcd-tmp”,“advertise-peer-urls”:[“http://127.0.0.1:2400”],“advertise-client-urls”:[“http://127.0.0.1:2399”]}
time=“2025-12-19T06:56:58Z” level=info msg=“certificate CN=etcd-peer signed by CN=etcd-peer-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:58 +0000 UTC”
time=“2025-12-19T06:56:58Z” level=info msg=“certificate CN=etcd-server signed by CN=etcd-server-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:58 +0000 UTC”
time=“2025-12-19T06:56:58Z” level=info msg=“certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:58 +0000 UTC”
time=“2025-12-19T06:56:58Z” level=warning msg=“dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request”
time=“2025-12-19T06:56:58Z” level=info msg=“Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.6:172.17.0.6 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-local-node:local-node listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=83E1F8949B37A718649E517F530D6CF0794D674F]”
time=“2025-12-19T06:56:59Z” level=info msg=“Password verified locally for node local-node”
time=“2025-12-19T06:56:59Z” level=info msg=“certificate CN=local-node signed by CN=k3s-server-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:59 +0000 UTC”
time=“2025-12-19T06:56:59Z” level=info msg=“certificate CN=system:node:local-node,O=system:nodes signed by CN=k3s-client-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:59 +0000 UTC”
time=“2025-12-19T06:56:59Z” level=info msg=“certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:59 +0000 UTC”
time=“2025-12-19T06:56:59Z” level=info msg=“certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1766127252: notBefore=2025-12-19 06:54:12 +0000 UTC notAfter=2026-12-19 06:56:59 +0000 UTC”
time=“2025-12-19T06:56:59Z” level=info msg=“Module overlay was already loaded”
time=“2025-12-19T06:56:59Z” level=info msg=“Module nf_conntrack was already loaded”
time=“2025-12-19T06:56:59Z” level=info msg=“Module br_netfilter was already loaded”
time=“2025-12-19T06:56:59Z” level=warning msg=“Failed to load kernel module iptable_nat with modprobe”
time=“2025-12-19T06:56:59Z” level=warning msg=“Failed to load kernel module iptable_filter with modprobe”
time=“2025-12-19T06:56:59Z” level=warning msg=“Failed to load kernel module nft-expr-counter with modprobe”
time=“2025-12-19T06:56:59Z” level=warning msg=“Failed to load kernel module nfnetlink-subsys-11 with modprobe”
time=“2025-12-19T06:56:59Z” level=warning msg=“Failed to load kernel module nft-chain-2-nat with modprobe”
time=“2025-12-19T06:56:59Z” level=info msg=“Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_established’ to 86400”
time=“2025-12-19T06:56:59Z” level=info msg=“Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_close_wait’ to 3600”
time=“2025-12-19T06:56:59Z” level=info msg=“Set sysctl ‘net/netfilter/nf_conntrack_max’ to 262144”
time=“2025-12-19T06:56:59Z” level=error msg=“Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied”
time=“2025-12-19T06:56:59Z” level=info msg=“Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log”
time=“2025-12-19T06:56:59Z” level=info msg=“Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml”
time=“2025-12-19T06:56:59Z” level=info msg=“Polling for API server readiness: GET /readyz failed: the server is currently unable to handle the request”
bash-4.4# cat k3s.log

I1219 06:55:26.260823 172 state_mem.go:96] “Updated CPUSet assignments” assignments={}
I1219 06:55:26.260851 172 policy_none.go:49] “None policy: Start”
I1219 06:55:26.260860 172 memory_manager.go:187] “Starting memorymanager” policy=“None”
I1219 06:55:26.260873 172 state_mem.go:36] “Initializing new in-memory state store”
I1219 06:55:26.261021 172 state_mem.go:77] “Updated machine memory state”
I1219 06:55:26.261138 172 policy_none.go:47] “Start”
E1219 06:55:26.269840 172 manager.go:513] “Failed to read data from checkpoint” err=“checkpoint is not found” checkpoint=“kubelet_internal_checkpoint”
I1219 06:55:26.270031 172 eviction_manager.go:189] “Eviction manager: starting control loop”
I1219 06:55:26.270042 172 container_log_manager.go:146] “Initializing container log rotate workers” workers=1 monitorPeriod=“10s”
I1219 06:55:26.271023 172 plugin_manager.go:118] “Starting Kubelet Plugin Manager”
E1219 06:55:26.274367 172 eviction_manager.go:267] “eviction manager: failed to check if we have separate container filesystem. Ignoring.” err=“no imagefs label for configured runtime”
E1219 06:55:26.280368 172 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/k3s\": failed to get container info for \"/k3s\": unknown container \"/k3s\"" containerName="/k3s"
I1219 06:55:26.322718 172 shared_informer.go:356] “Caches are synced” controller=“node informer cache”
I1219 06:55:26.322852 172 server.go:219] “Successfully retrieved NodeIPs” NodeIPs=[“172.17.0.6”]
E1219 06:55:26.322887 172 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using --nodeport-addresses primary"
E1219 06:55:26.326405 172 server.go:135] "Error running ProxyServer" err=<
iptables is not available on this host : error listing chain "POSTROUTING" in table "nat": exit status 3: iptables v1.8.11 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Error: iptables is not available on this host : error listing chain "POSTROUTING" in table "nat": exit status 3: iptables v1.8.11 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Usage:
  kube-proxy [flags]

Flags:
      --add_dir_header                        If true, adds the file directory to the header of the log messages
      --allow_dynamic_housekeeping            Whether to allow the housekeeping interval to be dynamic (default true)
      --alsologtostderr                       log to standard error as well as files (no effect when -logtostderr=true)
      --application_metrics_count_limit int   Max number of application metrics to store (per container) (default 100)

Reference: https://stackoverflow.com/questions/21983554/iptables-v1-4-14-cant-initialize-iptables-table-nat-table-does-not-exist-d
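Putting the log evidence together: inside the container k3s warned "Failed to load kernel module iptable_nat with modprobe", and kube-proxy then died because the iptables nat table does not exist. A container cannot load kernel modules for itself, so the fix belongs on the Docker host. A minimal sketch, assuming a modular kernel that ships the legacy iptables modules (module and file names below are the common convention, verify against your distribution):

```shell
# Run on the Docker HOST, not inside the container.
# Load the netfilter modules that k3s's kube-proxy needs.
sudo modprobe iptable_nat
sudo modprobe iptable_filter

# Verify the nat table is now usable (should list PREROUTING/POSTROUTING chains).
sudo iptables -t nat -L -n

# Optionally persist the modules across reboots.
printf 'iptable_nat\niptable_filter\n' | sudo tee /etc/modules-load.d/iptables.conf

# Recreate the Rancher container with the same options as before.
docker rm -f rancher
docker run -itd --name=rancher --restart=unless-stopped -p 80:80 -p 443:443 \
  --privileged rancher/rancher:v2.13.0 --no-cacerts
```

If modprobe itself fails, the host kernel was likely built without these modules (for example an nftables-only or minimal cloud kernel), and upgrading or reconfiguring the kernel as discussed in the StackOverflow thread above is the remaining option.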