Pods are not being created in Kubernetes due to a CNI problem.

When deploying Kubernetes, the coredns pods are not created, or their creation takes several hours and periodically fails. The same happens with several other user-created pods. It looks as though a pod cannot reach itself for its liveness or readiness probes.

The error shows:

"networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethweplabf4295: operation not supported"

# kks get po
NAME                                          READY   STATUS              RESTARTS   AGE
coredns-558bd4d5db-ndprk                      0/1     ContainerCreating   0          9m30s
coredns-558bd4d5db-pflq2                      0/1     ContainerCreating   0          9m30s
etcd-cvs-k8s-loganath-05                      1/1     Running             3          42h
kube-apiserver-cvs-k8s-loganath-05            1/1     Running             0          42h
kube-controller-manager-cvs-k8s-loganath-05   1/1     Running             2          42h
kube-proxy-cswbf                              1/1     Running             0          42h
kube-proxy-k72xk                              1/1     Running             0          42h
kube-proxy-m4hqk                              1/1     Running             0          42h
kube-proxy-s9x4m                              1/1     Running             0          42h
kube-proxy-ws2c4                              1/1     Running             0          42h
kube-scheduler-cvs-k8s-loganath-05            1/1     Running             2          42h
weave-net-2r6k7                               2/2     Running             0          42h
weave-net-7ztph                               2/2     Running             1          42h
weave-net-kzwf4                               2/2     Running             0          42h
weave-net-t8dhk                               2/2     Running             0          42h
weave-net-wfpc5                               2/2     Running             0          42h
# kks describe po coredns-558bd4d5db-pflq2
Name:                 coredns-558bd4d5db-pflq2
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 node-01/10.xx.xx.xx
Start Time:           Wed, 14 Dec 2022 10:05:43 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=558bd4d5db
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-558bd4d5db
Containers:
  coredns:
    Container ID:
    Image:         k8s.gcr.io/coredns/coredns:v1.8.0
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gv2n6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-gv2n6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Normal   Scheduled               42s                 default-scheduler  Successfully assigned kube-system/coredns-558bd4d5db-pflq2 to node-01
  Warning  FailedCreatePodSandBox  40s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f84c3de8ad2fca281208c771c00e610ec65fb03fa47f1b90bc2133cb9499ecf0" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethweplf84c3de: operation not supported
  Warning  FailedCreatePodSandBox  39s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "526431ee63d3db41921426bb8541cefc77f58123de9978518c91359ac42695e7" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl526431e: operation not supported
  Warning  FailedCreatePodSandBox  37s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "33d68bfd32d41ab376d70bc35c68bc034b252390fb5e3713773a21b5ed249926" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl33d68bf: operation not supported
  Warning  FailedCreatePodSandBox  35s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab5fc84a20aad88c7dccab28357a2eb4ab68b4c69370626ad5cfbba208309a10" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethweplab5fc84: operation not supported
  Warning  FailedCreatePodSandBox  33s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "88990cec41e53123339dc51fd78c3400b00a3ca7a886d063e994fdba03798669" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl88990ce: operation not supported
  Warning  FailedCreatePodSandBox  30s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b017f7db2ea3ba4de2877c2b4e129d0e846f2fb764a433331eebd02511727609" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethweplb017f7d: operation not supported
  Warning  FailedCreatePodSandBox  28s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "56cc826b280f2b7fc363b9f9bd72c3b7762b7869af3d49027f6092c73a73e8c8" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl56cc826: operation not supported
  Warning  FailedCreatePodSandBox  26s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ef2194ec2eadeb840e51b80e4b0e4253fead4bdd7655c4e144e747fda81772c" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl9ef2194: operation not supported
  Warning  FailedCreatePodSandBox  24s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7fd35ab52215a9e9f1c0003f944b99e2c08b51cb3a191c3785f400a8c73eee96" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethwepl7fd35ab: operation not supported
  Normal   SandboxChanged          17s (x12 over 40s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  16s (x4 over 22s)   kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "abf42958598d5bd5cc45db3f309c6a208b83af748d0a1f7392bfa22ce47a0c5c" network for pod "coredns-558bd4d5db-pflq2": networkPlugin cni failed to set up pod "coredns-558bd4d5db-pflq2_kube-system" network: unable to set hairpin mode to true for bridge side of veth vethweplabf4295: operation not supported
# k get no -owide
NAME      STATUS   ROLES                  AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
node-01   Ready    <none>                 42h   v1.21.13   10.xx.xx.1      <none>        Ubuntu 18.04.6 LTS   4.15.0-200-generic   docker://18.6.3
node-02   Ready    <none>                 42h   v1.21.13   10.xx.xx.2      <none>        Ubuntu 18.04.6 LTS   4.15.0-200-generic   docker://18.6.3
node-03   Ready    <none>                 42h   v1.21.13   10.xx.xx.3      <none>        Ubuntu 18.04.6 LTS   4.15.0-200-generic   docker://18.6.3
node-04   Ready    <none>                 42h   v1.21.13   10.xx.xx.4      <none>        Ubuntu 18.04.6 LTS   4.15.0-200-generic   docker://18.6.3
node-05   Ready    control-plane,master   42h   v1.21.13   10.xx.xx.5      <none>        Ubuntu 18.04.6 LTS   4.15.0-200-generic   docker://18.6.3

I followed the documentation and tried adding this option to the kubelet:

--hairpin-mode=promiscuous-bridge

but it still did not work.
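For reference, this is roughly how the flag attempt and a low-level check can be done. The drop-in path below is an assumption (on kubeadm clusters the extra-args file is commonly `/etc/default/kubelet`; verify on your nodes), and note that `--hairpin-mode` only governs bridges that the kubelet itself manages (kubenet/dockershim) — with a CNI plugin such as Weave Net, hairpin mode on the veth pair is set by the plugin, so the kubelet flag may simply have no effect here. The sysfs loop is a diagnostic sketch: if writing `hairpin_mode` on the bridge ports fails with "operation not supported", the limitation is in the kernel/bridge layer rather than in the Kubernetes configuration.

```shell
# Assumed extra-args location for a kubeadm-managed kubelet -- verify first.
echo 'KUBELET_EXTRA_ARGS=--hairpin-mode=promiscuous-bridge' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Inspect (and try to toggle) hairpin mode on the ports attached to the
# weave bridge; each port directory under brif/ exposes a hairpin_mode file.
for port in /sys/class/net/weave/brif/*; do
  echo "$port: hairpin_mode=$(cat "$port/hairpin_mode")"
done
```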

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8d:30:b9 brd ff:ff:ff:ff:ff:ff
    inet 10.xx.xx.5/20 brd 10.xx.xx.255 scope global dynamic ens192
       valid_lft 63182sec preferred_lft 63182sec
    inet6 fe80::250:56ff:fe8d:30b9/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:df:01:2b:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 5e:27:45:17:e4:58 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5c27:45ff:fe17:e458/64 scope link
       valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:07:10:24:cb:9d brd ff:ff:ff:ff:ff:ff
    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::5c07:10ff:fe24:cb9d/64 scope link
       valid_lft forever preferred_lft forever
8: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether 66:68:4d:77:1f:e4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6468:4dff:fe77:1fe4/64 scope link
       valid_lft forever preferred_lft forever
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 3a:54:19:3b:99:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3854:19ff:fe3b:994b/64 scope link
       valid_lft forever preferred_lft forever
10: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 5a:23:1a:00:81:23 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5823:1aff:fe00:8123/64 scope link
       valid_lft forever preferred_lft forever
