Calico VXLAN Cross-Node Communication

User-contributed post · 621 · 2022-09-11


Installing Calico in VXLAN mode

In the deployment manifest, change CrossSubnet to vxlan:

# change CALICO_IPV4POOL_VXLAN
- name: CALICO_IPV4POOL_VXLAN
  value: "CrossSubnet"
# becomes
- name: CALICO_IPV4POOL_VXLAN
  value: "vxlan"
# keep the CIDR consistent with the cluster's default pod CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

Confirm that the backend is set to vxlan:

calico_backend: "vxlan"

Deploy:

kubectl apply -f calico-vxlan.yaml

We can confirm that Calico's VXLAN mode does not use BGP to maintain routes, which is a fundamental difference from IPIP mode:

[root@master ~]# calicoctl node status
Calico process is running.
The BGP backend process (BIRD) is not running.

Current environment

[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          69s   10.244.103.68   node2.whale.com
pod3   1/1     Running   0          48s   10.244.42.71    node1.whale.com

pod1   10.244.103.68   node2   192.168.0.82
pod3   10.244.42.71    node1   192.168.0.81

pod1 and its node: cali interface and routing table

# pod1 and its node's cali interface and routing table
[root@master ~]# kubectl exec -it pod1 -- ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 26:7A:E7:8B:C4:48
          inet addr:10.244.103.68  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:810 (810.0 B)  TX bytes:364 (364.0 B)

[root@master ~]# kubectl exec -it pod1 -- ethtool -S eth0
NIC statistics:
     peer_ifindex: 7
     rx_queue_0_xdp_packets: 0
     rx_queue_0_xdp_bytes: 0
     rx_queue_0_xdp_drops: 0

[root@master ~]# kubectl exec -it pod1 -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0

[root@node2 ~]# ip link show | grep ^7
7: calice0906292e2@if3: mtu 1450 qdisc noqueue state UP mode DEFAULT group default

[root@node2 ~]# ip link show vxlan.calico
6: vxlan.calico: mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 66:78:26:30:e8:cf brd ff:ff:ff:ff:ff:ff

[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.244.42.64    10.244.42.64    255.255.255.192 UG    0      0        0 vxlan.calico

[root@node2 ~]# ifconfig vxlan.calico
vxlan.calico: flags=4163  mtu 1450
        inet 10.244.103.64  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6478:26ff:fe30:e8cf  prefixlen 64  scopeid 0x20
        ether 66:78:26:30:e8:cf  txqueuelen 0  (Ethernet)
        RX packets 1  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 84 (84.0 B)
        TX errors 0  dropped 11  overruns 0  carrier 0  collisions 0
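Given node2's routing table above, route selection for pod3's address works by longest-prefix match. The sketch below is a minimal illustration, not Calico code; the `lookup` helper and the default-route entry are assumptions added for completeness:

```python
# Sketch: how node2 picks the vxlan.calico route for pod3's IP.
# The /26 route is copied from node2's routing table shown above;
# the default route is an illustrative assumption.
import ipaddress

routes = [
    ("10.244.42.64/26", "vxlan.calico"),   # pod subnet hosted on node1
    ("0.0.0.0/0", "ens33"),                # assumed default route
]

def lookup(dst, table):
    """Return the egress device of the longest-prefix match for dst."""
    ip = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(net) for net, _ in table
         if ip in ipaddress.ip_network(net)),
        key=lambda n: n.prefixlen,
    )
    return dict(table)[str(best)]

print(lookup("10.244.42.71", routes))  # -> vxlan.calico
```

Because 10.244.42.71 falls inside 10.244.42.64/26, the /26 entry wins over the default route and the packet is handed to vxlan.calico for encapsulation.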

pod3 and its node: cali interface and routing table

# pod3 and its node's cali interface and routing table
[root@master ~]# kubectl exec -it pod3 -- ifconfig eth0
eth0      Link encap:Ethernet  HWaddr AE:DE:E7:84:F7:C2
          inet addr:10.244.42.71  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:810 (810.0 B)  TX bytes:364 (364.0 B)

[root@master ~]# kubectl exec -it pod3 -- ethtool -S eth0
NIC statistics:
     peer_ifindex: 10
     rx_queue_0_xdp_packets: 0
     rx_queue_0_xdp_bytes: 0
     rx_queue_0_xdp_drops: 0

[root@master ~]# kubectl exec -it pod3 -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0

[root@node1 ~]# ip link show | grep ^10
10: cali49778cadcf1@if3: mtu 1450 qdisc noqueue state UP mode DEFAULT group default

[root@node1 ~]# ip link show vxlan.calico
7: vxlan.calico: mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 66:52:2e:2a:6b:f4 brd ff:ff:ff:ff:ff:ff

[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 ens33
10.244.42.64    0.0.0.0         255.255.255.192 U     0      0        0 *
10.244.42.68    0.0.0.0         255.255.255.255 UH    0      0        0 calicfa85ffd8bd
10.244.42.69    0.0.0.0         255.255.255.255 UH    0      0        0 cali44307f7c2ca
10.244.42.70    0.0.0.0         255.255.255.255 UH    0      0        0 cali27794099b3f
10.244.42.71    0.0.0.0         255.255.255.255 UH    0      0        0 cali49778cadcf1
10.244.103.64   10.244.103.64   255.255.255.192 UG    0      0        0 vxlan.calico
10.244.152.128  10.244.152.128  255.255.255.192 UG    0      0        0 vxlan.calico
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33

[root@node1 ~]# ifconfig vxlan.calico
vxlan.calico: flags=4163  mtu 1450
        inet 10.244.42.64  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6452:2eff:fe2a:6bf4  prefixlen 64  scopeid 0x20
        ether 66:52:2e:2a:6b:f4  txqueuelen 0  (Ethernet)
        RX packets 1  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 84 (84.0 B)
        TX errors 0  dropped 11  overruns 0  carrier 0  collisions 0

Data flow diagram

pod1 ping pod3

kubectl exec -it pod1 -- ping -c 1 10.244.42.71

pod1.cap

tcpdump -pne -i cali49778cadcf1 -w pod1.cap

Details

[root@master ~]# kubectl exec -it pod1 -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0

pod1-vxlan.cap

tcpdump -pne -i vxlan.calico -w pod1-vxlan.cap

Details: the host routing table is consulted to find the gateway and the corresponding egress interface; an ARP request then resolves the MAC address of the peer gateway, which is the vxlan.calico interface on pod3's node.

[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.244.42.64    10.244.42.64    255.255.255.192 UG    0      0        0 vxlan.calico

[root@node2 ~]# ifconfig vxlan.calico
vxlan.calico: flags=4163  mtu 1450
        inet 10.244.103.64  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6478:26ff:fe30:e8cf  prefixlen 64  scopeid 0x20
        ether 66:78:26:30:e8:cf  txqueuelen 0  (Ethernet)
        RX packets 1  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 84 (84.0 B)
        TX errors 0  dropped 11  overruns 0  carrier 0  collisions 0

[root@node1 ~]# ifconfig vxlan.calico
vxlan.calico: flags=4163  mtu 1450
        inet 10.244.42.64  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6452:2eff:fe2a:6bf4  prefixlen 64  scopeid 0x20
        ether 66:52:2e:2a:6b:f4  txqueuelen 0  (Ethernet)
        RX packets 1  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 84 (84.0 B)
        TX errors 0  dropped 11  overruns 0  carrier 0  collisions 0
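The MTU of 1450 seen on eth0 and vxlan.calico above is not accidental: VXLAN encapsulation adds 50 bytes of headers, which must fit inside the node NIC's Ethernet MTU (assumed to be the standard 1500 bytes here). A quick arithmetic check:

```python
# Why the pod and VXLAN interfaces show MTU 1450: the VXLAN overhead
# inside a standard 1500-byte Ethernet MTU is 50 bytes.
OUTER_IP  = 20   # outer IPv4 header
OUTER_UDP = 8    # UDP header (destination port 4789)
VXLAN_HDR = 8    # VXLAN header (flags + 24-bit VNI)
INNER_ETH = 14   # inner Ethernet frame header

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
print(1500 - overhead)  # -> 1450
```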

pod1-node.cap

tcpdump -pne -i ens33 -w pod1-node.cap

Details: `ip -d link show` shows the local IP of the VXLAN device. The FDB table then reveals which node hosts the peer vxlan.calico interface, which becomes the remote IP. With local IP, remote IP, VNI, and dstport, the kernel has all the elements needed to encapsulate the VXLAN packet.

[root@node1 ~]# ip -d link show vxlan.calico
7: vxlan.calico: mtu 1450 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 66:52:2e:2a:6b:f4 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 4096 local 192.168.0.81 dev ens33 srcport 0 0 dstport 4789 nolearning ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

[root@node1 ~]# bridge fdb show
66:78:26:30:e8:cf dev vxlan.calico dst 192.168.0.82 self permanent
66:b1:18:59:9a:ed dev vxlan.calico dst 192.168.0.80 self permanent
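Putting these pieces together — VNI 4096, dstport 4789, and the FDB entries mapping peer vxlan.calico MACs to node IPs — the encapsulation decision can be sketched as below. This is an illustration, not Calico code: `vxlan_header` is a hypothetical helper, and the 8-byte header layout follows RFC 7348 (I flag set, 24-bit VNI):

```python
# Sketch of node1's encapsulation inputs, using the values shown above.
import struct

# bridge fdb show: inner destination MAC -> outer destination (node) IP
fdb = {
    "66:78:26:30:e8:cf": "192.168.0.82",  # node2's vxlan.calico
    "66:b1:18:59:9a:ed": "192.168.0.80",  # a third node in the cluster
}

def vxlan_header(vni):
    """8-byte VXLAN header: flags byte 0x08 (VNI valid), then 24-bit VNI."""
    return struct.pack("!II", 0x08 << 24, vni << 8)

# A packet whose inner frame targets node2's vxlan.calico MAC is sent
# in UDP/4789 to the FDB's remote IP, prefixed with the VXLAN header.
remote_ip = fdb["66:78:26:30:e8:cf"]
print(remote_ip, vxlan_header(4096).hex())  # -> 192.168.0.82 0800000000100000
```

The `nolearning` flag in the `ip -d link show` output matters here: the FDB is not populated by flooding but programmed statically ("self permanent") by Calico's datastore.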

pod3.cap

tcpdump -pne -i cali49778cadcf1 -w pod3.cap

pod3-vxlan.cap

tcpdump -pne -i vxlan.calico -w pod3-vxlan.cap

pod3-node.cap

tcpdump -pne -i ens33 -w pod3-node.cap

[root@node2 ~]# ip -d link show vxlan.calico
6: vxlan.calico: mtu 1450 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 66:78:26:30:e8:cf brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 4096 local 192.168.0.82 dev ens33 srcport 0 0 dstport 4789 nolearning ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

[root@node2 ~]# bridge fdb show
66:52:2e:2a:6b:f4 dev vxlan.calico dst 192.168.0.81 self permanent
66:b1:18:59:9a:ed dev vxlan.calico dst 192.168.0.80 self permanent

Conclusion

On the sending node (node2), the /26 route to the peer pod subnet points at vxlan.calico, so the packet is VXLAN-encapsulated:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.244.42.64    10.244.42.64    255.255.255.192 UG    0      0        0 vxlan.calico

The encapsulated packet reaches pod3's node, is decapsulated there, and the local routing table is then consulted to deliver it to the destination pod:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.244.42.71    0.0.0.0         255.255.255.255 UH    0      0        0 cali49778cadcf1
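The whole pod1 → pod3 path can be compressed into one sketch. The `forward` helper below is hypothetical, with the routing entries taken from the tables shown in this article:

```python
# Compressed sketch of the pod1 -> pod3 path: node2's /26 route triggers
# VXLAN encapsulation, node1 decapsulates and uses a /32 host route.
def forward(dst_pod_ip):
    # node2: /26 route for node1's pod subnet -> (device, remote node IP)
    node2_routes = {"10.244.42.64/26": ("vxlan.calico", "192.168.0.81")}
    # node1: per-pod /32 host routes point at each pod's cali interface
    node1_routes = {"10.244.42.71/32": "cali49778cadcf1"}

    dev, remote = node2_routes["10.244.42.64/26"]  # encapsulation decision
    assert dev == "vxlan.calico"                   # egress via the VXLAN device
    # The packet travels to `remote` as UDP/4789, is decapsulated on node1,
    # then matched against the /32 host route:
    return node1_routes[f"{dst_pod_ip}/32"]

print(forward("10.244.42.71"))  # -> cali49778cadcf1
```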

