Networking in Docker



Contents

Overview
  Viewing the host's network information
  Diagram: Docker networking explained
  Viewing a container's IP configuration

Docker network commands
  List all networks: docker network ls
  Inspect a network: docker network inspect ${network-name}
  Remove a network: docker network rm ${network-name}
  Create a network: docker network create ${network-name}
  Start a container on a specific network: docker run -d --name tomcat02 -p 8081:8081 --network customer-network
  Connect a container to a network: docker network connect customer-network tomcat01

DNS records in Docker
  Custom bridge
  Default bridge

Host network mode
None mode
Overlay networks in Docker: solving multi-host communication

Overview

Viewing the host's network information

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314691780sec preferred_lft 314691780sec
3: docker0: mtu 1500 qdisc noqueue state UP
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
...(output omitted)...
34: veth6991d55@if33: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
[root@iZwz91h49n3mj8r232gqweZ ~]#

Diagram: Docker networking explained

1. first-centos and second-centos are two Docker containers created on our own CentOS host.
2. The host itself has a docker0 interface; this virtual bridge connects the host to the containers, each of which plugs into docker0 through a veth pair.

Viewing a container's IP configuration

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it first-centos ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
35: eth0@if36: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it second-centos ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
33: eth0@if34: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#

1. From docker exec -it first-centos ip a above we can see that eth0@if36 inside the container is one end of a veth pair whose other end is the host's veth20601e8@if35.
2. Likewise, from docker exec -it second-centos ip a, the container's eth0@if34 is paired with the host's veth6991d55@if33.
3. Note that every container created on the default bridge shares docker0's subnet: 172.17.0.x.
4. On the surface this looks like ordinary same-subnet ping connectivity, but the traffic actually flows through the veth pairs.
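If you want to confirm a veth pairing yourself, the peer's interface index can be read from sysfs. A minimal sketch, using the container name from this example (the index you get back will differ on your machine):

docker exec first-centos cat /sys/class/net/eth0/iflink   # prints the host-side peer's ifindex, e.g. 36
ip -o link | grep '^36:'                                  # on the host: shows the matching veth entry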

Docker network commands

List all networks: docker network ls

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local

Notes

1. NETWORK ID: the network's ID.
2. NAME: the network's name.
3. DRIVER: the network driver / network mode.
   - bridge: bridged mode. Each container is assigned its own IP and attached to the docker0 virtual bridge; traffic to the host and beyond goes through docker0 plus iptables NAT rules.
   - host: host mode. The container gets no virtual NIC or IP of its own; it uses the host's IP and ports directly.
   - null: the none mode, which disables networking for the container entirely.
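As a side note, docker network ls accepts filters and a quiet flag, which helps on hosts with many networks; a small sketch:

docker network ls --filter driver=bridge   # only the bridge-driver networks
docker network ls -q                       # IDs only, handy for scripting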

Inspect a network: docker network inspect ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "757b74ac61c7d5f2148f7dfada40d6cc6cfe9ad73c924b4d2ff351dcdd55ea69",
        "Created": "2019-12-01T09:09:58.838895122+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7aa9960b8e1ae75de034d88cd8cdcbd3d4307d49138ac6d427247ede01166147": {
                "Name": "first-centos",
                "EndpointID": "8b530ec7edac68dfa2903979b8ad5c39fed8aa8704868716df8ef0a7ac3d8d87",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "f987e94f9251c01a8b48c591fd7866adb6fd1fccee394acd2452325fa3796860": {
                "Name": "second-centos",
                "EndpointID": "9bb718d4e262cf13b2e0b48f56e1a126c906e51f06c98c50a08178a0393a45ba",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Notes

1. Containers: the default bridge here is our docker0, and two containers were created on this docker0/bridge network: first-centos and second-centos.
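If you only care about the name-to-IP mapping, a Go template can trim the inspect output down; a convenience sketch (the template fields match the JSON above):

docker network inspect -f '{{range .Containers}}{{.Name}} => {{.IPv4Address}}{{"\n"}}{{end}}' bridge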

Remove a network: docker network rm ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network create customer-network1
dda4d56c4a822155e48506d226bd034c7bdd73969faf9dcf687431b23282e903
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network rm customer-network1
customer-network1
[root@iZwz91h49n3mj8r232gqweZ ~]#

Create a network: docker network create ${network-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network create customer-network
fa4e855899d5a1e290c2c0b3fd724959a7f484455a4d8452205bdda5d616c317
[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
fa4e855899d5        customer-network    bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local
[root@iZwz91h49n3mj8r232gqweZ ~]#

Notes

1. This creates a network of the bridge type (bridge is the default driver when none is specified).
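By default Docker picks the next free subnet itself, as the 172.19.0.0/16 in the output below shows. If you prefer to pin the address range, docker network create accepts explicit IPAM flags; a sketch with made-up values:

docker network create --driver bridge --subnet 172.30.0.0/16 --gateway 172.30.0.1 customer-network2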

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "fa4e855899d5a1e290c2c0b3fd724959a7f484455a4d8452205bdda5d616c317",
        "Created": "2019-12-08T10:27:32.313643881+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@iZwz91h49n3mj8r232gqweZ ~]#

Notes

1. docker network inspect customer-network shows that the new network was given its own subnet, 172.19.0.0/16, with gateway 172.19.0.1. That clearly differs from the docker0 network, where e.g. first-centos got 172.17.0.2: one network is 172.17.x.x and the other 172.19.x.x.

Start a container on a specific network: docker run -d --name tomcat02 -p 8081:8081 --network customer-network

docker run -d --name ${Container-name} --network ${network-name} ${image-name}

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat02 -p 8081:8081 --network customer-network tomcat
9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90
[root@iZwz91h49n3mj8r232gqweZ ~]#

1. If no network is specified, the container is created on the default docker0 network.

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

1. The tomcat02 container was created successfully on customer-network with IP 172.21.0.2, consistent with the network's subnet: both are in 172.21.x.x.

Connect a container to a network: docker network connect customer-network tomcat01

docker network connect ${network-name} ${Container-name}

Background

Containers living on different networks may need to communicate with each other, but because their subnets differ they cannot reach each other directly; this section addresses that problem.

The tomcat01 container originally lives on the default bridge docker0, with IP 172.17.0.4:

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
50: eth0@if51: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#

The customer-network bridge originally looks like this; it contains only one container, tomcat02:

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Add the tomcat01 container to customer-network:

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network connect customer-network tomcat01

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network inspect customer-network
[
    {
        "Name": "customer-network",
        "Id": "c2cfc0ed26761a909f1f273a937d954398070c1736d5cb3f5a306474faae7836",
        "Created": "2019-12-08T10:46:18.628042916+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9559aa60858c970688a3c2e768e00ae1c07b6bec2bbdd8f55b8c70657dd3ed90": {
                "Name": "tomcat02",
                "EndpointID": "11142f2c0421fe292a6f9e60c62a497a17dc1452e3b1f11c6292adf6adcd9e5e",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "bed934410133064a8d2aff3b9d81c3cd0ff0d75a210f2a8e7ee3e007b21e3be8": {
                "Name": "tomcat01",
                "EndpointID": "7836d5be0d13572a88a1c0667684bc58bc71a3ea10a50fa800e9fadfba474ab3",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
[root@iZwz91h49n3mj8r232gqweZ ~]#

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
50: eth0@if51: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
54: eth1@if55: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:15:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.3/16 brd 172.21.255.255

1. Inspecting again after the connect, tomcat01 is now also attached to our custom bridge (customer-network) and has been assigned a new IP, 172.21.0.3.
2. Looking at tomcat01's own IP configuration, it now has two addresses: 172.17.0.4 (assigned earlier by the default bridge) and 172.21.0.3 (assigned by customer-network), so it can now also communicate on the 172.21.0.x subnet.
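The reverse operation exists as well; to detach tomcat01 from the custom bridge again you could run:

docker network disconnect customer-network tomcat01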

Verifying tomcat01 → tomcat02 connectivity

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat01 ping 172.21.0.2
PING 172.21.0.2 (172.21.0.2) 56(84) bytes of data.
64 bytes from 172.21.0.2: icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from 172.21.0.2: icmp_seq=2 ttl=64 time=0.065 ms
64 bytes from 172.21.0.2: icmp_seq=3 ttl=64 time=0.068 ms
64 bytes from 172.21.0.2: icmp_seq=4 ttl=64 time=0.066 ms
64 bytes from 172.21.0.2: icmp_seq=5 ttl=64 time=0.054 ms
64 bytes from 172.21.0.2: icmp_seq=6 ttl=64 time=0.078

Verifying tomcat02 → tomcat01 connectivity

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat02 ping 172.21.0.3
PING 172.21.0.3 (172.21.0.3) 56(84) bytes of data.
64 bytes from 172.21.0.3: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from 172.21.0.3: icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from 172.21.0.3: icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from 172.21.0.3: icmp_seq=4 ttl=64 time=0.068 ms
64 bytes from 172.21.0.3: icmp_seq=5 ttl=64 time=0.079 ms
64 bytes from 172.21.0.3: icmp_seq=6 ttl=64 time=0.060 ms
64 bytes from 172.21.0.3: icmp_seq=7 ttl=64 time=0.064 ms
64 bytes from 172.21.0.3: icmp_seq=8 ttl=64 time=0.061 ms
64 bytes from 172.21.0.3: icmp_seq=9 ttl=64 time=0.059 ms
64 bytes from 172.21.0.3: icmp_seq=10 ttl=64 time=0.062

As the output above shows, the two containers can now ping each other in both directions.

DNS records in Docker

1. When we create containers in bridge mode on the same subnet, they can ping each other by IP; on a user-defined bridge they can additionally reach each other by container name (the default bridge cannot do this, as shown below).
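The name resolution comes from Docker's embedded DNS server, which containers on user-defined networks are pointed at. You can check a container's resolver configuration yourself; a small sketch using the tomcat33 container from the example below (on a user-defined network this should list the embedded resolver, 127.0.0.11):

docker exec -it tomcat33 cat /etc/resolv.conf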

Custom bridge

Pinging by IP on a custom bridge

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
60: eth0@if61: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:15:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.4/16 brd 172.21.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat44 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
62: eth0@if63: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:15:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.5/16 brd 172.21.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ping 172.21.0.5
PING 172.21.0.5 (172.21.0.5) 56(84) bytes of data.
64 bytes from 172.21.0.5: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 172.21.0.5: icmp_seq=2 ttl=64 time=0.055 ms
^Z
64 bytes from 172.21.0.5: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.21.0.5: icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from 172.21.0.5: icmp_seq=5 ttl=64 time=0.070

Pinging by container name on a custom bridge

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat33 --network customer-network tomcat
8272fab13f79b529677c1a9effd59206ddf4cb8bdad5c60bee0d8cd91cec8b11
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat44 --network customer-network tomcat
b1d73622f15c028526da0fe760c406ca282ad57e31f26d073b357b48a29d4612
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat33 ping tomcat44
PING tomcat44 (172.21.0.5) 56(84) bytes of data.
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=1 ttl=64 time=0.178 ms
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=3 ttl=64 time=0.063 ms
^Z
64 bytes from tomcat44.customer-network (172.21.0.5): icmp_seq=4 ttl=64 time=0.077

1. Note that on a custom bridge, a DNS record mapping the container name to its IP is created automatically when the container starts.

Default bridge

Note

1. Containers created on the default bridge can ping each other by IP, but pinging by container name does not work.

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat11 -p 8011:8011 tomcat
8287de436fbc29dc6a62034d8a30f7cc62eb4a4b76232f4478476648c9c8473e
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat22 -p 8022:8022 tomcat
24eec4c1fb3770ec7da3251e74d6ca360f736e0db9e7919051882256237b4ada
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat11 ping tomcat22
ping: tomcat22: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat11 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
56: eth0@if57: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
58: eth0@if59: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.6/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ip 172.17.0.5
Object "172.17.0.5" is unknown, try "ip help".
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data.
64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.208 ms
64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 172.17.0.5: icmp_seq=3 ttl=64 time=0.074

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat22 ping tomcat33
ping: tomcat33: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]#

On the default bridge, pinging by container name does not work.

1. To work around this on the default bridge, we can use the --link option, as shown below:

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat03 tomcat
0b3a8700698b163e7fa3163f881d73f1c2fd164dba375af72deb95b925ef536c
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat04 --link tomcat03 tomcat
fa991903e990420377674646fc2fcc2892ad930948a50c703233679da190d462
[root@iZwz91h49n3mj8r232gqweZ ~]#

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat04 ping tomcat03
PING tomcat03 (172.17.0.7) 56(84) bytes of data.
64 bytes from tomcat03 (172.17.0.7): icmp_seq=1 ttl=64 time=0.209 ms
64 bytes from tomcat03 (172.17.0.7): icmp_seq=2 ttl=64 time=0.064

[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat03 ping tomcat04
ping: tomcat04: Name or service not known
[root@iZwz91h49n3mj8r232gqweZ ~]#

1. Here tomcat04 can ping tomcat03, but tomcat03 cannot ping tomcat04, because the link was configured only on tomcat04's side when it was started.
2. In general --link is discouraged; creating a custom bridge network is the recommended approach (see the sketch below).
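The recommended pattern is simply to put both containers on a user-defined bridge, which gives name resolution in both directions with no --link. A minimal sketch with hypothetical names:

docker network create app-net
docker run -d --name tomcat07 --network app-net tomcat
docker run -d --name tomcat08 --network app-net tomcat
docker exec -it tomcat07 ping tomcat08   # resolves
docker exec -it tomcat08 ping tomcat07   # also resolves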

Host network mode

1. In host network mode the container simply uses the same network stack as the host.

[root@iZwz91h49n3mj8r232gqweZ ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
757b74ac61c7        bridge              bridge              local
c2cfc0ed2676        customer-network    bridge              local
d3d1516d3a7c        harbor_harbor       bridge              local
6f425e496441        host                host                local
d082dd604405        none                null                local
[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat05 --network host tomcat
413f2f52cc292a7ceca7182b21ebe74ac6c16d0991d7b6cb79ccf6d56a450f99
[root@iZwz91h49n3mj8r232gqweZ ~]#
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat05 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314482680sec preferred_lft 314482680sec
3: docker0: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-d3d1516d3a7c: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d3d1516d3a7c
       valid_lft forever preferred_lft forever
8: veth09c86df@if7: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP group default
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
34: veth6991d55@if33: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
43: br-c2cfc0ed2676: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:20:b2:65:00 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-c2cfc0ed2676
       valid_lft forever preferred_lft forever
53: vethca95279@if52: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default
    link/ether 52:62:fd:05:cf:57 brd ff:ff:ff:ff:ff:ff link-netnsid 14
57: vethd3c057b@if56: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 7e:37:ea:e0:a2:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 15
59: veth538a160@if58: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ce:67:04:9d:e6:34 brd ff:ff:ff:ff:ff:ff link-netnsid 16
61: veth55940bb@if60: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default
    link/ether 9a:07:d7:91:8a:24 brd ff:ff:ff:ff:ff:ff link-netnsid 17
63: vethf2b1a18@if62: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP group default
    link/ether 12:4e:b2:3e:81:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 18
65: veth7808c0a@if64: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ba:86:41:bc:2c:2b brd ff:ff:ff:ff:ff:ff link-netnsid 19
67: veth3c461cd@if66: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether c2:a4:27:ed:d1:28 brd ff:ff:ff:ff:ff:ff link-netnsid 20
[root@iZwz91h49n3mj8r232gqweZ ~]#

1. As you can see, the container's view is identical to the host's own IP configuration.
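One practical consequence: in host mode -p port mappings are not applied, since the container's ports are the host's ports. Assuming the application inside tomcat05 listens on Tomcat's default port 8080, it would be reachable on the host directly:

curl http://localhost:8080   # no -p mapping needed in host mode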

The host's IP configuration:

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 314482632sec preferred_lft 314482632sec
3: docker0: mtu 1500 qdisc noqueue state UP
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-d3d1516d3a7c: mtu 1500 qdisc noqueue state UP
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d3d1516d3a7c
       valid_lft forever preferred_lft forever
8: veth09c86df@if7: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
34: veth6991d55@if33: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
36: veth20601e8@if35: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 5a:b1:3d:0f:0d:08 brd ff:ff:ff:ff:ff:ff link-netnsid 10
43: br-c2cfc0ed2676: mtu 1500 qdisc noqueue state UP
    link/ether 02:42:20:b2:65:00 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-c2cfc0ed2676
       valid_lft forever preferred_lft forever
53: vethca95279@if52: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP
    link/ether 52:62:fd:05:cf:57 brd ff:ff:ff:ff:ff:ff link-netnsid 14
57: vethd3c057b@if56: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 7e:37:ea:e0:a2:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 15
59: veth538a160@if58: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether ce:67:04:9d:e6:34 brd ff:ff:ff:ff:ff:ff link-netnsid 16
61: veth55940bb@if60: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP
    link/ether 9a:07:d7:91:8a:24 brd ff:ff:ff:ff:ff:ff link-netnsid 17
63: vethf2b1a18@if62: mtu 1500 qdisc noqueue master br-c2cfc0ed2676 state UP
    link/ether 12:4e:b2:3e:81:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 18
65: veth7808c0a@if64: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether ba:86:41:bc:2c:2b brd ff:ff:ff:ff:ff:ff link-netnsid 19
67: veth3c461cd@if66: mtu 1500 qdisc noqueue master docker0 state UP
    link/ether c2:a4:27:ed:d1:28 brd ff:ff:ff:ff:ff:ff link-netnsid 20
[root@iZwz91h49n3mj8r232gqweZ ~]#

None mode

[root@iZwz91h49n3mj8r232gqweZ ~]# docker run -d --name tomcat06 --network none tomcat
3480def5997a3fe31a0679b44e6636f399683e7c7da3003c348b5551820961f3
[root@iZwz91h49n3mj8r232gqweZ ~]# docker exec -it tomcat06 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#

1. The container has only the loopback interface (lo); no eth0 is created.
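A none-mode container can still be wired up later. My understanding (worth verifying on your Docker version, since a container cannot sit on none and another network at the same time) is that you first detach it from none, then connect it to a real network:

docker network disconnect none tomcat06
docker network connect customer-network tomcat06
docker exec -it tomcat06 ip a   # an eth0 should now appear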

Overlay networks in Docker: solving multi-host communication

1. When several CentOS hosts each run Docker containers, containers on different hosts may end up with the same IP (each host's default bridge hands out addresses from the same 172.17.0.0/16 range independently), so they cannot address one another directly; an overlay network spans the hosts and solves this multi-host communication problem.
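A minimal sketch of the overlay approach, assuming Docker's built-in Swarm mode and hypothetical names/addresses (the overlay driver needs a cluster layer to coordinate the hosts):

docker swarm init --advertise-addr 172.16.252.139        # on the first host (manager)
docker swarm join --token <token> 172.16.252.139:2377    # on each additional host
docker network create -d overlay --attachable multi-host-net
docker run -d --name tomcat-a --network multi-host-net tomcat   # run on any host; container names resolve across hosts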


