A Look at Docker's Four Container Network Models
The Default Network Model
Let's start with the default network model.
After installing Docker, run ifconfig
and you will notice a new interface, docker0, among the NICs:
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:e0:2e:fe:77 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0 is a layer-2 network device, i.e. a bridge (switch): it links the ports that Linux attaches to it and lets them communicate many-to-many, just like a physical switch. The default network mode is bridge mode, which relies on the veth pair technique: virtual Ethernet devices that always come in pairs and bridge the isolation between network namespaces, with one end placed in the container's network namespace and the other in the host's network namespace.
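To make the veth-pair idea concrete, here is a minimal hand-rolled sketch of what Docker does, using the ip tool; the namespace name demo-ns, the interface names veth-host/veth-ctr, and the address 172.17.0.100 are made up for illustration:
$ sudo ip netns add demo-ns                                           # a scratch namespace standing in for a container
$ sudo ip link add veth-host type veth peer name veth-ctr             # create a veth pair
$ sudo ip link set veth-ctr netns demo-ns                             # move one end into the "container" namespace
$ sudo ip link set veth-host master docker0                           # plug the other end into the docker0 bridge
$ sudo ip link set veth-host up
$ sudo ip netns exec demo-ns ip addr add 172.17.0.100/16 dev veth-ctr # pick a free address in docker0's subnet
$ sudo ip netns exec demo-ns ip link set veth-ctr up
$ sudo ip netns exec demo-ns ping -c 1 172.17.0.1                     # the namespace can now reach the bridge
$ sudo ip netns del demo-ns                                           # clean up the sketch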
Now start a CentOS container for testing: sudo docker run -it centos bash
, then run ip a s inside it
to view the network interfaces:
$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
You can see that an lo interface and an eth0 interface were created inside the container.
At the same time, a new virtual interface shows up on the host:
vethee50e32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::d847:e3ff:fe86:b0e8 prefixlen 64 scopeid 0x20<link>
ether da:47:e3:86:b0:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 29 bytes 4524 (4.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
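This host-side veth endpoint is plugged into the docker0 bridge. A quick way to confirm it (bridge is part of iproute2; the interface name vethee50e32 comes from the listing above and will differ on your machine):
$ bridge link | grep vethee50e32      # the veth endpoint is listed as a port with "master docker0"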
Container Access to External Networks
The setup above is how a container reaches external networks: one end of the veth pair sits in the container, the other on the host, and eth0 is the host's physical NIC. SNAT rewrites the source address: for every connection a container opens to an external network, the source address is translated to the host's own IP address, implemented with iptables source-address masquerading.
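The masquerading rule that does this is the one shown in the POSTROUTING listing below; adding an equivalent rule by hand would look roughly like this (Docker installs and manages it itself):
$ sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE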
Start an Nginx container for a demonstration: sudo docker run -d --name web1 -p 8081:80 nginx
, which maps port 8081 on the host to port 80 inside the nginx container.
Before starting the container, check the NAT rules of the POSTROUTING chain:
$ sudo iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 31 packets, 2428 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
After the container starts, an extra rule appears:
$ sudo iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 31 packets, 2428 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80
The additional MASQUERADE entry corresponds to the newly created port mapping.
External Access to Containers
Packets arrive at the host's eth0 and go through DNAT. With the 8081-to-80 mapping, for example, packets sent to the host port are forwarded to port 80 in the container. Check the NAT rules of the DOCKER chain:
$ sudo iptables -t nat -vnL DOCKER
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081 to:172.17.0.3:80
As shown above, the corresponding translation rule has been added.
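The two rules Docker installed for this mapping are roughly equivalent to the following hand-written iptables commands; this is only an illustration of what Docker manages itself, and 172.17.0.3 is simply the address this particular container was given:
$ sudo iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8081 -j DNAT --to-destination 172.17.0.3:80
$ sudo iptables -t nat -A POSTROUTING -s 172.17.0.3 -d 172.17.0.3 -p tcp --dport 80 -j MASQUERADE
From the host, curl http://127.0.0.1:8081 should now return the Nginx welcome page.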
Docker's Four Network Models
Each model is explained in the table below, followed by a compact command sketch:
Mode | Usage | Description |
---|---|---|
bridge [Bridged network (Bridge container A)] | --network bridge | A bridged container has, besides a loopback interface, a private interface that connects through a container virtual interface to the Docker bridge virtual interface, which in turn connects through a logical host interface to the host's physical network interface. The bridge is assigned addresses from 172.17.0.0/16 by default. If no network model is specified when a container is created, this (NAT) bridged network is the default, which is why the addresses you see after logging into a container are in the 172.17.0.0/16 range. |
host [Open container] | --network host | More open than the joined model: a joined network shares the network (Net) namespace among several containers, while an open container shares the host's namespace directly. The container therefore sees exactly as many NICs as the host has physical NICs. An open container can be regarded as a derivative of the joined container. |
none [Closed container] | --network none | A closed container has only a loopback interface (similar to the lo interface you see on a server) and cannot communicate with the outside world. |
container [Joined network (Joined container A / Joined container B)] | --network container:c1 (container name or ID) | Each container keeps part of its namespaces private (Mount, PID, User) and shares the rest (UTS, Net, IPC). Because the network is shared, the containers can talk to each other over the loopback interface. Besides sharing the same set of loopback interfaces, each has a private interface that connects through the joined container virtual interface to the Docker bridge virtual interface, and then through the logical host interface to the host's physical network interface. |
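As noted above, the usage column comes down to the --network flag of docker run; a compact sketch of all four modes (c1 is a placeholder name, and each interactive command would be run in its own terminal):
$ sudo docker run -it --rm busybox                          # bridge: the default, same as --network bridge
$ sudo docker run -it --rm --network host busybox           # host: share the host's network namespace
$ sudo docker run -it --rm --network none busybox           # none: loopback only
$ sudo docker run -it --rm --name c1 busybox                # start c1 on the default bridge first...
$ sudo docker run -it --rm --network container:c1 busybox   # ...then join c1's network namespace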
Run docker network -h
to see the network-related commands:
$ sudo docker network -h
Flag shorthand -h has been deprecated, please use --help
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
Run 'docker network COMMAND --help' for more information on a command.
List the existing networks:
$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
de27040d1dbb bridge bridge local
95aa02052f1e host host local
f877107fa92a none null local
To see the details of an existing network, inspect the bridge network:
$ sudo docker network inspect bridge
[
{
"Name": "bridge",
"Id": "de27040d1dbb5375a429f6c0a11eadbfe468749be366ff1ebe1a681db0c67419",
"Created": "2022-08-07T16:40:03.780510605+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
....
Check which network drivers Docker supports:
$ sudo docker info | grep Network
Network: bridge host ipvlan macvlan null overlay
Creating a Network of a Specific Type
Run docker network create --help
to see the available options:
$ sudo docker network create --help
Usage: docker network create [OPTIONS] NETWORK
Create a network
Options:
--attachable Enable manual container attachment
--aux-address map Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
--config-from string The network from which to copy the configuration
--config-only Create a configuration only network
-d, --driver string Driver to manage the Network (default "bridge")
--gateway strings IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range strings Allocate container ip from a sub-range
--ipam-driver string IP Address Management Driver (default "default")
--ipam-opt map Set IPAM driver specific options (default map[])
--ipv6 Enable IPv6 networking
--label list Set metadata on a network
-o, --opt map Set driver specific options (default map[])
--scope string Control the network's scope
--subnet strings Subnet in CIDR format that represents a network segment
Bridge Network
Using the bridge network model, create a bridge network named mybr0:
$ sudo docker network create -d bridge --subnet '192.168.100.0/24' --gateway '192.168.100.1' -o com.docker.network.bridge.name=docker1 mybr0
d3edc3f6b0a83e4500a6ddce59b498897d28f9509ee4dbce565f695e4fbb535b
Here -d selects the network driver, --subnet sets the subnet, --gateway sets the gateway IP, -o passes driver options, and the last argument is the network name. The available options can be seen in the inspect output:
$ sudo docker network inspect bridge
....
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
}
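Because mybr0 was created with -o com.docker.network.bridge.name=docker1, a matching bridge device should now exist on the host; a quick check, assuming the create command above succeeded:
$ ip addr show docker1                                          # should carry 192.168.100.1/24, the configured gateway
$ sudo docker network inspect mybr0 | grep -E 'Subnet|Gateway'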
Start a container and attach it to the newly created mybr0 network:
$ sudo docker run -it --network mybr0 --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:64:02
inet addr:192.168.100.2 Bcast:192.168.100.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:45 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:7129 (6.9 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
The --network flag attaches the container to the network. Running ifconfig in the container's pseudo-terminal shows that eth0 sits exactly in the subnet used when the network was created.
On a Linux Docker host, the default bridge network is mapped to the docker0 Linux bridge in the kernel.
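A container does not have to join a user-defined bridge at run time only. The docker network connect command from the help listing above can attach a running container, and user-defined networks also accept a fixed address via --ip. A sketch, assuming the web1 Nginx container from earlier is still running and 192.168.100.50 is free in mybr0's subnet:
$ sudo docker network connect --ip 192.168.100.50 mybr0 web1        # give web1 a second interface on mybr0
$ sudo docker inspect -f '{{json .NetworkSettings.Networks}}' web1  # web1 now appears in both bridge and mybr0
$ sudo docker network disconnect mybr0 web1                         # detach again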
The host Network Model
Inspect the details of this model:
$ sudo docker network inspect host
[
{
"Name": "host",
"Id": "95aa02052f1e6bc5b2254079ec0d352a44f4ac36a80719942d851a237265ce1f",
"Created": "2022-07-11T15:16:39.407947148+08:00",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Start a container in host network mode:
$ sudo docker run -it --network host --rm busybox
Now run ifconfig in the container's pseudo-terminal
and you will see the host's interfaces; however many NICs the host has, that is how many the container sees:
/ # ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:9E:EA:84:EA
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:9eff:feea:84ea/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:21121 (20.6 KiB)
docker1 Link encap:Ethernet HWaddr 02:42:C7:38:02:B7
inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp8s0 Link encap:Ethernet HWaddr 00:2B:67:B3:23:FE
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
....
If you start an Nginx container this way, you should be able to reach it simply by visiting the host's address, because the network namespace is shared:
$ sudo docker run -d --network host nginx
40ce947653e6ee080b3463f8cbb45ee31f2e11dc976e325c9d3390734aa161dd
Then access the host's IP address directly:
$ curl http://192.168.2.8
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
....
Nginx is reachable, which confirms that in this mode the container shares the host's network namespace and no NAT translation is needed; however, you cannot run a second container like this, because the port is already taken.
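Because Nginx binds directly in the host's network namespace, the listener is visible on the host itself, and -p port mappings are simply ignored in host mode since there is nothing to NAT. A quick check (ss is part of iproute2):
$ sudo ss -tnlp | grep ':80 '     # nginx should show up listening on port 80 of the host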
The none Network Model
The none model is similar to a VM's host-only mode. Inspect its details:
$ sudo docker network inspect none
[
{
"Name": "none",
"Id": "f877107fa92a6a1842003cba07a908dce5ec42101a82945a54988f6d616aba95",
"Created": "2022-07-11T15:16:39.402959094+08:00",
"Scope": "local",
"Driver": "null",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Using this network model means the container has no network:
$ sudo docker run -it --network none --rm busybox
/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
In its initial state there is only a loopback device.
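A simple way to confirm the isolation: with only the loopback device and no routes, any attempt to reach an outside address should fail immediately. A throwaway test, assuming the default docker0 gateway 172.17.0.1 from earlier:
$ sudo docker run --rm --network none busybox ping -c 1 172.17.0.1   # expect a "Network is unreachable" error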
Joined Network
First create a container c1 with the default network model and check its network devices:
$ sudo docker run -it --name c1 --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3202 (3.1 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Press Ctrl+P+Q to detach temporarily, then run the following command to create a joined network; once inside its terminal, run ifconfig to view the interfaces:
$ sudo docker run -it --name c2 --network container:c1 --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:7623 (7.4 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Inside c2, start an httpd service and write an index page:
echo "Hello World" >> /tmp/index.html
httpd -h /tmp
Press Ctrl+P+Q to detach while keeping the container running, then use c1 to run wget and issue an HTTP request to 127.0.0.1:
$ sudo docker exec c1 wget -O - -q 127.0.0.1
hello world
The response above means the request succeeded, which shows that in this joined network the containers share part of their network namespaces and can reach each other through the loopback address.
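You can further confirm that the two containers really share one network namespace by comparing their interfaces from the host; both should report the same eth0 address (172.17.0.2 in the session above):
$ sudo docker exec c1 ifconfig eth0
$ sudo docker exec c2 ifconfig eth0    # identical output: one shared namespace, one set of interfaces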