
Docker Networking



The docker0 network in detail

Clean up the environment first:
docker rm -f $(docker ps -aq)        # remove all containers
docker rmi -f $(docker images -aq)   # remove all images (optional)

List the networks with docker network ls: you will find there are three.
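For reference, a hedged sketch of what docker network ls prints on a fresh host (the IDs are placeholders and will differ):

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
xxxxxxxxxxxx   bridge    bridge    local
xxxxxxxxxxxx   host      host      local
xxxxxxxxxxxx   none      null      local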

How does Docker handle container networking?

#list all containers, filter the exited ones, extract their IDs, and force-remove them
docker ps -a
docker ps -a | grep Exited
docker ps -a | grep Exited | awk '{print $1}'
docker ps -a | grep Exited | awk '{print $1}' | xargs docker rm -f

[root@localhost ~]# docker run -d -P --name tomcat01 tomcat
#Check the network inside the container: ip addr shows the container started with an eth0@if7 interface and an IP assigned to the container
[root@localhost ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Question: can we ping the container from the host?

[root@localhost ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.075 ms

How it works:
1. Every time we start a Docker container, Docker assigns it an IP address, and a new virtual interface appears on the host (bridged through docker0). The technology behind this is the veth-pair.
Run ip addr on the host again and you will see that a new veth interface has appeared.
2. Start another container and test again: yet another interface shows up.
We find that the interfaces brought along by containers always come in pairs.
A veth-pair is a pair of virtual device interfaces that always appear together, one end attached to the protocol stack inside the container and the other end attached to its peer on the host. Because of this property, the veth-pair acts as a bridge connecting all kinds of virtual network devices.
OpenStack, connections between Docker containers, and OVS connections all use the veth-pair technique.
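You can observe the host side of each pair directly (a hedged sketch; interface names and index numbers will differ on your machine):

# on the host, every running container contributes one vethXXXX@ifN interface
ip addr | grep veth
# the indices pair up with the container side: the container's "6: eth0@if7"
# corresponds to a host interface numbered 7 whose "@ifN" points back to 6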

3. Test whether tomcat01 and tomcat02 can ping each other
[root@localhost ~]# docker exec -it tomcat02 ping 172.17.0.2
Conclusion: containers can ping each other directly; tomcat01 and tomcat02 share the same "router", docker0.
When no network is specified, every container is routed through docker0, and Docker assigns it an available IP from the default range.
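A hedged way to confirm where these addresses come from (the values match the 172.17.0.x addresses seen above):

docker network inspect bridge   # IPAM.Config shows the default subnet and gateway;
                                # tomcat01 and tomcat02 appear under "Containers"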

If ip addr or other commands are missing inside the container, enter the container and install them:
apt update                    # refresh the apt package index first
apt install -y iproute2       # provides ip addr
apt install -y net-tools      # provides ifconfig
apt install -y iputils-ping   # provides ping

Summary: Docker uses Linux bridging; on the host, docker0 is the bridge for Docker containers.

All of the network interfaces in Docker are virtual; virtual interfaces forward traffic efficiently.
As soon as a container is deleted, its corresponding veth pair disappears as well.
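A hedged way to watch this on the host (standard iproute2 commands; output omitted):

# list the interfaces currently attached to the docker0 bridge
ip link show master docker0
# remove a container and run the command again: its veth entry is gone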

(--link is no longer recommended these days; it is enough to know how it works.)

In a microservice deployment, the registry identifies each microservice by its service name, while the IP addresses of the deployed services may change from release to release. So we want container-to-container connections configured by container name rather than by IP. --link can do exactly that.

If ping fails with an OCI runtime error, remember to run it inside the container; the error occurs because this tomcat image does not ship with ping, so install it first: apt update && apt install -y iputils-ping

[root@localhost ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known

#How can we solve this? With --link
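#(hedged: the command that created tomcat03 is not shown in this log; it would look roughly like this)
[root@localhost ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat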
[root@localhost ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.058 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=4 ttl=64 time=0.056 ms
^C
--- tomcat02 ping statistics ---
#Does it work in the other direction? Clearly not
[root@localhost ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: No address associated with hostname

docker network subcommands
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks

Exploring inspect

In fact, tomcat03 simply has tomcat02's address configured locally.

#Check the hosts configuration; the mechanism shows up here

[root@localhost ~]# docker exec -it tomcat02 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      23c44c7e1500

Digging into the essence: --link simply adds a line such as "172.17.0.3  tomcat02  312857784cd4" to tomcat03's /etc/hosts (tomcat02's hosts file, shown above, gets no such entry for tomcat03, which is why the reverse ping fails). Nowadays --link is no longer recommended.
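A hedged way to confirm this from the tomcat03 side (output omitted; look for the extra line that --link added):

# tomcat03's hosts file should contain a line mapping "tomcat02" (and its container ID)
# to tomcat02's IP; tomcat02's own hosts file has no such entry for tomcat03
docker exec -it tomcat03 cat /etc/hosts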
Instead, use a custom network rather than docker0.

Custom networks

With docker0, containers cannot reach each other by container name by default; you have to wire up each connection with --link, which is cumbersome. The recommended approach is a custom network: put all the containers on it and they can reach each other by container name directly.

List all networks


Network modes (a sketch of each mode follows)
bridge: bridged through docker0 (the default; networks you create yourself also use the bridge driver)
none: no networking configured
host: share the network stack with the host
container: join another container's network namespace (rarely used, very limited)
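A few hedged sketches of selecting each mode (the container names, and the busybox image in the last example, are placeholders rather than containers from this article):

# bridge: the default; "--net bridge" can be omitted
docker run -d --net bridge --name demo-bridge tomcat
# host: share the host's network stack (no port mapping needed)
docker run -d --net host --name demo-host tomcat
# none: no network configured at all
docker run -d --net none --name demo-none tomcat
# container: reuse another container's network namespace;
# this busybox sees exactly demo-bridge's interfaces
docker run -it --rm --net container:demo-bridge busybox ip addr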

Testing

#The commands we have been running implicitly include --net bridge; it is the default and is added automatically if omitted.
#That default bridge network is our docker0.
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat

#docker0 characteristics: it is the default, container names cannot be resolved on it, and --link is needed to connect by name.
#We can create a custom network instead:
#--driver bridge              use the bridge driver
#--subnet 192.168.0.0/16      the subnet
#--gateway 192.168.0.1        the gateway
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet

The network has been created.


[root@localhosts ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "bd09d0023cc17bb06679eea4a70707f3d0deef462e49a883e849b38ce4bc059a",
        "Created": "2022-04-27T09:11:51.194880776+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

# Start two containers on the custom network mynet and test whether containers on a custom network can reach each other directly by container name.
[root@localhosts ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
5742920d6a3cafe5d88bca0ef66d2b56d475c887984bd7948e114cc57b915720
[root@VM-20-17-centos ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
d50e701c062d2d041728a0422d2c47b9ec6c80aea8b920ee32d2bb9674eca624


[root@localhost ~]# docker network inspect mynet
"Containers": {
            "5742920d6a3cafe5d88bca0ef66d2b56d475c887984bd7948e114cc57b915720": {
                "Name": "tomcat-net-01",
                "EndpointID": "690a2421dcef5530238b37487ed2c879db8a8f776388bf3e0444a8914576ca57",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "d50e701c062d2d041728a0422d2c47b9ec6c80aea8b920ee32d2bb9674eca624": {
                "Name": "tomcat-net-02",
                "EndpointID": "e36caa80fefa8d1a700ed37cad51047db542e5057297435f558161921335d230",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
 # Enter 01 and install the ping command
[root@localhost ~]# docker exec -it tomcat-net-01 /bin/bash
root@5742920d6a3c:/usr/local/tomcat# apt-get update
root@5742920d6a3c:/usr/local/tomcat# apt install iputils-ping
# Enter 02 and install the ping command
[root@localhost ~]# docker exec -it tomcat-net-02 /bin/bash
root@d50e701c062d:/usr/local/tomcat# apt-get update
root@d50e701c062d:/usr/local/tomcat# apt install iputils-ping


[root@localhost ~]# docker exec -it tomcat-net-01 ping 192.168.0.3

# ping directly by container name, and it works
[root@localhost ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.062 ms
^C
--- tomcat-net-02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.057/0.059/0.062/0.002 ms


# The reverse, 02 pinging 01, works just as well
[root@localhost ~]# docker exec -it tomcat-net-02 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=5 ttl=64 time=0.078 ms
^C
--- tomcat-net-01 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.057/0.063/0.078/0.007 ms


docker network create options
Options:
--attachable Enable manual container attachment
--aux-address map Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
--config-from string The network from which to copy the configuration
--config-only Create a configuration only network
-d, --driver string Driver to manage the Network (default "bridge")
--gateway strings IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range strings Allocate container ip from a sub-range
--ipam-driver string IP Address Management Driver (default "default")
--ipam-opt map Set IPAM driver specific options (default map[])
--ipv6 Enable IPv6 networking
--label list Set metadata on a network
-o, --opt map Set driver specific options (default map[])
--scope string Control the network's scope
--subnet strings Subnet in CIDR format that represents a network segment

Connecting networks


#Test connecting tomcat01 (on docker0) to mynet
 docker network connect mynet tomcat01
#After connecting, the container's address is added to that network
#One container, two addresses (verified below)
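Two hedged ways to verify the result (sketch, output omitted):

# tomcat01 now shows two interfaces: one on 172.17.0.0/16 (docker0) and one on 192.168.0.0/16 (mynet)
docker exec -it tomcat01 ip addr
# tomcat01 also appears under "Containers" in the mynet inspect output
docker network inspect mynet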


#tomcat01 can now reach it
[root@localhost ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.070 ms
#tomcat02 still cannot
[root@localhost ~]# docker exec -it tomcat02 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known

Conclusion: if a container needs to reach containers on another network, connect it with docker network connect.
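The reverse operation exists as well; a hedged sketch of detaching the container again:

# remove tomcat01 from mynet; it keeps its original docker0 address
docker network disconnect mynet tomcat01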

Hands-on: deploying a Redis cluster


# Create a dedicated network for the cluster
docker network create redis --subnet 172.38.0.0/16

# Generate a config directory and redis.conf for each of the six nodes
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
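A quick sanity check that the six config files were generated as intended (a hedged sketch):

ls /mydata/redis                           # expect node-1 ... node-6
cat /mydata/redis/node-1/conf/redis.conf   # port, cluster-* and announce settings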

#Redis container 1
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
    -v /mydata/redis/node-1/data:/data \
    -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 2
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
    -v /mydata/redis/node-2/data:/data \
    -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 3
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
    -v /mydata/redis/node-3/data:/data \
    -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 4
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
    -v /mydata/redis/node-4/data:/data \
    -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 5
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
    -v /mydata/redis/node-5/data:/data \
    -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 6
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
    -v /mydata/redis/node-6/data:/data \
    -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Generic form of the six commands above (substitute ${port} = 1..6)
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /mydata/redis/node-${port}/data:/data \
    -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
[root@localhost mydata]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                                                                      NAMES
65f2d4a0bee3   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   6 seconds ago    Up 5 seconds    0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp   redis-3
7723616e75ab   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp   redis-6
a230f670810a   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp   redis-5
05db3437fc9b   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp   redis-4
b725d91f31b9   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   23 minutes ago   Up 23 minutes   0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp   redis-2
66b4857a103c   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   23 minutes ago   Up 23 minutes   0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp   redis-1
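A hedged way to confirm that all six containers received their intended static addresses (sketch, output omitted):

docker network inspect redis   # "Containers" should list redis-1 ... redis-6
                               # with IPv4 addresses 172.38.0.11 through 172.38.0.16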
#Enter one of the containers
docker exec -it redis-1 /bin/sh
#Create the cluster
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 0c4d6d85ae339e2cd1ea83fb9970040ddabbb05e 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: a2d073ba19af2dad5d44c55cf9da144ab33977e4 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: c8de23ecf2760f3373fda9ad7294225742e23078 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 7d905318e026fbf999cc51b1895540d07d4fe138 172.38.0.14:6379
   replicates c8de23ecf2760f3373fda9ad7294225742e23078
S: 0a6f3d91e9d8790c44378ef6f20ade4cd0e55a6b 172.38.0.15:6379
   replicates 0c4d6d85ae339e2cd1ea83fb9970040ddabbb05e
S: 677503d6390c2d7cc1d0096cb3b83b1515d92f84 172.38.0.16:6379
   replicates a2d073ba19af2dad5d44c55cf9da144ab33977e4
Can I set the above configuration? (type 'yes' to accept):   # type yes to confirm
#Cluster created successfully
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# redis-cli -c
# cluster info
# cluster nodes
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:203
cluster_stats_messages_pong_sent:206
cluster_stats_messages_sent:409
cluster_stats_messages_ping_received:201
cluster_stats_messages_pong_received:203
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:409
127.0.0.1:6379> cluster nodes
1fe798431adf6929e7c4c42dfe68a06cf2afc8fc 172.38.0.12:6379@16379 master - 0 1651026759598 2 connected 5461-10922                              # master
8772dc6ea9377834bd6478dc705abe6fa3a655f1 172.38.0.14:6379@16379 slave 24b2f2cf14ad8edc16b336ec539ca38c615f030a 0 1651026759000 4 connected   # replica
002d5169ffa41b51d434a53a62388e039fabb3ad 172.38.0.16:6379@16379 slave 1fe798431adf6929e7c4c42dfe68a06cf2afc8fc 0 1651026758095 6 connected   # replica
5859376a31233072e1761c82a43a55d4a4508740 172.38.0.11:6379@16379 myself,master - 0 1651026758000 1 connected 0-5460                           # master
5edeb5a195a1ae0f0e9b54018312e4f129c5bdab 172.38.0.15:6379@16379 slave 5859376a31233072e1761c82a43a55d4a4508740 0 1651026760099 5 connected   # replica
24b2f2cf14ad8edc16b336ec539ca38c615f030a 172.38.0.13:6379@16379 master - 0 1651026759097 3 connected 10923-16383                             # master


/data # redis-cli -c
127.0.0.1:6379> set test02 ceshi
-> Redirected to slot [14163] located at 172.38.0.13:6379
OK

[root@VM-20-17-centos ~]# docker stop redis-3
redis-3
#Reconnect to the cluster: the previous session was redirected to 172.38.0.13, which we just stopped, so open a new redis-cli -c session and query again
/data # redis-cli -c
127.0.0.1:6379> get test02
-> Redirected to slot [14163] located at 172.38.0.14:6379
"ceshi"


From: https://www.cnblogs.com/yutoujun/p/16927552.html
