linux network namespace and bridge


netns


I've been studying Kubernetes networking lately, and *Kubernetes: The Definitive Guide* touches on how the underlying network is implemented, so this is a re-learning pass over the network namespace isolation technique. The `ip` binary ships in the iproute2 package by default:

vagrant@ubuntu-xenial:~$ apt-file search $(which ip)
cups-ipp-utils: /usr/sbin/ippserver
freeipa-client: /usr/sbin/ipa-certupdate
freeipa-client: /usr/sbin/ipa-client-automount
freeipa-client: /usr/sbin/ipa-client-install
freeipa-client: /usr/sbin/ipa-getkeytab
freeipa-client: /usr/sbin/ipa-join
freeipa-client: /usr/sbin/ipa-rmkeytab
iproute2: /sbin/ip

By default no network namespaces exist, so `ip netns ls` prints nothing.

vagrant@ubuntu-xenial:~$ ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id

A newly created netns gets a file of the same name under /var/run/netns/:

vagrant@ubuntu-xenial:~$ sudo ip netns add xiemx1
vagrant@ubuntu-xenial:~$ sudo ip netns add xiemx2
vagrant@ubuntu-xenial:~$ sudo ip netns ls
xiemx2
xiemx1
vagrant@ubuntu-xenial:~$ ll /var/run/netns/
total 0
drwxr-xr-x 2 root root 80 Jan 21 03:19 ./
drwxr-xr-x 28 root root 1140 Jan 21 03:19 ../
-r--r--r-- 1 root root 0 Jan 21 03:19 xiemx1
-r--r--r-- 1 root root 0 Jan 21 03:19 xiemx2

Because netns are isolated from each other, inspecting a namespace's network devices or routing table requires opening a subshell inside it with `ip netns exec <netns name> bash`; alternatively, a single command can be run directly:

vagrant@ubuntu-xenial:~$ sudo ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 02:97:71:8a:f0:d8 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::97:71ff:fe8a:f0d8/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:f1:22:f6 brd ff:ff:ff:ff:ff:ff
inet 10.110.120.65/24 brd 10.110.120.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fef1:22f6/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:da:5a:39:42 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

vagrant@ubuntu-xenial:~$ sudo ip net exec xiemx1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

vagrant@ubuntu-xenial:~$ sudo ip net exec xiemx2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

Every namespace automatically gets a lo device at creation time, in the DOWN state by default; remember to bring it UP if you need loopback:

vagrant@ubuntu-xenial:~$ sudo ip net exec xiemx1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
vagrant@ubuntu-xenial:~$ sudo ip netns exec xiemx1 ip link set lo up
vagrant@ubuntu-xenial:~$ sudo ip netns exec xiemx1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

veth pair


netns are isolated from one another, so Linux provides the veth device pair for traffic between different netns. veth devices always come in pairs, like a network cable plugged into two isolated namespaces, connecting the two isolated networks.

A veth pair is created with `ip link add <name1> type veth peer name <name2>`. Keep in mind that the two ends of a veth pair cannot exist on their own: deleting one automatically removes the other.

vagrant@ubuntu-xenial:~$ sudo ip netns exec xiemx1 bash
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
root@ubuntu-xenial:~# ip link add xiemx-veth1 type veth peer name xiemx-veth2
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: xiemx-veth2@xiemx-veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff
5: xiemx-veth1@xiemx-veth2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff

#### If the names don't matter, the short form below creates a default veth0/veth1 pair
root@ubuntu-xenial:~# ip link add type veth
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: xiemx-veth2@xiemx-veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff
5: xiemx-veth1@xiemx-veth2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff
6: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff
7: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ea:37:ea:92:a5:c5 brd ff:ff:ff:ff:ff:ff

Assign IP addresses to the veth pair and bring both ends up:

root@ubuntu-xenial:~# ip netns exec xiemx1 bash
root@ubuntu-xenial:~# ip add add 10.0.0.1/24 dev xiemx-veth1
root@ubuntu-xenial:~# ip add add 10.0.0.2/24 dev xiemx-veth2
root@ubuntu-xenial:~# ip link set xiemx-veth1 up
root@ubuntu-xenial:~# ip link set xiemx-veth2 up
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: xiemx-veth2@xiemx-veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 scope global xiemx-veth2
valid_lft forever preferred_lft forever
inet6 fe80::a476:6fff:fe47:e1f9/64 scope link
valid_lft forever preferred_lft forever
5: xiemx-veth1@xiemx-veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global xiemx-veth1
valid_lft forever preferred_lft forever
inet6 fe80::406a:cbff:fe19:d2a/64 scope link
valid_lft forever preferred_lft forever
6: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff
7: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ea:37:ea:92:a5:c5 brd ff:ff:ff:ff:ff:ff

root@ubuntu-xenial:~# ip route
10.0.0.0/24 dev xiemx-veth1 proto kernel scope link src 10.0.0.1
10.0.0.0/24 dev xiemx-veth2 proto kernel scope link src 10.0.0.2

Right now both ends of every veth pair sit in the xiemx1 netns. Move one end into xiemx2 to connect the two namespaces:

root@ubuntu-xenial:~# ip link set xiemx-veth2 netns xiemx2
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
5: xiemx-veth1@if4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.1/24 scope global xiemx-veth1
valid_lft forever preferred_lft forever
inet6 fe80::406a:cbff:fe19:d2a/64 scope link
valid_lft forever preferred_lft forever
6: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff
7: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ea:37:ea:92:a5:c5 brd ff:ff:ff:ff:ff:ff
root@ubuntu-xenial:~# ip netns exec xiemx2 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: xiemx-veth2@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@ubuntu-xenial:~# ip netns exec xiemx2 ifconfig xiemx-veth2 up
root@ubuntu-xenial:~# ip netns exec xiemx2 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: xiemx-veth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::a476:6fff:fe47:e1f9/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu-xenial:~# ip netns exec xiemx2 ip add add 10.0.0.2/24 dev xiemx-veth2
root@ubuntu-xenial:~# ip netns exec xiemx2 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: xiemx-veth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.2/24 scope global xiemx-veth2
valid_lft forever preferred_lft forever
inet6 fe80::a476:6fff:fe47:e1f9/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu-xenial:~# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.046 ms
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.046/0.064/0.083/0.020 ms
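The whole walkthrough above condenses into a short script. This is only a sketch: it assumes root privileges and iproute2, and the names demo1, demo2, veth-a and veth-b are placeholders, not the names from the transcripts.

```shell
#!/bin/sh
# Sketch of the veth-pair walkthrough above.
# Assumes root and iproute2; all names here are illustrative.

veth_demo_up() {
    ip netns add demo1
    ip netns add demo2
    # Create the pair in the current namespace, then move one end each.
    ip link add veth-a type veth peer name veth-b
    ip link set veth-a netns demo1
    ip link set veth-b netns demo2
    # Address both ends and bring them up (lo starts DOWN as well).
    ip netns exec demo1 ip addr add 10.0.0.1/24 dev veth-a
    ip netns exec demo2 ip addr add 10.0.0.2/24 dev veth-b
    ip netns exec demo1 ip link set lo up
    ip netns exec demo1 ip link set veth-a up
    ip netns exec demo2 ip link set lo up
    ip netns exec demo2 ip link set veth-b up
    # A single ping shows the two namespaces can now talk.
    ip netns exec demo1 ping -c 1 10.0.0.2
}

veth_demo_down() {
    # Deleting a namespace removes the veth end inside it, and a
    # veth device cannot outlive its peer, so this cleans up fully.
    ip netns delete demo1
    ip netns delete demo2
}
```

Run `veth_demo_up` as root; it should finish with a successful one-packet ping. `veth_demo_down` tears everything down.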

bridge


The Linux kernel supports bridging network interfaces. Unlike a traditional hardware bridge, however, a Linux bridge is not just a layer-2 device that forwards frames: since upper-layer applications running on the Linux host itself may be a packet's final destination, the bridge must also be able to deliver packets up into the Linux network stack.

Docker's bridge network can be viewed this way: a bridge aggregates one end of each veth pair, and the other end is placed into the container process's namespace, providing both network isolation and interconnection. Packets are then forwarded onward via iptables, which is not covered here.

A manual simulation looks roughly like this:

### Add a bridge (imagine it as the docker bridge)
root@ubuntu-xenial:~# ip netns exec xiemx1 bash
root@ubuntu-xenial:~# brctl show
bridge name bridge id STP enabled interfaces
root@ubuntu-xenial:~# brctl addbr xiemx-br
root@ubuntu-xenial:~# brctl show
bridge name bridge id STP enabled interfaces
xiemx-br 8000.000000000000 no

### Attach one end of the veth pair to the bridge. Traffic goes through the bridge, so this end acts purely as a layer-2 device and needs no IP
root@ubuntu-xenial:~# brctl addif xiemx-br veth0
root@ubuntu-xenial:~# brctl show
bridge name bridge id STP enabled interfaces
xiemx-br 8000.3e72e3482569 no veth0

### Move the other end of the veth pair into another netns; think of it as eth0 inside a container
root@ubuntu-xenial:~# ip link set veth1 netns xiemx2
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
5: xiemx-veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.1/24 scope global xiemx-veth1
valid_lft forever preferred_lft forever
inet6 fe80::406a:cbff:fe19:d2a/64 scope link
valid_lft forever preferred_lft forever
6: veth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master xiemx-br state DOWN group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: xiemx-br: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff
inet 11.0.0.1/24 brd 11.0.0.255 scope global xiemx-br
valid_lft forever preferred_lft forever

root@ubuntu-xenial:~# ip netns exec xiemx2 bash
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: xiemx-veth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.2/24 scope global xiemx-veth2
valid_lft forever preferred_lft forever
inet6 fe80::a476:6fff:fe47:e1f9/64 scope link
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ea:37:ea:92:a5:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0

### Assign veth1 an IP address and bring the device up
root@ubuntu-xenial:~# ip add add 11.0.0.2/24 dev veth1
root@ubuntu-xenial:~# ifconfig veth1 up
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: xiemx-veth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:76:6f:47:e1:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.2/24 scope global xiemx-veth2
valid_lft forever preferred_lft forever
inet6 fe80::a476:6fff:fe47:e1f9/64 scope link
valid_lft forever preferred_lft forever
7: veth1@if6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether ea:37:ea:92:a5:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 11.0.0.2/24 scope global veth1
valid_lft forever preferred_lft forever

### Assign the bridge an IP and bring veth0 up
root@ubuntu-xenial:~# ip netns exec xiemx1 bash
root@ubuntu-xenial:~# ifconfig xiemx-br 11.0.0.1/24
root@ubuntu-xenial:~# ifconfig veth0 up
root@ubuntu-xenial:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
5: xiemx-veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 42:6a:cb:19:0d:2a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.1/24 scope global xiemx-veth1
valid_lft forever preferred_lft forever
inet6 fe80::406a:cbff:fe19:d2a/64 scope link
valid_lft forever preferred_lft forever
6: veth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master xiemx-br state UP group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::3c72:e3ff:fe48:2569/64 scope link
valid_lft forever preferred_lft forever
8: xiemx-br: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3e:72:e3:48:25:69 brd ff:ff:ff:ff:ff:ff
inet 11.0.0.1/24 brd 11.0.0.255 scope global xiemx-br
valid_lft forever preferred_lft forever
inet6 fe80::3c72:e3ff:fe48:2569/64 scope link
valid_lft forever preferred_lft forever

### Test connectivity
root@ubuntu-xenial:~# ping 11.0.0.2
PING 11.0.0.2 (11.0.0.2) 56(84) bytes of data.
64 bytes from 11.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
64 bytes from 11.0.0.2: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 11.0.0.2: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 11.0.0.2: icmp_seq=4 ttl=64 time=0.047 ms
64 bytes from 11.0.0.2: icmp_seq=5 ttl=64 time=0.051 ms
^C
--- 11.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 0.047/0.087/0.244/0.078 ms
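For reference, newer iproute2 can do everything brctl does here, so the bridge demo can also be scripted without bridge-utils at all. Again just a sketch under the same assumptions (root required; demo-br, demo-ns and the br-veth names are made up for illustration):

```shell
#!/bin/sh
# Sketch: the brctl-based bridge demo, redone with iproute2 only.
# Assumes root and iproute2; all names are illustrative.

bridge_demo() {
    # 'brctl addbr' equivalent, plus an IP so the host can use it.
    ip link add demo-br type bridge
    ip addr add 11.0.1.1/24 dev demo-br
    ip link set demo-br up

    # veth pair: one end enslaved to the bridge ('brctl addif'),
    # the other moved into a namespace, like a container's eth0.
    ip netns add demo-ns
    ip link add br-veth0 type veth peer name br-veth1
    ip link set br-veth0 master demo-br
    ip link set br-veth0 up
    ip link set br-veth1 netns demo-ns
    ip netns exec demo-ns ip addr add 11.0.1.2/24 dev br-veth1
    ip netns exec demo-ns ip link set br-veth1 up

    # The host reaches the namespace through the bridge.
    ping -c 1 11.0.1.2

    # Cleanup: the bridge and the namespace take the veths with them.
    ip netns delete demo-ns
    ip link delete demo-br
}
```

`ip link set <dev> master <bridge>` is the iproute2 counterpart of `brctl addif`; calling `bridge_demo` as root should end with a successful ping into the namespace and then clean everything up.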