Thanks for the reply. My ceph.conf:

[global]
        auth client required = none
        auth cluster required = none
        auth service required = none
        bluestore_block_db_size = 64424509440
        cluster network = 10.10.10.0/24
        fsid = 24d5d6bc-0943-4345-b44e-46c19099004b
        keyring = /etc/pve/priv/$cluster.$name.keyring
        mon allow pool delete = true
        osd journal size = 5120
        osd pool default min size = 2
        osd pool default size = 3
        public network = 10.10.10.0/24

[client]
        rbd cache = true
        rbd cache max dirty = 134217728
        rbd cache max dirty age = 2
        rbd cache size = 268435456
        rbd cache target dirty = 67108864
        rbd cache writethrough until flush = true

[osd]
        keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve-hs-3]
        host = pve-hs-3
        mon addr = 10.10.10.253:6789

[mon.pve-hs-main]
        host = pve-hs-main
        mon addr = 10.10.10.251:6789

[mon.pve-hs-2]
        host = pve-hs-2
        mon addr = 10.10.10.252:6789
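As a sanity check on the byte-valued settings above, here is a small Python snippet (my own helper, not part of Ceph) that converts them to human-readable units:

```python
# Byte-valued settings copied from the ceph.conf above.
SETTINGS = {
    "bluestore_block_db_size": 64424509440,
    "rbd cache size": 268435456,
    "rbd cache max dirty": 134217728,
    "rbd cache target dirty": 67108864,
}

def human(n):
    """Render a byte count in binary units (KiB/MiB/GiB...)."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024:
            return f"{n:g} {unit}"
        n /= 1024
    return f"{n:g} PiB"

for name, value in SETTINGS.items():
    print(f"{name}: {human(value)}")
```

So the DB partition is 60 GiB and the rbd cache is 256 MiB with a 128 MiB dirty limit and a 64 MiB writeback target, which matches what I intended.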
Each node has two ethernet cards in an LACP bond on the 10.10.10.x network:

auto bond0
iface bond0 inet static
        address 10.10.10.252
        netmask 255.255.255.0
        slaves enp4s0 enp4s1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4
#CLUSTER BOND

The LAG on the switch (TP-Link TL-SG2008) is enabled; I see from "show run":

#
interface gigabitEthernet 1/0/1
  channel-group 4 mode active
#
interface gigabitEthernet 1/0/2
  channel-group 4 mode active
#
interface gigabitEthernet 1/0/3
  channel-group 2 mode active
#
interface gigabitEthernet 1/0/4
  channel-group 2 mode active
#
interface gigabitEthernet 1/0/5
  channel-group 3 mode active
#
interface gigabitEthernet 1/0/6
  channel-group 3 mode active
#
interface gigabitEthernet 1/0/7
#
interface gigabitEthernet 1/0/8
Node 1 is on ports 1 and 2, node 2 on ports 3 and 4, node 3 on ports 5 and 6.
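For what it's worth, with bond_xmit_hash_policy layer3+4 each flow is pinned to one slave. This is a simplified model of the hash described in the kernel's bonding documentation (the exact in-kernel arithmetic may differ by version; this is only an illustration, not the driver code):

```python
import ipaddress

def l34_slave(src_ip, src_port, dst_ip, dst_port, n_slaves=2):
    """Simplified model of the layer3+4 transmit hash from
    Documentation/networking/bonding.txt: hash the port pair and the
    IP pair, fold, then take modulo the slave count. One flow always
    maps to the same slave; different flows can use different slaves."""
    h = src_port ^ dst_port
    h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= h >> 16
    h ^= h >> 8
    return h % n_slaves

# Two client connections to the mon on pve-hs-main from different
# source ports: each is deterministic, but they can land on
# different slaves of the bond.
print(l34_slave("10.10.10.252", 50000, "10.10.10.251", 6789))
print(l34_slave("10.10.10.252", 50001, "10.10.10.251", 6789))
```

So a single TCP stream never exceeds 1 Gbit/s here; only multiple parallel flows can fill both links.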
Routing table, shown with "ip -4 route show table all":

default via 192.168.2.1 dev vmbr0 onlink
10.10.10.0/24 dev bond0 proto kernel scope link src 10.10.10.252
192.168.1.0/24 dev vmbr1 proto kernel scope link src 192.168.1.252 linkdown
192.168.2.0/24 dev vmbr0 proto kernel scope link src 192.168.2.252
broadcast 10.10.10.0 dev bond0 table local proto kernel scope link src 10.10.10.252
local 10.10.10.252 dev bond0 table local proto kernel scope host src 10.10.10.252
broadcast 10.10.10.255 dev bond0 table local proto kernel scope link src 10.10.10.252
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 192.168.1.0 dev vmbr1 table local proto kernel scope link src 192.168.1.252 linkdown
local 192.168.1.252 dev vmbr1 table local proto kernel scope host src 192.168.1.252
broadcast 192.168.1.255 dev vmbr1 table local proto kernel scope link src 192.168.1.252 linkdown
broadcast 192.168.2.0 dev vmbr0 table local proto kernel scope link src 192.168.2.252
local 192.168.2.252 dev vmbr0 table local proto kernel scope host src 192.168.2.252
broadcast 192.168.2.255 dev vmbr0 table local proto kernel scope link src 192.168.2.252
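To double-check that traffic for the 10.10.10.0/24 public/cluster network leaves via bond0 rather than the default route, the main-table lookup can be modeled as a longest-prefix match (a toy sketch, not how the kernel stores routes):

```python
import ipaddress

# Main-table routes from "ip -4 route show" above: (prefix, device).
ROUTES = [
    ("0.0.0.0/0", "vmbr0"),        # default via 192.168.2.1
    ("10.10.10.0/24", "bond0"),
    ("192.168.1.0/24", "vmbr1"),
    ("192.168.2.0/24", "vmbr0"),
]

def egress_device(dst):
    """Pick the matching route with the longest prefix, as the
    kernel's main-table lookup would."""
    dst = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p, _ in ROUTES
         if dst in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return dict(ROUTES)[str(best)]

print(egress_device("10.10.10.251"))  # mon on pve-hs-main -> bond0
print(egress_device("8.8.8.8"))       # falls through to the default route
```

So the monitors at 10.10.10.25x are reached over the bond, and everything else goes out vmbr0.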
Network configuration:

$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    inet 192.168.1.252/24 brd 192.168.1.255 scope global vmbr1
       valid_lft forever preferred_lft forever
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.10.10.252/24 brd 10.10.10.255 scope global bond0
       valid_lft forever preferred_lft forever
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.2.252/24 brd 192.168.2.255 scope global vmbr0
       valid_lft forever preferred_lft forever

$ ip -4 link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 40:8d:5c:b0:2d:fe brd ff:ff:ff:ff:ff:ff
3: enp4s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 98:de:d0:1d:75:4a brd ff:ff:ff:ff:ff:ff
4: enp4s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 98:de:d0:1d:75:4a brd ff:ff:ff:ff:ff:ff
6: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 98:de:d0:1d:75:4a brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 40:8d:5c:b0:2d:fe brd ff:ff:ff:ff:ff:ff
9: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether b2:47:55:9f:d3:0b brd ff:ff:ff:ff:ff:ff
11: veth103i0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:03:27:0d:02:38 brd ff:ff:ff:ff:ff:ff link-netnsid 0
13: veth106i0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:ce:4f:09:24:45 brd ff:ff:ff:ff:ff:ff link-netnsid 1
14: tap109i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 3a:f0:99:3f:6a:75 brd ff:ff:ff:ff:ff:ff
15: tap201i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 16:99:8a:56:6d:7f brd ff:ff:ff:ff:ff:ff
I think that's everything. Thanks
On 23/10/2017 15:42, Denes Dolhay wrote:
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com