On 17.04.17 at 22:12, Richard Hesse wrote:
> A couple of questions:
>
> 1) What is your rack topology? Are all ceph nodes in the same rack
> communicating with the same top of rack switch?

The cluster is planned for one rack with two ToR/cluster-internal
switches. The cluster will be accessed from 2-3 machines mounted in the
same rack, which have uplinks to the outside world.

> 2) Why did you choose to run the ceph nodes on loopback interfaces as
> opposed to the /24 for the "public" interface?

The fabric needs loopback IP addresses, and the plan was/is to use them
directly for Ceph. What would you suggest instead?

> 3) Are you planning on using RGW at all?

No, there won't be any RGW. It is a plain RBD cluster, which will be
used for backup purposes.

Best Regards

Jan

> On Thu, Apr 13, 2017 at 10:57 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:
>
>     Hi,
>
>     I am currently working on Ceph with an underlying Clos IP fabric,
>     and I am hitting some issues.
>
>     The setup looks as follows: there are 3 Ceph nodes which are
>     running OSDs and MONs. Each server has one /32 loopback IP, which
>     it announces via BGP to its uplink switches. Besides the loopback
>     IP, each server has a management interface with a public (not to
>     be confused with Ceph's public network) IP address. For BGP, the
>     switches and servers are running quagga/frr.
>
>     Loopback IPs:
>
>     10.10.100.1  # switch1
>     10.10.100.2  # switch2
>     10.10.100.21 # server1
>     10.10.100.22 # server2
>     10.10.100.23 # server3
>
>     Ceph's public network is 10.10.100.0/24.
>
>     Here comes the current main problem: there are two options for
>     configuring the loopback address.
>
>     1.) Configure it on lo. In this case the routing works as intended,
>     but, as far as I found out, Ceph cannot be run on the lo interface.
>
>     root@server1:~# ip route get 10.10.100.22
>     10.10.100.22 via 169.254.0.1 dev enp4s0f1 src 10.10.100.21
>         cache
>
>     2.) Configure it on dummy0. In this case Ceph is able to start, but
>     quagga installs the learned routes with the wrong source address:
>     the public management address of each host. This results in network
>     problems, because Ceph uses the management IPs to communicate with
>     the other Ceph servers.
>
>     root@server1:~# ip route get 10.10.100.22
>     10.10.100.22 via 169.254.0.1 dev enp4s0f1 src a.b.c.d
>         cache
>
>     (where a.b.c.d is the machine's public IP address on its management
>     interface)
>
>     Has anyone already done something similar?
>
>     Please let me know if you need any further information. Any help
>     would really be appreciated.
>
>     Best Regards
>
>     Jan
>
>     --
>     Artfiles New Media GmbH | Zirkusweg 1 | 20359 Hamburg
>     Tel: 040 - 32 02 72 90 | Fax: 040 - 32 02 72 95
>     E-Mail: support@xxxxxxxxxxx | Web: http://www.artfiles.de
>     Geschäftsführer: Harald Oltmanns | Tim Evers
>     Eingetragen im Handelsregister Hamburg - HRB 81478

--
Artfiles New Media GmbH | Zirkusweg 1 | 20359 Hamburg
Tel: 040 - 32 02 72 90 | Fax: 040 - 32 02 72 95
E-Mail: support@xxxxxxxxxxx | Web: http://www.artfiles.de
Geschäftsführer: Harald Oltmanns | Tim Evers
Eingetragen im Handelsregister Hamburg - HRB 81478
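
For reference, one way the dummy0 variant described above is commonly wired up: the /32 is bound to a dummy interface, and zebra is told via a route-map with "set src" to install BGP-learned routes with the loopback as the preferred source address instead of the management address. The stanzas below are a minimal sketch for server1 only, assuming Debian-style ifupdown and quagga/frr; the route-map name SRC-LOOPBACK and the exact file layout are illustrative assumptions, not the configuration actually in use on the cluster.

    # /etc/network/interfaces (sketch) -- bind the /32 loopback IP to dummy0
    auto dummy0
    iface dummy0 inet static
        pre-up ip link add dummy0 type dummy 2>/dev/null || true
        address 10.10.100.21
        netmask 255.255.255.255

    # zebra.conf (sketch) -- rewrite the source address of routes that zebra
    # installs for BGP, so traffic to the other nodes leaves with the
    # loopback IP rather than the management address
    route-map SRC-LOOPBACK permit 10
     set src 10.10.100.21
    !
    ip protocol bgp route-map SRC-LOOPBACK

Note that "set src" only works with an address that is configured locally (which 10.10.100.21 is, on dummy0), and behaviour varies between quagga and frr releases, so treat this as a starting point rather than a verified fix.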
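On the Ceph side, a minimal ceph.conf sketch for the addressing described in the original mail, with 10.10.100.0/24 as the public network and the monitors reachable on the loopback /32s; the mon host list is inferred from the loopback table above, and everything beyond these addressing options is an assumption about the actual cluster configuration.

    # /etc/ceph/ceph.conf (sketch) -- only the addressing-related options
    [global]
    public network = 10.10.100.0/24
    mon host = 10.10.100.21, 10.10.100.22, 10.10.100.23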