Re: Ceph with Clos IP fabric

Hi,

 

>2) Why did you choose to run the ceph nodes on loopback interfaces as opposed to the /24 for the "public" interface?

I can’t speak for this example, but in a Clos fabric you generally want to assign the routed IPs to a loopback rather than to physical interfaces. This way, if one of the links goes down (e.g. the public interface), the routed IP is still advertised over the other link(s).
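For illustration only, assigning the routed /32 to the loopback on a Debian-style host could look like the snippet below (ifupdown syntax; the host and address are taken from Jan's list further down, the alias name is an assumption):

auto lo
iface lo inet loopback

auto lo:1
iface lo:1 inet static
    address 10.10.100.21
    netmask 255.255.255.255

The fabric-facing links then carry no routed addresses of their own, so losing one of them does not take the /32 away.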

 

> I found out, Ceph can not be run on lo interface

Jan, do you have more info about that? I only found this in the doc: http://docs.ceph.com/docs/hammer/start/quick-start-preflight/#ensure-connectivity

 

Cheers,

Maxime

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Richard Hesse <richard.hesse@xxxxxxxxxx>
Date: Monday 17 April 2017 22:12
To: Jan Marquardt <jm@xxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Ceph with Clos IP fabric

 

A couple of questions:

 

1) What is your rack topology? Are all Ceph nodes in the same rack, communicating with the same top-of-rack switch?

 

2) Why did you choose to run the ceph nodes on loopback interfaces as opposed to the /24 for the "public" interface?

 

3) Are you planning on using RGW at all?

 

On Thu, Apr 13, 2017 at 10:57 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:

Hi,

I am currently working on Ceph with an underlying Clos IP fabric and I
am hitting some issues.

The setup looks as follows: There are 3 Ceph nodes which are running
OSDs and MONs. Each server has one /32 loopback IP, which it announces
via BGP to its uplink switches. Besides the loopback IP, each server
has a management interface with a public (not to be confused with
Ceph's public network) IP address. For BGP, both the switches and the
servers are running quagga/frr.
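As a rough sketch (not the actual config; the AS number and the
enp4s0f0 interface name are assumptions), the bgpd side on server1
could look like this, announcing its loopback address from the list
below over unnumbered (RFC 5549) sessions:

router bgp 65021
 bgp router-id 10.10.100.21
 neighbor enp4s0f0 interface remote-as external
 neighbor enp4s0f1 interface remote-as external
 !
 address-family ipv4 unicast
  network 10.10.100.21/32
 exit-address-family

The 169.254.0.1 next hops in the outputs further down are what such
unnumbered sessions typically install.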

Loopback IPs:

10.10.100.1     # switch1
10.10.100.2     # switch2
10.10.100.21    # server1
10.10.100.22    # server2
10.10.100.23    # server3

Ceph's public network is 10.10.100.0/24.
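On the Ceph side this maps to the usual ceph.conf settings, roughly
(the mon section and port are illustrative, not copied verbatim):

[global]
public network = 10.10.100.0/24

[mon.server1]
host = server1
mon addr = 10.10.100.21:6789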

This is the current main problem: there are two options for
configuring the loopback address.

1.) Configure it on lo. In this case the routing works as intended,
but, as far as I found out, Ceph cannot be run on the lo interface.

root@server1:~# ip route get 10.10.100.22
10.10.100.22 via 169.254.0.1 dev enp4s0f1  src 10.10.100.21
    cache

2.) Configure it on dummy0. In this case Ceph is able to start, but
quagga installs the learned routes with the wrong source address - the
public management address of each host. This results in network
problems, because Ceph uses the management IPs to communicate with the
other Ceph servers.

root@server1:~# ip route get 10.10.100.22
10.10.100.22 via 169.254.0.1 dev enp4s0f1  src a.b.c.d
    cache

(where a.b.c.d is the machine's public IP address on its management
interface)
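One related knob is zebra's per-protocol route-map, which can override
the source address of installed routes; a sketch for server1 (untested
here, so treat it as an assumption that it applies to this setup):

route-map SET-SRC permit 10
 set src 10.10.100.21
!
ip protocol bgp route-map SET-SRC

Whether that plays well with the dummy0 approach is an open question.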

Has anyone already done something similar?

Please let me know if you need any further information. Any help would
really be appreciated.

Best Regards

Jan

--
Artfiles New Media GmbH | Zirkusweg 1 | 20359 Hamburg
Tel: 040 - 32 02 72 90 | Fax: 040 - 32 02 72 95
E-Mail: support@xxxxxxxxxxx | Web: http://www.artfiles.de
Managing directors: Harald Oltmanns | Tim Evers
Registered in the Hamburg commercial register - HRB 81478


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 

