Re: Ceph with Clos IP fabric

Hi,

>>2) Why did you choose to run the ceph nodes on loopback interfaces as opposed to the /24 for the "public" interface?
> 
> I can’t speak for this example, but in a Clos fabric you generally want
> to assign the routed IPs to a loopback rather than to physical interfaces.
> This way, if one of the links goes down (e.g. the public interface), the
> routed IP is still advertised on the other link(s).

Maxime, thank you for clarifying this. Each server is configured like this:

lo/dummy0: loopback interface; holds the IP address used by Ceph,
which is announced via BGP into the fabric.

enp5s0: management interface, used only for managing the box. There
should not be any Ceph traffic on this one.

enp3s0f0: connected to sw01 and used for BGP
enp3s0f1: connected to sw02 and used for BGP
enp4s0f0: connected to sw01 and used for BGP
enp4s0f1: connected to sw02 and used for BGP

These four interfaces are supposed to carry the Ceph traffic.
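
For reference, this is roughly how it could look in
/etc/network/interfaces on Debian. The address and interface names are
the ones from this thread; the rest is a sketch, not necessarily our
exact config:

# dummy0 carries the /32 that Ceph binds to and that BGP announces
auto dummy0
iface dummy0 inet static
    pre-up ip link add dummy0 type dummy || true
    address 10.10.100.21/32

# one of the four unnumbered fabric uplinks; BGP runs on top of it
auto enp3s0f0
iface enp3s0f0 inet manual
    up ip link set $IFACE up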

>> I found out Ceph can not be run on the lo interface
> 
> Jan, do you have more info about this? I only found this in the docs:
> http://docs.ceph.com/docs/hammer/start/quick-start-preflight/#ensure-connectivity

The OSDs are not able to start if the IP is configured on lo instead of
dummy0:

2017-04-20 12:47:01.684929 7fe671233a40 -1 unable to find any IP address
in networks: 10.10.100.0/24

When I asked about this on the Ceph IRC channel, I was told that Ceph
is not able to use lo.
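
For context, the error above means the OSD could not find an address
inside Ceph's public network on any interface it considers usable, and
lo is apparently skipped. With the /32 on dummy0, a ceph.conf along
these lines starts fine (a sketch, using the values from this thread):

[global]
# Ceph looks for a local address inside this network at startup;
# it has to be configured on a non-lo interface (dummy0 in our case)
public network = 10.10.100.0/24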

> Cheers,
> 
> Maxime

Regards,

Jan


>  
> 
> *From: *ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of
> Richard Hesse <richard.hesse@xxxxxxxxxx>
> *Date: *Monday 17 April 2017 22:12
> *To: *Jan Marquardt <jm@xxxxxxxxxxx>
> *Cc: *"ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
> *Subject: *Re:  Ceph with Clos IP fabric
> 
>  
> 
> A couple of questions:
> 
>  
> 
> 1) What is your rack topology? Are all ceph nodes in the same rack
> communicating with the same top of rack switch?
> 
>  
> 
> 2) Why did you choose to run the ceph nodes on loopback interfaces as
> opposed to the /24 for the "public" interface?
> 
>  
> 
> 3) Are you planning on using RGW at all?
> 
>  
> 
> On Thu, Apr 13, 2017 at 10:57 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:
> 
>     Hi,
> 
>     I am currently working on Ceph with an underlying Clos IP fabric and I
>     am hitting some issues.
> 
>     The setup looks as follows: There are 3 Ceph nodes which are running
>     OSDs and MONs. Each server has one /32 loopback IP, which it announces
>     via BGP to its uplink switches. Besides the loopback IP, each server
>     has a management interface with a public (not to be confused with
>     Ceph's public network) IP address. For BGP, both switches and servers
>     are running quagga/frr.
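
(To add a bit of detail to my original mail: the BGP part on each
server looks roughly like this in frr, assuming BGP unnumbered on the
four uplinks; the ASN and router-id shown here are examples, not our
exact config:

router bgp 65021
 bgp router-id 10.10.100.21
 neighbor enp3s0f0 interface remote-as external
 neighbor enp3s0f1 interface remote-as external
 neighbor enp4s0f0 interface remote-as external
 neighbor enp4s0f1 interface remote-as external
 !
 address-family ipv4 unicast
  network 10.10.100.21/32
 exit-address-family
)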
> 
>     Loopback ips:
> 
>     10.10.100.1     # switch1
>     10.10.100.2     # switch2
>     10.10.100.21    # server1
>     10.10.100.22    # server2
>     10.10.100.23    # server3
> 
>     Ceph's public network is 10.10.100.0/24.
> 
>     Here comes the current main problem: There are two options for
>     configuring the loopback address.
> 
>     1.) Configure it on lo. In this case the routing works as intended,
>     but, as far as I found out, Ceph can not be run on the lo interface.
> 
>     root@server1:~# ip route get 10.10.100.22
>     10.10.100.22 via 169.254.0.1 dev enp4s0f1  src 10.10.100.21
>         cache
> 
>     2.) Configure it on dummy0. In this case Ceph is able to start, but
>     quagga installs learned routes with the wrong source address - the
>     public management address of each host. This results in network
>     problems, because Ceph uses the management IPs to communicate with
>     the other Ceph servers.
> 
>     root@server1:~# ip route get 10.10.100.22
>     10.10.100.22 via 169.254.0.1 dev enp4s0f1  src a.b.c.d
>         cache
> 
>     (where a.b.c.d is the machine's public ip address on its management
>     interface)
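
(For what it's worth, the usual knob for this in quagga/frr seems to be
a route-map that rewrites the source address of the routes zebra
installs for BGP; a sketch for server1, untested on our side, and the
route-map name is made up:

route-map SET_SRC permit 10
 set src 10.10.100.21
!
ip protocol bgp route-map SET_SRC
)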
> 
>     Has anyone done something similar already?
> 
>     Please let me know if you need any further information. Any help
>     would really be appreciated.
> 
>     Best Regards
> 
>     Jan
> 

-- 
Artfiles New Media GmbH | Zirkusweg 1 | 20359 Hamburg
Tel: 040 - 32 02 72 90 | Fax: 040 - 32 02 72 95
E-Mail: support@xxxxxxxxxxx | Web: http://www.artfiles.de
Geschäftsführer: Harald Oltmanns | Tim Evers
Eingetragen im Handelsregister Hamburg - HRB 81478


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
