Re: layer3 network

If, like me, you have several different networks, or they overlap for whatever reason, I just set the options:

mon addr = IP:port
osd addr = IP

in the relevant sections. However, I use puppet to deploy ceph, and all the files are created "manually".

So it becomes something like this:

[mon.mon1]
  host = mon1
  mon addr = x.y.z.a:6789
  mon data = "">[osd.0]
  host = dskh1
  osd addr = x.y.z.a
  osd data = "">  osd journal = /var/lib/ceph/osd/journal/osd-0
  keyring = /var/lib/ceph/osd/osd-0/keyring
  osd max backfills = 1
  osd recovery max active = 1
  osd recovery op priority = 1
  osd client op priority = 63
  osd disk thread ioprio class = idle
  osd disk thread ioprio priority = 7
[osd.1]
....

Naturally, the host and addr parts are the ones that are correct in our environment.
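
With entries like these, each daemon binds to the address you gave it. A quick way to verify that from a node (just a sketch, assuming standard Linux tooling and the x.y.z.a placeholder above) is:

  ss -tnp | grep ceph-osd
  # the local-address column should show x.y.z.a rather than an interface address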

On 7 July 2016 at 11:36, Nick Fisk <nick@xxxxxxxxxx> wrote:
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Matyas Koszik
> Sent: 07 July 2016 11:26
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: layer3 network
>
>
>
> Hi,
>
> My setup uses a layer3 network, where each node has two connections (/31s) and is equipped with a loopback address; redundancy is
> provided via OSPF. In this setup it is important to use the loopback address as the source for outgoing connections, since the
> interface addresses are not protected from failure, but the loopback address is.
>
> So I set the public addr and the cluster addr to the desired IP, but it seems that the outgoing connections do not use this as the
> source address.
> I'm using jewel; is this the expected behavior?

Do your public/cluster networks overlap the physical connection IPs? From what I understand, Ceph binds to the interface whose IP
lies within the range specified in the conf file.

So, for example, if public network = 192.168.1.0/24

Then your loopback should be in that range, but you must make sure the addresses on the physical NICs lie outside it.
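
A minimal sketch of that layout (the subnet and addresses below are made-up placeholders):

  [global]
    public network = 192.168.1.0/24

  # on each node, the loopback carries an address inside that range:
  #   ip addr add 192.168.1.10/32 dev lo
  # while the point-to-point /31 links use addresses outside it, e.g.:
  #   ip addr add 10.0.0.0/31 dev eth0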

I'm following this with interest as I am about to deploy something very similar.

>
> Matyas




--

Luis Periquito

Unix Team Lead




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
