Re: multi site with cephfs


 



 

We run CephFS in a limited fashion in a stretched cluster spanning about 40 km, with redundant 10G fibre between the sites; link latency is on the order of 1-2 ms.  Performance is reasonable for our usage but noticeably slower than comparable local Ceph-backed RBD shares.

 

Essentially we just set up the Ceph pools behind CephFS to have replicas on each site.  To export it we simply use the Linux kernel NFS server, and it gets exported from 4 hosts that act as CephFS clients.  Those 4 hosts are listed in a single DNS record that resolves to all 4 IPs, and we use automount on the NFS clients to handle automatic mounting and host failover.  Automount takes care of finding the quickest available NFS server.
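For reference, the automount piece can be sketched with a map along these lines; the hostnames, export path, and mount options here are hypothetical, not taken from our actual setup:

```shell
# /etc/auto.master (hypothetical): hand lookups under /data to the map below
/data /etc/auto.cephfs-nfs

# /etc/auto.cephfs-nfs (hypothetical): list all 4 NFS gateways as replicated
# servers; autofs probes them and mounts from the closest responsive one,
# which is what gives you the "quickest available server" behaviour.
shared -fstype=nfs,hard,intr nfs1,nfs2,nfs3,nfs4:/export/cephfs
```

Note that autofs replicated-server selection happens at mount time; it is failover on mount, not transparent mid-session failover.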

 

I stress this is a limited setup that we use for some fairly light-duty workloads, but we are looking to move things like user home directories onto it.  YMMV.

 

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Up Safe
Sent: Monday, 21 May 2018 5:36 PM
To: David Turner <drakonstein@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] multi site with cephfs

 

Hi,

Can you be a bit more specific?

I need to understand whether this is doable at all.

Other options would be using Ganesha, but I understand it's very limited as an NFS gateway;

or starting to look at Gluster.

 

Basically, I need the multi-site option, i.e. active-active read-write.

 

Thanks

 

On Wed, May 16, 2018 at 5:50 PM, David Turner <drakonstein@xxxxxxxxx> wrote:

Object storage multi-site is very specific to object storage.  It uses the RGW APIs to sync S3 uploads between sites.  For CephFS you might be able to sync the underlying RADOS pools, but I don't think that's actually a thing yet.  RBD mirror is likewise a layer on top of RBD that syncs between sites.  Basically, I think you need to do something on top of the filesystem, as opposed to within Ceph, to sync it between sites.
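As a crude illustration of doing "something on top of the filesystem", a scheduled one-way rsync between the two sites' CephFS mounts is about the simplest option; the hostname and paths below are hypothetical, and note this gives you active-passive replication, not the active-active setup being asked about:

```shell
# /etc/cron.d/cephfs-sync (hypothetical): every 15 minutes, push changes from
# site A's CephFS mount to site B's gateway host over SSH.
# -a preserves permissions/ownership/timestamps, -H preserves hard links,
# --delete propagates removals; trailing slash on the source syncs contents.
*/15 * * * * root rsync -aH --delete /mnt/cephfs/ siteb-gw:/mnt/cephfs/
```

This only works safely if site B is treated as read-only; writing on both sides with a one-way sync will silently lose data.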

 

On Wed, May 16, 2018 at 9:51 AM Up Safe <upandsafe@xxxxxxxxx> wrote:

But this is not the question here.

The question is whether I can configure multi-site for CephFS.

Will I be able to do so by following the guide for setting up multi-site for object storage?

 

Thanks

 

On Wed, May 16, 2018, 16:45 John Hearns <hearnsj@xxxxxxxxxxxxxx> wrote:

The answer given at the seminar yesterday was that a practical limit is around 60 km.

I don't think 100 km is that much longer; I defer to the experts here.

On 16 May 2018 at 15:24, Up Safe <upandsafe@xxxxxxxxx> wrote:

Hi,

 

About 100 km.

I have 2-4 ms latency between them.

 

Leon

 

On Wed, May 16, 2018, 16:13 John Hearns <hearnsj@xxxxxxxxxxxxxx> wrote:

Leon,

I was at a Lenovo/SuSE seminar yesterday and asked a similar question regarding separated sites.

How far apart are these two geographical locations?  It does matter.

 

On 16 May 2018 at 15:07, Up Safe <upandsafe@xxxxxxxxx> wrote:

Hi,

I'm trying to build a multi-site setup,

but the only guides I've found on the net were about building it with object storage or RBD.

What I need is CephFS,

i.e. I need to have 2 synced file stores at 2 geographical locations.

Is this possible?

Also, if I understand correctly, CephFS is just a component on top of the object storage.

Following this logic, it should be possible, right?

Or am I totally off here?

Thanks,

Leon


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 


 

