NFS over CEPH - best practice

On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote:
> Ideally I would like to have a setup with 2+ iscsi servers, so that I can perform maintenance if necessary without shutting down the vms running on the servers. I guess multipathing is what I need. 
> 
> Also I will need to have more than one xenserver/vmware host servers, so the iscsi LUNs will be mounted on several servers. 
> 

So you have multiple machines talking to the same LUN at the same time?

You'll have to coordinate how changes are written to the backing store; normally you'd have the virtualization servers use some kind of protocol for that.

With SCSI there are the older Reserve/Release commands and the newer SCSI-3 Persistent Reservations commands.
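
For example, on Linux you can poke at persistent reservations with sg_persist from sg3_utils. A rough sketch, assuming /dev/sdb is the shared LUN (the device and key are made up):

# register a key for this initiator on the shared LUN
sg_persist --out --register --param-sark=0xabc123 /dev/sdb
# take a Write Exclusive - Registrants Only reservation (type 5)
sg_persist --out --reserve --param-rk=0xabc123 --prout-type=5 /dev/sdb
# list the registered keys and the current reservation
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb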

(i)SCSI allows multiple changes to be in flight; without coordination, things will go wrong.

It was mentioned below that you can disable the cache for rbd; if you have no coordination protocol, you'll need to do the same on the iSCSI side.
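
With tgt that could look something like this in /etc/tgt/targets.conf (just a sketch; the IQN and rbd image name are made up, and you should check that your tgt version supports the write-cache parameter):

<target iqn.2014-05.net.example:rbd.lun0>
    driver iscsi
    bs-type rbd
    backing-store rbd/mylun
    # turn off the write cache tgtd reports to initiators
    write-cache off
</target>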

I believe when you do that it will be slower, but it might work.

> Would the suggested setup not work for my requirements? 
> 

It depends on whether VMware allows such a setup.

Then there is another thing: how do the VMware machines coordinate which VMs they should be running?

I don't know VMware, but usually if you have some kind of clustering setup you'll need a 'quorum'.

A lot of the time the quorum is handled by a quorum disk using the SCSI coordination protocols mentioned above.

Another way to have a quorum is a majority voting system with an odd number of machines talking over the network. This is what Ceph monitor nodes do.
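
You can see that voting on any Ceph cluster:

# show which monitors are currently in quorum
ceph quorum_status --format json-pretty
# or the short version
ceph mon stat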

An example of a clustering system that can be used without a quorum disk, with only two machines talking over the network, is Linux Pacemaker. When something bad happens, one machine will simply turn off the power of the other to prevent things from going wrong (this is called STONITH).
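
With the pcs shell that could look roughly like this (a sketch, assuming the machines have IPMI; the hostname, address and credentials are made up):

# define a fence device that can power-cycle node2 over IPMI
pcs stonith create fence-node2 fence_ipmilan pcmk_host_list="node2" \
    ipaddr="10.0.0.12" login="admin" passwd="secret" lanplus="1"
# make sure fencing is actually enabled
pcs property set stonith-enabled=true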

> Andrei 
> ----- Original Message -----
> 
> From: "Leen Besselink" <leen at consolejunkie.net> 
> To: ceph-users at lists.ceph.com 
> Sent: Thursday, 8 May, 2014 9:35:21 PM 
> Subject: Re: NFS over CEPH - best practice 
> 
> On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote: 
> > On 07/05/2014 15:23, Vlad Gorbunov wrote: 
> > >It's easy to install tgtd with ceph support. ubuntu 12.04 for example: 
> > > 
> > >Connect ceph-extras repo: 
> > >echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release 
> > >-sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list 
> > > 
> > >Install tgtd with rbd support: 
> > >apt-get update 
> > >apt-get install tgt 
> > > 
> > >It's important to disable the rbd cache on tgtd host. Set in 
> > >/etc/ceph/ceph.conf: 
> > >[client] 
> > >rbd_cache = false 
> > [...] 
> > 
> > Hello, 
> > 
> 
> Hi, 
> 
> > Without cache on the tgtd side, it should be possible to have 
> > failover and load-balancing (active/active) multipathing. 
> > Have you tested multipath load balancing in this scenario ? 
> > 
> > If it's reliable, it opens a new way for me to do HA storage with iSCSI ! 
> > 
> 
> I have a question, what is your use case ? 
> 
> Do you need SCSI-3 persistent reservations so multiple machines can use the same LUN at the same time ? 
> 
> Because in that case I think tgtd won't help you. 
> 
> Have a good day, 
> Leen. 
> _______________________________________________ 
> ceph-users mailing list 
> ceph-users at lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 

