Re: CTDB Cluster Samba on Cephfs

On Fri, Mar 29, 2013 at 11:05:56AM -0700, ronnie sahlberg wrote:
> On Fri, Mar 29, 2013 at 9:31 AM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:
> > Still trying with no success:
> >
> > Sage and Ronnie:
> > I've tried the ping_pong tool, even with "locking=no" in my smb.conf
> > (no differences)
> >
> > # ping_pong /mnt/ceph/samba-cluster/test 3
> > I have about 180 locks/second
> 
> That is very slow.
> 
> > If I start the same command from the other node, the tool stops
> > completely. 0 locks/second
> 
> Looks like fcntl() locking doesn't work that well.
> 
> 
> 
> The slow rate of fcntl() locking will impact Samba.
> By default, for almost all file i/o Samba will need to do at least one
> fcntl(F_GETLK) in order to check whether some other, non-Samba,
> process holds a lock on the file.
> If you can only do 180 fcntl(F_*LK) operations per second across the
> cluster for a file (I assume this is a per-file limitation),
> this has the effect that you can only do 180 i/o operations per
> second to that file, which will make CIFS impossibly slow for any real
> use.
> This was all from a single node as well, so no inter-node contention!
> 
> 
> So here you probably want to use "locking = no" in samba.  But beware,
> locking = no can have catastrophic effects on your data.
> But without "locking = no" it would just become impossibly slow,
> probably uselessly slow.

Please make that "posix locking = no", not "locking = no".

The locking=yes piece is being taken care of by Samba and
ctdb.
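
To make the distinction concrete, here is a minimal sketch of
where this goes in smb.conf (the share name is made up for
illustration; the path reuses the one from the ping_pong test):

    [global]
        clustering = yes
        # SMB-to-SMB locking stays on; ctdb keeps it coherent
        # across the cluster nodes
        locking = yes
        # do not map SMB locks to fcntl() locks on the cluster fs
        posix locking = no

    # example share, name made up for illustration
    [clustershare]
        path = /mnt/ceph/samba-cluster
        read only = no

"locking = yes" is the default anyway; it is only spelled out
here to contrast it with "posix locking".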

> Using "locking = no" in samba does mean though that you no longer have
> any locking coherency across protocols.

s/locking/posix locking/

> I.e.  NFS clients and samba clients are now disjoint since they can no
> longer see each other's locks.

Right. But that's kind of okay if you have only SMB clients.

> If you only ever access the data via CIFS,  locking = no  should be safe.

Again: Please DO NOT USE locking=no, the option is "posix
locking = no"!!!

> But IF you access data via NFS or other NAS protocols, breaking lock
> coherency across protocols like this could lead to data loss,
> depending on the i/o patterns.
> 
> 
> I would recommend only using   locking = no   if you can guarantee

"posix locking = no" please, not "locking=no". I know that
some product's command line interface calls this "locking",
but I doubt this is what is being used here.

> that you will never export the data via other means than CIFS.
> If you can not guarantee that, you will have to research the use
> patterns very carefully to determine whether locking = no is safe or

And again: "posix locking = no", not "locking = no"

> not.
> 
> 
> 
> For fcntl() locking it depends on the use case: is this a home
> server where you can accept very poor performance, or is this a
> server for a small workgroup?
> If the latter, and you are using locking = yes, you probably want your

Ok, we have a misunderstanding throughout this entire
mail. Again, I think you want "posix locking", not "locking",
set to no.

> filesystem to allow >10,000 operations per second from a node with no
> contention, and >1,000 operations per node per second when there is
> contention across nodes.
> 
> If it is a big server, you probably want >> instead of > for these
> numbers. At least.
> 
> 
> But first you would need to get ping_pong working reliably, both
> running in a steady state, and later running and recovering from
> continuous single-node reboots.
> It seems ping_pong is not working really well for you at all at this
> stage, so that is likely a problem.
> 
> 
> 
> As I said,  very few cluster filesystems have fcntl() locking that is
> not completely broken.
> 
> 
> 
> 
> For now, you could try "locking = no" in samba with the caveats above,

Please make that again "posix locking = no", not "locking = no".

I hope this message got through by now.

Volker

-- 
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt@xxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




