Re: CephFS root squash?

On Fri, Feb 10, 2017 at 5:27 PM, Jim Kilborn <jim@xxxxxxxxxxxx> wrote:
> Interesting. I thought CephFS could be a replacement for an NFS server for holding home directories, but without a single point of failure. I'm surprised that this is generally frowned upon in the comments.

(Sorry, this got a bit long; the tl;dr is that NFS is sometimes what
you want, and in those cases you can always run it on top of CephFS to
get the best of both worlds.)

I suppose I'm taking quite a conservative view, biased a bit by what
sort of systems make my life easier (to support).  If there are
unmanaged user machines accessing the filesystem, it's much harder to
e.g. push out fixes to the client when there are issues.

Regarding NFS vs. CephFS native access: the CephFS protocol relies on
the clients to be somewhat cooperative.  For example, some users are
familiar with the "failing to respond to capability release" health
warnings that can result from buggy clients.  When you see that
warning, your MDS is getting upset because it can no longer control
the size of its own cache: it can't evict things until the client
does, and the client isn't playing along.  This is different from
NFS, where the looser semantics allow the server to just go ahead and
forget about a file and/or let other clients do stuff to it,
ultimately giving something less consistent, or even an ESTALE, to
the client that was accessing the file first.
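
To make that concrete (the client id below is invented, and the exact
command syntax varies between releases, so check the docs for your
version), diagnosing a misbehaving client usually looks something
like:

    # surfaces warnings such as "client.4305 failing to respond to
    # capability release"
    ceph health detail

    # list the sessions on the first MDS to identify the offender
    ceph tell mds.0 client ls

    # last resort: evict the misbehaving client
    ceph tell mds.0 client evict id=4305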

The strong POSIX semantics that CephFS enforces can also present some
gotchas to applications that aren't expecting them: "ls -l" can be
*really* slow when listing a directory of files that another client
is writing, because CephFS will insist on communicating with the other
client to ensure you're getting the correct size information, whereas
NFS would generally just give you its most recent cached size.
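
You can see the difference from the client side (the mount point and
directory here are just for illustration, assuming a kernel mount at
/mnt/cephfs):

    # plain ls only reads the directory entries: no per-file stat, fast
    ls /mnt/cephfs/shared

    # ls -l stats every entry; sizes of files being actively written
    # elsewhere force a round trip via the MDS to the writing client
    time ls -l /mnt/cephfs/shared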

Remember that NFS and CephFS aren't mutually exclusive: if you have a
workload that is working well on NFS, it may well make sense to run an
NFS gateway (kernel NFS server or nfs-ganesha) on top of CephFS.  That
way you can get the best of both worlds, and retain the option of
connecting some of your clients natively to CephFS while perhaps a
larger collection of less-trusted workstations has access via NFS.
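
As a sketch (assuming nfs-ganesha built with the Ceph FSAL; the
export id and pseudo path are just placeholders), an export over
CephFS can be as small as:

    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Squash = Root_Squash;  # the squashing this thread started with
        FSAL {
            Name = CEPH;
        }
    }

That also gets you the root squash behaviour that prompted this
thread, which native CephFS mounts don't currently offer.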

John

>
> From: John Spray <jspray@xxxxxxxxxx>
> Sent: Friday, February 10, 2017 4:21 AM
> To: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: CephFS root squash?
>
>
>
> On Fri, Feb 10, 2017 at 8:02 AM, Robert Sander
> <r.sander@xxxxxxxxxxxxxxxxxxx> wrote:
>> On 09.02.2017 20:11, Jim Kilborn wrote:
>>
>>> I am trying to figure out how to allow my users to have sudo on their workstations, but not have that root access extend to the ceph kernel-mounted volume.
>>
>> I do not think that CephFS is meant to be mounted on human users'
>> workstations.
>
> We'd all like to avoid squishy human users if possible but sometimes
> it's unavoidable :-D
>
> My feeling is that cephfs should be mounted natively only on trusted,
> "tightly coupled" systems, whose availability is comparable to that of
> the servers.  So mounting on a typical user laptop would be a bad
> idea, but on a big visualization workstation it might be OK, and on
> the always-on identical desktops in a single CAD/CGI/EDA team it
> might be okay too.
>
> Slow/naughty clients generally only cause pain to other clients in the
> same filesystem, so if you do have some files accessible to
> workstations, it might also be prudent to segregate them in a
> separate filesystem, along the lines of the sketch below (there is
> currently no cephX way of enforcing that, but if you basically trust
> the workstations and just want to isolate them in case of
> bugs/outages, it's okay).
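>
> For illustration only (the pool names and PG counts below are
> invented, and multiple filesystems are still marked experimental, so
> test before relying on this), creating a second filesystem goes
> something like:
>
>     # dedicated pools for the workstation filesystem
>     ceph osd pool create ws_meta 64
>     ceph osd pool create ws_data 64
>
>     # multiple filesystems are gated behind an experimental flag
>     ceph fs flag set enable_multiple true --yes-i-really-mean-it
>
>     # create the second filesystem on those pools
>     ceph fs new workstations ws_meta ws_data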
>
> John
>
>>
>> Regards
>> --
>> Robert Sander
>> Heinlein Support GmbH
>> Schwedter Str. 8/9b, 10119 Berlin
>>
>> http://www.heinlein-support.de
>>
>> Tel: 030 / 405051-43
>> Fax: 030 / 405051-19
>>
>> Mandatory disclosures per §35a GmbHG:
>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>> Managing director: Peer Heinlein -- Registered office: Berlin
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



