Re: Privileges for read-only CephFS access?

On Thu, Feb 19, 2015 at 12:50 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Wed, Feb 18, 2015 at 3:30 PM, Florian Haas <florian@xxxxxxxxxxx> wrote:
>> On Wed, Feb 18, 2015 at 11:41 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>> On Wed, Feb 18, 2015 at 1:58 PM, Florian Haas <florian@xxxxxxxxxxx> wrote:
>>>> On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz <oschulz@xxxxxxxxxx> wrote:
>>>>> Dear Ceph Experts,
>>>>>
>>>>> is it possible to define a Ceph user/key with privileges
>>>>> that allow for read-only CephFS access but do not allow
>>>>> write or other modifications to the Ceph cluster?
>>>>
>>>> Warning, read this to the end, don't blindly do as I say. :)
>>>>
>>>> All you should need to do is define a CephX identity that has only r
>>>> capabilities on the data pool (assuming you're using a default
>>>> configuration where your CephFS uses the data and metadata pools):
>>>>
>>>> sudo ceph auth get-or-create client.readonly mds 'allow' osd 'allow r pool=data' mon 'allow r'
>>>>
>>>> That identity should then be able to mount the filesystem but not
>>>> write any data (use "ceph-fuse -n client.readonly" or "mount -t ceph
>>>> -o name=readonly").
>>>>
>>>> That said, just touching files or creating them is only a metadata
>>>> operation that doesn't change anything in the data pool, so I think
>>>> that might still be allowed under these circumstances.
>>>
>>> ...and deletes, unfortunately. :(
>>
>> If the file being deleted is empty, yes. If the file has any content,
>> then the removal should hit the data pool before it hits metadata, and
>> should fail there. No?
>
> No, all data deletion is handled by the MDS, for two reasons:
> 1) You don't want clients to have to block on deletes in time linear
> with the number of objects.
> 2) (IMPORTANT) If clients unlink a file which is still open
> elsewhere, it can't be deleted until it's closed. ;)

Yeah of course, that makes sense. Sorry, wasn't really thinking, apparently.
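
(For anyone following along: the quoted cap/mount commands above, put
together, would look roughly like this. It's a sketch, not verified as
written here; the mount point, monitor address, and keyring/secret
paths are placeholders for whatever your setup uses.

  sudo ceph auth get-or-create client.readonly \
      mds 'allow' osd 'allow r pool=data' mon 'allow r' \
      -o /etc/ceph/ceph.client.readonly.keyring
  sudo ceph-fuse -n client.readonly /mnt/cephfs-ro
  # or, with the kernel client (secretfile holds just the base64 key,
  # not the whole keyring):
  sudo mount -t ceph mon1:6789:/ /mnt/cephfs-ro \
      -o name=readonly,secretfile=/etc/ceph/client.readonly.secret

And per Greg's point above, an "rm" of an existing file on such a mount
will still go through, because the unlink is purely an MDS operation
and the actual object deletion happens on the MDS side later.)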

>>> I don't think it's presently possible to do this, and it won't be
>>> until we get a much better user auth capabilities system into CephFS.
>>>
>>>>
>>>> However, I've just tried the above with ceph-fuse on firefly, and I
>>>> was able to mount the filesystem that way and then echo something into
>>>> a previously existing file. After unmounting, remounting, and trying
>>>> to cat that file, I/O just hangs. It eventually does complete, but
>>>> this looks really fishy.
>>>
>>> This is happening because the CephFS clients don't (can't, really, for
>>> all the time we've spent thinking about it) check whether they have
>>> write permissions on the underlying pool when buffering writes for a
>>> file. I believe if you ran an fsync on the file you'd get an EROFS or
>>> similar.
>>> Anyway, the client happily buffers up the writes. Depending on how
>>> exactly you remount, it might not be able to drop the MDS caps for
>>> file access (due to having dirty data it can't get rid of), and those
>>> caps have to time out before anybody else can access the file again.
>>> So you've found an unpleasant oddity of how the POSIX interfaces map
>>> onto this kind of distributed system, but nothing unexpected. :)
>>
>> Oliver's point is valid though; it would be nice if you could somehow
>> make CephFS read-only to some (or all) clients server-side, the way an
>> NFS ro export does.
>
> Yeah. Yet another thing that would be good but requires real
> permission bits on the MDS. It'll happen eventually, but we have other
> bits that seem a lot more important... fsck, stability, single-tenant
> usability...

Sure, understandably so. Thanks!
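
One more data point for anyone who hits the hang I described: since the
write only lands in the client's buffer, the failure (EROFS or similar,
per Greg's note above) should only show up when the data is actually
flushed, not at write() time. A quick way to see that (again, the mount
point is just a placeholder) is to force the flush explicitly:

  dd if=/dev/zero of=/mnt/cephfs-ro/testfile bs=4k count=1 conv=fsync

The individual writes "succeed" because they're only buffered; the
trailing fsync (conv=fsync) is where dd should report the error.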

Cheers,
Florian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



