Re: cephfs filesystem layouts : authentication gotchas ?

On 03/03/2015 15:21, SCHAER Frederic wrote:

By the way: it looks like the “ceph fs ls” command is inconsistent depending on whether cephfs is mounted (I used a locally compiled kmod-ceph rpm):

[root@ceph0 ~]# ceph fs ls

name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]

(umount /mnt …)

[root@ceph0 ~]# ceph fs ls

name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet root ]

This is probably #10288, which was fixed in 0.87.1

So, I have this pool named “root” that I added in the cephfs filesystem.
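(For reference, adding a pool to the filesystem goes along these lines -- a sketch using the names from this thread; "ceph mds add_data_pool" is the spelling of that era, newer releases use "ceph fs add_data_pool":)

ceph osd pool create root 64     # 64 placement groups, an arbitrary example value
ceph mds add_data_pool root      # let cephfs place file data in the new pool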

I then edited the filesystem xattrs:

[root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root

getfattr: Removing leading '/' from absolute path names

# file: mnt/root

ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=root"

I’m therefore assuming client.puppet should not be allowed to write or read anything in /mnt/root, which belongs to the “root” pool… but that is not the case.
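(For context, client.puppet's caps would look something like this -- a sketch, since the exact caps aren't shown in this thread -- with the OSD cap limited to the "puppet" pool:)

ceph auth get-or-create client.puppet \
        mon 'allow r' \
        mds 'allow' \
        osd 'allow rw pool=puppet'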

On another machine, where I mounted cephfs using the client.puppet key, I can do this:

The mount was done with the client.puppet key, not the admin key, which is not deployed on that node:

1.2.3.4:6789:/ on /mnt type ceph (rw,relatime,name=puppet,secret=<hidden>,nodcache)
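(That line comes from a mount invocation along these lines -- a sketch; the secretfile path is illustrative:)

mount -t ceph 1.2.3.4:6789:/ /mnt \
        -o name=puppet,secretfile=/etc/ceph/client.puppet.secret,nodcache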

[root@dev7248 ~]# echo "not allowed" > /mnt/root/secret.notfailed

[root@dev7248 ~]#

[root@dev7248 ~]# cat /mnt/root/secret.notfailed

not allowed

This is data you're seeing from the page cache; it hasn't been written to RADOS.

You have used the "nodcache" setting, but it doesn't mean what you think it does (it controlled dentry caching, not data caching). It's actually not even used in recent kernels (http://tracker.ceph.com/issues/11009).

You could try the nofsc option, but I don't know exactly how much caching that turns off -- the safer approach here is probably to do your testing using I/Os that have O_DIRECT set.
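For instance, dd's direct flags are an easy way to get O_DIRECT on both sides (a sketch -- with your client.puppet mount, the write should no longer appear to succeed):

# write with O_DIRECT so the data must actually reach the OSDs
dd if=/dev/zero of=/mnt/root/secret.direct bs=4M count=1 oflag=direct
# read with O_DIRECT so you see what is in RADOS, not the page cache
dd if=/mnt/root/secret.notfailed of=/dev/null bs=4M iflag=direct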

And I can even see the xattrs inherited from the parent dir:

[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

getfattr: Removing leading '/' from absolute path names

# file: mnt/root/secret.notfailed

ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=root"

Whereas on the node where I mounted cephfs as ceph admin, I get nothing:

[root@ceph0 ~]# cat /mnt/root/secret.notfailed

[root@ceph0 ~]# ls -l /mnt/root/secret.notfailed

-rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

After some time, the file also appears empty on the “puppet client” host:

[root@dev7248 ~]# cat /mnt/root/secret.notfailed

[root@dev7248 ~]#

(but the metadata remained?)

Right -- eventually the cache goes away, and you see the true (empty) state of the file.
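You can also see that state immediately instead of waiting, e.g. by dropping the client's page cache by hand:

sync                               # flush whatever can be flushed
echo 3 > /proc/sys/vm/drop_caches  # drop page cache plus dentries and inodes
cat /mnt/root/secret.notfailed     # now reads the true (empty) file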

Also, as an unprivileged user, I can take ownership of a “secret” file by changing its extended attribute:

[root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet /mnt/root/secret.notfailed

[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

getfattr: Removing leading '/' from absolute path names

# file: mnt/root/secret.notfailed

ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=puppet"

Well, you're not really getting "ownership" of anything here: you're modifying the file's metadata, which you are entitled to do (pool permissions have nothing to do with file metadata). There was a recent bug where a file's pool layout could be changed even if it had data, but that was about safety rather than permissions.
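If you want to check where a file's data really lives, cephfs object names are just <inode in hex>.<stripe index>, so you can ask RADOS directly (a sketch):

ino=$(stat -c %i /mnt/root/secret.notfailed)
obj=$(printf '%x.00000000' "$ino")   # the file's first object
rados -p puppet stat "$obj"          # the layout xattr now says pool=puppet...
rados -p root stat "$obj"            # ...but this shows where (or whether) a data object really exists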

Final question, for those who have read this far: it appears that before creating the cephfs filesystem, I used the “puppet” pool to store a test rbd image.

And it appears I cannot see the cephfs objects in that pool, whereas I can see those in the newly created “root” pool:

[root@ceph0 ~]# rados -p puppet ls

test.rbd

rbd_directory

[root@ceph0 ~]# rados -p root ls

1000000000a.00000000

1000000000b.00000000

Bug, or feature?


I didn't see anything in your earlier steps that would have led to any cephfs objects in the puppet pool -- the directory you wrote into has its layout pointed at the "root" pool.

To get closer to the effect you're looking for, you probably need to combine your pool settings with some permissions on the folders, and do your I/O as a user other than root -- your user-level permissions would protect your metadata, and your pool permissions would protect your data.
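A minimal sketch of that combination ("someuser" is illustrative):

# as root, on the admin mount: reserve /mnt/root for root
chown root:root /mnt/root
chmod 0700 /mnt/root                             # POSIX permissions now protect the metadata
# on the puppet-client mount, do I/O as an unprivileged user instead of root:
sudo -u someuser sh -c 'echo hi > /mnt/root/x'   # fails with EACCES at the directory,
                                                 # while the OSD cap (rw pool=puppet) protects the data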

There are also plans for finer-grained access control on the metadata side, but that's not there yet.

Cheers,
John

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




