Re: Writes to mounted Ceph FS fail silently if client has no write capability on data pool

On Thu, Jul 5, 2012 at 10:40 AM, Florian Haas <florian@xxxxxxxxxxx> wrote:
> Hi everyone,
>
> please enlighten me if I'm misinterpreting something, but I think the
> Ceph FS layer could handle the following situation better.
>
> How to reproduce (this is on a 3.2.0 kernel):
>
> 1. Create a client, mine is named "test", with the following capabilities:
>
> client.test
>         key: <key>
>         caps: [mds] allow
>         caps: [mon] allow r
>         caps: [osd] allow rw pool=testpool
>
> Note the client only has access to a single pool, "testpool".
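>
> (For completeness, a client with these caps can be created with
> something like the following; the exact auth subcommand varies a
> bit across versions:
>
> ceph auth get-or-create client.test mds 'allow' mon 'allow r' \
>     osd 'allow rw pool=testpool'
> )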
>
> 2. Export the client's secret and mount a Ceph FS.
>
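> (The secret itself can be extracted with something like the
> following; again, the exact subcommand depends on your version:
>
> ceph auth get-key client.test > /etc/ceph/test.secret
> )
>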
> mount -t ceph -o name=test,secretfile=/etc/ceph/test.secret \
>     daisy,eric,frank:/ /mnt
>
> This succeeds, despite us not even having read access to the "data" pool.
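>
> (Easy to double-check with the same credentials, assuming the test
> key is in a keyring rados can find:
>
> rados --id test -p data ls
>
> fails with a permission error, while the same command against
> "testpool" works.)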
>
> 3. Write something to a file.
>
> root@alice:/mnt# echo "hello world" > hello.txt
> root@alice:/mnt# cat hello.txt
>
> This too succeeds.
>
> 4. Sync and clear caches.
>
> root@alice:/mnt# sync
> root@alice:/mnt# echo 3 > /proc/sys/vm/drop_caches
>
> 5. Check file size and contents.
>
> root@alice:/mnt# ls -la
> total 5
> drwxr-xr-x  1 root root    0 Jul  5 17:15 .
> drwxr-xr-x 21 root root 4096 Jun 11 09:03 ..
> -rw-r--r--  1 root root   12 Jul  5 17:15 hello.txt
> root@alice:/mnt# cat hello.txt
> root@alice:/mnt#
>
> Note the reported file size is unchanged, but the file is empty.
>
> Checking the "data" pool with client.admin credentials obviously shows
> that that pool is empty, so objects are never written. Interestingly,
> "cephfs hello.txt show_location" does list an object_name, identifying
> an object which doesn't exist.
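>
> (In other words:
>
> rados -p data ls
>
> with client.admin credentials lists nothing, and running "rados -p
> data stat" on the object_name reported by show_location just
> returns ENOENT.)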
>
> Is there any way to make the client fail with -EIO, -EPERM,
> -EOPNOTSUPP or whatever else is appropriate, rather than pretending to
> write when it can't?

There definitely are ways, but I don't think we're going to fix that
until we start working seriously on the filesystem. Create a bug! ;)

> Also, going down the rabbit hole, how would this behavior change if I
> used cephfs to set the default layout on some directory to use a
> different pool?

I'm not sure what you're asking here — if you have access to the
metadata server, you can change the pool that new files go into, and I
think you can set the pool to be whatever you like (and we should
probably harden all this, too). So you can fix it if it's a problem,
but you can also turn it into a problem.
Is that what you were after?
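For reference, setting a directory layout looks something like this
(pool id 3 is just an example, and some versions of the cephfs tool
also require the stripe parameters -u/-c/-s):

cephfs /mnt/somedir set_layout -p 3

New files created under that directory then go to the given pool;
existing files keep their old layout.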
-Greg