Re: cephfs quotas reporting

On Mon, Dec 5, 2016 at 3:57 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Mon, Dec 5, 2016 at 3:27 AM, Goncalo Borges
> <goncalo.borges@xxxxxxxxxxxxx> wrote:
>> Hi Again...
>>
>> Once more, my environment:
>>
>> - Ceph/CephFS 10.2.2.
>> - All infrastructure is on the same version (RADOS cluster, MONs, MDSs and
>> CephFS clients).
>> - We mount cephfs using ceph-fuse.
>>
>> I want to set up quotas to stop users from filling the filesystem and to
>> proactively avoid ending up with several simultaneously full or near-full
>> OSDs. However, I do not understand how space reporting works once quotas
>> are in place. My CephFS cluster provides ~100 TB of usable space (~300 TB
>> of raw space, since I use 3x replication). Consider the following two
>> cases:
>>
>> 1./ On clients where the full filesystem hierarchy is mounted:
>>
>> - I have the following quota:
>>
>> # getfattr -n ceph.quota.max_bytes /coepp/cephfs
>> getfattr: Removing leading '/' from absolute path names
>> # file: coepp/cephfs
>> ceph.quota.max_bytes="88000000000000"
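>>
>> For reference, the limit above was set through the standard CephFS quota
>> xattr interface, with something like:
>>
>> # setfattr -n ceph.quota.max_bytes -v 88000000000000 /coepp/cephfs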
>>
>> - I mount the client as:
>>
>> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring -m
>> <MON IP>:6789 --client-quota --fuse_default_permissions=0
>> --client_acl_type=posix_acl -r /cephfs /coepp/cephfs/
>>
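>> To confirm that the quota option actually took effect on the client, you
>> can inspect the running ceph-fuse config through its admin socket (this
>> assumes the default asok location under /var/run/ceph; the exact filename
>> includes the client's pid):
>>
>> # ceph daemon /var/run/ceph/ceph-client.mount_user.*.asok config show |
>> grep client_quota
>>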
>> - The results of two consecutive 'df' commands, executed right after the
>> mount operation, are the following. Note that the first command reports
>> values computed with respect to the quota, but the second falls back to
>> the cluster-wide totals, as if no quota were in place.
>>
>> #  puppet agent -t; df -h ; df -h
>> (...)
>> ceph-fuse              81T   51T   30T  64% /coepp/cephfs
>> (...)
>> ceph-fuse             306T  153T  154T  50% /coepp/cephfs
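>>
>> Note that df simply formats the statvfs(2) numbers returned by ceph-fuse;
>> the raw block counts behind the totals above can be inspected with, e.g.:
>>
>> # stat -f /coepp/cephfs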
>
> To clarify, you are not doing anything in the background in between
> the two df calls?  You're running df twice in a row on an idle system
> and getting different results?  That's definitely a bug!

I'm not an expert in how the quota code works, but looking at
Client::get_quota_root() it seems to go to a lot of trouble to find
the *previous* quota setting, not the one at the given starting inode.
We may have it succeeding on initial mount just because we don't have
any parent inodes in cache, but once it gets them it claws backwards?
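
One way to narrow that down from the outside (an untested sketch, reusing
Goncalo's paths and ids): remount so the client starts with a cold cache,
then watch when the numbers flip:

# fusermount -u /coepp/cephfs
# ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring \
    -m <MON IP>:6789 --client-quota -r /cephfs /coepp/cephfs
# df -h /coepp/cephfs    # immediately after mount: quota-based totals?
# df -h /coepp/cephfs    # again: do they now flip to cluster-wide totals?
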
-Greg

>
> John
>
>>
>> 2./ On another type of client, where I mount only a subdirectory of the
>> filesystem (/coepp/cephfs/borg instead of /coepp/cephfs):
>>
>> - I have the following quota:
>>
>> # getfattr -n ceph.quota.max_bytes /coepp/cephfs/borg
>> getfattr: Removing leading '/' from absolute path names
>> # file: coepp/cephfs/borg
>> ceph.quota.max_bytes="10000000000000"
>>
>> - I mount the filesystem as:
>>
>> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring -m
>> <MON IP>:6789 --client-quota --fuse_default_permissions=0
>> --client_acl_type=posix_acl -r /cephfs/borg /coepp/cephfs/borg
>>
>>
>> - The reported space is
>>
>> # puppet agent -t; df -h ; df -h
>> (...)
>> ceph-fuse       9.1T  5.7T  3.5T  62% /coepp/cephfs/borg
>> (...)
>> ceph-fuse        81T   51T   30T  64% /coepp/cephfs/borg
>>
>>
>> 3./ Both clients behave the same way: they start by reporting according to
>> the configured quota
>>
>>             51T used out of 81T in total (case 1)
>>             5.7T used out of 9.1T in total (case 2)
>>
>> and then fall back to the values enforced at the level above in the
>> hierarchy.
>>
>>             153T used out of 306T in total (case 1)
>>             51T used out of 81T in total (case 2)
>>
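>> As a sanity check, the quota-derived totals match the configured limits
>> once df -h's base-1024 units and rounding are taken into account:
>>
>>             88000000000000 B / 2^40 = 80.04 TiB -> shown as 81T
>>             10000000000000 B / 2^40 =  9.09 TiB -> shown as 9.1T
>>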
>> Am I doing something wrong here?
>>
>> Cheers
>> Goncalo
>>
>> --
>> Goncalo Borges
>> Research Computing
>> ARC Centre of Excellence for Particle Physics at the Terascale
>> School of Physics A28 | University of Sydney, NSW  2006
>> T: +61 2 93511937
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


