Re: [Cephfs] Mounting a specific pool


 



OK, thank you. So is there a way to unset my quota, or should I create a new pool and destroy the old one?
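Guessing from how the other pool settings behave, maybe setting max_bytes back to 0 clears it? Untested on my side:

ceph osd pool set-quota qcow2 max_bytes 0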

Another question, by the way :) : does this syntax work:
mount -t ceph ip1:6789,ip2:6789,ip3:6789:/qcow2   /disks/
if I only want to mount my "qcow2" pool?
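In case that syntax actually refers to a path inside cephfs rather than a pool name, my fallback idea is to dedicate a subdirectory to the pool via its layout and then mount only that subtree. A rough, untested sketch (the /mnt/cephfs mount point is just an example):

mount -t ceph ip1:6789,ip2:6789,ip3:6789:/ /mnt/cephfs
mkdir /mnt/cephfs/qcow2
cephfs /mnt/cephfs/qcow2 set_layout -p 9 -u 4194304 -c 1 -s 4194304
umount /mnt/cephfs
mount -t ceph ip1:6789,ip2:6789,ip3:6789:/qcow2 /disks/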


> On Tue, Nov 5, 2013 at 5:09 PM, NEVEU Stephane <stephane.neveu@xxxxxxxxxxxxxxx> wrote:
> OK, so this command only works with RBD?
> ceph osd pool set-quota poolname max_bytes xxxx
>
> What happens then if I've already set a quota on my pool and then added my data pool to the MDS? Will this "quota" simply become ineffective in the cephfs context, so that I'll be able to write more data than my "quota" allows?
>
No. The OSDs still enforce the quota. If the quota has been reached, sync writes will return -ENOSPC and buffered writes will lose data.
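For example (illustrative only, not from a real run), a sync write from the client against a full pool should fail along these lines:

dd if=/dev/zero of=/disks/test.img bs=4M count=10 oflag=sync
dd: error writing '/disks/test.img': No space left on device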

Yan, Zheng




> -----Original Message-----
> From: Yan, Zheng [mailto:ukernel@xxxxxxxxx] Sent: Tuesday, November 5,
> 2013 09:38 To: NEVEU Stephane Cc: ceph-users@xxxxxxxxxxxxxx Subject:
> Re: [Cephfs] Mounting a specific pool
>
> On Tue, Nov 5, 2013 at 4:05 PM, NEVEU Stephane <stephane.neveu@xxxxxxxxxxxxxxx> wrote:
>> Hi all,
>>
>>
>>
>> I'm trying to test and figure out how cephfs works, and my goal is to
>> mount specific pools on different KVM hosts:
>>
>> ceph osd pool create qcow2 10000
>>
>> ceph osd dump | grep qcow2
>>
>> -> pool 9
>>
>> ceph mds add_data_pool 9
>>
>> I now want a 900 Gb quota for my pool:
>>
>> ceph osd pool set-quota qcow2 max_bytes 120795955200
>>
>> OK, now how can I verify the size in Gb of my pool (not the replication
>> size 1, 2, 3, etc.)?
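>> I would have expected something along these lines to show it, though I'm
>> not sure these are the right commands:
>>
>> ceph osd pool get-quota qcow2
>> ceph df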
>>
>>
>>
>> On my KVM host (client):
>>
>> mount -t ceph ip1:6789,ip2:6789,ip3:6789:/ /disks/
>>
>> OK
>>
>> cephfs /disks/ show_layout
>>
>> layout.data_pool: 0
>>
>> Etc.
>>
>> cephfs /disks/ set_layout -p 9 -u 4194304 -c 1 -s 4194304
>>
>> umount /disks/ && mount /disks/
>>
>> cephfs /disks/ show_layout
>>
>> layout.data_pool: 9
>>
>>
>>
>> Great, my layout is now 9, i.e. my qcow2 pool, but:
>>
>> df -h | grep disks shows the entire cluster size, not only 900 Gb. Why?
>> Is that normal, or am I doing something wrong?
>
> cephfs does not support any type of quota; df always reports the entire cluster size.
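> Per-pool usage is still visible on the server side, though, e.g. (exact
> output format varies by version):
>
> ceph df
> rados df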
>
> Yan, Zheng
>
>
>>
>>
>>
>> Thank you for your help :)
>>
>>
>>
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






