Re: about 'ceph df' value on Jewel+Bluestore

version info
======
cepher@10-165-160-18:~/xzy$ ceph -v
ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
cepher@10-165-160-18:~/xzy$ cat /etc/debian_version 
8.4

2016-06-02 17:54 GMT+08:00 席智勇 <xizhiyong18@xxxxxxxxx>:
hi cephers:

       I upgraded my ceph cluster to Jewel and use bluestore as the backend store. When I create an image using the rbd command-line tool, it works fine, like:

cepher@10-165-160-18:~/xzy$ sudo rbd create xzy_vol  -p  switch01_ssd_volumes --size 10240
cepher@10-165-160-18:~/xzy$ rbd ls -p switch01_ssd_volumes
xzy_vol

But when I create a volume via Cinder, the creation fails. I found that Cinder decides whether there is enough space from ceph df, and the ceph df output is:

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    53605G     51458G        2147G          4.01 
POOLS:
    NAME                     ID     USED     %USED     MAX AVAIL     OBJECTS 
    switch01_ssd_vms         1         0         0             0           0 
    switch01_ssd_volumes     2         8         0             0           1 


As we can see, MAX AVAIL is 0 for every pool.
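For what it's worth, the same per-pool values are available in machine-readable form, which is roughly what a consumer like Cinder would look at. A minimal check, assuming jq is installed and that the pool stats in the JSON output carry a max_avail field:

cepher@10-165-160-18:~/xzy$ ceph df --format json | jq '.pools[] | {name, max_avail: .stats.max_avail}'

On this cluster that reports max_avail 0 for every pool, matching the table above.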

On the other hand, I found the OSD crush weight value was not correct either, which is a known issue (http://tracker.ceph.com/issues/15985). I don't know whether it is related.
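In case it helps anyone reproduce this, the weights can be inspected directly with standard commands (output omitted here):

cepher@10-165-160-18:~/xzy$ ceph osd tree
cepher@10-165-160-18:~/xzy$ ceph osd df

ceph osd tree shows the crush weight per OSD, and ceph osd df shows the weight alongside size and utilization, so a wrong weight is easy to spot.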

When using filestore as the backend object store (on Jewel), this problem does not occur.
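To confirm which backend a given OSD is actually running, its metadata can be checked (osd 0 is just an example id):

cepher@10-165-160-18:~/xzy$ ceph osd metadata 0 | grep osd_objectstore
    "osd_objectstore": "bluestore",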

Can anyone give me some advice or share some information?


best regards~

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
