ceph status: pg backfill_toofull, but all OSDs have enough space

Hello

After increasing the number of PGs in a pool, ceph status is reporting "Degraded data redundancy (low space): 1 pg backfill_toofull", but I don't understand why, because all OSDs seem to have enough space.

ceph health detail says:
pg 40.155 is active+remapped+backfill_toofull, acting [20,57,79,85]

$ ceph pg map 40.155
osdmap e3952 pg 40.155 (40.155) -> up [20,57,66,85] acting [20,57,79,85]
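
In case it helps, I can also dump the full state of the PG (I assume the backfill target is listed somewhere under recovery_state, but I'm not sure which field is authoritative):
$ ceph pg 40.155 query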

So I guess Ceph wants to move 40.155 from OSD 79 (in the acting set) to OSD 66 (in the up set). According to "ceph osd df", OSD 66's utilization is 71.90% and OSD 79's is 58.45%. The OSD with the least free space in the cluster is 81.23% full, and it is neither of those two.
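
To double-check the numbers above, I'm filtering "ceph osd df" down to the OSDs that appear in the pg map (the egrep pattern is just the OSD IDs from above):
$ ceph osd df | egrep '^ *(20|57|66|79|85) '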

The OSD backfillfull_ratio is 90% (is there a better way to determine this?):
$ ceph osd dump | grep ratio
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.7
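
As far as I understand, these ratios could be raised at runtime with something like:
$ ceph osd set-backfillfull-ratio 0.92
but raising them shouldn't be necessary when the fullest OSD is only 81.23% full.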

Does anybody know why a PG could be in the backfill_toofull state if no OSD is in the backfillfull state?


Vlad


