Re: ceph df shows 100% used


 



With the help of robbat2 and llua on the IRC channel I was able to resolve this by taking down the host that has only 2 OSDs.
After crush reweighting OSDs 8 and 23 on host mia1-master-fe02 to 0, ceph df showed the expected storage capacity usage (about 70%).
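For the record, this was done with the standard crush reweight command; roughly the following (shown for illustration, the OSD ids match the ones above but your weights and checks may differ):

    # drop the two OSDs on mia1-master-fe02 out of the CRUSH weighting
    ceph osd crush reweight osd.8 0
    ceph osd crush reweight osd.23 0

    # verify the new weights and the recalculated pool capacity
    ceph osd tree
    ceph df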


With this in mind, they told me it is due to the cluster being uneven and unable to balance properly. That makes sense, and it worked.
But it is still very unexpected behaviour for ceph to say that the pools are 100% full and the available space is 0.

There were 3 hosts and the replication size was 2, so even if the host with only 2 OSDs had been full (it wasn't), ceph could still have used space on the OSDs of the other hosts.
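For anyone who hits this later: as far as I understand, the MAX AVAIL that ceph df reports per pool is derived from the fullest OSD the pool's CRUSH rule can write to rather than from the total raw free space, so one badly unbalanced OSD can drag it down to 0. A quick way to spot the offending OSD (commands only, output omitted):

    # per-OSD utilisation; look for the OSD with the highest %USE
    ceph osd df

    # overall and per-pool view, including the MAX AVAIL figure
    ceph df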

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
