Kraken - Pool storage MAX AVAIL drops by 30TB after disk failure

Hi, 

We have a 5-node cluster with 335 OSDs running Kraken 11.2.0 on BlueStore, with an EC 4+1 pool.
One OSD suffered a disk failure and the disk was replaced. Afterwards we noticed a drop of roughly 30 TB in the MAX AVAIL value for the pool in the output of 'ceph df'.
Even though the disk has been replaced and the OSD is back up and running normally, the value has not recovered to its original level. The failed disk was only 4 TB, so a ~30 TB drop in MAX AVAIL doesn't seem right. Has anyone seen a similar issue before?
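For context, my understanding of how 'ceph df' derives MAX AVAIL (sketched below in Python; this is not the actual Ceph source, and the full-ratio adjustment is left out) is that the projection is capped by whichever OSD would fill up first relative to its CRUSH weight, and that minimum is then scaled by the pool's usable-to-raw ratio (4/5 for EC 4+1). The OSD figures in the example are made-up illustrative numbers, not from our cluster:

    def pool_max_avail(osds, usable_ratio):
        """Approximate MAX AVAIL for one pool.

        osds: list of dicts with 'free' (space free) and 'crush_weight'
              for every OSD reachable by the pool's CRUSH rule.
        usable_ratio: usable-to-raw ratio, e.g. 4.0 / 5 for EC 4+1.
        """
        total_weight = sum(o['crush_weight'] for o in osds)
        # Data is placed proportionally to CRUSH weight, so the pool can
        # only grow until the most constrained OSD fills up.  Scaling each
        # OSD's free space by total_weight / its own weight projects that
        # limit onto the whole rule; the smallest projection wins.
        projected_raw = min(
            o['free'] * total_weight / o['crush_weight']
            for o in osds
            if o['crush_weight'] > 0
        )
        return projected_raw * usable_ratio

    # Made-up example, numbers in TB: ten 4 TB OSDs, all half full.
    healthy = [{'free': 2.0, 'crush_weight': 4.0} for _ in range(10)]
    print(pool_max_avail(healthy, 4.0 / 5))    # 16.0 TB

    # Same cluster, but one OSD has 0.8 TB less free space (e.g. still
    # backfilling after a replacement): MAX AVAIL drops by 6.4 TB, far
    # more than the space actually missing on that single disk.
    skewed = [dict(o) for o in healthy]
    skewed[0]['free'] = 1.2
    print(pool_max_avail(skewed, 4.0 / 5))     # 9.6 TB

If that is roughly what Kraken does, the 30 TB drop would point at one OSD being much fuller (or weighted lower) than the rest after the rebalance, rather than at the 4 TB of raw capacity itself, but I'd appreciate confirmation.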

Thanks.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
