Re: Bluestore runs out of space and dies


 



On 31/10/2019 17:02, Janne Johansson wrote:

I thought about protecting everything with a proper full ratio, but I'm really afraid of the prospect of some human error (raising the full ratio too high to help recover some 'very important data asap'), which would leave the cluster broken for real. Those few GB would be the next line of defense. It's better to have downtime than an 'unable to recover' situation.
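For reference, the safeguard ratios being discussed can be adjusted at runtime with the monitor commands below; this is only an illustrative sketch (the exact values shown are examples, not recommendations), and raising `full_ratio` is precisely the kind of manual override the paragraph above warns about:

```shell
# Show the current nearfull/backfillfull/full ratios.
ceph osd dump | grep ratio

# Warn operators early (HEALTH_WARN once any OSD passes 85% used).
ceph osd set-nearfull-ratio 0.85

# Hard stop: writes are refused once an OSD passes this threshold,
# leaving headroom before the device itself is physically full.
ceph osd set-full-ratio 0.95
```

These commands operate on a live cluster, so they are shown here only to illustrate where the thresholds live; the thread's point is that the LV headroom is a last line of defense *behind* these ratios, in case someone raises them in a panic.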

Den tors 31 okt. 2019 kl 15:07 skrev George Shuklin <george.shuklin@xxxxxxxxx>:
Thank you everyone, I got it. There is no way to fix an out-of-space
BlueStore without expanding it.

Therefore, in production we would stick with a 99%FREE size for the LV, as it
gives operators a 'last chance' to repair the cluster in case of
emergency. It's a bit unfortunate that we need to give up a whole
percent (1% is a lot on 4 TB drives).
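The 99%FREE approach and the later 'last chance' expansion might look roughly like the sketch below. This is an assumption-laden illustration (the VG/LV names and OSD path are hypothetical), not a tested procedure:

```shell
# At deployment time: leave ~1% of the VG unallocated as emergency headroom.
# 'ceph-vg' and 'osd-block-0' are example names.
lvcreate -n osd-block-0 -l 99%FREE ceph-vg

# Later, if the OSD dies with BlueStore out of space:
# stop the OSD, grow the LV into the reserved headroom...
lvextend -l +100%FREE /dev/ceph-vg/osd-block-0

# ...then tell BlueFS/BlueStore to pick up the new device size.
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
```

On a 4 TB drive, that reserved 1% is roughly 40 GB, which is why giving it up feels expensive; but as the thread concludes, it is the only way left to recover once BlueStore itself has no free space.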

In production, things will start giving warnings at 85%, so you should never get into the kind of situation where the last percent matters.
 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

