Re: EC pool used space high


I thought of that but it doesn't make much sense. AFAICT min_size should block IO when I lose 3 OSDs, but it shouldn't affect the amount of stored data. Am I missing something?

On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
On 11/25/19 6:05 PM, Erdem Agaoglu wrote:

What I can't find is the 138,509 G difference between the ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static BTW, checking the same data historically shows we have about 1.12x of what we expect. This seems to make our 1.5x EC overhead a 1.68x overhead in reality. Anyone have any ideas for why this is the case?
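The comparison being described can be sketched as below. The sample values are made up for illustration (chosen so the difference and ratio match the figures quoted in the post); in practice the two inputs would come from the `ceph_cluster_total_used_bytes` and `ceph_pool_stored_raw` metrics:

```python
# Sketch of the used-vs-stored comparison described above.
# The metric names come from the post; the values are hypothetical
# samples picked to reproduce the quoted 138,509 G gap and ~1.12x ratio.
GiB = 1024 ** 3

cluster_total_used_bytes = 1_250_000 * GiB   # hypothetical sample
pool_stored_raw = 1_111_491 * GiB            # hypothetical sample

excess = cluster_total_used_bytes - pool_stored_raw
ratio = cluster_total_used_bytes / pool_stored_raw
print(f"difference: {excess / GiB:,.0f} GiB, ratio: {ratio:.2f}x")
```

With a 1.5x nominal EC overhead, multiplying by this ~1.12x factor yields the ~1.68x effective overhead mentioned above.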

Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3 (+1) gives your calculated 1.67.
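The arithmetic here is just the raw-to-logical ratio of an EC profile. A minimal sketch (a hypothetical helper, not part of Ceph):

```python
# Nominal raw overhead of a k+m erasure-coding profile: (k + m) / k.
# `extra` models one additional chunk's worth of space per stripe,
# as hypothesized above.

def ec_overhead(k: int, m: int, extra: int = 0) -> float:
    """Raw-to-logical space ratio for a k+m EC profile."""
    return (k + m + extra) / k

print(ec_overhead(6, 3))     # 6+3      -> 1.5
print(ec_overhead(6, 3, 1))  # 6+3 (+1) -> ~1.67
```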



k



--
erdem agaoglu
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
