Re: EC pool used space high

Hi again,

Even with this, our 6+3 EC pool with the default bluestore_min_alloc_size of 64 KiB, filled with 4 MiB RBD objects, should not take 1.67x space; it should be around 1.55x. That still leaves roughly 0.12x of unaccounted overhead. Could there be something else at play?
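
For reference, here is the back-of-the-envelope arithmetic behind that 1.55x figure. It is only a rough sketch of my assumption that every shard of a 4 MiB object gets rounded up to whole bluestore_min_alloc_size units, and it ignores stripe_unit details and omap/metadata:

import math

k, m = 6, 3                  # EC profile 6+3
obj_kib = 4 * 1024           # 4 MiB RBD object, in KiB
min_alloc_kib = 64           # bluestore_min_alloc_size mentioned above, in KiB

shard_kib = obj_kib / k                                               # ~682.67 KiB of data per shard
shard_on_disk = math.ceil(shard_kib / min_alloc_kib) * min_alloc_kib  # rounds up to 704 KiB
raw_kib = (k + m) * shard_on_disk                                     # all 9 shards see the same rounding
print(raw_kib / obj_kib)     # 6336 / 4096 = 1.546875, i.e. ~1.55x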

Best,

On Tue, Nov 26, 2019 at 8:08 PM Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
Maybe the following link helps:
https://www.spinics.net/lists/dev-ceph/msg00795.html

On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu <erdem.agaoglu@xxxxxxxxx> wrote:
>
> I thought of that, but it doesn't make much sense. AFAICT min_size should block IO when I lose 3 OSDs, but it shouldn't affect the amount of stored data. Am I missing something?
>
> On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
>>
>> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
>>
>>
>> What I can't explain is the 138,509 G difference between ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static, BTW; checking the same data historically shows we use about 1.12x of what we expect. That seems to turn our 1.5x EC overhead into a 1.68x overhead in reality. Does anyone have any idea why this is the case?
>>
>> Maybe it's min_size related? You are right, 6+3 is 1.50, but 6+3 (+1) gives your calculated 1.67 (10/6).
>>
>>
>>
>> k
>
>
>
> --
> erdem agaoglu


--
erdem agaoglu
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
