Re: Size and capacity calculations questions

Hi!


>>> Thank you!
>>> The output of both commands is below.
>>> I still don't understand why there are 21T of used data (since 5.5T*3 =
>>> 16.5T != 21T), or why there seems to be only 4.5T MAX AVAIL while the
>>> osd output says we have 25T of free space.
>>
>> As far as I know, MAX AVAIL is calculated from mon_osd_full_ratio,
>> the maximum OSD %USE, the WEIGHT, the total weight of the set of OSDs
>> in the pool, and the pool's replication factor.
>> The most heavily used hdd OSD is 34.
>> So (0.95 - 0.65) * 931 / 3 * 1.3 ~= 4.5T
> 
> Correction: (0.95 - 0.65) * 931 / 3 * 1.3 * 37 ~= 4.5T
> 
>> The 1.3 stands for avg OSD size / 931 (you have 13 * 1.8T OSDs and
>> another 24 * 931G).
>> The logic behind this is to estimate how much data you can write into
>> the pool before the most-used OSD becomes full.
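The corrected estimate above can be reproduced with a quick sketch. The variable names are my own, and the concrete values (65% max %USE, 931G smallest-OSD size, 37 OSDs, 1.3 weight factor) are taken from the quoted text, not from Ceph itself:

```python
# Sketch of the MAX AVAIL estimate from the quoted formula above.
# This mirrors the hand calculation, not Ceph's actual implementation.

mon_osd_full_ratio = 0.95   # ratio at which Ceph considers an OSD full
max_osd_use = 0.65          # %USE of the most heavily used OSD (65%)
osd_size_g = 931            # size of the smallest OSDs in G
replication = 3             # pool replication factor
weight_factor = 1.3         # avg OSD size / 931G (13 * 1.8T + 24 * 931G OSDs)
num_osds = 37               # number of OSDs serving the pool

max_avail_g = ((mon_osd_full_ratio - max_osd_use)
               * osd_size_g / replication * weight_factor * num_osds)
print(f"MAX AVAIL ~= {max_avail_g / 1000:.1f} T")  # prints: MAX AVAIL ~= 4.5 T
```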

Thanks for helping me understand these numbers!

We will then probably recreate the BlueStore OSDs one by one with a
reduced min_alloc_size.
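For context on why a smaller min_alloc_size helps: BlueStore rounds each allocation up to a multiple of min_alloc_size, so pools with many small objects show more raw usage than data * replication (one plausible cause of the 21T vs 16.5T gap). A minimal sketch of that rounding, assuming the 64K HDD default from releases before Octopus:

```python
def allocated(object_size, min_alloc_size=64 * 1024):
    """Space BlueStore actually allocates for an object: its size
    rounded up to a multiple of min_alloc_size."""
    blocks = -(-object_size // min_alloc_size)  # ceiling division
    return blocks * min_alloc_size

# A 4 KiB object still consumes a full 64 KiB allocation unit...
print(allocated(4 * 1024))             # prints: 65536
# ...but with min_alloc_size=4K it takes only 4 KiB.
print(allocated(4 * 1024, 4 * 1024))   # prints: 4096
```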

-- 
Jochen Schulz
Dr. rer. nat.
Georg-August University of Göttingen, Institute for Numerical and Applied Mathematics
Lotzestr. 16-18, 37083 Göttingen
email: schulz@xxxxxxxxxxxxxxxxxxxxxx
tel (work): +49 (0)551 39 24525


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
