Re: Pool on limited number of OSDs


 



Janne

Thanks for the good spot. However, all of them are actually 3.53830; that
change was left over after some tests to kick the CRUSH algorithm.
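For completeness, if I decide to revert that leftover test change, a rough
sketch of what I would run (the 3.53830 weight is taken from the other OSDs
in the tree below; adjust as needed):

    ceph osd crush reweight osd.0 3.53830   # put the CRUSH weight back in line with the other OSDs
    ceph osd df tree                        # confirm weights are even again across the three hosts
    ceph pg dump_stuck undersized           # watch whether the stuck PGs go active+clean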

Jacek

On Wed, 19 Feb 2020 at 09:47, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> On Wed, 19 Feb 2020 at 09:42, Jacek Suchenia <
> jacek.suchenia@xxxxxxxxx> wrote:
>
>> Hello Wido
>>
>> Sure, here is a rule:
>> -15    s3  3.53830                 host kw01sv09.sr1.cr1.lab1~s3
>>  11    s3  3.53830                     osd.11
>> -17    s3  3.53830                 host kw01sv10.sr1.cr1.lab1~s3
>>  10    s3  3.53830                     osd.10
>> -16    s3  0.01529                 host kw01sv11.sr1.cr1.lab1~s3
>>   0    s3  0.01529                     osd.0
>>
>
> The sizes seem _very_ uneven. Perhaps CRUSH figures it can't place another
> PG on osd.0 due to its tiny weight, and hence can't form a complete
> replica=3 set using it, and it can't form one without it either, since you
> have only those three OSDs.
>
> --
> May the most significant bit of your life be positive.
>


-- 
Jacek Suchenia
jacek.suchenia@xxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

