Re: size of inc_osdmap vs osdmap

Hmm that does seem odd. How are you looking at those sizes?

On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov <palza00@xxxxxxxxx> wrote:
Greg, for example, for our cluster of ~1000 OSDs:

size osdmap.1357881__0_F7FE779D__none     = 363KB (crush_version 9860, modified 2018-12-12 04:00:17.661731)
size osdmap.1357882__0_F7FE772D__none     = 363KB
size osdmap.1357883__0_F7FE74FD__none     = 363KB (crush_version 9861, modified 2018-12-12 04:00:27.385702)
size inc_osdmap.1357882__0_B783A4EA__none = 1.2MB

The difference between epochs 1357881 and 1357883: the crush weight of one
OSD was increased by 0.01, so we get 5 new pg_temp entries in
osdmap.1357883, yet the inc_osdmap is that huge.
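
For reference, a rough sketch of one way such sizes can be gathered from a
filestore OSD's meta directory; the path below is an assumption and will
differ per host and OSD:

    import os

    # Hypothetical filestore meta directory for a single OSD; adjust the
    # OSD id and mount point for your deployment.
    META_DIR = "/var/lib/ceph/osd/ceph-0/current/meta"

    # Walk the meta collection (it may be split into hashed subdirectories)
    # and print every full and incremental osdmap object with its size.
    for dirpath, _dirnames, filenames in os.walk(META_DIR):
        for name in sorted(filenames):
            if "osdmap" in name:
                size_kb = os.path.getsize(os.path.join(dirpath, name)) / 1024.0
                print("%-55s %8.1f KB" % (name, size_kb))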

On Thu, Dec 6, 2018 at 06:20, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> On Wed, Dec 5, 2018 at 3:32 PM Sergey Dolgov <palza00@xxxxxxxxx> wrote:
>>
>> Hi guys
>>
>> I've run into strange behavior when changing the crushmap. When I
>> change the crush weight of an OSD, I sometimes get an incremental
>> osdmap (1.2MB) that is significantly bigger than the full osdmap (0.4MB).
>
>
> This is probably because when CRUSH changes, the new primary OSDs for a PG will tend to set a "pg temp" value (in the OSDMap) that temporarily reassigns it to the old acting set, so the data can be accessed while the new OSDs get backfilled. Depending on the size of your cluster, the number of PGs on it, and the size of the CRUSH change, this can easily be larger than the rest of the map, because its size is linear in the number of PGs affected rather than roughly proportional to the number of OSDs.
> -Greg
>
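
A rough back-of-envelope illustration of that scaling; the per-entry byte
counts below are assumptions made for the sake of the estimate, not the
exact on-wire encoding:

    # Estimate the pg_temp payload of an incremental osdmap after a CRUSH
    # change that remaps many PGs.  Entry sizes are assumed, not exact.
    replica_count = 3
    bytes_per_pgid = 12     # assumed: pool id + placement seed + overhead
    bytes_per_osd_id = 4    # 32-bit OSD id per acting-set member
    entry_bytes = bytes_per_pgid + replica_count * bytes_per_osd_id

    remapped_pgs = 50_000   # PGs whose acting set changed
    print("pg_temp payload ~ %.1f MB" % (remapped_pgs * entry_bytes / 1e6))
    # -> roughly 1.2 MB: with enough remapped PGs this alone dwarfs a full
    #    map whose size scales mostly with the number of OSDs (~0.4 MB here).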
>>
>> I use Luminous 12.2.8. The cluster was installed a long time ago; I
>> suppose it was initially Firefly.
>> How can I view the contents of an incremental osdmap, or can you give
>> me your opinion on this problem? I think the spikes of traffic right
>> after a crushmap change are related to this behavior.
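
One possible way to inspect such an object, sketched below under the
assumption that it has first been copied out of the OSD's store to a local
file; the exact ceph-dencoder type name and invocation may vary by release
(ceph-dencoder list_types shows what your build supports):

    import subprocess

    # Hypothetical exported copy of the incremental map object.
    INC_MAP_FILE = "inc_osdmap.1357882"

    # Decode the incremental OSDMap and dump it as JSON via ceph-dencoder.
    out = subprocess.check_output([
        "ceph-dencoder",
        "type", "OSDMap::Incremental",
        "import", INC_MAP_FILE,
        "decode",
        "dump_json",
    ])
    print(out.decode())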



--
Best regards, Sergey Dolgov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
