On 25/05/2023 20:36, Stefan Kooman wrote:
> On 5/25/23 18:17, Igor Fedotov wrote:
>> Perhaps...
>>
>> I don't like the idea of using the fragmentation score as a real
>> index. IMO it's mostly a very imprecise first-pass marker to alert
>> that something might be wrong, not a real quantitative, high-quality
>> estimate.
> Chiming in on the high fragmentation issue. We started collecting the
> "fragmentation_rating" of each OSD this afternoon. All OSDs that were
> provisioned a year ago have a fragmentation rating of ~0.9. Not sure
> for how long they have been at this level.
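
(Side note for anyone who wants to sample the same number: the rating
can apparently be pulled per OSD over the admin socket with
"ceph daemon osd.<id> bluestore allocator score block". A minimal
Python sketch, assuming that command and a JSON reply containing a
"fragmentation_rating" field; the OSD ids are placeholders:)

#!/usr/bin/env python3
# Sketch: sample the BlueStore fragmentation rating of a few local OSDs
# via the admin socket. Assumes "ceph daemon osd.<id> bluestore
# allocator score block" is available and returns JSON with a
# "fragmentation_rating" field; the OSD ids below are placeholders.
import json
import subprocess

OSD_IDS = [0, 1, 2]  # hypothetical local OSD ids

for osd_id in OSD_IDS:
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}",
         "bluestore", "allocator", "score", "block"])
    rating = json.loads(out).get("fragmentation_rating")
    print(f"osd.{osd_id}: fragmentation_rating = {rating}")
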
Could you please collect allocation probes from existing OSD logs? Just
a few samples from different OSDs...
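
(In case it saves someone some grepping: a rough Python sketch for
pulling those probes out of the OSD logs, typically under
/var/log/ceph/ceph-osd.*.log. It assumes the probe lines contain
"allocation stats probe" followed by cnt/frags/size fields; adjust the
pattern if your release formats them differently:)

#!/usr/bin/env python3
# Sketch: extract BlueStore allocation probes from the OSD log files
# given on the command line. The regex assumes lines roughly like
#   ... allocation stats probe 3: cnt: <allocs> frags: <frags> size: <bytes>
# and may need adjusting for your release.
import re
import sys

PROBE_RE = re.compile(
    r"allocation stats probe (?P<probe>-?\d+): "
    r"cnt: (?P<cnt>\d+) frags: (?P<frags>\d+) size: (?P<size>\d+)")

for path in sys.argv[1:]:
    with open(path, errors="replace") as log:
        for line in log:
            m = PROBE_RE.search(line)
            if m:
                print(f"{path}: probe {m['probe']}: cnt={m['cnt']} "
                      f"frags={m['frags']} size={m['size']}")
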
> And after reading your mails it might not be a problem at all, but we
> will start collecting this information in the coming weeks. We will be
> re-provisioning all our OSDs, so that might be a good time to look at
> the behavior and development of the "cnt versus frags" ratio.
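
(My understanding of that ratio, for reference: frags divided by cnt
from the allocation probe is the average number of fragments per
allocation, so ~1.0 means allocations are still being served
contiguously and higher values mean more splitting. A tiny sketch with
made-up sample values:)

# Sketch of the "cnt versus frags" ratio: average fragments per
# allocation, computed from the cnt/frags fields of the probes above.
# ~1.0 means allocations are (still) contiguous; higher means more splitting.
def frags_per_alloc(cnt: int, frags: int) -> float:
    return frags / cnt if cnt else float("nan")

# Hypothetical (probe, cnt, frags) samples from one OSD over three days.
samples = [(31, 800_000, 830_000), (32, 810_000, 980_000),
           (33, 815_000, 1_140_000)]
for probe, cnt, frags in samples:
    print(f"probe {probe}: {frags_per_alloc(cnt, frags):.2f} "
          f"frags per allocation")
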
> After we completely emptied a host, even after having the OSDs run
> idle for a couple of hours, the fragmentation rating would not drop
> below 0.27 for some OSDs, and stayed as high as 0.62 for others. Is it
> expected that this does not go to ~zero?
You might be facing the issue fixed by
https://github.com/ceph/ceph/pull/49885
> We have not noticed any performance degradation whatsoever. We are
> running with the bitmap allocator for the current OSDs and the hybrid
> allocator for the newly provisioned OSDs (4k min_alloc_size).
>
> Gr. Stefan
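
(If it helps to compare the two batches of OSDs: a rough sketch for
checking which allocator and min_alloc_size an OSD is actually running
with. It assumes "ceph daemon osd.<id> config get bluestore_allocator"
is available on the OSD host and that "ceph osd metadata <id>" reports
a "bluestore_min_alloc_size" field; treat both commands and the key
names as assumptions:)

#!/usr/bin/env python3
# Sketch: report the allocator and the mkfs-time min_alloc_size of one
# OSD. Assumes the ceph CLI is available locally; the metadata key name
# "bluestore_min_alloc_size" is an assumption and may differ per release.
import json
import subprocess
import sys

osd_id = sys.argv[1] if len(sys.argv) > 1 else "0"  # placeholder OSD id

alloc = json.loads(subprocess.check_output(
    ["ceph", "daemon", f"osd.{osd_id}",
     "config", "get", "bluestore_allocator"]))
meta = json.loads(subprocess.check_output(
    ["ceph", "osd", "metadata", osd_id]))

print(f"osd.{osd_id}: allocator = {alloc.get('bluestore_allocator')}, "
      f"min_alloc_size = {meta.get('bluestore_min_alloc_size')}")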