On 4/15/2019 4:17 PM, Wido den Hollander wrote:
On 4/15/19 2:55 PM, Igor Fedotov wrote:
Hi Wido,
the main driver for this backport was multiple complaints about write op
latency increasing over time.
E.g. see the thread named "ceph osd commit latency increase over time,
until restart" here.
Or http://tracker.ceph.com/issues/38738
Most symptoms pointed to the Stupid Allocator as the root cause.
Hence we decided to backport the bitmap allocator, which should work as
a fix/workaround.
I see, that makes things clear. Anything users should take into account
when setting:
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap
Writing this here for archival purposes so that users who have the same
question can find it easily.
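For the archives as well: these options are only applied when the OSD
starts, so a restart is required. Afterwards you can confirm that a
running OSD picked them up via its admin socket (osd.0 is just an
example id here):

ceph daemon osd.0 config get bluestore_allocator
ceph daemon osd.0 config get bluefs_allocator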
Nothing specific, but a somewhat different memory usage pattern: the
stupid allocator's memory usage is dynamic, while the bitmap allocator's
is completely static in this respect. So depending on the use case an
OSD might require more or less RAM. E.g. on a fresh deployment the
stupid allocator most probably needs less memory than the bitmap
allocator. But the bitmap allocator's RAM usage doesn't change as the
OSD evolves, while the stupid allocator's might grow unexpectedly high.
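As a rough sketch of what "static" means here (illustrative numbers
only; the ~2 bits per allocation unit is my assumption, not a documented
figure, and it varies per release):

8 TB HDD OSD with the default 64 KB bluestore_min_alloc_size_hdd:
  8 TB / 64 KB = ~122 million allocation units
  ~122 million units * ~2 bits = roughly 30 MB of allocator metadata,
  fixed for the lifetime of the OSD regardless of how full it gets.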
FWIW, the resulting disk fragmentation might differ too. The same
applies to their performance, but I'm not sure whether the latter is
noticeable through the full Ceph stack.
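If you want to watch this yourself, the allocator's footprint is
accounted in the bluestore_alloc mempool, so you can track it over time
(osd.0 again as an example id):

ceph daemon osd.0 dump_mempools

Newer releases also expose a fragmentation score over the admin socket,
but as far as I know that isn't available in the Luminous/Mimic
backports.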
Wido
Thanks,
Igor
On 4/15/2019 3:39 PM, Wido den Hollander wrote:
Hi,
With the release of 12.2.12 the bitmap allocator for BlueStore is now
available under Mimic and Luminous.
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap
Before setting this in production: what might the implications be, and
what should be taken into account?
From what I've read, the bitmap allocator seems to offer better read
performance and use less memory.
In Nautilus bitmap is the default, but Luminous and Mimic still default
to stupid.
Since the bitmap allocator was backported, there must be a use case for
using it instead of stupid.
Thanks!
Wido
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com