Earlier in bluestore's life, we couldn't handle a 4K min_alloc_size on
NVMe without incurring pretty significant slowdowns (and generally more
metadata in the DB). Lately I've been seeing indications that we've
improved the stack to the point where a 4K min_alloc_size is no longer
significantly slower on NVMe than 16K. It might be time to consider
switching back for Octopus. On the HDD side I'm not sure whether we
want to consider dropping down from 64K; there are definitely going to
be some trade-offs there.
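To make that trade-off concrete, here's a rough back-of-the-envelope
sketch (purely illustrative, with an arbitrary object size; "units" is
only a crude proxy for the per-object metadata bluestore has to track):

# Allocated space vs. number of allocation units for one object size
# at the min_alloc_size values discussed above. Illustrative only.
OBJECT_SIZE = 24 * 1024  # 24K, matching the example later in the thread

for min_alloc in (4 * 1024, 16 * 1024, 64 * 1024):
    units = -(-OBJECT_SIZE // min_alloc)      # ceiling division
    allocated = units * min_alloc
    wasted = allocated - OBJECT_SIZE
    print(f"min_alloc_size={min_alloc // 1024:>2}K: "
          f"allocated={allocated // 1024}K, "
          f"wasted={wasted // 1024}K, units={units}")

# min_alloc_size= 4K: allocated=24K, wasted=0K, units=6
# min_alloc_size=16K: allocated=32K, wasted=8K, units=2
# min_alloc_size=64K: allocated=64K, wasted=40K, units=1

Smaller min_alloc_size wastes less space per object but means more
allocation units (and hence more metadata to track), which is roughly
the trade-off above.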
Mark
On 6/17/19 3:22 AM, Igor Fedotov wrote:
Hi Maged,
min_alloc_size determines the allocation granularity, so if an object's
size isn't a multiple of it, there is allocation overhead: the space
consumed is rounded up to the next min_alloc_size boundary. E.g. with
min_alloc_size = 16K and an object size of 24K, the total allocation
(i.e. bluestore_allocated) would be 32K.
And yes, this overhead is permanent.
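To illustrate the round-up, a minimal sketch (mine, not bluestore code;
the helper name is made up):

def bluestore_allocated_for(object_size, min_alloc_size=16 * 1024):
    """Round an object's size up to the next min_alloc_size boundary."""
    units = -(-object_size // min_alloc_size)   # ceiling division
    return units * min_alloc_size

# 24K object with min_alloc_size = 16K -> 32K allocated, 8K of overhead
print(bluestore_allocated_for(24 * 1024))               # 32768
print(bluestore_allocated_for(24 * 1024) - 24 * 1024)   # 8192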
Thanks,
Igor
On 6/17/2019 1:06 AM, Maged Mokhtar wrote:
Hi all,
I want to better understand the difference between bluestore_allocated
and bluestore_stored in the case of no compression. If I am writing
fixed-size objects larger than min_alloc_size, would
bluestore_allocated still be higher than bluestore_stored? If so, is
this a permanent overhead/penalty, or is it something the allocator can
re-use/optimize later as more objects are stored?
Appreciate any help.
Cheers /Maged
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com