Re: hybrid allocator based on btree allocator

Unfortunately we don't have any test cases/benchmarks available to assess the allocators' RAM usage at the moment.

What I can suggest is to evaluate an upper bound on RAM usage using a fully degraded case: allocate 100% of the space and then release 50% of the blocks in an interleaved manner.

E.g., for a 4K alloc unit:

alloc(MAX_SIZE)
o = 0
while o < MAX_SIZE:
    release(o, 0x1000)
    o += 0x2000

Then assess memory consumption for every allocator.

I'd suggest making MAX_SIZE comparable to the best modern HDDs, i.e. 10-20 TB. On the other hand, RAM usage in this scenario should rise pretty linearly with MAX_SIZE, hence lower numbers are probably good enough as well.
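
For reference, here's a minimal self-contained C++ sketch of that loop. It models the free-extent set with a std::map (offset -> length) instead of driving the real allocator classes, so it only approximates per-extent overhead; the names and constants are illustrative assumptions, and an actual test would run the same release pattern through each allocator implementation and compare peak RSS:

// Sketch of the degraded-case benchmark described above.
// NOTE: this models an extent-based allocator's free list with a
// std::map rather than calling any real Ceph Allocator API; the
// constants below are assumptions for illustration only.
#include <cstdint>
#include <cstdio>
#include <map>
#include <sys/resource.h>

int main() {
    // 64 GB rather than 10-20 TB: RAM usage grows roughly linearly
    // with MAX_SIZE, so a smaller run can be extrapolated upward.
    const uint64_t MAX_SIZE = 64ULL << 30;
    const uint64_t ALLOC_UNIT = 0x1000;  // 4K alloc unit

    std::map<uint64_t, uint64_t> free_extents;  // offset -> length

    // alloc(MAX_SIZE): the whole device is allocated, nothing free.
    // Then release every other 4K block, producing the worst-case
    // fragmentation: MAX_SIZE / 0x2000 disjoint free extents.
    for (uint64_t o = 0; o < MAX_SIZE; o += 2 * ALLOC_UNIT) {
        free_extents.emplace(o, ALLOC_UNIT);
    }

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    // ru_maxrss is reported in kilobytes on Linux.
    std::printf("free extents: %zu, peak RSS: %ld KB\n",
                free_extents.size(), ru.ru_maxrss);
    return 0;
}

At the suggested 16 TB the same pattern yields MAX_SIZE / 0x2000, i.e. ~2 billion free extents, so every byte of per-extent overhead costs roughly 2 GB of RAM; that's what the linear extrapolation from smaller runs buys you.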


Thanks,

Igor









On 6/25/2021 12:13 PM, Kefu Chai wrote:
thank you Igor!

On Fri, Jun 25, 2021 at 4:45 PM Igor Fedotov <ifedotov@xxxxxxx> wrote:
Hi Kefu and Adam,

curious whether we have a reliable enough set of numbers on how spatially
efficient the btree allocator is? I recall a comment from Adam in the PR
showing a 2x saving for the BTree allocator vs. the AVL one.
no, probably neither reliable nor sufficient. could you shed some light
on how i can get some reliable numbers? shall i run some load on an osd
and check the memory consumption of the allocator? if yes, do we have
any (recommended) benchmark for profiling the disk allocators?

But the data set there seems to be relatively small - total consumed RAM
is just a few MBs. Anything else available?
not so far.

I recall Adam promised to run more extensive testing at the perf meeting...
i see.  will wait for his update then if we don't have a standardized test yet.


Thanks,

Igor



On 6/25/2021 11:35 AM, Kefu Chai wrote:
hi Adam,

while looking at the Hybrid Allocator [0] and the newly introduced Btree
Allocator [1], i am wondering if we still need the bitmap allocator to
cap the memory usage, given the large overhead of the AVL allocator?
because a btree is much more spatially efficient than an AVL tree.

cheers

---
[0] https://github.com/ceph/ceph/pull/33365
[1] https://github.com/ceph/ceph/pull/41828
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx




