Re: bufferlist allocation optimization ideas

Sure, so we could introduce it to the async messenger. We could create a
buffer pool, and the bufferlist API could accept a buffer pool argument to
allocate memory from.
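As a rough illustration of that idea, here is a minimal C++ sketch of a
fixed-size buffer pool that a bufferlist-style append path could draw from
instead of calling the allocator for every small buffer. The buffer_pool
class and its alloc()/release() interface are hypothetical names for this
sketch, not Ceph's actual bufferlist API:

#include <cstddef>
#include <mutex>
#include <vector>

class buffer_pool {
public:
  explicit buffer_pool(size_t chunk_size, size_t reserve = 64)
    : chunk_size_(chunk_size) {
    // Pre-allocate a batch of chunks so the hot path rarely hits the heap.
    for (size_t i = 0; i < reserve; ++i)
      free_list_.push_back(new char[chunk_size_]);
  }
  ~buffer_pool() {
    // Only chunks currently sitting in the free list are reclaimed here;
    // chunks still owned by callers must be release()d first.
    for (char *p : free_list_)
      delete[] p;
  }

  char *alloc() {
    std::lock_guard<std::mutex> l(lock_);
    if (free_list_.empty())
      return new char[chunk_size_];  // pool drained: fall back to the heap
    char *p = free_list_.back();
    free_list_.pop_back();
    return p;
  }

  void release(char *p) {
    std::lock_guard<std::mutex> l(lock_);
    free_list_.push_back(p);         // recycle instead of freeing
  }

  size_t chunk_size() const { return chunk_size_; }

private:
  const size_t chunk_size_;
  std::mutex lock_;
  std::vector<char *> free_list_;
};

A messenger could then keep one pool per common allocation size (e.g. the
~45-byte and ~210-byte buckets from the histogram quoted below) and hand the
pool to whatever constructs the buffers, so most appends become a pop from
the free list rather than a malloc.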

On Wed, Aug 12, 2015 at 12:43 PM, Dałek, Piotr
<Piotr.Dalek@xxxxxxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: Haomai Wang [mailto:haomaiwang@xxxxxxxxx]
>> Sent: Wednesday, August 12, 2015 4:56 AM
>> To: Dałek, Piotr
>>
>> On Wed, Aug 12, 2015 at 5:48 AM, Dałek, Piotr <Piotr.Dalek@xxxxxxxxxxxxxx>
>> wrote:
>> >> -----Original Message-----
>> >> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
>> >> owner@xxxxxxxxxxxxxxx] On Behalf Of Sage Weil
>> >> Sent: Tuesday, August 11, 2015 10:11 PM
>> >>
>> >> I went ahead and implemented both of these pieces.  See
>> >>
>> >>       https://github.com/ceph/ceph/pull/5534
>> >>
>> >> My benchmark numbers are highly suspect, but the approximate takeaway
>> >> is that it's 2x faster for the simple microbenchmarks and does 1/3rd
>> >> the allocations.  But there is some weird interaction with the
>> >> allocator going on for 16k allocations that I saw, so it needs some
>> >> more careful benchmarking.
>> >
>> > 16k allocations aren't that common, actually.
>> > Some time ago I took an alloc profile for raw_char and posix_aligned
>> > buffers, and...
>> >
>> > [root@storage1 /]# sort buffer::raw_char-2143984.dat | uniq -c | sort -g
>> >       1 12
>> >       1 33
>> >       1 393
>> >       1 41
>> >       2 473
>> >       2 66447
>> >       3 190
>> >       3 20
>> >       3 64
>> >       4 16
>> >      36 206
>> >      88 174
>> >      88 48
>> >      89 272
>> >      89 36
>> >      90 34
>> >     312 207
>> >    3238 208
>> >   32403 209
>> >  196300 210
>> >  360164 45
>>
>> Since the sizes are concentrated around a few common values, we could use
>> a fixed-size buffer pool to optimize this. The performance is outstanding
>> according to my perf measurements.
>
> The idea is great, but the execution is tricky, especially in the case of the simple messenger -- we have a lot of threads allocating and freeing memory, so the pool must account for that and not become a bottleneck itself (one sketch of such an approach follows below the quoted message).
>
>
> With best regards / Pozdrawiam
> Piotr Dałek
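
On the contention point: one common way to keep a shared pool from becoming
a bottleneck under many allocating threads is to give each thread a small
local cache of chunks and only take the lock when the local cache is empty
or overflows. The sharded_pool class below is a hypothetical sketch of that
idea, not Ceph code:

#include <cstddef>
#include <mutex>
#include <vector>

class sharded_pool {
public:
  explicit sharded_pool(size_t chunk_size) : chunk_size_(chunk_size) {}
  ~sharded_pool() {
    // Per-thread caches are intentionally leaked in this sketch; only the
    // shared free list is reclaimed.
    for (char *p : shared_)
      delete[] p;
  }

  char *alloc() {
    auto &local = tls_cache();
    if (!local.empty()) {            // fast path: no locking at all
      char *p = local.back();
      local.pop_back();
      return p;
    }
    std::lock_guard<std::mutex> l(lock_);
    if (!shared_.empty()) {
      char *p = shared_.back();
      shared_.pop_back();
      return p;
    }
    return new char[chunk_size_];    // both caches empty: hit the allocator
  }

  void release(char *p) {
    auto &local = tls_cache();
    if (local.size() < 32) {         // keep a bounded per-thread cache
      local.push_back(p);
      return;
    }
    std::lock_guard<std::mutex> l(lock_);
    shared_.push_back(p);            // overflow goes back to the shared pool
  }

private:
  // Note: this single thread-local cache is shared by every sharded_pool
  // instance, so a real implementation would key it per pool or size class.
  static std::vector<char *> &tls_cache() {
    thread_local std::vector<char *> cache;
    return cache;
  }
  const size_t chunk_size_;
  std::mutex lock_;
  std::vector<char *> shared_;
};

With something like this, the simple messenger's many worker threads mostly
touch their own thread-local lists, and the mutex is only contended on the
slow path.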



-- 
Best Regards,

Wheat