Re: [PATCH] mm/page_alloc: make sure __rmqueue() etc. always inline

On 10/18/2017 03:53 AM, Lu, Aaron wrote:
> On Tue, 2017-10-17 at 13:32 +0200, Vlastimil Babka wrote:
>>
>> Are transparent hugepages enabled? If yes, __rmqueue() is called from
>> rmqueue(), and there's only one page fault (and __rmqueue()) per 512
>> "writes to each page". If not, __rmqueue() is called from rmqueue_bulk()
>> in bursts once pcplists are depleted. I guess it's the latter, otherwise
>> I wouldn't expect a function call to have such visible overhead.
> 
> THP is disabled. I should have mentioned this in the changelog, sorry
> about that.

OK, then it makes sense!
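
To illustrate (rough sketch only, not the actual mm/page_alloc.c code;
names and signatures are approximate): with THP off, order-0 allocations
are served from the per-cpu list, and only when that list runs dry does
rmqueue_bulk() call __rmqueue() pcp->batch times back to back, so the
call overhead shows up in bursts.

/* Rough sketch of the order-0 fast path; not the real kernel code. */
static struct page *rmqueue_pcplist_sketch(struct zone *zone, int migratetype,
					   struct per_cpu_pages *pcp,
					   struct list_head *list)
{
	struct page *page;

	if (list_empty(list)) {
		/* burst refill: pcp->batch back-to-back __rmqueue() calls */
		pcp->count += rmqueue_bulk(zone, 0, pcp->batch, list,
					   migratetype, false);
		if (list_empty(list))
			return NULL;
	}

	page = list_first_entry(list, struct page, lru);
	list_del(&page->lru);
	pcp->count--;
	return page;
}

With THP on, the order-9 fault goes through rmqueue() instead, so a
single __rmqueue() call covers 512 base pages and the per-call overhead
is lost in the noise.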

>>
>> I guess what would help much more would be a bulk __rmqueue_smallest()
>> to grab multiple pages from the freelists. But can't argue with your
> 
> Do I understand you correctly that you suggest using a bulk
> __rmqueue_smallest(), say __rmqueue_smallest_bulk()? With that, instead
> of looping pcp->batch times in rmqueue_bulk(), a single call to
> __rmqueue_smallest_bulk() would be enough, and __rmqueue_smallest_bulk()
> would loop pcp->batch times to get those pages?

Yeah, but I looked at it more closely, and maybe there's not much to
gain after all. E.g., there seem to be no atomic counter updates that
would benefit from batching, or expensive setup/cleanup in
__rmqueue_smallest().
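
Spelled out, such a bulk helper would be roughly the following
(hypothetical sketch only; __rmqueue_smallest_bulk() does not exist
upstream):

/* Hypothetical helper, for illustration only - not an upstream function. */
static unsigned long __rmqueue_smallest_bulk(struct zone *zone,
					     unsigned int order,
					     int migratetype,
					     unsigned long count,
					     struct list_head *list)
{
	unsigned long allocated;

	for (allocated = 0; allocated < count; allocated++) {
		struct page *page = __rmqueue_smallest(zone, order, migratetype);

		if (!page)
			break;
		/* nothing to batch here: no shared counter or setup to hoist */
		list_add_tail(&page->lru, list);
	}
	return allocated;
}

which, as you note below, is essentially rmqueue_bulk() moved down one
level.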

> Then it feels like __rmqueue_smallest_bulk() has become rmqueue_bulk(),
> or do I miss something?

Right. It looks like, thanks to inlining, the compiler can already
achieve most of the potential gains.
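
For reference, the patch under discussion boils down to forcing the
inlining, roughly along these lines (illustrative hunk only; the exact
functions annotated and the formatting may differ):

-static struct page *__rmqueue(struct zone *zone, unsigned int order,
-			      int migratetype)
+static __always_inline
+struct page *__rmqueue(struct zone *zone, unsigned int order,
+		       int migratetype)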

>> With gcc 7.2.1:
>>> ./scripts/bloat-o-meter base.o mm/page_alloc.o
>>
>> add/remove: 1/2 grow/shrink: 2/0 up/down: 2493/-1649 (844)
> 
> Nice, it clearly shows 844 bytes of bloat.
> 
>> function                                     old     new   delta
>> get_page_from_freelist                      2898    4937   +2039
>> steal_suitable_fallback                        -     365    +365
>> find_suitable_fallback                        31     120     +89
>> find_suitable_fallback.part                  115       -    -115
>> __rmqueue                                   1534       -   -1534

It also shows that steal_suitable_fallback() is no longer inlined, which
is fine, because it should ideally be executed only rarely.
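
As an aside, for anyone wanting to reproduce the comparison:
bloat-o-meter simply diffs the symbol sizes of two object files, so
something along these lines works, with base.o being a copy of
mm/page_alloc.o built before applying the patch (exact paths as in the
quoted command above):

	$ cp mm/page_alloc.o base.o    # from the unpatched build
	$ # apply the patch, then rebuild just this object
	$ make mm/page_alloc.o
	$ ./scripts/bloat-o-meter base.o mm/page_alloc.o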

>>
>>> [aaron@aaronlu obj]$ size */*/vmlinux
>>>    text    data     bss     dec       hex     filename
>>> 10342757   5903208 17723392 33969357  20654cd gcc-4.9.4/base/vmlinux
>>> 10342757   5903208 17723392 33969357  20654cd gcc-4.9.4/head/vmlinux
>>> 10332448   5836608 17715200 33884256  2050860 gcc-5.5.0/base/vmlinux
>>> 10332448   5836608 17715200 33884256  2050860 gcc-5.5.0/head/vmlinux
>>> 10094546   5836696 17715200 33646442  201676a gcc-6.4.0/base/vmlinux
>>> 10094546   5836696 17715200 33646442  201676a gcc-6.4.0/head/vmlinux
>>> 10018775   5828732 17715200 33562707  2002053 gcc-7.2.0/base/vmlinux
>>> 10018775   5828732 17715200 33562707  2002053 gcc-7.2.0/head/vmlinux
>>>
>>> The vmlinux text size shows no change though, probably due to function
>>> alignment.
>>
>> Yep that's useless to show. These differences do add up though, until
>> they eventually cross the alignment boundary.
> 
> Agreed.
> But you know, it is the hot path, so the performance improvement might
> be worth it.

I'd agree, so you can add

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

