Re: [RFC][PATCH] mm: cut down __GFP_NORETRY page allocation failures

On Tue, May 3, 2011 at 12:51 PM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> Hi Minchan,
>
> On Tue, May 03, 2011 at 08:49:20AM +0800, Minchan Kim wrote:
>> Hi Wu, Sorry for slow response.
>> I guess you know why I am slow. :)
>
> Yeah, never mind :)
>
>> Unfortunately, my patch doesn't consider order-0 pages, as you mentioned below.
>> I read your mail stating that it doesn't help even when extended to
>> order-0 pages and draining.
>> Actually, I tried to look into that, but on my modest system (Core 2 Duo,
>> 2G RAM), nr_alloc_fail never happens. :(
>
> I'm running a 4-core 8-thread CPU with 3G ram.
>
> Did you run with this patch?
>
> [PATCH] mm: readahead page allocations are OK to fail
> https://lkml.org/lkml/2011/4/26/129
>

Of course.
I will try it on a better machine: an i5 4-core with 3G RAM.

> It's very good at generating lots of __GFP_NORETRY order-0 page
> allocation requests.
>
>> I will try it in other desktop but I am not sure I can reproduce it.
>>
>> >
>> > root@fat /home/wfg# ./test-dd-sparse.sh
>> > start time: 246
>> > total time: 531
>> > nr_alloc_fail 14097
>> > allocstall 1578332
>> > LOC:     542698     538947     536986     567118     552114     539605     541201     537623   Local timer interrupts
>> > RES:       3368       1908       1474       1476       2809       1602       1500       1509   Rescheduling interrupts
>> > CAL:     223844     224198     224268     224436     223952     224056     223700     223743   Function call interrupts
>> > TLB:        381         27         22         19         96        404        111         67   TLB shootdowns
>> >
>> > root@fat /home/wfg# getdelays -dip `pidof dd`
>> > print delayacct stats ON
>> > printing IO accounting
>> > PID     5202
>> >
>> >
>> > CPU            count     real total  virtual total    delay total
>> >                 1132     3635447328     3627947550   276722091605
>> > IO             count    delay total  delay average
>> >                    2      187809974           62ms
>> > SWAP           count    delay total  delay average
>> >                    0              0            0ms
>> > RECLAIM        count    delay total  delay average
>> >                 1334    35304580824           26ms
>> > dd: read=278528, write=0, cancelled_write=0
>> >
>> > I guess your patch is mainly fixing the high order allocations while
>> > my workload is mainly order 0 readahead page allocations. There are
>> > 1000 forks, however the "start time: 246" seems to indicate that the
>> > order-1 reclaim latency is not improved.
>>
>> Maybe; 8K * 1000 isn't a big footprint, so I think reclaim doesn't happen.
>
> It's mainly a guess. In an earlier experiment of simply increasing
> nr_to_reclaim to high_wmark_pages() without any other constraints, it
> does manage to reduce start time to about 25 seconds.

If so, I guess the workload might depend on order-0 pages, not stack allocations.

>
>> > I'll try modifying your patch and see how it works out. The obvious
>> > change is to apply it to the order-0 case. Hope this won't create much
>> > more isolated pages.
>> >
>> > Attached is your patch rebased to 2.6.39-rc3, after resolving some
>> > merge conflicts and fixing a trivial NULL pointer bug.
>>
>> Thanks!
>> I would like to see detail with it in my system if I can reproduce it.
>
> OK.
>
>> >> > no cond_resched():
>> >>
>> >> What's this?
>> >
>> > I tried a modified patch that also removes the cond_resched() call in
>> > __alloc_pages_direct_reclaim(), between try_to_free_pages() and
>> > get_page_from_freelist(). It seems not helping noticeably.
>> >
>> > It looks safe to remove that cond_resched() as we already have such
>> > calls in shrink_page_list().
>>
>> I tried a similar thing, but Andrew had a concern about it.
>> https://lkml.org/lkml/2011/3/24/138
>
> Yeah, cond_resched() is at least not the root cause of our problems...
>
>> >> > +                     if (total_scanned > 2 * sc->nr_to_reclaim)
>> >> > +                             goto out;
>> >>
>> >> What if there are lots of dirty pages in the LRU?
>> >> What if there are lots of unevictable pages in the LRU?
>> >> What if there are lots of mapped pages in the LRU but may_unmap = 0?
>> >> I mean, it's a rather risky early bail-out.
>> >
>> > That test means to avoid scanning too much on __GFP_NORETRY direct
>> > reclaims. My assumption for __GFP_NORETRY is, it should fail fast when
>> > the LRU pages seem hard to reclaim. And the problem in the 1000 dd
>> > case is, it's all easy to reclaim LRU pages but __GFP_NORETRY still
>> > fails from time to time, with lots of IPIs that may hurt large
>> > machines a lot.
>>
>> I don't have enough time or an environment to test it.
>> So I can't verify it, but my concern is latency.
>> If you solve the latency problem while considering CPU scaling, I won't oppose it. :)
>
> OK, let's head for that direction :)

Anyway, the draining overhead problem with __GFP_NORETRY is worth
pursuing, I think.
We should handle it.

>
> Thanks,
> Fengguang
>

Thanks for the good experiments and numbers.


-- 
Kind regards,
Minchan Kim

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/