Re: [v7 7/8] mm: only IPI CPUs to drain local pages if they exist

On Sat, Jan 28, 2012 at 2:12 AM, Andrew Morton
<akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 26 Jan 2012 12:02:00 +0200
> Gilad Ben-Yossef <gilad@xxxxxxxxxxxxx> wrote:
>
>> Calculate a cpumask of CPUs with per-cpu pages in any zone
>> and only send an IPI requesting CPUs to drain these pages
>> to the buddy allocator if they actually have pages when
>> asked to flush.
>>
...
>>
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1165,7 +1165,36 @@ void drain_local_pages(void *arg)
>>   */
>>  void drain_all_pages(void)
>>  {
>> -     on_each_cpu(drain_local_pages, NULL, 1);
>> +     int cpu;
>> +     struct per_cpu_pageset *pcp;
>> +     struct zone *zone;
>> +
>> +     /* Allocate in the BSS so we won't require allocation in
>> +      * direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
>> +      */
>> +     static cpumask_t cpus_with_pcps;
>> +
>> +     /*
>> +      * We don't care about racing with CPU hotplug event
>> +      * as offline notification will cause the notified
>> +      * cpu to drain that CPU pcps and on_each_cpu_mask
>> +      * disables preemption as part of its processing
>> +      */
>
> hmmm.
>
>> +     for_each_online_cpu(cpu) {
>> +             bool has_pcps = false;
>> +             for_each_populated_zone(zone) {
>> +                     pcp = per_cpu_ptr(zone->pageset, cpu);
>> +                     if (pcp->pcp.count) {
>> +                             has_pcps = true;
>> +                             break;
>> +                     }
>> +             }
>> +             if (has_pcps)
>> +                     cpumask_set_cpu(cpu, &cpus_with_pcps);
>> +             else
>> +                     cpumask_clear_cpu(cpu, &cpus_with_pcps);
>> +     }
>> +     on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
>>  }
>
> Can we end up sending an IPI to a now-unplugged CPU?  That won't work
> very well if that CPU is now sitting on its sysadmin's desk.

Nope. on_each_cpu_mask() disables preemption and calls
smp_call_function_many(), which then checks the mask against
cpu_online_mask.
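
Roughly, the SMP version of that helper looks like this (a sketch from
memory of the patch in this series, not a verbatim paste):

void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
			void *info, bool wait)
{
	int cpu = get_cpu();	/* disables preemption */

	/*
	 * smp_call_function_many() only IPIs CPUs that are in both
	 * 'mask' and cpu_online_mask, so an unplugged CPU is skipped.
	 */
	smp_call_function_many(mask, func, info, wait);
	if (cpumask_test_cpu(cpu, mask)) {
		local_irq_disable();
		func(info);	/* run the function on the local CPU */
		local_irq_enable();
	}
	put_cpu();		/* re-enables preemption */
}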

> There's also the case of CPU online.  We could end up failing to IPI a
> CPU which now has some percpu pages.  That's not at all serious - 90%
> is good enough in page reclaim.  But this thinking merits a mention in
> the comment.  Or we simply make this code hotplug-safe.

hmm.. I'm probably daft, but I don't see how to make the code hotplug-safe
for the CPU online case. I mean, let's say we disable preemption throughout
the entire ordeal, and then a CPU comes online and gets itself some per-cpu
pages *after* we've calculated the mask, sent the IPIs and waited for the
whole thing to finish, but before we've returned...

I might be missing something here, but I think that unless you have some
other means of stopping newly hotplugged CPUs from grabbing per-cpu pages,
there is nothing you can do in this code to stop it. Maybe make the race
window shorter, that's all.
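
To spell out the interleaving I have in mind:

  drain_all_pages()                     newly hotplugged CPU
  -----------------                     --------------------
  compute cpus_with_pcps
                                        comes online
                                        allocates and frees pages,
                                        filling its pcp lists
  on_each_cpu_mask(&cpus_with_pcps,
                   drain_local_pages, NULL, 1)
    -> the new CPU is not in the mask,
       so its pcp pages are never drained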

Would adding a comment such as the following be OK?

"This code is protected against sending  an IPI to an offline CPU but does not
guarantee sending an IPI to newly hotplugged CPUs"
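
i.e. something like this right above the mask computation:

	/*
	 * This code is protected against sending an IPI to an offline
	 * CPU but does not guarantee sending an IPI to newly
	 * hotplugged CPUs.
	 */
	for_each_online_cpu(cpu) {
		...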


Thanks,
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@xxxxxxxxxxxxx
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"Unfortunately, cache misses are an equal opportunity pain provider."
-- Mike Galbraith, LKML
