Re: [PATCH] mm: check zone->all_unreclaimable in all_unreclaimable()

On 03/11/2011 03:18 AM, Minchan Kim wrote:
On Fri, Mar 11, 2011 at 8:58 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
On Thu, 10 Mar 2011 15:58:29 +0900
Minchan Kim <minchan.kim@xxxxxxxxx> wrote:

Hi Kame,

Sorry for the late response.
I have only had a short time to test this issue because I am very busy these days.
This issue is interesting to me, so I hope to spend enough time on proper testing
when I can.
I should find out the root cause of the livelock.


Thanks. Kosaki-san and I reproduced the bug on a swapless system.
Now Kosaki-san is digging into it and has found some issues with the scheduler boost at OOM
and a lack of enough "wait" in vmscan.c.

I myself made a patch like the attached one. It works well for returning TRUE from
all_unreclaimable(), but the livelock (deadlock?) still happens.

I saw the deadlock.
From my quick debugging it seems to be caused by the following code, but I'm not sure. I
need to investigate further but don't have time now. :(


                  * Note: this may have a chance of deadlock if it gets
                  * blocked waiting for another task which itself is waiting
                  * for memory. Is there a better alternative?
                  */
                 if (test_tsk_thread_flag(p, TIF_MEMDIE))
                         return ERR_PTR(-1UL);
It would wait forever for that task to die without selecting another victim.
If that's right, it's a known BUG and we have had no choice until now. Hmm.
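
For concreteness, here is a heavily simplified sketch of the failure mode I mean
(not the real allocator/oom_kill.c paths; reclaim_made_progress(),
select_bad_process_sketch() and oom_kill_victim_sketch() are hypothetical
stand-ins): as long as one task keeps TIF_MEMDIE set, every OOM invocation backs
off without choosing a new victim, so the failing allocation just loops.

static void allocation_retry_sketch(gfp_t gfp_mask)
{
        struct task_struct *victim;

        for (;;) {
                if (reclaim_made_progress(gfp_mask))
                        return;         /* the allocation can succeed now */

                victim = select_bad_process_sketch();
                if (victim == ERR_PTR(-1UL)) {
                        /*
                         * Some task already has TIF_MEMDIE, so "wait for it
                         * to exit".  If that task is itself blocked waiting
                         * for memory, nothing is ever freed and this loop
                         * never terminates.
                         */
                        schedule_timeout_uninterruptible(1);
                        continue;
                }
                if (victim)
                        oom_kill_victim_sketch(victim);
        }
}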


I fixed this bug too and sent the patch "mm: skip zombie in OOM-killer":

http://groups.google.com/group/linux.kernel/browse_thread/thread/b9c6ddf34d1671ab/2941e1877ca4f626?lnk=raot&pli=1

-		if (test_tsk_thread_flag(p, TIF_MEMDIE))
+		if (test_tsk_thread_flag(p, TIF_MEMDIE) && p->mm)
  			return ERR_PTR(-1UL);

It is not committed yet, because David Rientjes and company are deciding what to do with "[patch] oom: prevent unnecessary oom kills or kernel panics".
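
In other words, the selection loop should only back off for a TIF_MEMDIE task that
still owns an mm; a TIF_MEMDIE task whose mm is already gone is a zombie that can
free nothing more. A simplified sketch of the loop with the fixed check (not the
verbatim oom_kill.c code; badness_sketch() is a hypothetical stand-in for the real
scoring):

static struct task_struct *select_bad_process_sketch(void)
{
        struct task_struct *p, *chosen = NULL;
        unsigned long points, chosen_points = 0;

        for_each_process(p) {
                /*
                 * A kill is in flight and the victim can still free memory:
                 * back off and let it exit.  Without the p->mm test, a
                 * zombie that keeps TIF_MEMDIE forever blocks selection of
                 * any further victim.
                 */
                if (test_tsk_thread_flag(p, TIF_MEMDIE) && p->mm)
                        return ERR_PTR(-1UL);

                points = badness_sketch(p);     /* hypothetical scoring helper */
                if (points > chosen_points) {
                        chosen = p;
                        chosen_points = points;
                }
        }
        return chosen;  /* NULL if nothing killable was found */
}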

I suspect vmscan itself isn't the key to fixing this issue.

I agree.

Then, I'd like to wait for Kosaki-san's answer ;)

Me, too. :)


I'm now wondering how to catch a fork-bomb and stop it (without using cgroups).

Yes. Fork throttling without cgroups is very important.
And, as an off-topic note, the mem_notify without memcontrol that you mentioned is
important to embedded people, I guess.

I think the problem is that a fork-bomb is faster than killall...

And the deadlock problem I mentioned.


Thanks,
-Kame

Thanks for the investigation, Kame.

==

This is just a debug patch.

---
  mm/vmscan.c |   58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++----
  1 file changed, 54 insertions(+), 4 deletions(-)

Index: mmotm-0303/mm/vmscan.c
===================================================================
--- mmotm-0303.orig/mm/vmscan.c
+++ mmotm-0303/mm/vmscan.c
@@ -1983,9 +1983,55 @@ static void shrink_zones(int priority, s
        }
  }

-static bool zone_reclaimable(struct zone *zone)
+static bool zone_seems_empty(struct zone *zone, struct scan_control *sc)
  {
-       return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
+       unsigned long nr, wmark, free, isolated, lru;
+
+       /*
+        * If scanned, zone->pages_scanned is incremented and this can
+        * trigger OOM.
+        */
+       if (sc->nr_scanned)
+               return false;
+
+       free = zone_page_state(zone, NR_FREE_PAGES);
+       isolated = zone_page_state(zone, NR_ISOLATED_FILE);
+       if (nr_swap_pages)
+               isolated += zone_page_state(zone, NR_ISOLATED_ANON);
+
+       /* If we cannot scan, don't count LRU pages. */
+       if (!zone->all_unreclaimable) {
+               lru = zone_page_state(zone, NR_ACTIVE_FILE);
+               lru += zone_page_state(zone, NR_INACTIVE_FILE);
+               if (nr_swap_pages) {
+                       lru += zone_page_state(zone, NR_ACTIVE_ANON);
+                       lru += zone_page_state(zone, NR_INACTIVE_ANON);
+               }
+       } else
+               lru = 0;
+       nr = free + isolated + lru;
+       wmark = min_wmark_pages(zone);
+       wmark += zone->lowmem_reserve[gfp_zone(sc->gfp_mask)];
+       wmark += 1 << sc->order;
+       printk("thread %d/%ld all %d scanned %ld pages %ld/%ld/%ld/%ld/%ld/%ld\n",
+               current->pid, sc->nr_scanned, zone->all_unreclaimable,
+               zone->pages_scanned,
+               nr, free, isolated, lru,
+               zone_reclaimable_pages(zone), wmark);
+       /*
+        * In some cases (especially noswap), almost all page cache is paged out
+        * and we'll see that the amount of reclaimable+free pages is smaller than
+        * zone->min. In this case, we cannot expect any recovery other
+        * than OOM-KILL; we can't reclaim enough memory for usual tasks.
+        */
+
+       return nr <= wmark;
+}
+
+static bool zone_reclaimable(struct zone *zone, struct scan_control *sc)
+{
+       /* zone_reclaimable_pages() can return 0, we need <= */
+       return zone->pages_scanned <= zone_reclaimable_pages(zone) * 6;
  }

  /*
@@ -2006,11 +2052,15 @@ static bool all_unreclaimable(struct zon
                        continue;
                if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
                        continue;
-               if (zone_reclaimable(zone)) {
+               if (zone_seems_empty(zone, sc))
+                       continue;
+               if (zone_reclaimable(zone, sc)) {
                        all_unreclaimable = false;
                        break;
                }
        }
+       if (all_unreclaimable)
+               printk("all_unreclaimable() returns TRUE\n");

        return all_unreclaimable;
  }
@@ -2456,7 +2506,7 @@ loop_again:
                        if (zone->all_unreclaimable)
                                continue;
                        if (!compaction && nr_slab == 0 &&
-                           !zone_reclaimable(zone))
+                           !zone_reclaimable(zone, &sc))
                                zone->all_unreclaimable = 1;
                        /*
                         * If we've done a decent amount of scanning and





