> On Thu, Mar 24, 2011 at 3:16 PM, KOSAKI Motohiro
> <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
> > Hi
> >
> >> Thanks for your effort, Kosaki.
> >> But I still doubt this patch is good.
> >>
> >> This patch makes OOM killing happen early in hibernation, as it
> >> skips the all_unreclaimable check.
> >> Normally, hibernation needs a lot of memory, so page reclaim
> >> pressure would be high on a small-memory system. So I don't like
> >> giving up early.
> >
> > Wait. When does the big pressure occur? Hibernation reclaim pressure
> > (sc->nr_to_reclaim) depends on physical memory size; therefore the
> > pressure doesn't seem to depend on the size.
>
> It depends on physical memory size and /sys/power/image_size.
> If you tune the image size bigger, reclaim pressure grows.

Ok, _if_ I want. However, I haven't seen desktop people customize it.

> >> Do you think my patch has a problem? Personally, I think it's very
> >> simple and clear. :)
> >
> > To be honest, I dislike the following part. It's madness on madness.
> >
> >     static bool zone_reclaimable(struct zone *zone)
> >     {
> >             if (zone->all_unreclaimable)
> >                     return false;
> >
> >             return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
> >     }
> >
> > The function requires that a reviewer knows:
> >
> >  o pages_scanned and all_unreclaimable are racy
>
> Yes. That part should be written down in a comment.
>
> >  o at hibernation, zone->all_unreclaimable can be a false negative,
> >    but can't be a false positive.
>
> The comment on all_unreclaimable already explains it well, I think.

Where is it?

> > And the function comment on all_unreclaimable() says:
> >
> >     /*
> >      * As hibernation is going on, kswapd is freezed so that it can't mark
> >      * the zone into all_unreclaimable. It can't handle OOM during hibernation.
> >      * So let's check zone's unreclaimable in direct reclaim as well as kswapd.
> >      */
> >
> > But now it is no longer a copy of the kswapd algorithm.
> The comment doesn't say it should be a copy of kswapd.

I meant that the comment says

     * So let's check zone's unreclaimable in direct reclaim as well as kswapd.

but now it isn't "as well as kswapd". I think this is critically
important. If people can't understand why the algorithm was chosen,
someone will break the code again sooner or later.

> > If you strongly prefer this idea even after hearing the above
> > explanation, please consider adding many more comments. I can't say
> > your current patch is readable/reviewable enough.
>
> My patch isn't a formal patch for merge but just a concept to show.
> If you agree with the idea, of course, I will add more concrete
> comments when I send the formal patch.
>
> Before that, I would like to get your agreement. :)
> If you solve my concern (early give-up in hibernation) in your patch,
> I don't insist on my patch, either.

Ok. Let's try. Please consider why priority=0 is not enough.

zone_reclaimable_pages(zone) * 6 is a conservative value that accounts
for multi-thread races. While one task is reclaiming, others can
allocate/free memory concurrently; therefore, even after a priority=0
pass, we still have a chance of finding reclaimable pages on the LRU.

But in the hibernation case, almost all tasks were frozen before
hibernation called shrink_all_memory(); therefore there is no race,
and a priority=0 reclaim can cover all LRU pages.

Is this enough explanation for you?

> Thanks for the comment, Kosaki.