On 2/24/10, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Tue, 23 Feb 2010 22:13:56 +0100
> "Rafael J. Wysocki" <rjw@xxxxxxx> wrote:
>
>> Well, it still looks like we're waiting for create_workqueue_thread() to
>> return, which probably is trying to allocate memory for the thread
>> structure.
>>
>> My guess is that the preallocated memory pages freed by
>> free_unnecessary_pages() go into a place from where they cannot be taken
>> for subsequent NOIO allocations.  I have no idea why that happens though.
>>
>> To test that theory you can try to change GFP_IOFS to GFP_KERNEL in the
>> calls to clear_gfp_allowed_mask() in kernel/power/hibernate.c (and in
>> kernel/power/suspend.c for completeness).
>>
>
> If allocation of kernel threads for stop_machine_run() is the problem,
>
> What happens when
> 1. use CONFIG_4KSTACK

Interesting question.  4KSTACK doesn't stop it though; it hangs in the same
place.

> or
> 2. make use of stop_machine_create(), stop_machine_destroy().
>    A new interface added by this commit.
>    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=9ea09af3bd3090e8349ca2899ca2011bd94cda85
>    You can do no-fail stop_machine_run().
>
> Thanks,
> -Kame

Since this is a uni-processor machine, that would make it a single 4K
allocation.  AIUI this is supposed to be ok.  The hibernation code tries to
make sure there is over 1000x that much free RAM (ish), in anticipation of
exactly this sort of requirement.

There appear to be some deficiencies in the way this allowance works, which
have recently been exposed.  And unfortunately the allocation hangs instead
of failing, so we're in unclean-shutdown territory.

I have three test scenarios at the moment.  I've tested two patches which
appear to fix the common cases, but there's still a third test scenario to
figure out.  (Repeated hibernation attempts with insufficient swap -
encountered during real-world use, believe it or not.)

Alan
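
P.S.  For anyone reading along, here is a rough, untested sketch of the
stop_machine_create()/stop_machine_destroy() pattern Kame is referring to.
The two functions themselves are the ones added by the commit linked above;
the wrapper function, its name, and its placement below are purely my own
illustration, not actual hibernate code:

	#include <linux/stop_machine.h>

	/* Hypothetical caller, for illustration only. */
	static int example_prepare_freeze(void)
	{
		int err;

		/*
		 * Create the stop_machine threads up front, while
		 * allocations may still sleep and a failure can be
		 * handled cleanly.
		 */
		err = stop_machine_create();
		if (err)
			return err;

		/*
		 * ... later, a stop_machine() call made from the
		 * restricted-allocation path finds its threads already
		 * in place and does not need to allocate, so it cannot
		 * hang or fail at that point.
		 */

		stop_machine_destroy();	/* release the threads again */
		return 0;
	}

And Rafael's suggested experiment, as I read his mail, is just a one-argument
change in kernel/power/hibernate.c (and kernel/power/suspend.c), along these
lines (again a sketch from his description, not a tested patch):

	-	clear_gfp_allowed_mask(GFP_IOFS);
	+	clear_gfp_allowed_mask(GFP_KERNEL);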