On Sat, 9 May 2009, Rafael J. Wysocki wrote:

> > All of your tasks are in D state other than kthreads, right?  That means
> > they won't be in the oom killer (thus no zones are oom locked), so you
> > can easily do this
> >
> > 	struct zone *z;
> > 	for_each_populated_zone(z)
> > 		zone_set_flag(z, ZONE_OOM_LOCKED);
> >
> > and then
> >
> > 	for_each_populated_zone(z)
> > 		zone_clear_flag(z, ZONE_OOM_LOCKED);
> >
> > The serialization is done with trylocks so this will never invoke the oom
> > killer because all zones in the allocator's zonelist will be oom locked.
>
> Well, that might have been a good idea if it actually had worked. :-(
>
> > Why does this not work for you?
>
> If I set image_size to something below "hard core working set" +
> totalreserve_pages, preallocate_image_memory() hangs the box (please refer
> to the last patch I sent, http://patchwork.kernel.org/patch/22423/).

This has been changed in the latest mmotm with Mel's page allocator patches
(and I think yours should be based on mmotm).  Specifically,
page-allocator-break-up-the-allocator-entry-point-into-fast-and-slow-paths.patch.

Before his patchset, zonelists that had ZONE_OOM_LOCKED set for at least one
of their zones would unconditionally goto restart.  Now, if order >
PAGE_ALLOC_COSTLY_ORDER, it gives up and returns NULL.  Otherwise, it does
goto restart.

So if your allocation has order > PAGE_ALLOC_COSTLY_ORDER, using the
ZONE_OOM_LOCKED approach to locking out the oom killer will work just fine
in mmotm.

_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm