On Saturday 02 May 2009, Andrew Morton wrote:
> On Sat, 2 May 2009 00:29:38 +0200
> "Rafael J. Wysocki" <rjw@xxxxxxx> wrote:
>
> > From: Rafael J. Wysocki <rjw@xxxxxxx>
> >
> > Modify the hibernation memory shrinking code so that it will make
> > memory allocations to free memory instead of using an artificial
> > memory shrinking mechanism for that.  Remove the shrinking of
> > memory from the suspend-to-RAM code, where it is not really
> > necessary.  Finally, remove the no longer used memory shrinking
> > functions from mm/vmscan.c .
> >
> > ...
> >
> > +static long alloc_and_mark_pages(struct memory_bitmap *bm, long nr_pages)
> >  {
> > -	if (tmp > SHRINK_BITE)
> > -		tmp = SHRINK_BITE;
> > -	return shrink_all_memory(tmp);
> > +	long nr_normal = 0;
> > +
> > +	while (nr_pages-- > 0) {
> > +		struct page *page;
> > +
> > +		page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
> > +		if (!page)
> > +			return -ENOMEM;
> > +		memory_bm_set_bit(bm, page_to_pfn(page));
> > +		if (!PageHighMem(page))
> > +			nr_normal++;
> > +	}
> > +
> > +	return nr_normal;
> >  }
>
> Do we need the bitmap?  I expect we can just string all these pages
> onto a local list via page.lru.  Would need to check that - the
> pageframe fields are quite overloaded.

This is the reason why we use the bitmaps for hibernation. :-)

> > ...
> >
> > +#define SHRINK_BITE	10000
> > +	long size, highmem_size, ret;
> > +
> > +	highmem_size = count_highmem_pages() - 2 * alloc_highmem;
> > +	size = count_data_pages() + PAGES_FOR_IO + SPARE_PAGES
> > +		- 2 * alloc_normal;
>
> It'd be nice if this head-spinning arithmetic were spelled out in a
> comment somewhere.  There are rather a lot of magic-number heuristics
> in here.

Well, yeah.  I'll try to write up something. :-)

> >  	tmp = size;
> >  	size += highmem_size;
> >  	for_each_populated_zone(zone) {
> > @@ -621,27 +671,39 @@ int swsusp_shrink_memory(void)
>
> All looks pretty sane to me.

Great, thanks for the comments!