> From: Seth Jennings [mailto:sjenning@xxxxxxxxxxxxxxxxxx]
> Sent: Monday, April 02, 2012 8:14 AM
> To: Greg Kroah-Hartman
> Cc: Nitin Gupta; Dan Magenheimer; Konrad Rzeszutek Wilk; Robert Jennings; Seth Jennings;
> devel@xxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx
> Subject: [PATCH] staging: zsmalloc: fix memory leak
>
> From: Nitin Gupta <ngupta@xxxxxxxxxx>
>
> This patch fixes a memory leak in zsmalloc where the first
> subpage of each zspage is leaked when the zspage is freed.
>
> Based on 3.4-rc1.
>
> Signed-off-by: Nitin Gupta <ngupta@xxxxxxxxxx>
> Acked-by: Seth Jennings <sjenning@xxxxxxxxxxxxxxxxxx>

This is a rather severe memory leak and will affect most benchmarking
anyone does to evaluate zcache in 3.4 (e.g. as to whether zcache is
suitable for promotion), so it would be nice to get this patch in for -rc2.

(Note that this fixes a "regression" affecting zcache only in 3.4+, since
the fix is to the new zsmalloc allocator... so no change to stable trees
is needed.)

Acked-by: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>

> ---
>  drivers/staging/zsmalloc/zsmalloc-main.c |   30 ++++++++++++++++++------------
>  1 files changed, 18 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index 09caa4f..917461c 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -267,33 +267,39 @@ static unsigned long obj_idx_to_offset(struct page *page,
>  	return off + obj_idx * class_size;
>  }
>
> +static void reset_page(struct page *page)
> +{
> +	clear_bit(PG_private, &page->flags);
> +	clear_bit(PG_private_2, &page->flags);
> +	set_page_private(page, 0);
> +	page->mapping = NULL;
> +	page->freelist = NULL;
> +	reset_page_mapcount(page);
> +}
> +
>  static void free_zspage(struct page *first_page)
>  {
> -	struct page *nextp, *tmp;
> +	struct page *nextp, *tmp, *head_extra;
>
>  	BUG_ON(!is_first_page(first_page));
>  	BUG_ON(first_page->inuse);
>
> -	nextp = (struct page *)page_private(first_page);
> +	head_extra = (struct page *)page_private(first_page);
>
> -	clear_bit(PG_private, &first_page->flags);
> -	clear_bit(PG_private_2, &first_page->flags);
> -	set_page_private(first_page, 0);
> -	first_page->mapping = NULL;
> -	first_page->freelist = NULL;
> -	reset_page_mapcount(first_page);
> +	reset_page(first_page);
>  	__free_page(first_page);
>
>  	/* zspage with only 1 system page */
> -	if (!nextp)
> +	if (!head_extra)
>  		return;
>
> -	list_for_each_entry_safe(nextp, tmp, &nextp->lru, lru) {
> +	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
>  		list_del(&nextp->lru);
> -		clear_bit(PG_private_2, &nextp->flags);
> -		nextp->index = 0;
> +		reset_page(nextp);
>  		__free_page(nextp);
>  	}
> +	reset_page(head_extra);
> +	__free_page(head_extra);
>  }
>
>  /* Initialize a newly allocated zspage */
> --
> 1.7.5.4
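
For readers following the diff: the leak comes from the fact that
list_for_each_entry_safe() walks only the entries linked onto the list head
it is given, never the head itself. The old code used the first extra
subpage's own lru as that head, so every later subpage was freed but the
head page was skipped; the patch keeps it in head_extra and frees it
explicitly after the loop. Below is a minimal user-space sketch of that
iteration behavior. The names zs_page, lru_link, list_init and
list_add_tail are simplified stand-ins invented for illustration, not the
kernel API; only the iteration pattern mirrors list_for_each_entry_safe.

/* Illustrative sketch only -- simplified stand-ins, not kernel code. */
#include <stdio.h>
#include <stddef.h>

struct lru_link {
	struct lru_link *next, *prev;
};

struct zs_page {
	int id;
	struct lru_link lru;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void list_init(struct lru_link *head)
{
	head->next = head->prev = head;
}

static void list_add_tail(struct lru_link *new, struct lru_link *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

int main(void)
{
	struct zs_page head_extra = { .id = 1 };	/* first extra subpage */
	struct zs_page p2 = { .id = 2 }, p3 = { .id = 3 };
	struct lru_link *pos, *tmp;

	/* Later subpages are chained onto head_extra's lru, as in zsmalloc. */
	list_init(&head_extra.lru);
	list_add_tail(&p2.lru, &head_extra.lru);
	list_add_tail(&p3.lru, &head_extra.lru);

	/*
	 * Walking the list the way the old free_zspage() did visits only
	 * the pages linked onto head_extra's lru, never head_extra itself,
	 * so without the explicit free after the loop the first subpage
	 * (id 1 here) would leak.
	 */
	for (pos = head_extra.lru.next, tmp = pos->next;
	     pos != &head_extra.lru;
	     pos = tmp, tmp = pos->next) {
		struct zs_page *page = container_of(pos, struct zs_page, lru);
		printf("freed subpage %d\n", page->id);	/* prints 2, then 3 */
	}

	printf("subpage %d still needs an explicit free\n", head_extra.id);
	return 0;
}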