On 03/08/2010 06:20 PM, Minchan Kim wrote:
On Mon, 2010-03-08 at 17:33 +0800, Huang Shijie wrote:
prep_new_page() already calls set_page_private(page, 0) when it initializes
the page, so this code is redundant.
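For reference, prep_new_page() in mm/page_alloc.c (around 2.6.33) clears
page->private for every page handed out by the buddy allocator; a trimmed
sketch, from memory, looks roughly like this:

static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
{
	int i;

	/* reject pages that still look in use (bad flags, refcount, mapping) */
	for (i = 0; i < (1 << order); i++) {
		struct page *p = page + i;
		if (unlikely(check_new_page(p)))
			return 1;
	}

	set_page_private(page, 0);	/* private is already zeroed here */
	set_page_refcounted(page);

	/* arch hooks, __GFP_ZERO clearing and compound-page setup trimmed */
	return 0;
}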
Signed-off-by: Huang Shijie <shijie8@xxxxxxxxx>
---
mm/shmem.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index eef4ebe..dde4363 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -433,8 +433,6 @@ static swp_entry_t *shmem_swp_alloc(struct shmem_inode_info *info, unsigned long
 		spin_unlock(&info->lock);
 		page = shmem_dir_alloc(mapping_gfp_mask(inode->i_mapping));
-		if (page)
-			set_page_private(page, 0);
 		spin_lock(&info->lock);
 		if (!page) {
And I found another place while reviewing the code.
From e64322cde914e43d080d8f3be6f72459d809a934 Mon Sep 17 00:00:00 2001
From: Minchan Kim <barrios@barrios-desktop.(none)>
Date: Tue, 9 Mar 2010 01:09:56 +0900
Subject: [PATCH] kvm: remove redundant initialization of page->private.
prep_new_page() in the page allocator already calls set_page_private(page, 0),
so we don't need to reinitialize page->private here.
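For clarity, set_page_private() is only a direct store to page->private; if I
remember include/linux/mm.h correctly, around this version it is simply:

#define page_private(page)		((page)->private)
#define set_page_private(page, v)	((page)->private = (v))

so the line removed below just repeats the zeroing that prep_new_page() has
already done at allocation time.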
Signed-off-by: Minchan Kim <minchan.kim@xxxxxxxxx>
Cc: Avi Kivity <avi@xxxxxxxxxx>
---
arch/x86/kvm/mmu.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 741373e..9851d0e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -326,7 +326,6 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
 		page = alloc_page(GFP_KERNEL);
 		if (!page)
 			return -ENOMEM;
-		set_page_private(page, 0);
 		cache->objects[cache->nobjs++] = page_address(page);
 	}
 	return 0;
Whitespace damage, please resend.
--
error compiling committee.c: too many arguments to function