[PATCH 1/4] shmem: simplify shmem_unlock_mapping


find_get_pages() can now skip an unlimited number of exceptional entries,
so shmem_find_get_pages_and_swap() is no longer required here.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxx>
---
 mm/shmem.c |   12 ++----------
 1 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 269d049..4af8e85 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -397,7 +397,6 @@ static void shmem_deswap_pagevec(struct pagevec *pvec)
 void shmem_unlock_mapping(struct address_space *mapping)
 {
 	struct pagevec pvec;
-	pgoff_t indices[PAGEVEC_SIZE];
 	pgoff_t index = 0;
 
 	pagevec_init(&pvec, 0);
@@ -405,16 +404,9 @@ void shmem_unlock_mapping(struct address_space *mapping)
 	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
 	 */
 	while (!mapping_unevictable(mapping)) {
-		/*
-		 * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
-		 * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
-		 */
-		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
-					PAGEVEC_SIZE, pvec.pages, indices);
-		if (!pvec.nr)
+		if (!pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE))
 			break;
-		index = indices[pvec.nr - 1] + 1;
-		shmem_deswap_pagevec(&pvec);
+		index = pvec.pages[pvec.nr - 1]->index + 1;
 		check_move_unevictable_pages(pvec.pages, pvec.nr);
 		pagevec_release(&pvec);
 		cond_resched();

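For reference, this is roughly how the loop should read after the hunk is
applied (a sketch reassembled from the context and '+' lines above, assuming
the rest of shmem_unlock_mapping() is unchanged):

	while (!mapping_unevictable(mapping)) {
		/* pagevec_lookup() now copes with exceptional (swap) entries */
		if (!pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE))
			break;
		/* continue after the last page found in this batch */
		index = pvec.pages[pvec.nr - 1]->index + 1;
		check_move_unevictable_pages(pvec.pages, pvec.nr);
		pagevec_release(&pvec);
		cond_resched();
	}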