[PATCH] [RFC, RT] fix kmap_high_get

This fixes the build failure with ARCH_NEEDS_KMAP_HIGH_GET defined.
It is only compile-tested.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@xxxxxxxxxxxxxx>
Cc: Nicolas Pitre <nico@xxxxxxxxxxx>
Cc: MinChan Kim <minchan.kim@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Li Zefan <lizf@xxxxxxxxxxxxxx>
Cc: Jens Axboe <jens.axboe@xxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
---
Hello

this is based on the patch "[PATCH RT 9/6] [RFH] Build failure on
2.6.31-rc4-rt1 in mm/highmem.c" sent earlier in this thread.

I don't know if kmap_high_get() has to call kmap_account().  Anyone?
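
To make the question concrete: after this patch the two entry points end up
structured roughly like this (simplified from the hunks below, not the
literal code; the kmap_account() placement is exactly what I'm unsure about):

	void *kmap_high(struct page *page)
	{
		unsigned long vaddr;

		kmap_account();		/* RT accounting, pairs with kunmap_account() */
	again:
		/* fast path: reuse and pin an already existing mapping */
		vaddr = (unsigned long)kmap_high_get(page);
		if (vaddr)
			return (void *)vaddr;

		/* slow path: set up a new pkmap entry */
		vaddr = pkmap_insert(page);
		if (!vaddr)
			goto again;	/* no slot free, retry */

		return (void *)vaddr;
	}

	/* kmap_high_get() itself does no accounting at all. */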

As I don't have any knowledge about highmem (or mm in general), I'll go into
hiding before tglx catches me with his trout.

Best regards
Uwe
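
PS: For reference, the calling convention described by the (moved) kerneldoc
of kmap_high_get() boils down to something like the following.  This is only
an illustrative sketch, not an in-tree caller:

	struct page *page = ...;	/* some highmem page */
	void *vaddr;

	/* Pin the page, but only if it already has a highmem mapping. */
	vaddr = kmap_high_get(page);
	if (vaddr) {
		/* ... use the mapping at vaddr ... */
		kunmap_high(page);	/* required iff the get returned non-NULL */
	} else {
		/* no mapping exists; if one is needed, create it (e.g. via kmap()) */
	}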

 mm/highmem.c |   79 ++++++++++++++++++++-------------------------------------
 1 files changed, 28 insertions(+), 51 deletions(-)

diff --git a/mm/highmem.c b/mm/highmem.c
index 4aa9eea..b5f5faf 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -75,26 +75,6 @@ pte_t * pkmap_page_table;
 
 static DECLARE_WAIT_QUEUE_HEAD(pkmap_wait);
 
-
-/*
- * Most architectures have no use for kmap_high_get(), so let's abstract
- * the disabling of IRQ out of the locking in that case to save on a
- * potential useless overhead.
- */
-#ifdef ARCH_NEEDS_KMAP_HIGH_GET
-#define lock_kmap()             spin_lock_irq(&kmap_lock)
-#define unlock_kmap()           spin_unlock_irq(&kmap_lock)
-#define lock_kmap_any(flags)    spin_lock_irqsave(&kmap_lock, flags)
-#define unlock_kmap_any(flags)  spin_unlock_irqrestore(&kmap_lock, flags)
-#else
-#define lock_kmap()             spin_lock(&kmap_lock)
-#define unlock_kmap()           spin_unlock(&kmap_lock)
-#define lock_kmap_any(flags)    \
-		do { spin_lock(&kmap_lock); (void)(flags); } while (0)
-#define unlock_kmap_any(flags)  \
-		do { spin_unlock(&kmap_lock); (void)(flags); } while (0)
-#endif
-
 /*
  * Try to free a given kmap slot.
  *
@@ -313,22 +293,32 @@ static void kunmap_account(void)
 	wake_up(&pkmap_wait);
 }
 
-void *kmap_high(struct page *page)
+/**
+ * kmap_high_get - pin a highmem page into memory
+ * @page: &struct page to pin
+ *
+ * Returns the page's current virtual memory address, or NULL if no mapping
+ * exists.  When and only when a non null address is returned then a
+ * matching call to kunmap_high() is necessary.
+ *
+ * This can be called from any context.
+ */
+void *kmap_high_get(struct page *page)
 {
 	unsigned long vaddr;
 
-
-	kmap_account();
 again:
 	vaddr = (unsigned long)page_address(page);
 	if (vaddr) {
 		atomic_t *counter = &pkmap_count[PKMAP_NR(vaddr)];
 		if (atomic_inc_not_zero(counter)) {
 			/*
-			 * atomic_inc_not_zero implies a (memory) barrier on success
-			 * so page address will be reloaded.
+			 * atomic_inc_not_zero implies a (memory) barrier on
+			 * success, so page address will be reloaded.
 			 */
-			unsigned long vaddr2 = (unsigned long)page_address(page);
+			unsigned long vaddr2 =
+				(unsigned long)page_address(page);
+
 			if (likely(vaddr == vaddr2))
 				return (void *)vaddr;
 
@@ -344,6 +334,18 @@ again:
 			goto again;
 		}
 	}
+	return NULL;
+}
+
+void *kmap_high(struct page *page)
+{
+	unsigned long vaddr;
+
+	kmap_account();
+again:
+	vaddr = (unsigned long)kmap_high_get(page);
+	if (vaddr)
+		return (void *)vaddr;
 
 	vaddr = pkmap_insert(page);
 	if (!vaddr)
@@ -354,31 +356,6 @@ again:
 
 EXPORT_SYMBOL(kmap_high);
 
-#ifdef ARCH_NEEDS_KMAP_HIGH_GET
-/**
- * kmap_high_get - pin a highmem page into memory
- * @page: &struct page to pin
- *
- * Returns the page's current virtual memory address, or NULL if no mapping
- * exists.  When and only when a non null address is returned then a
- * matching call to kunmap_high() is necessary.
- *
- * This can be called from any context.
- */
-void *kmap_high_get(struct page *page)
-{
-	unsigned long vaddr, flags;
-
-	lock_kmap_any(flags);
-	vaddr = (unsigned long)page_address(page);
-	if (vaddr) {
-		BUG_ON(atomic_read(&pkmap_count[PKMAP_NR(vaddr)]) < 1);
-		atomic_add(1, pkmap_count[PKMAP_NR(vaddr)]);
-	}
-	unlock_kmap_any(flags);
-	return (void*) vaddr;
-}
-#endif
 
  void kunmap_high(struct page *page)
 {
-- 
1.6.3.3
