Re: [PATCH 1/2] x86/vmemmap: Remove !PAGE_ALIGNED case in remove_pte_table

@@ -962,7 +962,6 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
  {
  	unsigned long next, pages = 0;
  	pte_t *pte;
-	void *page_addr;
  	phys_addr_t phys_addr;
 
  	pte = pte_start + pte_index(addr);
@@ -983,42 +982,19 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
  		if (phys_addr < (phys_addr_t)0x40000000)
  			return;
 -		if (PAGE_ALIGNED(addr) && PAGE_ALIGNED(next)) {
-			/*
-			 * Do not free direct mapping pages since they were
-			 * freed when offlining, or simplely not in use.
-			 */
-			if (!direct)
-				free_pagetable(pte_page(*pte), 0);
-
-			spin_lock(&init_mm.page_table_lock);
-			pte_clear(&init_mm, addr, pte);
-			spin_unlock(&init_mm.page_table_lock);
-
-			/* For non-direct mapping, pages means nothing. */
-			pages++;
-		} else {
-			/*
-			 * If we are here, we are freeing vmemmap pages since
-			 * direct mapped memory ranges to be freed are aligned.
-			 *
-			 * If we are not removing the whole page, it means
-			 * other page structs in this page are being used and
-			 * we cannot remove them. So fill the unused page_structs
-			 * with 0xFD, and remove the page when it is wholly
-			 * filled with 0xFD.
-			 */
-			memset((void *)addr, PAGE_INUSE, next - addr);
+		/*
+		 * Do not free direct mapping pages since they were
+		 * freed when offlining, or simplely not in use.
+		 */

s/simplely/simply/ (I know, not your fault :) )

However, that comment is highly confusing. There is nothing to free in the case of a direct mapping; we never allocated anything. That's how a direct map works.

I'd just get rid of the comment completely - we also don't have one at the other "if (!direct)" places.


(side note: all this freeing before unmapping looks very weird, but at least we should not have valid accesses anymore)
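Completely untested sketch of the ordering I'd naively expect, reusing the variables from the hunk above - not a request to change it in this patch:

	struct page *page = pte_page(*pte);	/* grab it before clearing the pte */

	/* Unmap first, so nothing can legitimately reach the page ... */
	spin_lock(&init_mm.page_table_lock);
	pte_clear(&init_mm, addr, pte);
	spin_unlock(&init_mm.page_table_lock);

	/* ... and only then free the no-longer-mapped vmemmap page. */
	if (!direct)
		free_pagetable(page, 0);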

+		if (!direct)
+			free_pagetable(pte_page(*pte), 0);

[...]

  {
+	/*
+	 * See comment in vmemmap_populate().
+	 */

I'd drop this comment ...

+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
  	remove_pagetable(start, end, false, altmap);
  }
@@ -1556,6 +1538,15 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
  {
  	int err;
 +	/*
+	 * __populate_section_memmap enforces the range to be added to be
+	 * PMD aligned. Therefore we can be sure that, as long as the
+	 * struct page size is multiple of 8, the vmemmap range will be
+	 * PAGE aligned.
+	 */

... and this comment, moving the details into the patch description.

The commit should be easy to find using git blame in case anybody wonders why.
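(e.g., "git blame -L :vmemmap_populate arch/x86/mm/init_64.c" should point straight at it)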

+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
  	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
  		err = vmemmap_populate_basepages(start, end, node, NULL);
  	else if (boot_cpu_has(X86_FEATURE_PSE))
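FWIW, if I'm doing the math right, the details that comment hints at are: a PMD-aligned range spans multiples of 2 MiB, i.e., of 512 base pages, so the corresponding vmemmap range is a multiple of 512 * sizeof(struct page) bytes; since 512 * 8 == 4096 == PAGE_SIZE, any sizeof(struct page) that is a multiple of 8 keeps start/end page-aligned. That's the kind of thing that belongs in the patch description.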


Apart from that, looks good, thanks!

--
Thanks,

David / dhildenb





