The patch titled
     Add apply_to_page_range() which applies a function to a pte range
has been added to the -mm tree.  Its filename is
     add-apply_to_page_range-which-applies-a-function-to-a-pte-range.patch

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt
to find out what to do about this

------------------------------------------------------
Subject: Add apply_to_page_range() which applies a function to a pte range
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>

Add a new mm function apply_to_page_range() which applies a given function
to every pte in a given virtual address range in a given mm structure.
This is a generic alternative to cut-and-pasting the Linux idiomatic
pagetable walking code in every place that a sequence of PTEs must be
accessed.

Although this interface is intended to be useful in a wide range of
situations, it is currently used specifically by several Xen subsystems,
for example: to ensure that pagetables have been allocated for a virtual
address range, and to construct batched special pagetable update requests
to map I/O memory (in ioremap()).

Signed-off-by: Ian Pratt <ian.pratt@xxxxxxxxxxxxx>
Signed-off-by: Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Signed-off-by: Chris Wright <chrisw@xxxxxxxxxxxx>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xxxxxxxxxxxxx>
Cc: Christoph Lameter <clameter@xxxxxxx>
Cc: Matt Mackall <mpm@xxxxxxxxx>
Acked-by: Ingo Molnar <mingo@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |    5 ++
 mm/memory.c        |   94 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff -puN include/linux/mm.h~add-apply_to_page_range-which-applies-a-function-to-a-pte-range include/linux/mm.h
--- a/include/linux/mm.h~add-apply_to_page_range-which-applies-a-function-to-a-pte-range
+++ a/include/linux/mm.h
@@ -1130,6 +1130,11 @@ struct page *follow_page(struct vm_area_
 #define FOLL_GET	0x04	/* do get_page on page */
 #define FOLL_ANON	0x08	/* give ZERO_PAGE if no pgtable */
 
+typedef int (*pte_fn_t)(pte_t *pte, struct page *pmd_page, unsigned long addr,
+			void *data);
+extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
+			       unsigned long size, pte_fn_t fn, void *data);
+
 #ifdef CONFIG_PROC_FS
 void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long);
 #else
diff -puN mm/memory.c~add-apply_to_page_range-which-applies-a-function-to-a-pte-range mm/memory.c
--- a/mm/memory.c~add-apply_to_page_range-which-applies-a-function-to-a-pte-range
+++ a/mm/memory.c
@@ -1448,6 +1448,100 @@ int remap_pfn_range(struct vm_area_struc
 }
 EXPORT_SYMBOL(remap_pfn_range);
 
+static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+			      unsigned long addr, unsigned long end,
+			      pte_fn_t fn, void *data)
+{
+	pte_t *pte;
+	int err;
+	struct page *pmd_page;
+	spinlock_t *ptl;
+
+	pte = (mm == &init_mm) ?
+		pte_alloc_kernel(pmd, addr) :
+		pte_alloc_map_lock(mm, pmd, addr, &ptl);
+	if (!pte)
+		return -ENOMEM;
+
+	BUG_ON(pmd_huge(*pmd));
+
+	pmd_page = pmd_page(*pmd);
+
+	do {
+		err = fn(pte, pmd_page, addr, data);
+		if (err)
+			break;
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+	if (mm != &init_mm)
+		pte_unmap_unlock(pte-1, ptl);
+	return err;
+}
+
+static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
+			      unsigned long addr, unsigned long end,
+			      pte_fn_t fn, void *data)
+{
+	pmd_t *pmd;
+	unsigned long next;
+	int err;
+
+	pmd = pmd_alloc(mm, pud, addr);
+	if (!pmd)
+		return -ENOMEM;
+	do {
+		next = pmd_addr_end(addr, end);
+		err = apply_to_pte_range(mm, pmd, addr, next, fn, data);
+		if (err)
+			break;
+	} while (pmd++, addr = next, addr != end);
+	return err;
+}
+
+static int apply_to_pud_range(struct mm_struct *mm, pgd_t *pgd,
+			      unsigned long addr, unsigned long end,
+			      pte_fn_t fn, void *data)
+{
+	pud_t *pud;
+	unsigned long next;
+	int err;
+
+	pud = pud_alloc(mm, pgd, addr);
+	if (!pud)
+		return -ENOMEM;
+	do {
+		next = pud_addr_end(addr, end);
+		err = apply_to_pmd_range(mm, pud, addr, next, fn, data);
+		if (err)
+			break;
+	} while (pud++, addr = next, addr != end);
+	return err;
+}
+
+/*
+ * Scan a region of virtual memory, filling in page tables as necessary
+ * and calling a provided function on each leaf page table.
+ */
+int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+			unsigned long size, pte_fn_t fn, void *data)
+{
+	pgd_t *pgd;
+	unsigned long next;
+	unsigned long end = addr + size;
+	int err;
+
+	BUG_ON(addr >= end);
+	pgd = pgd_offset(mm, addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		err = apply_to_pud_range(mm, pgd, addr, next, fn, data);
+		if (err)
+			break;
+	} while (pgd++, addr = next, addr != end);
+	return err;
+}
+EXPORT_SYMBOL_GPL(apply_to_page_range);
+
 /*
  * handle_pte_fault chooses page fault handler according to an entry
  * which was read non-atomically.
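
For illustration, here is a minimal sketch of how a caller might use the
new interface.  This is not part of the patch: count_present_pte() and
count_present_ptes() are hypothetical helpers, assuming only the pte_fn_t
signature and apply_to_page_range() as defined above.

/*
 * Hypothetical example (not from this patch): count how many ptes in a
 * kernel virtual address range are currently present.  The callback is
 * invoked once per pte; a nonzero return stops the walk and is
 * propagated back out of apply_to_page_range().
 */
static int count_present_pte(pte_t *pte, struct page *pmd_page,
			     unsigned long addr, void *data)
{
	unsigned long *count = data;

	if (pte_present(*pte))
		(*count)++;
	return 0;	/* zero means: keep walking */
}

static unsigned long count_present_ptes(unsigned long start,
					unsigned long size)
{
	unsigned long count = 0;

	/* Passing &init_mm takes the pte_alloc_kernel() path above. */
	if (apply_to_page_range(&init_mm, start, size,
				count_present_pte, &count))
		return 0;
	return count;
}

Note that apply_to_page_range() fills in intermediate pagetables as it
walks, so even a read-only callback like this one will allocate
pagetables for any unpopulated part of the range; that side effect is
precisely what the Xen callers mentioned above rely on.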
Patches currently in -mm which might be from jeremy@xxxxxxxx are

paravirt_ops-update-maintainers.patch
paravirt_ops-remove-config_debug_paravirt.patch
paravirt_ops-use-paravirt_nop-to-consistently-mark-no-op-operations.patch
paravirt_ops-add-pagetable-accessors-to-pack-and-unpack-pagetable-entries.patch
paravirt_ops-hooks-to-set-up-initial-pagetable.patch
paravirt_ops-allocate-a-fixmap-slot.patch
paravirt_ops-allow-paravirt-backend-to-choose-kernel-pmd-sharing.patch
paravirt_ops-add-hooks-to-intercept-mm-creation-and-destruction.patch
paravirt_ops-rename-struct-paravirt_patch-to-paravirt_patch_site-for-clarity.patch
paravirt_ops-use-patch-site-ids-computed-from-offset-in-paravirt_ops-structure.patch
paravirt_ops-fix-patch-site-clobbers-to-include-return-register.patch
paravirt_ops-consistently-wrap-paravirt-ops-callsites-to-make-them-patchable.patch
paravirt_ops-document-asm-i386-paravirth.patch
paravirt_ops-add-common-patching-machinery.patch
paravirt_ops-add-flush_tlb_others-paravirt_op.patch
paravirt_ops-revert-map_pt_hook.patch
paravirt_ops-add-kmap_atomic_pte-for-mapping-highpte-pages.patch
add-apply_to_page_range-which-applies-a-function-to-a-pte-range.patch
re-enable-vdso-by-default-with-paravirt.patch
remove-noreplacement-option.patch
remove-smp_alt_instructions.patch
rename-the-parainstructions-symbols-to-be-consistent-with-the-others.patch
allow-boot-time-disable-of-smp-altinstructions.patch
allow-boot-time-disable-of-paravirt_ops-patching.patch
fixes-and-cleanups-for-earlyprintk-aka-boot-console.patch
ignore-stolen-time-in-the-softlockup-watchdog.patch
add-touch_all_softlockup_watchdogs.patch
clean-up-elf-note-generation.patch
-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html