Andrew Morton wrote:
>> +static pte_t *get_pte(struct mm_struct *mm, unsigned long addr)
>> +{
>> +        pgd_t *pgd;
>> +        pud_t *pud;
>> +        pmd_t *pmd;
>> +        pte_t *ptep = NULL;
>> +
>> +        pgd = pgd_offset(mm, addr);
>> +        if (!pgd_present(*pgd))
>> +                goto out;
>> +
>> +        pud = pud_offset(pgd, addr);
>> +        if (!pud_present(*pud))
>> +                goto out;
>> +
>> +        pmd = pmd_offset(pud, addr);
>> +        if (!pmd_present(*pmd))
>> +                goto out;
>> +
>> +        ptep = pte_offset_map(pmd, addr);
>> +out:
>> +        return ptep;
>> +}
> hm, this looks very generic.  Does it duplicate anything which core
> kernel already provides?  If not, perhaps core kernel should provide
> this (perhaps after some reorganisation).
It's lookup_address(), except that it works on user addresses, and as
such is very useful.  But it would need to return the mapping level as
well, so it can handle large pages in usermode, and have some
well-defined semantics on whether the caller is responsible for
unmapping the returned thing (i.e., only if it's a pte).
I implemented this myself a couple of months ago, but I can't find it
anywhere...
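
For illustration only, here's roughly the shape such a helper could take
once it reports the level and the unmap rule is pinned down.  This is a
sketch, not an existing interface: lookup_address_user() and the
upg_level enum are made-up names, and the pud_large()/pmd_large() checks
are the x86-flavoured huge-page tests.

#include <linux/mm.h>
#include <asm/pgtable.h>

/* hypothetical: which page-table level the mapping was found at */
enum upg_level { UPG_LEVEL_PTE, UPG_LEVEL_PMD, UPG_LEVEL_PUD };

/*
 * Walk mm's page tables for addr.  On success, *level says what was
 * found; the caller must pte_unmap() the result only when *level is
 * UPG_LEVEL_PTE (huge-page entries are not kmapped).
 */
static pte_t *lookup_address_user(struct mm_struct *mm, unsigned long addr,
                                  enum upg_level *level)
{
        pgd_t *pgd = pgd_offset(mm, addr);
        pud_t *pud;
        pmd_t *pmd;

        if (!pgd_present(*pgd))
                return NULL;

        pud = pud_offset(pgd, addr);
        if (!pud_present(*pud))
                return NULL;
        if (pud_large(*pud)) {          /* 1GB page mapped at pud level */
                *level = UPG_LEVEL_PUD;
                return (pte_t *)pud;
        }

        pmd = pmd_offset(pud, addr);
        if (!pmd_present(*pmd))
                return NULL;
        if (pmd_large(*pmd)) {          /* 2MB/4MB page mapped at pmd level */
                *level = UPG_LEVEL_PMD;
                return (pte_t *)pmd;
        }

        *level = UPG_LEVEL_PTE;         /* caller must pte_unmap() this */
        return pte_offset_map(pmd, addr);
}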
>> +static int memcmp_pages(struct page *page1, struct page *page2)
>> +{
>> +        char *addr1, *addr2;
>> +        int r;
>> +
>> +        addr1 = kmap_atomic(page1, KM_USER0);
>> +        addr2 = kmap_atomic(page2, KM_USER1);
>> +        r = memcmp(addr1, addr2, PAGE_SIZE);
>> +        kunmap_atomic(addr1, KM_USER0);
>> +        kunmap_atomic(addr2, KM_USER1);
>> +        return r;
>> +}
> I wonder if this code all does enough cpu cache flushing to be able to
> guarantee that it's looking at valid data.  Not my area, and presumably
> not an issue on x86.
Shouldn't that be kmap_atomic's job anyway?  Otherwise it would be hard
to use on any machine with virtually indexed or tagged caches.
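
As a sketch of what an explicit-flush variant could look like if
kmap_atomic can't be relied on for this (memcmp_pages_flushed is a
made-up name; flush_dcache_page() is a no-op on x86 but writes back
dirty user-mapping aliases on virtually indexed caches, and whether
that's sufficient here is exactly the open question):

#include <linux/highmem.h>
#include <asm/cacheflush.h>

static int memcmp_pages_flushed(struct page *page1, struct page *page2)
{
        char *addr1, *addr2;
        int r;

        /* push any dirty user-mapping cache lines out to memory first */
        flush_dcache_page(page1);
        flush_dcache_page(page2);

        addr1 = kmap_atomic(page1, KM_USER0);
        addr2 = kmap_atomic(page2, KM_USER1);
        r = memcmp(addr1, addr2, PAGE_SIZE);
        kunmap_atomic(addr2, KM_USER1);
        kunmap_atomic(addr1, KM_USER0);
        return r;
}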
J