On Thu, 6 Oct 2022 13:09:08 +0100
Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:

> Hi,
>
> On Thu, Oct 06, 2022 at 01:35:52PM +0200, Claudio Imbrenda wrote:
> > On Thu, 6 Oct 2022 12:12:39 +0100
> > Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:
> >
> > > All architectures that implement virt_to_pte_phys() (s390x, x86,
> > > arm and arm64) return a physical address from the function. Teach
> > > vmalloc to treat it as such, instead of confusing the return
> > > value with a page table entry.
> >
> > I'm not sure I understand what you mean
>
> I thought that vmalloc uses PAGE_MASK because it expects
> virt_to_pte_phys() to return a pteval (because of the "pte" part in
> the virt_to_pte_phys()

I agree that the name of the function is confusing; there are comments
in lib/vmalloc.h, and for virt_to_pte_phys it says:

/* Walk the page table and resolve the virtual address to a physical address */

> function name), which might have the [PAGE_SHIFT-1:0] bits used to
> store page metadata by an architecture (like permissions), but like
> you've explained below it uses PAGE_MASK to align the page address
> (which is identically mapped) before passing it to the page allocator
> to be freed.
>
> >
> > > Changing things the other way around (having the function return a page
> > > table entry instead) is not feasible, because it is possible for an
> > > architecture to use the upper bits of the table entry to store metadata
> > > about the page.
> > >
> > > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > > Cc: Thomas Huth <thuth@xxxxxxxxxx>
> > > Cc: Andrew Jones <andrew.jones@xxxxxxxxx>
> > > Cc: Laurent Vivier <lvivier@xxxxxxxxxx>
> > > Cc: Janosch Frank <frankja@xxxxxxxxxxxxx>
> > > Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
> > > Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
> > > ---
> > >  lib/vmalloc.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/vmalloc.c b/lib/vmalloc.c
> > > index 572682576cc3..0696b5da8190 100644
> > > --- a/lib/vmalloc.c
> > > +++ b/lib/vmalloc.c
> > > @@ -169,7 +169,7 @@ static void vm_free(void *mem)
> > >  	/* the pointer is not page-aligned, it was a single-page allocation */
> > >  	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
> > >  		assert(GET_MAGIC(mem) == VM_MAGIC);
> > > -		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
> > > +		page = virt_to_pte_phys(page_root, mem);
> >
> > this will break things for small allocations, though. if the pointer is
> > not aligned, then the result of virt_to_pte_phys will also not be
> > aligned....
>
> I agree, I missed that part. Would be nice if it were written using
> PAGE_ALIGN to avoid mistakes like mine in the future, but that's

PAGE_ALIGN rounds UP, though, and we need to round down. I think it's
easier and more readable to use & PAGE_MASK, instead of a more
cumbersome ALIGN_DOWN((thing), PAGE_SIZE)

> unimportant.
>
> > >  		assert(page);
> > >  		free_page(phys_to_virt(page));
> >
> > ...and phys_to_virt will also return an unaligned address, and
> > free_page will complain about it.
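
To make the rounding concrete, here is a minimal stand-alone sketch of
the two directions; the 4k page size, the PAGE_ALIGN definition and the
sample address are illustrative assumptions, not taken from the test
lib:

	#include <assert.h>
	#include <stdint.h>

	#define PAGE_SIZE	4096UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))
	/* assumed round-up definition, in the style of PAGE_ALIGN */
	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

	int main(void)
	{
		/* hypothetical unaligned pointer from a small allocation */
		uintptr_t mem = 0x40001020UL;

		/* & PAGE_MASK rounds DOWN to the start of the page... */
		assert((mem & PAGE_MASK) == 0x40001000UL);

		/* ...while PAGE_ALIGN rounds UP, past the page that
		 * actually needs to be freed */
		assert(PAGE_ALIGN(mem) == 0x40002000UL);

		return 0;
	}
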
> > >  		return;
> > > @@ -183,7 +183,7 @@ static void vm_free(void *mem)
> > >  	/* free all the pages including the metadata page */
> > >  	ptr = (uintptr_t)m & PAGE_MASK;
> >
> > ptr gets page aligned here
> >
> > >  	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
> > > -		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
> > > +		page = virt_to_pte_phys(page_root, (void *)ptr);
> >
> > so virt_to_pte_phys will also return an aligned address;
> > I agree that & PAGE_MASK is redundant here
>
> You are correct, if we've ended up here it means that the pointer is
> already page aligned, and it will be incremented by PAGE_SIZE each
> iteration, hence the result of virt_to_pte_phys() will also be page
> aligned.
>
> I don't see much point in writing a patch just to remove the
> unnecessary alignment here, so I'll drop this patch entirely.
>
> Thank you for the prompt explanation!

I'm glad things have been clarified :)

> Alex
>
> > >  		assert(page);
> > >  		free_page(phys_to_virt(page));
> > >  	}
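
P.S.: to make the redundancy in the loop path concrete, a minimal
sketch; the start address and page count are made-up values, and npages
stands in for m->npages:

	#include <assert.h>
	#include <stdint.h>

	#define PAGE_SIZE	4096UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))

	int main(void)
	{
		/* rounded down once, like ptr in vm_free() */
		uintptr_t ptr = 0x40001234UL & PAGE_MASK;
		/* hypothetical allocation size, stand-in for m->npages */
		unsigned int i, npages = 3;

		for (i = 0; i < npages + 1; i++, ptr += PAGE_SIZE) {
			/* already aligned, and stepping by PAGE_SIZE keeps
			 * it aligned: masking again changes nothing */
			assert((ptr & PAGE_MASK) == ptr);
		}
		return 0;
	}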