On Thu, Feb 25, 2021, wangyanan (Y) wrote:
>
> On 2021/2/11 7:06, Sean Christopherson wrote:
> > Align the HVA for hugepage memslots to 1gb, as opposed to incorrectly
> > assuming all architectures' hugepages are 512*page_size.
> >
> > For x86, multiplying by 512 is correct, but only for 2mb pages, e.g.
> > systems that support 1gb pages will never be able to use them for mapping
> > guest memory, and thus those flows will not be exercised.
> >
> > For arm64, powerpc, and s390 (and mips?), hardcoding the multiplier to
> > 512 is either flat out wrong, or at best correct only in certain
> > configurations.
> >
> > Hardcoding the _alignment_ to 1gb is a compromise between correctness and
> > simplicity.  Due to the myriad flavors of hugepages across architectures,
> > attempting to enumerate the exact hugepage size is difficult, and likely
> > requires probing the kernel.
> >
> > But, there is no need for precision since a stronger alignment will not
> > prevent creating a smaller hugepage.  For all but the most extreme cases,
> > e.g. arm64's 16gb contiguous PMDs, aligning to 1gb is sufficient to allow
> > KVM to back the guest with hugepages.
>
> I have implemented a helper, get_backing_src_pagesz(), to get the
> granularity of different backing src types (anonymous/thp/hugetlb),
> which is suitable for different architectures.
> See:
> https://lore.kernel.org/lkml/20210225055940.18748-6-wangyanan55@xxxxxxxxxx/
> If it looks fine to you, maybe we can use the accurate page sizes for
> GPA/HVA alignment :).

Works for me.  I'll probably just wait until your series is queued to send v2.

Thanks again!
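
For the record, the "stronger alignment" argument is just power-of-two math:
an HVA aligned down to 1gb is necessarily also aligned to any smaller
power-of-two hugepage size, e.g. x86's 2mb or arm64's 512mb.  A minimal,
self-contained sketch of that claim (align_down() and the constants below
are illustrative only, not the actual selftests code):

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SZ_2M (2ULL << 20)
  #define SZ_1G (1ULL << 30)

  /* Round addr down to a power-of-two size boundary. */
  static uint64_t align_down(uint64_t addr, uint64_t size)
  {
          return addr & ~(size - 1);
  }

  int main(void)
  {
          uint64_t hva = 0x123456789aULL;
          uint64_t aligned = align_down(hva, SZ_1G);

          /*
           * 1gb alignment implies 2mb alignment (and 512mb, etc.),
           * so the stronger alignment never blocks a smaller hugepage.
           */
          assert(align_down(aligned, SZ_2M) == aligned);
          printf("hva = %#llx, 1gb-aligned = %#llx\n",
                 (unsigned long long)hva, (unsigned long long)aligned);
          return 0;
  }

The same align_down() arithmetic works with whatever size
get_backing_src_pagesz() reports, which is why switching to the accurate
per-backing-src page size is a drop-in refinement rather than a rework.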