On Fri, Jun 29, 2018 at 05:07:47PM +1000, Alexey Kardashevskiy wrote:
> On Fri, 29 Jun 2018 15:18:20 +1000
> Alexey Kardashevskiy <aik@xxxxxxxxx> wrote:
> 
> > On Fri, 29 Jun 2018 14:57:02 +1000
> > David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> wrote:
> > 
> > > On Fri, Jun 29, 2018 at 02:51:21PM +1000, Alexey Kardashevskiy wrote:
> > > > On Fri, 29 Jun 2018 14:12:41 +1000
> > > > David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> wrote:
> > > > 
> > > > > On Tue, Jun 26, 2018 at 03:59:26PM +1000, Alexey Kardashevskiy wrote:
> > > > > > We already have a check in drivers/vfio/vfio_iommu_spapr_tce.c that
> > > > > > an IOMMU page is contained in the physical page so the PCI hardware won't
> > > > > > get access to unassigned host memory.
> > > > > > 
> > > > > > However we do not have this check in KVM fastpath (H_PUT_TCE accelerated
> > > > > > code) so the user space can pin memory backed with 64k pages and create
> > > > > > a hardware TCE table with a bigger page size. We were lucky so far and
> > > > > > did not hit this yet as the very first time the mapping happens
> > > > > > we do not have tbl::it_userspace allocated yet and fall back to
> > > > > > the userspace which in turn calls VFIO IOMMU driver and that fails
> > > > > > because of the check in vfio_iommu_spapr_tce.c which is really
> > > > > > sustainable solution.
> > > > > > 
> > > > > > This stores the smallest preregistered page size in the preregistered
> > > > > > region descriptor and changes the mm_iommu_xxx API to check this against
> > > > > > the IOMMU page size.
> > > > > > 
> > > > > > Signed-off-by: Alexey Kardashevskiy <aik@xxxxxxxxx>
> > > > > > ---
> > > > > > Changes:
> > > > > > v2:
> > > > > > * explicitly check for compound pages before calling compound_order()
> > > > > > 
> > > > > > ---
> > > > > > The bug is: run QEMU _without_ hugepages (no -mempath) and tell it to
> > > > > > advertise 16MB pages to the guest; a typical pseries guest will use 16MB
> > > > > > for IOMMU pages without checking the mmu pagesize and this will fail
> > > > > > at https://git.qemu.org/?p=qemu.git;a=blob;f=hw/vfio/common.c;h=fb396cf00ac40eb35967a04c9cc798ca896eed57;hb=refs/heads/master#l256
> > > > > > 
> > > > > > With the change, mapping will fail in KVM and the guest will print:
> > > > > > 
> > > > > > mlx5_core 0000:00:00.0: ibm,create-pe-dma-window(2027) 0 8000000 20000000 18 1f returned 0 (liobn = 0x80000001 starting addr = 8000000 0)
> > > > > > mlx5_core 0000:00:00.0: created tce table LIOBN 0x80000001 for /pci@800000020000000/ethernet@0
> > > > > > mlx5_core 0000:00:00.0: failed to map direct window for
> > > > > > /pci@800000020000000/ethernet@0: -1
> > > > > 
> > > > > [snip]
> > > > > 
> > > > > > @@ -124,7 +125,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> > > > > >  		struct mm_iommu_table_group_mem_t **pmem)
> > > > > >  {
> > > > > >  	struct mm_iommu_table_group_mem_t *mem;
> > > > > > -	long i, j, ret = 0, locked_entries = 0;
> > > > > > +	long i, j, ret = 0, locked_entries = 0, pageshift;
> > > > > >  	struct page *page = NULL;
> > > > > > 
> > > > > >  	mutex_lock(&mem_list_mutex);
> > > > > > @@ -166,6 +167,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> > > > > >  		goto unlock_exit;
> > > > > >  	}
> > > > > > 
> > > > > > +	mem->pageshift = 30; /* start from 1G pages - the biggest we have */
> > > > > 
> > > > > What about 16G pages on an HPT system?
> > > > > 
> > > > Below in the loop mem->pageshift will reduce to the biggest actual size
> > > > which will be 16mb/64k/4k. Or remain 1GB if no memory is actually
> > > > pinned, no loss there.
> > > 
> > > Are you saying that 16G IOMMU pages aren't supported?  Or that there's
> > > some reason a guest can never use them?
> > 
> > ah, 16_G_, not _M_. My bad. I just never tried such huge pages, I will
> > lift the limit up to 64 then, easier this way.
> 
> Ah, no, rather this as the upper limit:
> 
> mem->pageshift = ilog2(entries) + PAGE_SHIFT;

I can't make sense of this comment in context.  I see how you're
computing the minimum page size in the reserved region.  My question
is about what the "maximum minimum" is - the starting value from
which you calculate.  Currently it's 1G, but I can't immediately see
a reason that 16G is impossible here.

> @entries here is a number of system pages being pinned in that
> function.
> 
> > > > > > 
> > > > > >  	for (i = 0; i < entries; ++i) {
> > > > > >  		if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
> > > > > >  				1/* pages */, 1/* iswrite */, &page)) {
> > > > > > @@ -199,6 +202,11 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> > > > > >  		}
> > > > > >  	}
> > > > > > populate:
> > > > > > +	pageshift = PAGE_SHIFT;
> > > > > > +	if (PageCompound(page))
> > > > > > +		pageshift += compound_order(compound_head(page));
> > > > > > +	mem->pageshift = min_t(unsigned int, mem->pageshift, pageshift);
> > > > > 
> > > > > Why not make mem->pageshift and pageshift local the same type to avoid
> > > > > the min_t() ?
> > > > 
> > > > I was under impression min() is deprecated (misinterpret checkpatch.pl
> > > > may be) and therefore did not pay attention to it. I can fix this and
> > > > repost if there is no other question.
> > > 
> > > Hm, it's possible.
> > 
> > Nah, tried min(), compiles fine.
-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson