Re: [bugzilla-daemon@xxxxxxxxxx: [Bug 219619] New: vfio-pci: screen graphics artifacts after 6.12 kernel upgrade]

On Thu, 2 Jan 2025 11:39:23 -0500
Peter Xu <peterx@xxxxxxxxxx> wrote:

> On Tue, Dec 31, 2024 at 09:07:33AM -0700, Alex Williamson wrote:
> > On Tue, 31 Dec 2024 15:44:13 +0000
> > Precific <precification@xxxxxxxxx> wrote:
> >   
> > > On 31.12.24 02:27, Alex Williamson wrote:  
> > > > On Mon, 30 Dec 2024 21:03:30 +0000
> > > > Precific <precification@xxxxxxxxx> wrote:
> > > >     
> > > >> In my case, commenting out (1) the huge_fault callback assignment from
> > > >> f9e54c3a2f5b suffices for GPU initialization in the guest, even if (2)
> > > >> the 'install everything' loop is still removed.
> > > >>
> > > >> I have uploaded host kernel logs with vfio-pci-core debugging enabled
> > > >> (one log with stock sources, one large log with vfio-pci-core's
> > > >> huge_fault handler patched out):
> > > >> https://bugzilla.kernel.org/show_bug.cgi?id=219619#c1
> > > >> I'm not sure if the logs of handled faults say much about what
> > > >> specifically goes wrong here, though.
> > > >>
> > > >> The dmesg portion attached to my mail is of a Linux guest failing to
> > > >> initialize the GPU (BAR 0 size 16GB with 12GB of VRAM).    
> > > > 
> > > > Thanks for the logs with debugging enabled.  Would you be able to
> > > > repeat the test with QEMU 9.2?  There's a patch in there that aligns
> > > > the mmaps, which should avoid mixing 1G and 2MB pages for huge faults.
> > > > With this you should only see order 18 mappings for BAR0.
> > > > 
> > > > Also, in a different direction, it would be interesting to run tests
> > > > disabling 1G huge pages and 2MB huge pages independently.  The
> > > > following would disable 1G pages:
> > > > 
> > > > diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> > > > index 1ab58da9f38a..dd3b748f9d33 100644
> > > > --- a/drivers/vfio/pci/vfio_pci_core.c
> > > > +++ b/drivers/vfio/pci/vfio_pci_core.c
> > > > @@ -1684,7 +1684,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
> > > >   							     PFN_DEV), false);
> > > >   		break;
> > > >   #endif
> > > > -#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
> > > > +#if 0
> > > >   	case PUD_ORDER:
> > > >   		ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff,
> > > >   							     PFN_DEV), false);
> > > > 
> > > > This should disable 2M pages:
> > > > 
> > > > diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> > > > index 1ab58da9f38a..d7dd359e19bb 100644
> > > > --- a/drivers/vfio/pci/vfio_pci_core.c
> > > > +++ b/drivers/vfio/pci/vfio_pci_core.c
> > > > @@ -1678,7 +1678,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
> > > >   	case 0:
> > > >   		ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
> > > >   		break;
> > > > -#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
> > > > +#if 0
> > > >   	case PMD_ORDER:
> > > >   		ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff,
> > > >   							     PFN_DEV), false);
> > > > 
> > > > And applying both together should be functionally equivalent to
> > > > pre-v6.12.  Thanks,
> > > > 
> > > > Alex
> > > >     
> > > 
> > > Logs with QEMU 9.1.2 vs. 9.2.0, all huge_page sizes/1G only/2M only: 
> > > https://bugzilla.kernel.org/show_bug.cgi?id=219619#c3
> > > 
> > > You're right, I was still using QEMU 9.1.2. With 9.2.0, the 
> > > passed-through GPU indeed works fine with both Linux and Windows guests.
> > > 
> > > The huge_fault calls are nicely aligned with QEMU 9.2.0. Only the lower 
> > > 16MB of BAR 0 sees repeated calls at 2M/4K page sizes, with no misalignment.
> > > The QEMU 9.1.2 'stock' log shows a misalignment with 1G faults (order 
> > > 18), e.g., huge_faulting 0x40000 pages at page offset 0 and later 
> > > 0x4000. I'm not sure if that is a problem, or if the offsets are simply 
> > > masked off to the correct alignment.
> > > QEMU 9.1.2 also works with 1G pages disabled. Perhaps coincidentally, 
> > > the offsets are aligned properly for order 9 (0x200 'page offset' 
> > > increments) from what I've seen.  
> > 
> > Thank you so much for your testing; this is immensely helpful!  It
> > all suggests to me that we're dealing with an alignment issue for 1GB
> > pages.  We're getting 2MB alignment on the mmap by default, so that's
> > working everywhere.  QEMU 9.2 provides us with proper 1GB alignment,
> > but it seems we need to filter alignment more strictly when that's
> > not present.  Please give this a try with QEMU 9.1.x and an otherwise
> > stock v6.12.x:
> > 
> > diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> > index 1ab58da9f38a..bdfdc8ee4c2b 100644
> > --- a/drivers/vfio/pci/vfio_pci_core.c
> > +++ b/drivers/vfio/pci/vfio_pci_core.c
> > @@ -1661,7 +1661,8 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
> >  	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
> >  	vm_fault_t ret = VM_FAULT_SIGBUS;
> >  
> > -	if (order && (vmf->address & ((PAGE_SIZE << order) - 1) ||
> > +	if (order && (pgoff & ((1 << order) - 1) ||
> > +		      vmf->address & ((PAGE_SIZE << order) - 1) ||
> >  		      vmf->address + (PAGE_SIZE << order) > vma->vm_end)) {
> >  		ret = VM_FAULT_FALLBACK;
> >  		goto out;  
> 
> That's a great finding.  I wish we had some sanity check on the pfns
> in things like pud_mkhuge(), at least for x86: the SDM says the
> remaining low bits of a huge-page entry must be zero (for example,
> bits 29:13 for 1G), but doesn't say what happens if they aren't.  If
> such a check existed, I assume it could panic at the right place.
> 
> OTOH, a genuine question here is whether we should check pfn+pgoff
> instead of pgoff alone.  I have no idea how firmware allocates BAR
> resources, especially with respect to start address alignment.  I
> assume it must be related to the size of the BAR; the start address
> should probably always be aligned to the BAR size?  If so, there
> should be no functional difference between checking pfn+pgoff and
> checking pgoff.  In that case it's a matter of readability, saying
> that the limitation is about the pfn (in the page table) rather than
> directly about the offset into the BAR.

Yes, now that we have a root cause I'm working on the proper patch,
and I'm changing this to test the alignment of pfn+pgoff.  The PCI
BARs themselves are required to have natural alignment, but the vma
mapping the BAR could be at an offset from the base of the BAR, which
is accounted for in our local vma_to_pfn() function.  So I agree that
pfn+pgoff is the more complete fix.  I'll post it soon and hope that
Precific can re-verify it.  Thanks,

Alex
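
To make the arithmetic in this thread concrete, below is a small
standalone C sketch of the alignment test under discussion, using the
pfn+pgoff form Alex describes above.  It is illustrative only, not the
posted patch: the helper can_insert_huge() and the example BAR base pfn
are invented for the demo, while the pgoff and order values come from
Precific's QEMU 9.1.2 logs.

#include <stdio.h>
#include <stdbool.h>

/*
 * Illustrative only: mirrors the shape of the fault-handler test being
 * discussed.  A huge mapping of 2^order pages can only be inserted when
 * the target pfn is aligned to the mapping size; checking pfn + pgoff
 * (rather than pgoff alone) also covers a vma that maps the BAR
 * starting at a non-zero offset.
 */
static bool can_insert_huge(unsigned long pfn, unsigned long pgoff,
			    unsigned int order)
{
	return ((pfn + pgoff) & ((1UL << order) - 1)) == 0;
}

int main(void)
{
	/*
	 * Hypothetical BAR 0 base pfn.  PCI BARs are naturally aligned,
	 * so the base of a 16GB BAR is itself 1GB-aligned (a multiple
	 * of 0x40000 pages).
	 */
	unsigned long bar_pfn = 0x400000;

	/* QEMU 9.1.2 log: an order 18 (1G) fault at page offset 0x4000. */
	printf("order 18, pgoff 0x4000: %s\n",
	       can_insert_huge(bar_pfn, 0x4000, 18) ? "insert" : "fallback");

	/* Aligned cases: pgoff 0 for 1G, pgoff 0x200 for 2M (order 9). */
	printf("order 18, pgoff 0x0:    %s\n",
	       can_insert_huge(bar_pfn, 0x0, 18) ? "insert" : "fallback");
	printf("order 9,  pgoff 0x200:  %s\n",
	       can_insert_huge(bar_pfn, 0x200, 9) ? "insert" : "fallback");

	return 0;
}

The order 18 fault at pgoff 0x4000 fails the mask test (0x4000 &
0x3ffff != 0) and falls back to smaller pages, which is the behavior
the v6.12 handler was missing.  In the common case where the vma maps
the BAR from offset 0, the naturally aligned base pfn makes the
pfn+pgoff and pgoff-only checks agree, which is Peter's point above.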




