Felix, Jason, Matt,

On 2/16/2023 6:05 AM, Felix Kuehling wrote:
> [+Shimmer, Aaron]
>
> Am 2023-02-15 um 10:39 schrieb Bjorn Helgaas:
>> [+cc Christian, Xinhui, amd-gfx]
>>
>> On Fri, Jan 06, 2023 at 01:48:11PM +0800, Baolu Lu wrote:
>>> On 1/5/23 11:27 PM, Felix Kuehling wrote:
>>>> Am 2023-01-05 um 09:46 schrieb Deucher, Alexander:
>>>>>> -----Original Message-----
>>>>>> From: Hegde, Vasant <Vasant.Hegde@xxxxxxx>
>>>>>> On 1/5/2023 4:07 PM, Baolu Lu wrote:
>>>>>>> On 2023/1/5 18:27, Vasant Hegde wrote:
>>>>>>>> On 1/5/2023 6:39 AM, Matt Fagnani wrote:
>>>>>>>>> I built 6.2-rc2 with the patch applied. The same black screen
>>>>>>>>> problem happened with 6.2-rc2 with the patch. I tried to use
>>>>>>>>> early kdump with 6.2-rc2 with the patch twice by panicking the
>>>>>>>>> kernel with sysrq+alt+c after the black screen happened. The
>>>>>>>>> system rebooted after about 10-20 seconds both times, but no
>>>>>>>>> kdump and dmesg files were saved in /var/crash. I'm attaching
>>>>>>>>> the lspci -vvv output as requested.
>>>>>>>>> ...
>>>>>>>> Looking at the lspci output, it doesn't list the ACS feature
>>>>>>>> for the graphics card. So with your fix it didn't enable PASID
>>>>>>>> and hence it failed to boot.
>>>>>>>> ...
>>>>>>> So do you mind explaining why the PASID needs to be enabled for
>>>>>>> the graphics device? Or, in other words, what does the graphics
>>>>>>> driver use the PASID for?
>>>>>>> ...
>>>>> The GPU driver uses the PASID for shared virtual memory between
>>>>> the CPU and GPU, i.e., so that user apps can use the same virtual
>>>>> address space on the GPU and the CPU. It also uses the PASID to
>>>>> take advantage of recoverable device page faults using PRS.
>>>>> ...
>>>> Agreed. This applies to GPU computing on some older AMD APUs that
>>>> take advantage of memory coherence and IOMMUv2 address translation
>>>> to create a shared virtual address space between the CPU and GPU.
>>>> In this case it seems to be a Carrizo APU. It is also true for
>>>> Raven APUs.
>>>> ...
>>> Thanks for the explanation.
>>>
>>> This is actually the problem that commit 201007ef707a was trying to
>>> fix. The PCIe fabric routes Memory Requests based on the TLP
>>> address, ignoring any PASID (PCIe r6.0, sec 2.2.10.4), so a TLP with
>>> a PASID that should go upstream to the IOMMU may instead be routed
>>> as a P2P Request if its address falls in a bridge window.
>>>
>>> In the SVA case, the IOMMU shares the address space of a user
>>> application. The user application side has no knowledge of the PCI
>>> bridge windows. It is entirely possible that the device is
>>> programmed with a P2P address, which would result in a disaster.
>> Is this stalled? We explored the idea of changing the PCI core so
>> that, for devices that use ATS/PRI, we could enable PASID without
>> checking for ACS [1], but IIUC we ultimately concluded that it was
>> based on a misunderstanding of how ATS Translation Requests are
>> routed and that an AMD driver change would be required [2].
>>
>> So it seems like we still have this regression, and we're running out
>> of time before v6.2.
>>
>> [1] https://lore.kernel.org/all/20230114073420.759989-1-baolu.lu@xxxxxxxxxxxxxxx/
>> [2] https://lore.kernel.org/all/Y91X9MeCOsa67CC6@xxxxxxxxxx/
>
> If I understand this correctly, the HW or the BIOS is doing something
> wrong in its reporting of ACS. I don't know what the GPU driver can do
> other than add a quirk to stop using AMD IOMMUv2 on this HW/BIOS.
>
> It looks like the problem is triggered when the driver calls
> amd_iommu_init_device. That's when the first WARNs happen, soon
> followed by a kernel oops in report_iommu_fault. The driver doesn't
> know anything is wrong because amd_iommu_init_device seems to return
> "success". And the oops is not in the GPU driver either.

The WARN is fixed, and the fix is in Joerg's tree:
https://lore.kernel.org/all/20230111121503.5931-1-vasant.hegde@xxxxxxx/

The report_iommu_fault() happened because, in the
amd_iommu_init_device() path, attaching the devices to the new domain
failed and an error was returned, but the devices were not put back
into the old domain properly. They were left in an inconsistent state,
which resulted in an IO page fault.

I have proposed a series to handle device-to-domain attachment failure
and to better handle any subsequent report_iommu_fault():

https://lore.kernel.org/linux-iommu/20230215052642.6016-1-vasant.hegde@xxxxxxx/

@Matt, can you please help verify the above patches on the system where
you originally hit the issue? (Grab the above two series, apply them on
top of the latest kernel, and test.)

-Vasant