On Tue, 2022-10-18 at 12:18 -0300, Jason Gunthorpe wrote:
> On Tue, Oct 18, 2022 at 04:51:30PM +0200, Niklas Schnelle wrote:
>
> > @@ -84,7 +88,7 @@ static void __s390_iommu_detach_device(struct zpci_dev *zdev)
> >  		return;
> >
> >  	spin_lock_irqsave(&s390_domain->list_lock, flags);
> > -	list_del_init(&zdev->iommu_list);
> > +	list_del_rcu(&zdev->iommu_list);
> >  	spin_unlock_irqrestore(&s390_domain->list_lock, flags);
>
> This doesn't seem obviously OK, the next steps remove the translation
> while we can still have concurrent RCU protected flushes going on.
>
> Is it OK to call the flushes after the zpci_dma_exit_device()/etc?
>
> Jason

Interesting point. The flushes themselves should be fine: once
zpci_unregister_ioat() has executed, all subsequent and ongoing IOTLB
flushes for that device return an error code without further adverse
effects. I do think we still have an issue in the IOTLB ops for this
case, though, because that error currently skips the IOTLB flushes of
the other attached devices (see the sketch at the end of this mail).

The bigger question, and it seems independent of RCU, is how/if detach
is supposed to work while DMAs are still ongoing. Once we do the
zpci_unregister_ioat(), any DMA request coming from the PCI device is
blocked and leads to the PCI device being isolated (put into an error
state) for attempting an invalid DMA. So I had expected detach/attach
to happen without ongoing DMAs and thus without concurrent IOTLB
flushes. Of course we should be robust against violations of that
expectation, and for such unexpected DMAs I think isolating the PCI
device is the correct response. What am I missing?
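
To illustrate the IOTLB ops issue above, here is a rough, untested
sketch of how the flush-all path could tolerate per-device errors
instead of letting one detaching device's failure stop the loop. It is
modeled on the driver's loop over s390_domain->devices; the names
(s390_iommu_flush_iotlb_all(), to_s390_domain(), zpci_refresh_trans())
follow the existing code, but treat the error handling here as
illustrative only, not a finished patch:

static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct s390_domain *s390_domain = to_s390_domain(domain);
	struct zpci_dev *zdev;

	rcu_read_lock();
	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
		/*
		 * A device whose IOAT was already unregistered fails the
		 * refresh. Deliberately ignore the return value so an
		 * error for a detaching device does not skip the flush
		 * for the devices that are still attached.
		 */
		zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
				   zdev->end_dma - zdev->start_dma + 1);
	}
	rcu_read_unlock();
}

Since .flush_iotlb_all returns void anyway, continuing past a failed
refresh seems like the natural behavior; the detaching device's
translation is already gone, so there is nothing left to flush for it.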