On Fri, Feb 23, 2024 at 03:32:52PM +0800, Ethan Zhao wrote:
> On 2/23/2024 2:08 PM, Dan Carpenter wrote:
> > On Fri, Feb 23, 2024 at 10:29:28AM +0800, Ethan Zhao wrote:
> > > > > @@ -1326,6 +1336,21 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
> > > > >  		head = (head - 2 + QI_LENGTH) % QI_LENGTH;
> > > > >  	} while (head != tail);
> > > > > +	/*
> > > > > +	 * If got ITE, we need to check if the sid of ITE is one of the
> > > > > +	 * current valid ATS invalidation target devices, if no, or the
> > > > > +	 * target device isn't presnet, don't try this request anymore.
> > > > > +	 * 0 value of ite_sid means old VT-d device, no ite_sid value.
> > > > > +	 */
> > > >
> > > > This comment is kind of confusing.
> > >
> > > Really confusing ? this is typo there, resnet-> "present"
> >
> > Reading this comment again, the part about zero ite_sid values is
> > actually useful, but what does "old" mean in "old VT-d device".  How old
> > is it?  One year old?
>
> I recite the description from Intel VT-d spec here
>
> "A value of 0 in this field indicates that this is an older version of DMA
> remapping hardware which does not provide additional details about
> the Invalidation Time-out Error"

This is good.  Put that in the comment.  Otherwise it's not clear.  I
assumed "old" meant released or something.

> At least, the Intel VT-d spec 4.0 released 2022 June says the same thing.
> as to how old, I didn't find docs older than that, really out of my radar.
>
> > > > /*
> > > >  * If we have an ITE, then we need to check whether the sid of the ITE
> > > >  * is in the rbtree (meaning it is probed and not released), and that
> > > >  * the PCI device is present.
> > > >  */
> > > >
> > > > My comment is slightly shorter but I think it has the necessary
> > > > information.
> > > >
> > > > > +	if (ite_sid) {
> > > > > +		dev = device_rbtree_find(iommu, ite_sid);
> > > > > +		if (!dev || !dev_is_pci(dev))
> > > > > +			return -ETIMEDOUT;
> > > >
> > > > -ETIMEDOUT is weird.
> > > > The callers don't care which error code we return.  Change this to
> > > > -ENODEV or something
> > >
> > > -ETIMEDOUT means prior ATS invalidation request hit timeout fault, and the
> > > caller really cares about the returned value.
> >
> > I don't really care about the return value and if you say it should be
> > -ETIMEDOUT, then you're the expert.  However, I don't see anything in
> > linux-next which cares about the return values except -EAGAIN.  This
> > function is only called from qi_submit_sync() which checks for -EAGAIN.
> > Then I did a git grep.
> >
> > $ git grep qi_submit_sync
> > drivers/iommu/intel/dmar.c:int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/dmar.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/iommu.h:int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
> > drivers/iommu/intel/iommu.h: * Options used in qi_submit_sync:
> > drivers/iommu/intel/irq_remapping.c:	return qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/pasid.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/svm.c:	qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN);
> > drivers/iommu/intel/svm.c:	qi_submit_sync(iommu, &desc, 1, 0);
> > drivers/iommu/intel/svm.c:	qi_submit_sync(iommu, &desc, 1, 0);
> >
> > Only qi_flush_iec() in irq_remapping.c cares about the return.  Then I
> > traced those callers back and nothing cares about -ETIMEDOUT.
> >
> > Are you referring to patches that haven't been merged yet?
>
> Yes, patches still being worked on, not the code running on your boxes.
>
> -ETIMEDOUT & -ENODEV, they both describe the error that is happening:
> someone who prefers -ETIMEDOUT would like to know the request timed out,
> and someone who prefers -ENODEV knows the target device is gone, if it
> ever existed.

Okay.  I obviously can't comment on patches that I haven't seen but,
sure, it sounds reasonable.

> > > > > +		pdev = to_pci_dev(dev);
> > > > > +		if (!pci_device_is_present(pdev) &&
> > > > > +		    ite_sid == pci_dev_id(pci_physfn(pdev)))
> > > >
> > > > The && confused me, but then I realized that probably "ite_sid ==
> > > > pci_dev_id(pci_physfn(pdev))" is always true.  Can we delete that part?
> > >
> > > Here is the fault handling, just double confirm nothing else goes wrong --
> > > beyond the assumption.
> >
> > Basically for that to ever be != it would need some kind of memory
> > corruption?  I feel like in that situation, the more conservative thing
> > is to give up.  If the PCI device is not present then just give up.
>
> memory corruption, buggy BIOS tables, faked request ... something out
> of imagination; after confirming the device is what it claimed to be, if
> it is not present, then give up retrying the request.

This is not correct.  We looked up the device based on the ite_sid so we
know what the device id is, unless we experience catastrophic memory
corruption.

+		dev = device_rbtree_find(iommu, ite_sid);
		                               ^^^^^^^
We looked it up here.

+		if (!dev || !dev_is_pci(dev))
+			return -ETIMEDOUT;
+		pdev = to_pci_dev(dev);
+		if (!pci_device_is_present(pdev) &&
+		    ite_sid == pci_dev_id(pci_physfn(pdev)))
		    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Unless device_rbtree_find() is returning garbage, these things must be
true.

+			return -ETIMEDOUT;

I tried to double check how we were storing devices into the rbtree, but
then I discovered that device_rbtree_find() doesn't exist in linux-next
and this patch breaks the build.  This is a very frustrating thing.

But let's say a buggy BIOS could mess up the rbtree.
In that situation, we would still want to change the && to an ||.  If
the device is not present and^W or the rbtree is corrupted then return
an error.

But don't do this.  If the memory is corrupted we are already screwed
and there is no way the system can really recover in any reasonable way.

regards,
dan carpenter