On 2023/9/21 21:20, David Laight wrote:
> ...
> I've got a target to generate AER errors by generating read cycles
> that are inside the address range that the bridge forwards but
> outside of any BAR because there are 2 different sized BARs.
> (Pretty easy to setup.)
> On the system I was using they didn't get propagated all the way
> to the root bridge - but were visible in the lower bridge.

So how did you observe it? If the error message does not propagate
to the root bridge, I think no AER interrupt will be triggered.

> It would be nice for a driver to be able to detect/clear such
> a flag if it gets an unexpected ~0u read value.
> (I'm not sure an error callback helps.)

IMHO, the general model is that errors detected at an endpoint should
be routed to an upstream port (for example, an RCiEP routes its error
messages to an RCEC), so that the AER port service can handle the
error; the device driver only has to implement the error handler
callbacks.

> OTOH a 'nebs compliant' server routed any kind of PCIe link error
> through to some 'system management' logic that then raised an NMI.
> I'm not sure who thought an NMI was a good idea - they are pretty
> impossible to handle in the kernel and too late to be of use to
> the code performing the access.

I think it is the responsibility of the device to prevent the spread
of errors while reporting that errors have been detected. For example,
drop the current request (drain the submission queue) and report the
error in the completion record. Both NMI and MSI are asynchronous
interrupts.

> In any case we were getting one after 'echo 1 >xxx/remove' and
> then taking the PCIe link down by reprogramming the fpga.
> So the link going down was entirely expected, but there seemed
> to be nothing we could do to stop the kernel crashing.
>
> I'm sure 'nebs compliant' ought to contain some requirements for
> resilience to hardware failures!

How did the kernel crash after the link went down? Did the system
detect a Surprise Down error?

Best Regards,
Shuai