[+cc Jon, for related VMD firmware-first error enable issue]

On Mon, Nov 12, 2018 at 08:05:41PM +0000, Alex_Gagniuc@xxxxxxxxxxxx wrote:
> On 11/11/2018 11:50 PM, Oliver O'Halloran wrote:
> > On Thu, 2018-11-08 at 23:06 +0000, Alex_Gagniuc@xxxxxxxxxxxx wrote:
> >> But it's not the firmware that crashes. It's Linux, as a result of a
> >> fatal error message from the firmware. And we can't fix that because
> >> FFS handling requires that the system reboots [1].
> >
> > Do we know the exact circumstances that result in firmware requesting
> > a reboot? If it happens on any PCIe error, I don't see what we can do
> > to prevent that beyond masking UEs entirely (are we even allowed to
> > do that on FFS systems?).
>
> Pull a drive out at an angle, push two drives in at the same time, or
> pull a drive out really slowly. Whether an error is even reported to
> the OS depends on PD state and on proprietary mechanisms and logic in
> the HW and FW. The OS is not supposed to mask errors (touch AER bits)
> on FFS.

PD?

Do you think Linux observes the rule about not touching AER bits on FFS?
I'm not sure it does.  I'm not even sure which section of the spec is
relevant.

The whole issue of firmware-first -- the mechanism by which firmware gets
control, the System Error enables in the Root Port Root Control
registers, etc. -- is very murky to me.

Jon has a somewhat similar issue with VMD, where he needs to leave System
Errors enabled instead of disabling them as we currently do.

Bjorn

[1] https://lore.kernel.org/linux-pci/20181029210651.GB13681@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
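
To make "the System Error enables in the Root Port Root Control
registers" above concrete, here is a minimal sketch of what disabling vs.
leaving them enabled looks like using the standard pcie_capability
accessors and the PCI_EXP_RTCTL_* defines. This is only an illustration,
not the actual aer.c code, and the helper names below are made up:

  #include <linux/pci.h>

  /*
   * Hypothetical helpers (not the real aer.c functions).  The "System
   * Error enables" are the SECEE/SENFEE/SEFEE bits in the Root Control
   * register: "generate a System Error on receipt of a Correctable /
   * Non-Fatal / Fatal error message".
   */
  #define RP_SYSTEM_ERROR_ENABLES	(PCI_EXP_RTCTL_SECEE | \
  					 PCI_EXP_RTCTL_SENFEE | \
  					 PCI_EXP_RTCTL_SEFEE)

  /* Roughly what native AER handling does today: turn System Errors off */
  static void rp_disable_system_errors(struct pci_dev *rp)
  {
  	pcie_capability_clear_word(rp, PCI_EXP_RTCTL,
  				   RP_SYSTEM_ERROR_ENABLES);
  }

  /* What the VMD/firmware-first case would want instead: leave them on */
  static void rp_enable_system_errors(struct pci_dev *rp)
  {
  	pcie_capability_set_word(rp, PCI_EXP_RTCTL,
  				 RP_SYSTEM_ERROR_ENABLES);
  }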