>> But there are places in the kernel where the code assumes that this
>> EFAULT return was simply because of a page fault. The code takes some
>> action to fix that, and then retries the access. This results in a
>> second machine check.
>
> What about returning EHWPOISON instead of EFAULT and updating the
> callers to handle EHWPOISON explicitly: i.e., not retry but give up
> on the page?

That seems like a good idea to me. But I got some pushback when I
started down this path earlier with some patches to the futex code.
Back then I wasn't using an error return of EHWPOISON ... possibly
the code would look less hacky with that explicitly called out.

The futex case was specifically for code using pagefault_disable().
Likely all the other callers would need to be audited (but there are
only a few dozen places, so not too big of a deal).

> My main concern is that the strong assumption that the kernel can't
> hit more than a fixed number of poisoned cache lines before returning
> to user space may simply not be true.

Agreed.

> When a DIMM goes bad, it can easily affect an entire bank or an entire
> RAM device chip. Even with memory interleaving, it's possible that a
> kernel control path touches lots of poisoned cache lines in the buffer
> it is working through.

These larger failures have other problems ... dozens of unrelated pages
may be affected. In a perfect world Linux would be told on the first
error that this is just one of many errors ... and be given a list.
But in the real world that isn't likely to happen :-(

-Tony
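
P.S. A rough sketch of the caller-side change being discussed, loosely
modeled on the futex retry pattern. This is not an actual patch:
get_user_hwpoison() is a made-up name standing in for a user-access
primitive taught to return -EHWPOISON (instead of -EFAULT) when the
machine check code has already marked the page as poisoned, and the
fault-in step is what the futex code does with fault_in_user_writeable().

#include <linux/uaccess.h>
#include <linux/errno.h>

/*
 * Illustrative only: read a value from user space with page faults
 * disabled, distinguishing "page not present, fault it in and retry"
 * from "page is poisoned, never touch it again".
 */
static int futex_get_value(u32 *val, u32 __user *uaddr)
{
	int ret;

	pagefault_disable();
	ret = get_user_hwpoison(*val, uaddr);	/* hypothetical accessor */
	pagefault_enable();

	if (ret == -EHWPOISON)
		return ret;	/* hardware poison: give up, do not retry */

	if (ret == -EFAULT) {
		/*
		 * Ordinary fault while page faults were disabled: fault
		 * the page in (fault_in_user_writeable() in the futex
		 * code) and retry the access once.
		 */
		if (fault_in_user_writeable(uaddr))
			return -EFAULT;

		pagefault_disable();
		ret = get_user_hwpoison(*val, uaddr);
		pagefault_enable();
	}

	return ret;
}

The point of the -EHWPOISON check is just that the existing -EFAULT
path (fault in, retry) is exactly the sequence that triggers the second
machine check today.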