On Fri, Feb 02, 2024 at 09:36:27PM +0000, Luck, Tony wrote:
> There are two places in the pipeline where poison is significant.
>
> 1) When the memory controller gets a request to fetch some data. If the ECC
> check on the bits returned from the DIMMs fails, the memory controller will log
> a "UCNA" signature error to a machine check bank for the memory channel
> where the DIMMs live. If CMCI is enabled for that bank, then a CMCI is
> sent to all logical CPUs that are in the scope of that bank (generally a
> CPU socket). The data is marked with a POISON signature and passed
> to the entity that requested it. Caches support this POISON signature
> and preserve it as data is moved between caches, or written back to
> memory. This may have been a prefetch or a speculative read. In these
> cases there won't be a machine check. Linux uc_decode_notifier() will
> try to offline pages when it sees UCNA signatures.

Yap, deferred errors.

> 2) When a CPU core tries to retire an instruction that consumes poison
> data, or needs to retire a poisoned instruction. These log an SRAR signature
> into a core-scoped bank (on most Xeons to date, bank 0 for poisoned instructions,
> bank 1 for poisoned data consumption). Then they signal a machine check.

And that is the #MC on a poison data load thing. FWIW, the other vendor
does it very similarly.

> Partial cacheline stores to data marked as POISON in the cache maintain
> the poison status. Full cacheline writes (certainly with the MOVDIR64B
> instruction, possibly with some AVX512 instructions) can clear the POISON
> status (since you have all new data). A sequence of partial cache line
> stores that overwrite all data in a cache line will NOT clear the POISON
> status.

That's interesting - partial stores don't clear poison data.

> Nothing is logged or signaled when updating data in the cache.

Ok, no #MC on writing to poisoned cachelines.

Ok, so long story short, #MC only on loads. Good.
Now, since you're explaining things today :) pls explain to me what this
patchset is all about? You having reviewed patch 3 and all?

Why is this pattern:

	if (copy_mc_user_highpage(dst, src, addr, vma)) {
		memory_failure_queue(page_to_pfn(src), 0);

not good anymore?

Or is the goal here to poison straight from the #MC handler and not waste
time and potentially get another #MC while memory_failure_queue() on the
source address is done?

Or something completely different?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette