On Wed, 19 Feb 2025 19:45:33 +0100 Borislav Petkov <bp@xxxxxxxxx> wrote:

> On Tue, Feb 18, 2025 at 04:51:25PM +0000, Jonathan Cameron wrote:
> > As a side note, if you are in the situation where the device can do
> > memory repair without any disruption of memory access, then my
> > assumption is that, in the case where the device would set the
> > maintenance needed flag + where it is considering soft repair (so no
> > long term cost to a wrong decision), the device would probably just
> > do it autonomously and at most we might get a notification.
>
> And this is basically what I'm trying to hint at: if you can do recovery
> action without userspace involvement, then please, by all means. There's
> no need to noodle information back'n'forth through userspace if the
> kernel, or even the device itself, can handle it on its own.
>
> More involved stuff should obviously rely on userspace to do more
> involved "pondering."

Let's explore this further as a follow up. A policy switch to let the
kernel do the 'easy' stuff (assuming the device didn't do it) makes sense
if this particular combination is common.

> > So I think that if we see this there will be some disruption:
> > latency spikes for soft repair, or we are looking at hard repair.
> > In that case we'd need policy on whether to repair at all.
> > In general the rasdaemon handling in that series is intentionally
> > simplistic. Real solutions will take time to refine, but they
> > don't need changes to the kernel interface, just when to poke it.
>
> I hope so.
>
> > The error record comes out as a trace point. Is there any precedent
> > for injecting those back into the kernel?
>
> I'm just questioning the whole interface and its usability. Not saying it
> doesn't make sense - we're simply weighing all options here.
>
> > That policy question is a long term one but I can suggest 'possible'
> > policies that might help motivate the discussion.
> >
> > 1. Repair may be very disruptive to memory latency. Delay until a
> > maintenance window when a latency spike is acceptable to the customer;
> > until then rely on 'maintenance needed' still representing a relatively
> > low chance of failure.
>
> So during the maintenance window, the operator is supposed to do
>
>   rasdaemon --start-expensive-repair-operations
>
> ?

Yes, it would be something along those lines. Or a script very similar to
the boot one Shiju wrote: scan the DB, find what needs repairing and do so.

> > 2. Hard repair uses known limited resources - e.g. those are known to
> > match up to a particular number of rows in each module. That is not
> > discoverable under the CXL spec so it would have to come from another
> > source of metadata. Apply some sort of fall-off function so that we
> > repair only the very worst cases as we run out. The alternative is to
> > always soft offline the memory in the OS; the aim is to reduce the
> > chance of having to do that, in a somewhat optimal fashion.
> > I'm not sure on the appropriate stats; maybe assume a given granule's
> > failure rate follows a Poisson distribution and attempt to estimate
> > lambda? Would need an expert in the appropriate failure modes or a lot
> > of data to define this!
>
> I have no clue what you're saying here. :-)

I'll write something up at some point as it's definitely a complex topic
and I need to find a statistician + hardware folk with error models to
help flesh it out.
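Very roughly, and purely as an illustration of the direction rather than
anything from the series (the data layout, time horizon and thresholds
below are all invented), the sort of ranking I have in mind is:

# Purely illustrative: rank rows for a limited hard-repair budget by
# estimating each row's correctable-error rate (lambda, MLE = events/time)
# and repairing first the rows most likely to exceed a tolerable error
# count over some horizon. All names, thresholds and numbers are made up.
import math

def lambda_mle(error_counts, hours):
    """Maximum-likelihood Poisson rate: total events / observation time."""
    return sum(error_counts) / hours

def prob_exceeds(rate, horizon_hours, threshold):
    """P(more than `threshold` errors in `horizon_hours`), Poisson model."""
    mu = rate * horizon_hours
    cdf = sum(math.exp(-mu) * mu ** k / math.factorial(k)
              for k in range(threshold + 1))
    return 1.0 - cdf

# row id -> per-day correctable error counts pulled from the rasdaemon DB
rows = {"row_0x1a2": [0, 1, 0, 3], "row_0x2b7": [0, 0, 0, 1]}
budget = 1  # spare resources left in the module (external metadata)

ranked = sorted(rows,
                key=lambda r: prob_exceeds(lambda_mle(rows[r], hours=96),
                                           horizon_hours=720, threshold=5),
                reverse=True)
for row in ranked[:budget]:
    print("repair candidate:", row)

The hard part is obviously picking the model and the numbers rather than
writing the code, hence the need for people with real failure data.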
There is another topic to look at, which is what to do with synchronous
poison if we can repair the memory and bring it back into use. I can't
find the thread, but last time I asked about recovering from that, the mm
folk said they'd need to see the code + use cases (fair enough!).

> > It is the simplest interface that we have come up with so far. I'm
> > fully open to alternatives that provide a clean way to get this data
> > back into the kernel and play well with existing logging tooling
> > (e.g. rasdaemon).
> >
> > Some things we could do:
> > * Store the binary of the trace event and reinject it. As above + we
> >   would have to be very careful that any changes to the event are made
> >   with knowledge that we need to handle this path. Little or no
> >   marshaling / formatting code in userspace, but new logging
> >   infrastructure is needed + a chardev / ioctl to inject the data and
> >   a bit of userspace glue to talk to it.
> > * Reinject a binary representation we define, via an ioctl on some
> >   chardev we create for the purpose. Userspace code has to take the
> >   key value pairs and process them into this form, so a similar amount
> >   of marshaling code to what we have for sysfs.
> > * Or what we currently propose: write a set of key value pairs to a
> >   simple (though multifile) sysfs interface. As you've noted,
> >   marshaling is needed.
>
> ... and the advantage of having such a sysfs interface: it is human
> readable and usable vs having to use a tool to create a binary blob in a
> certain format...
>
> Ok, then. Let's give that API a try... I guess I need to pick up the EDAC
> patches from here:
>
> https://lore.kernel.org/r/20250212143654.1893-1-shiju.jose@xxxxxxxxxx
>
> If so, there's an EDAC patch 14 which is not together with the first 4.
> And I was thinking of taking the first 4 or 5 and then giving other
> folks an immutable branch in the EDAC tree which they can use to base
> the CXL stuff on top.
>
> What's up?

My fault. I asked Shiju to split the more complex ABI for sparing out so
as to build the complexity up rather than having it all in one patch. It
should be fine for you to take 1-4 and 14, which is all the EDAC parts.
For 5 and 6: Rafael acked the ACPI part (5), and the ACPI ras2 scrub
driver (6) has no other dependencies, so I think those should go through
your tree as well, though there is no need for them to be in the immutable
branch. Dave Jiang can work his magic on the CXL stuff on top of a merge
of your immutable branch.

Thanks!

Jonathan
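P.S. To make the key-value option a little more concrete, here is a very
rough userspace sketch of the marshaling rasdaemon would end up doing;
the directory layout and attribute names below are invented placeholders,
not the ABI from Shiju's series:

# Illustrative only: how a rasdaemon helper might marshal a logged error
# record back into a per-device repair directory by writing one key-value
# pair per sysfs file and then kicking off the repair. The path and the
# attribute names are placeholders, not the ABI proposed in the series.
import os

def request_repair(dev_dir, record):
    for key in ("bank_group", "bank", "rank", "row", "column", "channel"):
        if key in record:
            with open(os.path.join(dev_dir, key), "w") as f:
                f.write(str(record[key]))
    # Trigger the repair once the attributes describing the fault are set.
    with open(os.path.join(dev_dir, "repair"), "w") as f:
        f.write("1")

# In practice the values would come from the rasdaemon SQLite DB rather
# than being hard-coded here.
request_repair("/sys/bus/edac/devices/cxl_mem0/mem_repair0",
               {"bank_group": 2, "bank": 1, "rank": 0,
                "row": 0x3a5, "column": 7})

The upside, as you note, is that every one of those writes is human
readable and can just as easily be done by hand from a shell.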