On Mon, Dec 02, 2024 at 11:05:27AM +0800, Shuai Xue wrote:
> A memory uncorrected error can be signaled by an asynchronous interrupt
> (specifically, an SPI on the arm64 platform), e.g. when an error is
> detected by a background scrubber, or by a synchronous exception
> (specifically, a data abort exception on arm64), e.g. when a CPU tries
> to access a poisoned cache line. Currently, both synchronous and
> asynchronous errors use memory_failure_queue() to schedule
> memory_failure() to execute in a kworker context.
>
> As a result, when a user-space process is accessing poisoned data, a
> data abort is taken and memory_failure() is executed in the kworker
> context. memory_failure():
>
>  - will send the wrong si_code with the SIGBUS signal in early_kill
>    mode, and
>  - can not kill the user-space process in some cases, resulting in a
>    synchronous error infinite loop
>
> Issue 1: send wrong si_code in early_kill mode
>
> Since commit a70297d22132 ("ACPI: APEI: set memory failure flags as
> MF_ACTION_REQUIRED on synchronous events"), the flag MF_ACTION_REQUIRED
> can be used to determine whether a synchronous exception occurred on
> the arm64 platform. When a synchronous exception is detected, the
> kernel is expected to terminate the current process, which has accessed
> the poisoned page. This is done by sending a SIGBUS signal with the
> error code BUS_MCEERR_AR, indicating an action-required machine check
> error on read.
>
> However, when kill_proc() is called to terminate the processes that
> have the poisoned page mapped, it sends the incorrect SIGBUS error code
> BUS_MCEERR_AO because the context in which it operates is not the one
> where the error was triggered.
>
> To reproduce this problem:
>
> #sysctl -w vm.memory_failure_early_kill=1
> vm.memory_failure_early_kill = 1
>
> # STEP2: inject a UCE error and consume it to trigger a synchronous error
> #einj_mem_uc single
> 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
> injecting ...
> triggering ...
> signal 7 code 5 addr 0xffffb0d75000
> page not present
> Test passed
>
> The si_code (code 5) reported by einj_mem_uc indicates a BUS_MCEERR_AO
> error, which is not what actually happened.
>
> After this patch:
>
> # STEP1: enable early kill mode
> #sysctl -w vm.memory_failure_early_kill=1
> vm.memory_failure_early_kill = 1
>
> # STEP2: inject a UCE error and consume it to trigger a synchronous error
> #einj_mem_uc single
> 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
> injecting ...
> triggering ...
> signal 7 code 4 addr 0xffffb0d75000
> page not present
> Test passed
>
> The si_code (code 4) reported by einj_mem_uc indicates a BUS_MCEERR_AR
> error, as expected.
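Side note, since the si_code values keep coming up here: BUS_MCEERR_AR is 4 and
BUS_MCEERR_AO is 5, which is what the "code 4"/"code 5" lines above are showing.
For anyone reproducing this without the test tool, a minimal handler along the
lines below prints the same information. This is only an illustrative sketch,
not the actual einj_mem_uc source:

  /* Illustrative sketch only: print the SIGBUS si_code like the output above. */
  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  #ifndef BUS_MCEERR_AR
  #define BUS_MCEERR_AR 4   /* action required: poison consumed by this task */
  #endif
  #ifndef BUS_MCEERR_AO
  #define BUS_MCEERR_AO 5   /* action optional: poison reported asynchronously */
  #endif

  static void sigbus_handler(int sig, siginfo_t *si, void *uctx)
  {
          /* printf() is not async-signal-safe; good enough for a test sketch. */
          printf("signal %d code %d addr %p\n", sig, si->si_code, si->si_addr);
          _exit(si->si_code == BUS_MCEERR_AR ? 0 : 1);
  }

  int main(void)
  {
          struct sigaction sa = {
                  .sa_sigaction = sigbus_handler,
                  .sa_flags     = SA_SIGINFO,
          };

          sigaction(SIGBUS, &sa, NULL);
          /* ... map and touch the poison-injected page here ... */
          pause();
          return 0;
  }
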
> Issue 2: a synchronous error infinite loop
>
> If a user-space process, e.g. devmem, accesses a poisoned page for
> which the HWPoison flag is set, kill_accessing_process() is called to
> send SIGBUS to the current process with the error info. Because
> memory_failure() is executed in the kworker context, it will just do
> nothing but return EFAULT. So devmem will access the poisoned page and
> trigger an exception again, resulting in a synchronous error infinite
> loop. Such an exception loop may cause the platform firmware to exceed
> some threshold and reboot, when Linux could have recovered from this
> error.
>
> To reproduce this problem:
>
> # STEP 1: inject a UCE error, and the kernel will set the HWPoison flag for the related page
> #einj_mem_uc single
> 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
> injecting ...
> triggering ...
> signal 7 code 4 addr 0xffffb0d75000
> page not present
> Test passed
>
> # STEP 2: access the same page and it will trigger a synchronous error infinite loop
> devmem 0x4092d55b400
>
> To fix the above two issues, queue memory_failure() as a task_work so
> that it runs in the context of the process that is actually consuming
> the poisoned data.
>
> Signed-off-by: Shuai Xue <xueshuai@xxxxxxxxxxxxxxxxx>
> Tested-by: Ma Wupeng <mawupeng1@xxxxxxxxxx>
> Reviewed-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> Reviewed-by: Xiaofei Tan <tanxiaofei@xxxxxxxxxx>
> Reviewed-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Reviewed-by: Jarkko Sakkinen <jarkko@xxxxxxxxxx>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> ---
>  drivers/acpi/apei/ghes.c | 77 +++++++++++++++++++++++-----------------
>  include/acpi/ghes.h      |  3 --
>  include/linux/mm.h       |  1 -
>  mm/memory-failure.c      | 13 -------
>  4 files changed, 44 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
> index 106486bdfefc..70f2ee3ad1a8 100644
> --- a/drivers/acpi/apei/ghes.c
> +++ b/drivers/acpi/apei/ghes.c
> @@ -467,28 +467,41 @@ static void ghes_clear_estatus(struct ghes *ghes,
>  }
>
>  /*

The "kernel-doc" format needs an opening "/**".

> - * Called as task_work before returning to user-space.
> - * Ensure any queued work has been done before we return to the context that
> - * triggered the notification.
> + * struct ghes_task_work - for synchronous RAS event
> + *
> + * @twork: callback_head for task work
> + * @pfn: page frame number of corrupted page
> + * @flags: work control flags
> + *
> + * Structure to pass task work to be handled before
> + * returning to user-space via task_work_add().
>   */
> -static void ghes_kick_task_work(struct callback_head *head)
> +struct ghes_task_work {
> +        struct callback_head twork;
> +        u64 pfn;
> +        int flags;
> +};
> +
> +static void memory_failure_cb(struct callback_head *twork)
>  {
> -        struct acpi_hest_generic_status *estatus;
> -        struct ghes_estatus_node *estatus_node;
> -        u32 node_len;
> +        struct ghes_task_work *twcb = container_of(twork, struct ghes_task_work, twork);
> +        int ret;
>
> -        estatus_node = container_of(head, struct ghes_estatus_node, task_work);
> -        if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
> -                memory_failure_queue_kick(estatus_node->task_work_cpu);
> +        ret = memory_failure(twcb->pfn, twcb->flags);
> +        gen_pool_free(ghes_estatus_pool, (unsigned long)twcb, sizeof(*twcb));
>
> -        estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
> -        node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
> -        gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
> +        if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP)
> +                return;
> +
> +        pr_err("%#llx: Sending SIGBUS to %s:%d due to hardware memory corruption\n",
> +               twcb->pfn, current->comm, task_pid_nr(current));

This is basically the same as the message in kill_proc(). Was there any
consideration to have a shared function? Maybe this could be a future
patch.

> +        force_sig(SIGBUS);
>  }
>
>  static bool ghes_do_memory_failure(u64 physical_addr, int flags)
>  {
>          unsigned long pfn;
> +        struct ghes_task_work *twcb;

Minor nit: A common preference I've seen is to order variable
declarations from longest->shortest line length.

But overall, looks okay to me.

Reviewed-by: Yazen Ghannam <yazen.ghannam@xxxxxxx>

Thanks,
Yazen
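
P.S. The quoted hunk is cut off right after the new twcb declaration, so to
double-check my own reading of the queueing side I sketched roughly what I
expect the rest of ghes_do_memory_failure() to do. Only the names visible in
the hunk (ghes_do_memory_failure, ghes_task_work, memory_failure_cb, pfn,
twcb) are from the patch; the body below is my reconstruction, not the actual
code, and the details may well differ:

  static bool ghes_do_memory_failure(u64 physical_addr, int flags)
  {
          unsigned long pfn;
          struct ghes_task_work *twcb;

          if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
                  return false;

          pfn = PHYS_PFN(physical_addr);
          if (!pfn_valid(pfn))
                  return false;

          if (flags == MF_ACTION_REQUIRED && current->mm) {
                  /*
                   * Synchronous error in the consuming task: run
                   * memory_failure() from task_work so it executes in this
                   * context before returning to user-space. The twcb is
                   * freed in memory_failure_cb().
                   */
                  twcb = (void *)gen_pool_alloc(ghes_estatus_pool, sizeof(*twcb));
                  if (!twcb)
                          return false;

                  twcb->pfn = pfn;
                  twcb->flags = flags;
                  init_task_work(&twcb->twork, memory_failure_cb);
                  task_work_add(current, &twcb->twork, TWA_RESUME);
                  return true;
          }

          /* Asynchronous notification: keep the existing kworker path. */
          memory_failure_queue(pfn, flags);
          return true;
  }

If that is roughly what the patch does, allocating the twcb from
ghes_estatus_pool rather than kmalloc() would make sense, since this path can
be reached from contexts where normal allocation isn't safe.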