On Wed, 4 Dec 2024 15:46:49 -0700
Dave Jiang <dave.jiang@xxxxxxxxx> wrote:

> Below is a setup with an extended linear cache configuration. An example
> memory region layout is shown below: a single 256G memory region
> consisting of 128G of DRAM and 128G of CXL memory. The kernel sees one
> region of 256G of system memory in total.
>
>  128G DRAM                            128G CXL memory
> |-----------------------------------|-------------------------------------|
>
> Data resides in either DRAM or far memory (FM) with no replication. Hot
> data is swapped into DRAM by the hardware behind the scenes. When an
> error is detected in one location, it is possible that the error also
> resides in the aliased location. Therefore, when a memory location
> flagged by an MCE is part of the special region, the aliased memory
> location needs to be offlined as well.
>
> Add an MCE notifier callback to identify whether the MCE address is part
> of an extended linear cache region and handle it accordingly.
>
> Add a symbol export to set_mce_nospec() in the x86 code in order to call
> set_mce_nospec() from the CXL MCE notifier callback.
>
> Link: https://lore.kernel.org/linux-cxl/668333b17e4b2_5639294fd@xxxxxxxxxxxxxxxxxxxxxxxxx.notmuch/
> Signed-off-by: Dave Jiang <dave.jiang@xxxxxxxxx>

A couple of minor editorial comments.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>

> diff --git a/drivers/cxl/core/mce.c b/drivers/cxl/core/mce.c
> new file mode 100644
> index 000000000000..f983822992a4
> --- /dev/null
> +++ b/drivers/cxl/core/mce.c
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2024 Intel Corporation. All rights reserved. */
> +#include <linux/notifier.h>
> +#include <linux/set_memory.h>
> +#include <asm/mce.h>
> +#include <cxlmem.h>
> +#include "mce.h"
> +
> +static int cxl_handle_mce(struct notifier_block *nb, unsigned long val,
> +			  void *data)
> +{
> +	struct cxl_memdev_state *mds = container_of(nb, struct cxl_memdev_state,
> +						    mce_notifier);
> +	struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
> +	struct cxl_port *endpoint = cxlmd->endpoint;
> +	struct mce *mce = (struct mce *)data;

Explicit cast not needed or useful. C lets us not bother when casting
from void *.
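
For example (a standalone sketch with made-up types, not the kernel
code): a void * converts implicitly to any object pointer type in C,
so the assignment needs no cast.

	#include <stdio.h>

	struct mce_like {	/* hypothetical stand-in for struct mce */
		unsigned long addr;
	};

	static int handle(void *data)
	{
		struct mce_like *mce = data;	/* implicit conversion, no cast */

		printf("addr: %#lx\n", mce->addr);
		return 0;
	}

	int main(void)
	{
		struct mce_like m = { .addr = 0x1234 };

		return handle(&m);
	}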
> +	u64 spa, spa_alias;
> +	unsigned long pfn;
> +
> +	if (!mce || !mce_usable_address(mce))
> +		return NOTIFY_DONE;
> +
> +	spa = mce->addr & MCI_ADDR_PHYSADDR;
> +
> +	pfn = spa >> PAGE_SHIFT;
> +	if (!pfn_valid(pfn))
> +		return NOTIFY_DONE;
> +
> +	spa_alias = cxl_port_get_spa_cache_alias(endpoint, spa);
> +	if (!spa_alias)
> +		return NOTIFY_DONE;
> +
> +	pfn = spa_alias >> PAGE_SHIFT;
> +
> +	/*
> +	 * Take down the aliased memory page. The original memory page flagged
> +	 * by the MCE will be taken care of by the standard MCE handler.
> +	 */
> +	dev_emerg(mds->cxlds.dev, "Offlining aliased SPA address: %#llx\n",
> +		  spa_alias);
> +	if (!memory_failure(pfn, 0))
> +		set_mce_nospec(pfn);
> +
> +	return NOTIFY_OK;
> +}

> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 8bf4efb2c48c..b279148ec3ff 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -3435,6 +3435,31 @@ int cxl_add_to_region(struct cxl_port *root, struct cxl_endpoint_decoder *cxled)
>  }
>  EXPORT_SYMBOL_NS_GPL(cxl_add_to_region, CXL);
>
> +u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa)
> +{
> +	struct cxl_region_ref *iter;
> +	unsigned long index;
> +
> +	guard(rwsem_write)(&cxl_region_rwsem);
> +
> +	xa_for_each(&endpoint->regions, index, iter) {
> +		struct cxl_region_params *p = &iter->region->params;
> +
> +		if (p->res->start <= spa && spa <= p->res->end) {
> +			if (!p->cache_size)
> +				return 0;
> +
> +			if (spa > p->res->start + p->cache_size)
> +				return spa - p->cache_size;
> +
> +			return spa + p->cache_size;
> +		}
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_port_get_spa_cache_alias, CXL);

Quotes needed (the patch that changed that has been annoying this cycle!)
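
i.e., with the namespace as a string literal (assuming this lands after
this cycle's conversion of symbol namespaces to string literals):

	EXPORT_SYMBOL_NS_GPL(cxl_port_get_spa_cache_alias, "CXL");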
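
FWIW, the alias math above checks out against the layout in the commit
message. A quick userspace sketch, mirroring the posted code (not
kernel code; the region start and test SPAs are made-up illustrative
values, assuming a 256G region at SPA 0 with cache_size = 128G):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#define GiB (1ULL << 30)

	static uint64_t spa_cache_alias(uint64_t start, uint64_t end,
					uint64_t cache_size, uint64_t spa)
	{
		if (spa < start || spa > end || !cache_size)
			return 0;
		if (spa > start + cache_size)
			return spa - cache_size;	/* CXL (far) half -> DRAM */
		return spa + cache_size;		/* DRAM half -> CXL (far) */
	}

	int main(void)
	{
		uint64_t start = 0, end = 256 * GiB - 1, cache = 128 * GiB;

		/* 1G sits in the DRAM half: alias at 129G in CXL memory */
		printf("%#" PRIx64 "\n", spa_cache_alias(start, end, cache, 1 * GiB));
		/* 200G sits in the CXL half: alias at 72G back in DRAM */
		printf("%#" PRIx64 "\n", spa_cache_alias(start, end, cache, 200 * GiB));
		return 0;
	}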