On Wed, 7 Feb 2024 at 15:40, Christophe Kerello <christophe.kerello@xxxxxxxxxxx> wrote:
>
> Turning on CONFIG_DMA_API_DEBUG_SG results in the following warning:
>
> DMA-API: mmci-pl18x 48220000.mmc: cacheline tracking EEXIST,
> overlapping mappings aren't supported
> WARNING: CPU: 1 PID: 51 at kernel/dma/debug.c:568
> add_dma_entry+0x234/0x2f4
> Modules linked in:
> CPU: 1 PID: 51 Comm: kworker/1:2 Not tainted 6.1.28 #1
> Hardware name: STMicroelectronics STM32MP257F-EV1 Evaluation Board (DT)
> Workqueue: events_freezable mmc_rescan
> Call trace:
>  add_dma_entry+0x234/0x2f4
>  debug_dma_map_sg+0x198/0x350
>  __dma_map_sg_attrs+0xa0/0x110
>  dma_map_sg_attrs+0x10/0x2c
>  sdmmc_idma_prep_data+0x80/0xc0
>  mmci_prep_data+0x38/0x84
>  mmci_start_data+0x108/0x2dc
>  mmci_request+0xe4/0x190
>  __mmc_start_request+0x68/0x140
>  mmc_start_request+0x94/0xc0
>  mmc_wait_for_req+0x70/0x100
>  mmc_send_tuning+0x108/0x1ac
>  sdmmc_execute_tuning+0x14c/0x210
>  mmc_execute_tuning+0x48/0xec
>  mmc_sd_init_uhs_card.part.0+0x208/0x464
>  mmc_sd_init_card+0x318/0x89c
>  mmc_attach_sd+0xe4/0x180
>  mmc_rescan+0x244/0x320
>
> DMA API debug brings to light leaking dma-mappings, as dma_map_sg and
> dma_unmap_sg are not correctly balanced.
>
> If an error occurs in the mmci_cmd_irq function, only the mmci_dma_error
> function is called. As this callback is not implemented by the stm32
> variant, dma_unmap_sg is never called in this error path.
>
> Signed-off-by: Christophe Kerello <christophe.kerello@xxxxxxxxxxx>

Applied for fixes, with a fixes- and a stable-tag added, thanks!

Kind regards
Uffe

> ---
>  drivers/mmc/host/mmci_stm32_sdmmc.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
> index 35067e1e6cd8..f5da7f9baa52 100644
> --- a/drivers/mmc/host/mmci_stm32_sdmmc.c
> +++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
> @@ -225,6 +225,8 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
>          struct scatterlist *sg;
>          int i;
>
> +       host->dma_in_progress = true;
> +
>          if (!host->variant->dma_lli || data->sg_len == 1 ||
>              idma->use_bounce_buffer) {
>                  u32 dma_addr;
> @@ -263,9 +265,30 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
>          return 0;
>  }
>
> +static void sdmmc_idma_error(struct mmci_host *host)
> +{
> +       struct mmc_data *data = host->data;
> +       struct sdmmc_idma *idma = host->dma_priv;
> +
> +       if (!dma_inprogress(host))
> +               return;
> +
> +       writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
> +       host->dma_in_progress = false;
> +       data->host_cookie = 0;
> +
> +       if (!idma->use_bounce_buffer)
> +               dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
> +                            mmc_get_dma_dir(data));
> +}
> +
>  static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
>  {
> +       if (!dma_inprogress(host))
> +               return;
> +
>          writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
> +       host->dma_in_progress = false;
>
>          if (!data->host_cookie)
>                  sdmmc_idma_unprep_data(host, data, 0);
> @@ -676,6 +699,7 @@ static struct mmci_host_ops sdmmc_variant_ops = {
>          .dma_setup = sdmmc_idma_setup,
>          .dma_start = sdmmc_idma_start,
>          .dma_finalize = sdmmc_idma_finalize,
> +       .dma_error = sdmmc_idma_error,
>          .set_clkreg = mmci_sdmmc_set_clkreg,
>          .set_pwrreg = mmci_sdmmc_set_pwrreg,
>          .busy_complete = sdmmc_busy_complete,
> --
> 2.25.1
>
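
For context on why wiring up .dma_error is enough to fix the leak: the mmci
core only forwards a DMA error to the variant when the hook is populated, so
before this patch the stm32 error path silently skipped the unmap. Below is a
minimal sketch of that dispatch, written from memory of the helper in
drivers/mmc/host/mmci.h; treat the exact name and body as an assumption
rather than a verbatim quote.

        /*
         * Sketch: the core-side error helper is assumed to simply forward
         * to the variant hook when one is registered. Without .dma_error
         * in sdmmc_variant_ops this is a no-op on stm32, leaving the
         * scatterlist mapped; with the new sdmmc_idma_error() it disables
         * the IDMA and calls dma_unmap_sg(), balancing the dma_map_sg()
         * done earlier in sdmmc_idma_prep_data().
         */
        static inline void mmci_dma_error(struct mmci_host *host)
        {
                if (host->ops && host->ops->dma_error)
                        host->ops->dma_error(host);
        }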