On Thu, 31 Mar 2022 at 09:32, Michael Wu <michael@xxxxxxxxxxxxxxxxx> wrote:
>
> The mmc core enables cache by default. But it only enables
> cache-flushing when the host supports CMD23 and the eMMC supports
> reliable write.
>
> For hosts which do not support CMD23, or eMMCs which do not support
> reliable write, the cache cannot be flushed by the `sync` command.
> This may lead to cache data loss.
>
> This patch enables cache-flushing as long as the cache is enabled,
> regardless of whether the host supports CMD23 and/or the eMMC
> supports reliable write.
>
> For SD cards, backwards compatibility is guaranteed. Newer components
> like SD 5.0, which have a cache, are also supported in advance, which
> means this patch will also be applicable if SD 5.0 cache support is
> added to the mmc core in the future.

SD 5.0 cache support was added in the commit 130206a615a9 below. No need
to resend, I will take care of updating the commit message.

>
> Fixes: f4c5522b0a88 ("mmc: Reliable write support.")
> Fixes: 881d1c25f765 ("mmc: core: Add cache control for eMMC4.5 device")
> Fixes: 130206a615a9 ("mmc: core: Add support for cache ctrl for SD cards")
> Fixes: d0c97cfb81eb ("mmc: core: Use CMD23 for multiblock transfers when we can.")
> Fixes: e9d5c746246c ("mmc/block: switch to using blk_queue_write_cache()")

I will have a look at the above to see what makes sense to add - and
then I will add a stable tag too.

>
> Reviewed-by: Avri Altman <Avri.Altman@xxxxxxx>
> Reviewed-by: Ulf Hansson <ulf.hansson@xxxxxxxxxx>
>
> Signed-off-by: Michael Wu <michael@xxxxxxxxxxxxxxxxx>

Thanks, applied for fixes!
Kind regards
Uffe

> ---
>  drivers/mmc/core/block.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 4e67c1403cc9..ec76ed82abb9 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -2350,6 +2350,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>  	struct mmc_blk_data *md;
>  	int devidx, ret;
>  	char cap_str[10];
> +	bool cache_enabled = false;
> +	bool fua_enabled = false;
>
>  	devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
>  	if (devidx < 0) {
> @@ -2429,13 +2431,17 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>  		md->flags |= MMC_BLK_CMD23;
>  	}
>
> -	if (mmc_card_mmc(card) &&
> -	    md->flags & MMC_BLK_CMD23 &&
> +	if (md->flags & MMC_BLK_CMD23 &&
>  	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
>  	     card->ext_csd.rel_sectors)) {
>  		md->flags |= MMC_BLK_REL_WR;
> -		blk_queue_write_cache(md->queue.queue, true, true);
> +		fua_enabled = true;
> +		cache_enabled = true;
>  	}
> +	if (mmc_cache_enabled(card->host))
> +		cache_enabled = true;
> +
> +	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
>
>  	string_get_size((u64)size, 512, STRING_UNITS_2,
>  			cap_str, sizeof(cap_str));
> --
> 2.29.0
>