Hi Pratyush,

Pratyush Yadav <p.yadav@xxxxxx> wrote on Thu, 27 May 2021 15:30:17
+0530:

> On 26/05/21 05:30PM, patrice.chotard@xxxxxxxxxxx wrote:
> > From: Christophe Kerello <christophe.kerello@xxxxxxxxxxx>
> >
> > After power up, all SPI NAND blocks are locked: only read operations
> > are allowed, write and erase operations are forbidden.
> > The SPI NAND framework unlocks all the blocks during its initialization.
> >
> > During standby low power, the memory is powered down, losing its
> > configuration.
> > During the resume, the QSPI driver state is restored but the SPI NAND
> > framework does not reconfigure the memory.
> >
> > This patch adds a SPI NAND MTD PM handler for the resume op.
> > The SPI NAND resume op re-initializes the SPI NAND flash to its
> > probed state.
> >
> > Signed-off-by: Christophe Kerello <christophe.kerello@xxxxxxxxxxx>
> > Signed-off-by: Patrice Chotard <patrice.chotard@xxxxxxxxxxx>
> > ---
> >  drivers/mtd/nand/spi/core.c | 56 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 56 insertions(+)
> >
> > diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
> > index 17f63f95f4a2..6abaf874eb3f 100644
> > --- a/drivers/mtd/nand/spi/core.c
> > +++ b/drivers/mtd/nand/spi/core.c
> > @@ -1074,6 +1074,61 @@ static int spinand_detect(struct spinand_device *spinand)
> >  	return 0;
> >  }
> >
> > +static void spinand_mtd_resume(struct mtd_info *mtd)
> > +{
> > +	struct spinand_device *spinand = mtd_to_spinand(mtd);
> > +	struct nand_device *nand = mtd_to_nanddev(mtd);
> > +	struct device *dev = &spinand->spimem->spi->dev;
> > +	int ret, i;
> > +
> > +	ret = spinand_reset_op(spinand);
> > +	if (ret)
> > +		return;
> > +
> > +	ret = spinand_init_quad_enable(spinand);
> > +	if (ret) {
> > +		dev_err(dev,
> > +			"Failed to initialize the quad part (err = %d)\n",
> > +			ret);
> > +		return;
> > +	}
> > +
> > +	ret = spinand_upd_cfg(spinand, CFG_OTP_ENABLE, 0);
> > +	if (ret) {
> > +		dev_err(dev,
> > +			"Failed to update the OTP (err = %d)\n",
> > +			ret);
> > +		return;
> > +	}
>
> Since you have reset the flash, this cache is invalid. You should reset
> the cache and re-populate it before using it in any way.
>
> > +
> > +	ret = spinand_manufacturer_init(spinand);
> > +	if (ret) {
> > +		dev_err(dev,
> > +			"Failed to initialize the SPI NAND chip (err = %d)\n",
> > +			ret);
> > +		return;
> > +	}
> > +
> > +	/* After power up, all blocks are locked, so unlock them here. */
> > +	for (i = 0; i < nand->memorg.ntargets; i++) {
> > +		ret = spinand_select_target(spinand, i);
> > +		if (ret) {
> > +			dev_err(dev,
> > +				"Failed to select the target (err = %d)\n",
> > +				ret);
> > +			return;
> > +		}
> > +
> > +		ret = spinand_lock_block(spinand, BL_ALL_UNLOCKED);
> > +		if (ret) {
> > +			dev_err(dev,
> > +				"Failed to unlock block (err = %d)\n",
> > +				ret);
> > +			return;
> > +		}
> > +	}
> > +}
> > +
>
> Most of these seem to be copied from spinand_init(). I think it is
> better to create a common function that can be called from both
> spinand_init() and spinand_mtd_resume(). This way when someone adds
> something new to the init procedure, like support for some other modes,
> they won't have to remember to update it in two places.

Agreed, let's write a common helper for more than just the unlocking
sequence (still in a separate patch).
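To make the discussion concrete, here is a rough, untested sketch of
what such a helper could look like. The spinand_init_flash() name, the
exact split and the spinand_read_cfg() step are only a proposal:
spinand_read_cfg() stands for a new helper that would re-populate the
per-target CFG register cache, addressing the stale cache issue you
pointed out above; it does not exist yet.

/*
 * Common (re-)initialization sequence shared by the probe and resume
 * paths: reset the chip, restore its configuration and unlock every
 * block of every target.
 */
static int spinand_init_flash(struct spinand_device *spinand)
{
	struct device *dev = &spinand->spimem->spi->dev;
	struct nand_device *nand = spinand_to_nand(spinand);
	int ret, i;

	ret = spinand_reset_op(spinand);
	if (ret)
		return ret;

	/* The reset wiped the configuration, re-read it before use. */
	ret = spinand_read_cfg(spinand);
	if (ret)
		return ret;

	ret = spinand_init_quad_enable(spinand);
	if (ret) {
		dev_err(dev, "Failed to initialize the quad part (err = %d)\n",
			ret);
		return ret;
	}

	ret = spinand_upd_cfg(spinand, CFG_OTP_ENABLE, 0);
	if (ret) {
		dev_err(dev, "Failed to update the OTP (err = %d)\n", ret);
		return ret;
	}

	ret = spinand_manufacturer_init(spinand);
	if (ret) {
		dev_err(dev,
			"Failed to initialize the SPI NAND chip (err = %d)\n",
			ret);
		return ret;
	}

	/* After power up, all blocks are locked, so unlock them here. */
	for (i = 0; i < nand->memorg.ntargets; i++) {
		ret = spinand_select_target(spinand, i);
		if (ret)
			return ret;

		ret = spinand_lock_block(spinand, BL_ALL_UNLOCKED);
		if (ret)
			return ret;
	}

	return 0;
}

spinand_init() would then call it once the detection is done, and
spinand_mtd_resume() would boil down to calling it and logging the
eventual error.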
> >  static int spinand_init(struct spinand_device *spinand)
> >  {
> >  	struct device *dev = &spinand->spimem->spi->dev;
> > @@ -1167,6 +1222,7 @@ static int spinand_init(struct spinand_device *spinand)
> >  	mtd->_block_isreserved = spinand_mtd_block_isreserved;
> >  	mtd->_erase = spinand_mtd_erase;
> >  	mtd->_max_bad_blocks = nanddev_mtd_max_bad_blocks;
> > +	mtd->_resume = spinand_mtd_resume;
>
> Is it possible that the userspace can use this mtd device before the
> resume is finished? Is there a way to temporarily "pause" or unregister
> an mtd device?

I don't expect this to happen, I would expect the kernel to resume
entirely before handing control back to userspace, but I am not 100%
sure of that either.

> >
> >  	if (nand->ecc.engine) {
> >  		ret = mtd_ooblayout_count_freebytes(mtd);

Thanks,
Miquèl