On Fri, 24 Apr 2020 19:36:28 +0200
Miquel Raynal <miquel.raynal@xxxxxxxxxxx> wrote:

> While performing any NAND operation is as simple as following the
> core's order and sending "command", "address" and "data" cycles as
> provided in a list of instructions, certain controllers are "too
> clever" and are not able to split the sending of these cycles.
>
> Try to find out at boot time if the controller will be problematic
> and flag it. Additional changes will make use of this flag to work
> around the capricious controllers by proposing "packed" operations
> as an alternative.
>
> Signed-off-by: Miquel Raynal <miquel.raynal@xxxxxxxxxxx>
> ---
>  drivers/mtd/nand/raw/internals.h |  5 ++++
>  drivers/mtd/nand/raw/nand_base.c | 44 ++++++++++++++++++++++++++++++++
>  include/linux/mtd/rawnand.h      |  8 ++++++
>  3 files changed, 57 insertions(+)
>
> diff --git a/drivers/mtd/nand/raw/internals.h b/drivers/mtd/nand/raw/internals.h
> index 9d0caadf940e..38898b8639ee 100644
> --- a/drivers/mtd/nand/raw/internals.h
> +++ b/drivers/mtd/nand/raw/internals.h
> @@ -130,6 +130,11 @@ static inline bool nand_has_setup_data_iface(struct nand_chip *chip)
>  	return true;
>  }
>
> +static inline bool nand_pack_ops(struct nand_chip *chip)
> +{
> +	return (chip->options & NAND_PACK_OPS);
> +}
> +
>  /* BBT functions */
>  int nand_markbad_bbt(struct nand_chip *chip, loff_t offs);
>  int nand_isreserved_bbt(struct nand_chip *chip, loff_t offs);
> diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
> index 15a9189b2307..6e4eabb9dc11 100644
> --- a/drivers/mtd/nand/raw/nand_base.c
> +++ b/drivers/mtd/nand/raw/nand_base.c
> @@ -5031,6 +5031,44 @@ static int nand_dt_init(struct nand_chip *chip)
>  	return 0;
>  }
>
> +/**
> + * nand_controller_needs_packed_op - Check the controller's ability to perform
> + *                                   a set of split operations that the core is
> + *                                   very likely to try. If one of them does
> + *                                   not pass, then try to pack operations
> + *                                   together.
> + * @chip: The NAND chip
> + *
> + * Returns true if packing is needed, false otherwise.
> + */
> +static bool nand_controller_needs_packed_op(struct nand_chip *chip)
> +{
> +	u8 tmp[8];
> +	struct nand_op_instr data_in_instrs[] = {
> +		NAND_OP_DATA_IN(8, tmp, 0),
> +	};
> +	struct nand_op_instr data_out_instrs[] = {
> +		NAND_OP_DATA_OUT(8, tmp, 0),
> +	};
> +	struct nand_operation ops[] = {
> +		NAND_OPERATION(0, data_in_instrs),
> +		NAND_OPERATION(0, data_out_instrs),
> +	};
> +	int ret, i;
> +
> +	if (!nand_has_exec_op(chip))
> +		return false;
> +
> +	for (i = 0; i < ARRAY_SIZE(ops); i++) {
> +		ret = chip->controller->ops->exec_op(chip, &ops[i], true);
> +		if (ret) {
> +			pr_debug("Using ->exec_op() packed operations only\n");
> +			return true;
> +		}
> +	}

Hm, I'm not sure that's enough to detect all the weird cases that the
controller might or might not support. The check should really be done
on actual operations with accurate sizes instead of using a randomly
chosen 8-byte data in/out pattern to decide whether all ops need to be
monolithic or not.

So, let's take a step back and analyze your use cases. You seem to
have 3 here:

1/ read param page
2/ program page
3/ read page

For #1, we can just do the check before executing the operation
because it's only done once at init time (not in the read/write/erase
path where we care about performance). For that one I'd suggest
extending the nand_read_param_page_op() function to take a check_only
parameter and doing the check directly in nand_{onfi,jedec}_detect().
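Something along these lines, maybe (completely untested, the
instruction list is stripped down, timings/WAIT_RDY handling and the
legacy path are left out, so take it only as a sketch of the
check_only plumbing):

int nand_read_param_page_op(struct nand_chip *chip, u8 page, void *buf,
			    unsigned int len, bool check_only)
{
	if (nand_has_exec_op(chip)) {
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_PARAM, 0),
			NAND_OP_ADDR(1, &page, 0),
			NAND_OP_8BIT_DATA_IN(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs,
							  instrs);

		/*
		 * With check_only set, only ask the controller whether
		 * it would accept this exact sequence, without actually
		 * running it.
		 */
		if (check_only)
			return chip->controller->ops->exec_op(chip, &op,
							      true);

		return nand_exec_op(chip, &op);
	}

	/* Legacy (non-exec_op) path elided in this sketch. */
	return -ENOTSUPP;
}

nand_{onfi,jedec}_detect() would then call it once with
check_only=true before doing the real parameter page reads.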
For #2 and #3, I'd rather have per-operation flags to pick the right
variant, but maybe a simpler option would be to add new helpers for
those monolithic reads/writes until we come up with a generic way to
determine which variant of each perf-sensitive sequence should be used
based on exec_op() checks:

int nand_monolithic_read_page_raw(struct nand_chip *chip, u8 *buf,
				  int oob_required, int page)
{
	struct mtd_info *mtd = nand_to_mtd(chip);
	unsigned int size = mtd->writesize;
	u8 *read_buf = buf;
	int ret;

	if (oob_required) {
		size += mtd->oobsize;

		if (buf != chip->data_buf) {
			chip->pagecache.page = -1;
			read_buf = chip->data_buf;
		}
	}

	ret = nand_read_page_op(chip, page, 0, read_buf, size);
	if (ret)
		return ret;

	if (buf != read_buf)
		memcpy(buf, read_buf, mtd->writesize);

	return 0;
}

int nand_monolithic_write_page_raw(struct nand_chip *chip, const u8 *buf,
				   int oob_required, int page)
{
	struct mtd_info *mtd = nand_to_mtd(chip);
	unsigned int size = mtd->writesize;
	const u8 *write_buf = buf;

	if (oob_required) {
		size += mtd->oobsize;

		if (buf != chip->data_buf) {
			chip->pagecache.page = -1;
			memcpy(chip->data_buf, buf, mtd->writesize);
			write_buf = chip->data_buf;
		}
	}

	return nand_prog_page_op(chip, page, 0, write_buf, size);
}
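And just to illustrate how they could be wired up (where this hook-up
would live, core vs. drivers, is completely open, and the function
name below is made up for the example):

static void nand_select_raw_accessors(struct nand_chip *chip)
{
	/*
	 * If the probe-time check says split operations are refused,
	 * use the monolithic raw accessors instead of the default
	 * nand_read_page_raw()/nand_write_page_raw().
	 */
	if (!nand_controller_needs_packed_op(chip))
		return;

	chip->ecc.read_page_raw = nand_monolithic_read_page_raw;
	chip->ecc.write_page_raw = nand_monolithic_write_page_raw;
}

Drivers that know they have this constraint could also just point
->read_page_raw/->write_page_raw at the monolithic helpers directly.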