This is a note to let you know that I've just added the patch titled

    mtd: rawnand: Add a helper for calculating a page index

to the 6.8-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    mtd-rawnand-add-a-helper-for-calculating-a-page-inde.patch
and it can be found in the queue-6.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.

commit c32dfea35a8601b452871ea76f3ece10ff339967
Author: Miquel Raynal <miquel.raynal@xxxxxxxxxxx>
Date:   Fri Feb 23 12:55:44 2024 +0100

    mtd: rawnand: Add a helper for calculating a page index

    [ Upstream commit df9803bf5a91e3599f12b53c94722f2c4e144a86 ]

    When crossing LUN boundaries, it is handy to know the index of the
    last page in a LUN. This helper will soon be reused. At the same time
    I rename pages_per_lun to ppl in the calling function to clarify the
    lines.

    Cc: stable@xxxxxxxxxxxxxxx # v6.7
    Signed-off-by: Miquel Raynal <miquel.raynal@xxxxxxxxxxx>
    Link: https://lore.kernel.org/linux-mtd/20240223115545.354541-3-miquel.raynal@xxxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
index bcfd99a1699fd..d6a27e08b1127 100644
--- a/drivers/mtd/nand/raw/nand_base.c
+++ b/drivers/mtd/nand/raw/nand_base.c
@@ -1211,19 +1211,25 @@ static int nand_lp_exec_read_page_op(struct nand_chip *chip, unsigned int page,
 	return nand_exec_op(chip, &op);
 }
 
+static unsigned int rawnand_last_page_of_lun(unsigned int pages_per_lun, unsigned int lun)
+{
+	/* lun is expected to be very small */
+	return (lun * pages_per_lun) + pages_per_lun - 1;
+}
+
 static void rawnand_cap_cont_reads(struct nand_chip *chip)
 {
 	struct nand_memory_organization *memorg;
-	unsigned int pages_per_lun, first_lun, last_lun;
+	unsigned int ppl, first_lun, last_lun;
 
 	memorg = nanddev_get_memorg(&chip->base);
-	pages_per_lun = memorg->pages_per_eraseblock * memorg->eraseblocks_per_lun;
-	first_lun = chip->cont_read.first_page / pages_per_lun;
-	last_lun = chip->cont_read.last_page / pages_per_lun;
+	ppl = memorg->pages_per_eraseblock * memorg->eraseblocks_per_lun;
+	first_lun = chip->cont_read.first_page / ppl;
+	last_lun = chip->cont_read.last_page / ppl;
 
 	/* Prevent sequential cache reads across LUN boundaries */
 	if (first_lun != last_lun)
-		chip->cont_read.pause_page = first_lun * pages_per_lun + pages_per_lun - 1;
+		chip->cont_read.pause_page = rawnand_last_page_of_lun(ppl, first_lun);
 	else
 		chip->cont_read.pause_page = chip->cont_read.last_page;
 }
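
For reference, below is a minimal standalone sketch of the arithmetic the new
helper performs. It is not part of the patch; the geometry values (64 pages
per eraseblock, 1024 eraseblocks per LUN) are made up purely for illustration.

#include <stdio.h>

/* Same arithmetic as the helper added by the patch */
static unsigned int rawnand_last_page_of_lun(unsigned int pages_per_lun, unsigned int lun)
{
	/* lun is expected to be very small */
	return (lun * pages_per_lun) + pages_per_lun - 1;
}

int main(void)
{
	/* Hypothetical geometry, not taken from any real chip */
	unsigned int pages_per_eraseblock = 64;
	unsigned int eraseblocks_per_lun = 1024;
	unsigned int ppl = pages_per_eraseblock * eraseblocks_per_lun; /* 65536 pages per LUN */
	unsigned int lun;

	/* Print the absolute index of the last page of the first three LUNs */
	for (lun = 0; lun < 3; lun++)
		printf("last page of LUN %u: %u\n", lun, rawnand_last_page_of_lun(ppl, lun));

	return 0;
}

With this hypothetical geometry the program prints 65535, 131071 and 196607,
which is the page index rawnand_cap_cont_reads() uses as pause_page when a
sequential cache read would otherwise cross a LUN boundary.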