Hello,

On 20-12-23 16:49:04, Christoph Hellwig wrote:
> set_blocksize just sets the block size used for buffer heads and should
> not be called by the driver. blkdev_get updates the block size, so
> you must already have the fd re-reading the partition table open?
> I'm not entirely sure how we can work around this except by avoiding
> buffer head I/O in the partition reread code. Note that this affects
> all block drivers where the block size could change at runtime.

Thank you Christoph for your comment on this.

Agreed. BLKRRPART leads us to block_read_full_page(), which uses buffer
heads for I/O.

Yes, __blkdev_get() sets i_blkbits of the block device inode via
set_init_blocksize(). And yes again, nvme-cli has already opened the
block device fd and issues BLKRRPART on that fd. Also, __blkdev_get()
only updates i_blkbits (the block size) when bdev->bd_openers == 0,
i.e., on the first open of this block device.

Then, how about having the NVMe driver prevent the underflow case where
request->__data_len is smaller than the logical block size, like:

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ce1b61519441..030353d203bf 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -803,7 +803,11 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 	cmnd->rw.opcode = op;
 	cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
 	cmnd->rw.slba = cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
-	cmnd->rw.length = cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+
+	if (unlikely(blk_rq_bytes(req) < (1 << ns->lba_shift)))
+		cmnd->rw.length = 0;
+	else
+		cmnd->rw.length = cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);

 	if (req_op(req) == REQ_OP_WRITE && ctrl->nr_streams)
 		nvme_assign_write_stream(ctrl, req, &control, &dsmgmt);

Thanks,