Re: [PATCH] lightnvm: pblk: fix bio leak on large sized io

On 1/30/19 2:53 AM, 김찬솔 wrote:

Changes:
  1. pblk_rw_io now takes the bio* by reference
  2. The bio_put call on the read case in pblk_rw_io is removed

A fix to address an issue where (a minimal sketch illustrating this follows the list):
  1. pblk_make_rq calls pblk_rw_io, passing the bio* by value (0xA)
  2. pblk_rw_io calls blk_queue_split, passing its local bio* by reference
  3. In blk_queue_split, when there is a split, the original bio (0xA)
     is passed to generic_make_request, and the newly allocated bio is
     returned through the pointer
  4. If NVM_IO_DONE is returned, pblk_make_rq calls bio_endio on its bio*,
     which is not the one returned by blk_queue_split
  5. As a result, bio_endio is never called on the newly allocated bio.
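
For illustration, here is a minimal sketch of that pattern, with hypothetical
example_* helpers standing in for pblk_make_rq/pblk_rw_io (not the actual pblk
code; the pblk-specific submission handling is omitted):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/lightnvm.h>

/* Hypothetical stand-in for pblk_rw_io() before the patch: bio arrives by value. */
static int example_rw_io(struct request_queue *q, struct bio *bio)
{
	/* If the bio is split, the local 'bio' now points at the newly
	 * allocated bio; the original one has already been handed to
	 * generic_make_request() inside blk_queue_split().
	 */
	blk_queue_split(q, &bio);

	/* ... submit 'bio' (the new one), possibly completing it from cache ... */
	return NVM_IO_DONE;
}

/* Hypothetical stand-in for pblk_make_rq() before the patch. */
static blk_qc_t example_make_rq(struct request_queue *q, struct bio *bio)
{
	/* 'bio' here is still the original pointer (0xA); on NVM_IO_DONE the
	 * newly allocated split bio never sees bio_endio(), hence the leak.
	 */
	if (example_rw_io(q, bio) == NVM_IO_DONE)
		bio_endio(bio);

	return BLK_QC_T_NONE;
}

Passing the bio* by reference, as done in the patch below, lets the caller
complete whichever bio blk_queue_split left behind.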

Signed-off-by: chansol.kim <chansol.kim@xxxxxxxxxxx>
---
  drivers/lightnvm/pblk-init.c | 22 ++++++++--------------
  1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index b57f764d..4efc929 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -31,30 +31,24 @@ static DECLARE_RWSEM(pblk_lock);
  struct bio_set pblk_bio_set;
  static int pblk_rw_io(struct request_queue *q, struct pblk *pblk,
-			  struct bio *bio)
+			  struct bio **bio)
  {
-	int ret;
-
  	/* Read requests must be <= 256kb due to NVMe's 64 bit completion bitmap
  	 * constraint. Writes can be of arbitrary size.
  	 */
-	if (bio_data_dir(bio) == READ) {
-		blk_queue_split(q, &bio);
-		ret = pblk_submit_read(pblk, bio);
-		if (ret == NVM_IO_DONE && bio_flagged(bio, BIO_CLONED))
-			bio_put(bio);

Could we kill the NVM_IO_DONE check in pblk_rw_io? That should achieve the same.

-
-		return ret;
+	if (bio_data_dir(*bio) == READ) {
+		blk_queue_split(q, bio);
+		return pblk_submit_read(pblk, *bio);
  	}
  	/* Prevent deadlock in the case of a modest LUN configuration and large
  	 * user I/Os. Unless stalled, the rate limiter leaves at least 256KB
  	 * available for user I/O.
  	 */
-	if (pblk_get_secs(bio) > pblk_rl_max_io(&pblk->rl))
-		blk_queue_split(q, &bio);
+	if (pblk_get_secs(*bio) > pblk_rl_max_io(&pblk->rl))
+		blk_queue_split(q, bio);
-	return pblk_write_to_cache(pblk, bio, PBLK_IOTYPE_USER);
+	return pblk_write_to_cache(pblk, *bio, PBLK_IOTYPE_USER);
  }
  static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio)
@@ -69,7 +63,7 @@ static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio)
  		}
  	}
-	switch (pblk_rw_io(q, pblk, bio)) {
+	switch (pblk_rw_io(q, pblk, &bio)) {
  	case NVM_IO_ERR:
  		bio_io_error(bio);
  		break;
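
For context, the complete switch in pblk_make_rq after the patch would look
roughly like the sketch below; the NVM_IO_DONE arm is outside the quoted hunk,
so treat it as a reconstruction rather than part of the diff:

	switch (pblk_rw_io(q, pblk, &bio)) {
	case NVM_IO_ERR:
		bio_io_error(bio);
		break;
	case NVM_IO_DONE:
		/* 'bio' was updated in place by blk_queue_split(), so this
		 * now completes the newly allocated bio and fixes the leak.
		 */
		bio_endio(bio);
		break;
	}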




