Re: [Question] Why queue_work unneeded for REQUEUE bio

Hi Mike,

I have another question about dm, though it is unrelated to the original
mail.

Currently abnormal IO calls blk_queue_split(), while normal IO is split
according to @max_sectors/@chunk_sectors (in max_io_len()).
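For reference, my understanding of the normal IO path is roughly the
following. This is only a simplified sketch of the max_io_len() idea, not
verbatim kernel code; the function and variable names below are illustrative:

/* Stand-in for the kernel's sector_t, so the sketch compiles on its own. */
typedef unsigned long long sector_t;

/*
 * Simplified sketch of the max_io_len() idea: cap a normal bio so that it
 * neither crosses the end of the target nor exceeds the per-target
 * chunk/max_io_len limit (assumed to be a power of two here).
 */
static sector_t sketch_normal_split_len(sector_t sector, sector_t target_begin,
					sector_t target_len, sector_t chunk)
{
	sector_t target_offset = sector - target_begin;
	sector_t len = target_len - target_offset;	/* stop at target boundary */

	if (chunk) {
		/* also stop at the next chunk boundary */
		sector_t to_chunk_end = chunk - (target_offset & (chunk - 1));

		if (len > to_chunk_end)
			len = to_chunk_end;
	}

	return len;
}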


Question 1: Why does a bio need to be split according to queue_limits in the
dm layer at all? After all, the underlying devices will split it according to
their own queue_limits even if the dm layer does not.


Question 2: Currently only @max_sectors is considered when splitting normal IO
in the dm layer. Should we also consider @max_segments/@max_segment_size, as
blk_queue_split() does?
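For context, this is the kind of per-segment accounting I mean. It is only a
rough sketch of the idea behind blk_queue_split(), not the real block layer
code, and the struct/function names below are made up for illustration:

/* Illustrative stand-ins, not the real kernel structures. */
struct sketch_limits {
	unsigned int max_segments;
	unsigned int max_segment_size;	/* bytes */
	unsigned int max_sectors;	/* 512-byte sectors */
};

struct sketch_vec {
	unsigned int len;		/* bytes in this segment */
};

/*
 * Walk the bio's vectors and return how many sectors fit before either the
 * segment count, segment size or total size limit would be exceeded.  The
 * dm max_io_len() path only applies a total-sectors style cap, without this
 * per-segment accounting.
 */
static unsigned int sketch_fit_sectors(const struct sketch_vec *vecs,
				       unsigned int nr_vecs,
				       const struct sketch_limits *lim)
{
	unsigned int nsegs = 0, bytes = 0, i;

	for (i = 0; i < nr_vecs; i++) {
		if (nsegs + 1 > lim->max_segments)
			break;
		if (vecs[i].len > lim->max_segment_size)
			break;	/* the real code would split the vector instead */
		if ((bytes + vecs[i].len) >> 9 > lim->max_sectors)
			break;
		nsegs++;
		bytes += vecs[i].len;
	}

	return bytes >> 9;	/* sectors that fit within the limits */
}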


Thanks,
Jeffle


On 11/3/20 5:23 PM, Jeffle Xu wrote:
Hi Mike,

Why is queue_work() unnecessary here for a bio for which BLK_STS_DM_REQUEUE
is returned?

Thanks
Jeffle Xu

---
  drivers/md/dm.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc2548518..ae550daa99b5 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
  			 * Target requested pushing back the I/O.
  			 */
  			spin_lock_irqsave(&md->deferred_lock, flags);
-			if (__noflush_suspending(md))
+			if (__noflush_suspending(md)) {
  				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
  				bio_list_add_head(&md->deferred, io->orig_bio);
+				queue_work(md->wq, &md->work);
+			}
  			else
  				/* noflush suspend was interrupted. */
  				io->status = BLK_STS_IOERR;

--
Thanks,
Jeffle

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel