Re: [RFC] block: enqueue split bios into the same CPU

On Tue, Sep 22, 2020 at 08:19:00PM +0800, JeffleXu wrote:
> 
> On 9/22/20 7:56 PM, Ming Lei wrote:
> > On Tue, Sep 22, 2020 at 12:43:37PM +0800, JeffleXu wrote:
> > > Thanks for replying. Comments embedded below.
> > > 
> > > 
> > > On 9/13/20 10:00 PM, Ming Lei wrote:
> > > > On Fri, Sep 11, 2020 at 07:40:14PM +0800, JeffleXu wrote:
> > > > > Thanks for replying ;)
> > > > > 
> > > > > 
> > > > > On 9/11/20 7:01 PM, Ming Lei wrote:
> > > > > > On Fri, Sep 11, 2020 at 11:29:58AM +0800, Jeffle Xu wrote:
> > > > > > > Split bios of one source bio can be enqueued on different CPUs since
> > > > > > > the submit_bio() routine can be preempted or fall asleep. However, this
> > > > > > > behaviour can't work well with io polling.
> > > > > > Do you have a user-visible problem wrt. io polling? If yes, can you
> > > > > > provide more details?
> > > > > No, there's no practical example yet. It's only a hint from the code base.
> > > > > 
> > > > > 
> > > > > > > Currently block io polling only polls the hardware queue of the input bio.
> > > > > > > If one bio is split into several bios, one of which (bio 1) is enqueued
> > > > > > > on CPU A while the others are enqueued on CPU B, then polling for bio 1
> > > > > > > will continuously poll the hardware queue of CPU A, even though the other
> > > > > > > split bios may be in other hardware queues.
> > > > > > If it is guaranteed that the returned cookie is from bio 1, poll is
> > > > > > supposed to work as expected, since bio 1 is the chained head of these
> > > > > > bios, and the whole fs bio can be thought of as done when bio 1's
> > > > > > ->bi_end_io is called.
> > > > > Yes, it is, thanks for your explanation. But besides polling whether the
> > > > > input bio has completed, one important job of the polling logic is to reap
> > > > > the completion queue. Let's say one bio is split into two bios, bio 1 and
> > > > > bio 2, both of which are enqueued into the same hardware queue. When
> > > > > polling bio 1, though we know nothing about bio 2 at all, the polling logic
> > > > > itself is still reaping the completion queue of this hardware queue
> > > > > repeatedly, so it effectively reaps bio 2 as well.
> > > > > 
> > > > > Then what if these two split bios are enqueued into two different hardware
> > > > > queues? Let's say bio 1 is enqueued into hardware queue A, while bio 2 is
> > > > > enqueued into hardware queue B. When polling bio 1, though the polling
> > > > > logic is repeatedly reaping the completion queue of hardware queue A, it
> > > > > doesn't help reap bio 2; bio 2 is reaped by IRQ as usual. This certainly
> > > > > works currently, but this behaviour may deviate from the polling design?
> > > > > I'm not sure.
> > > > > 
> > > > > In other words, if we can ensure that all split bios are enqueued into the
> > > > > same hardware queue, then the polling logic *may* be faster.
> > > > __submit_bio_noacct_mq() returns the cookie from the last bio in
> > > > current->bio_list, and this bio should be the bio passed to
> > > > __submit_bio_noacct_mq() when bio splitting happens.
> > > > 
> > > > Suppose CPU migration happens during bio splitting; the last bio will be
> > > > submitted to the LLD much later than the other bios, so when blk_poll()
> > > > finds a completion on the hw queue of the last bio, the other bios will
> > > > usually have completed already.
> > > > 
> > > > Also, CPU migration itself causes much bigger latency, so it is reasonable
> > > > not to expect good IO performance when CPU migration is involved. And CPU
> > > > migration of an IO task shouldn't be happening frequently anyway. That said,
> > > > it should be fine to miss the poll in this situation.
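
A simplified sketch of that submission loop, loosely following the 5.9-era
__submit_bio_noacct_mq() (bio_queue_enter()/blk-crypto handling and error paths
are omitted, so take this as an illustration from memory rather than the exact
code):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/sched.h>

/*
 * Illustration only: the cookie handed back to the caller is whatever
 * blk_mq_submit_bio() returned for the *last* bio popped off
 * current->bio_list, i.e. the remainder of the original bio when a split
 * happens inside blk_mq_submit_bio().  Depending on the kernel version,
 * blk_mq_submit_bio() may be block-layer internal (block/blk-mq.h).
 */
static blk_qc_t submit_bio_noacct_mq_sketch(struct bio *bio)
{
	struct bio_list bio_list[2] = { };
	blk_qc_t ret = BLK_QC_T_NONE;

	current->bio_list = bio_list;
	do {
		/* may split 'bio' and queue the remainder on current->bio_list */
		ret = blk_mq_submit_bio(bio);
	} while ((bio = bio_list_pop(&bio_list[0])));
	current->bio_list = NULL;

	return ret;
}
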
> > > Yes, you're right. After diving into the code of the nvme driver, currently
> > > the nvme driver indeed allocates interrupts for polling queues,
> > No, the nvme driver doesn't allocate interrupts for poll queues; please see
> > nvme_setup_irqs().
> 
> Sorry, I was wrong here. Indeed, interrupts are disabled for IO queues in
> polling mode. Then this can be a problem.
> 
> If CPU migration happens, separate split bios can be enqueued into different
> polling hardware queues (e.g. queue 1 and queue 2). The caller is continuously
> polling on one of the polling hardware queues (e.g. queue 1), as indicated by
> the returned cookie. If there's no other thread polling on the other hardware
> queue (e.g. queue 2), the split bio on queue 2 will not be reaped since the
> interrupt of queue 2 is disabled. Finally, the completion of this bio (on
> queue 2) relies on the timeout mechanism.

OK, this looks like one real issue. I just found that a request can only be
completed explicitly via nvme_mq_ops.poll(). Without calling ->poll() on the
specified poll hw queue, the request can't be completed at all.
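
To make the failure mode concrete, here is roughly the pattern a polled caller
follows today, modeled on the 5.9-era __blkdev_direct_IO_simple() (simplified
and from memory, not verbatim). blk_poll() is only ever pointed at the hw queue
encoded in the returned cookie, so a chained split bio that landed on a
different poll hw queue is never reaped here, and with that queue's interrupt
disabled nothing else reaps it either:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/sched.h>

/*
 * Sketch of the synchronous polled-IO wait loop.  It assumes the bio's
 * ->bi_end_io clears bio->bi_private and wakes the submitting task, as
 * blkdev_bio_end_io_simple() does.  The loop spins in blk_poll() until
 * the whole chained bio completes, but it only reaps the hw queue
 * selected by the cookie 'qc'.
 */
static void polled_wait_sketch(struct block_device *bdev, struct bio *bio)
{
	blk_qc_t qc;

	bio->bi_private = current;
	qc = submit_bio(bio);		/* cookie of the last split bio */

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (!READ_ONCE(bio->bi_private))
			break;
		if (!blk_poll(bdev_get_queue(bdev), qc, true))
			io_schedule();
	}
	__set_current_state(TASK_RUNNING);
}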


Thanks,
Ming
