Re: [PATCH v2] fs-writeback: writeback_sb_inodes:Recalculate 'wrote' according skipped pages

On 4/18/22 1:43 PM, Linus Torvalds wrote:
> [ Adding some scheduler people - the background here is an ABBA
> deadlock because a plug never gets unplugged and the IO never starts
> and the buffer lock thus never gets released. That's simplified, see
>     https://lore.kernel.org/all/20220415013735.1610091-1-chengzhihao1@xxxxxxxxxx/
> and
>     https://bugzilla.kernel.org/show_bug.cgi?id=215837
> for details ]
> 
> On Mon, Apr 18, 2022 at 2:14 AM Zhihao Cheng <chengzhihao1@xxxxxxxxxx> wrote:
>>
>> In my test, the need_resched() check (introduced by commit 590dca3a71
>> "fs-writeback: unplug before cond_resched in writeback_sb_inodes") in
>> writeback_sb_inodes() seldom comes true, unless cond_resched() is deleted
>> from write_cache_pages().
> 
> So I'm not reacting to the patch, but just to this part of the message...
> 
> I forget the exact history of plugging, but at some point (long long
> ago - we're talking pre-git days) it was device-specific and always
> released on a timeout (or, obviously, explicitly unplugged).

That is correct, it used to be a tq_disk list that each queue could be
added to. This was back in the days when io_request_lock was a single
spinlock around all of bdev queuing, so quite a while ago :-)

> And then later it became per-process, and always released by task-work
> on any schedule() call.

kblockd kickoff from schedule, we never do task-work for unplug. It's
either done in-line if not from schedule, or punted to kblockd. But not
really relevant to the problem at hand...
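
(For the curious: the split is driven by the "from_schedule" flag,
roughly like this - a paraphrased sketch of the blk_flush_plug() path,
not the literal source, and details vary by kernel version:)

	/*
	 * Paraphrased sketch: from_schedule decides whether queued
	 * requests are issued in-line or handed off to the kblockd
	 * workqueue. No task-work is involved either way.
	 */
	void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
	{
		if (!list_empty(&plug->cb_list))
			flush_plug_callbacks(plug, from_schedule);
		if (!rq_list_empty(plug->mq_list))
			/* true == async: queue runs punted to kblockd */
			blk_mq_flush_plug_list(plug, from_schedule);
	}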

> But over time, that "any schedule" has gone away. It did so gradually,
> over time, and long ago:
> 
>   73c101011926 ("block: initial patch for on-stack per-task plugging")
>   6631e635c65d ("block: don't flush plugged IO on forced preemtion scheduling")
> 
> And that's *mostly* perfectly fine, but the problem ends up being that
> not everything necessarily triggers the flushing at all.
> 
> In fact, if you call "__schedule()" directly (rather than
> "schedule()") I think you may end up avoiding flush entirely. I'm
> looking at  do_task_dead() and schedule_idle() and the
> preempt_schedule() cases.
> 
> Similarly, tsk_is_pi_blocked() will disable the plug flush.
> 
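
As a reference for the above: the flush hangs off the schedule() wrapper,
not __schedule() itself, so anything that calls __schedule() directly
never reaches it. Roughly (paraphrased from kernel/sched/core.c, details
vary by version):

	static inline void sched_submit_work(struct task_struct *tsk)
	{
		/* still runnable (i.e. preempted)? no flush */
		if (task_is_running(tsk))
			return;

		/* (workqueue/io_worker sleeping notification elided) */

		/* the tsk_is_pi_blocked() case Linus mentions */
		if (tsk_is_pi_blocked(tsk))
			return;

		/*
		 * If we are going to sleep and we have plugged IO queued,
		 * make sure to submit it to avoid deadlocks.
		 */
		if (tsk->plug)
			blk_flush_plug(tsk->plug, true);
	}

	asmlinkage __visible void __sched schedule(void)
	{
		struct task_struct *tsk = current;

		sched_submit_work(tsk);	/* only here, not in __schedule() */
		do {
			preempt_disable();
			__schedule(SM_NONE);
			sched_preempt_enable_no_resched();
		} while (need_resched());
		sched_update_worker(tsk);
	}
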
> Back when it was a timer, the flushing was eventually guaranteed.
> 
> And then we would flush on any re-schedule, even if it was about
> preemption and the process might stay on the CPU.
> 
> But these days we can be in the situation where we really don't flush
> at all - the process may be scheduled away, but if it's still
> runnable, the blk plug won't be flushed.
> 
> To make things *really* confusing, doing an io_schedule() will force a
> plug flush, even if the process might stay runnable. So io_schedule()
> has those old legacy "unconditional flush" guarantees that a normal
> schedule does not any more.

I think that's mostly to avoid hitting it in the schedule path, as it
involves a lock juggle at that point. If you're doing io_schedule(),
presumably chances are high that you have queued IO.
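
The io_schedule() side is basically just this (again paraphrasing
kernel/sched/core.c, modulo version drift):

	int io_schedule_prepare(void)
	{
		int old_iowait = current->in_iowait;

		current->in_iowait = 1;
		/* unconditional flush, unlike plain schedule() */
		blk_flush_plug(current->plug, true);

		return old_iowait;
	}

	void __sched io_schedule(void)
	{
		int token;

		token = io_schedule_prepare();
		schedule();
		io_schedule_finish(token);
	}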

> Also note how the plug is per-process, so when another process *does*
> block (because it's waiting for some resource), that doesn't end up
> really unplugging the actual IO which was started by somebody else.
> Even if that other process is using io_schedule().
> 
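
(Concretely, the plug lives on the submitting task's stack and is tracked
via current->plug, which is why nobody else can flush it by blocking -
the usual pattern being:)

	struct blk_plug plug;

	blk_start_plug(&plug);	/* current->plug = &plug */
	/* submit a batch of bios; they accumulate on this task's plug */
	blk_finish_plug(&plug);	/* flush everything queued above */
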
> Which all brings us back to how we have that hacky thing in
> writeback_sb_inodes() that does
> 
>         if (need_resched()) {
>                 /*
>                  * We're trying to balance between building up a nice
>                  * long list of IOs to improve our merge rate, and
>                  * getting those IOs out quickly for anyone throttling
>                  * in balance_dirty_pages().  cond_resched() doesn't
>                  * unplug, so get our IOs out the door before we
>                  * give up the CPU.
>                  */
>                 blk_flush_plug(current->plug, false);
>                 cond_resched();
>         }
> 
> and that currently *mostly* ends up protecting us and flushing the
> plug when doing big writebacks, but as you can see from the email I'm
> quoting, it then doesn't always work very well, because
> "need_resched()" may end up being cleared by some other scheduling
> point, and is entirely meaningless when preemption is on anyway.
> 
> So I think that's basically just a random voodoo programming thing
> that has protected us in the past in some situations.
> 
> Now, Zhihao has a patch that fixes the problem by limiting the
> writeback by being better at accounting:
> 
>     https://lore.kernel.org/all/20220418092824.3018714-1-chengzhihao1@xxxxxxxxxx/
> 
> which is the email I'm answering, but I did want to bring in the
> scheduler people to the discussion to see if people have ideas.
> 
> I think the writeback accounting fix is the right thing to do
> regardless, but that whole need_resched() dance in
> writeback_sb_inodes() is, I think, a sign that we do have real issues
> here. That whole "flush plug if we need to reschedule" is simply a
> fundamentally broken concept, when there are other rescheduling
> points.
> 
> Comments?
> 
> The answer may just be that "the code in writeback_sb_inodes() is
> fundamentally broken and should be removed".
> 
> But the fact that we have that code at all makes me quite nervous
> about this. And we clearly *do* have situations where the writeback
> code seems to cause nasty unplugging delays.
> 
> So I'm not convinced that "fix up the writeback accounting" is the
> real and final fix.
> 
> I don't really have answers or suggestions, I just wanted people to
> look at this in case they have ideas.

Unless I'm missing something, this exclusively seems to be a problem
with being preempted (task scheduled out, still runnable), and the
original patch did flush for preemption. I wasn't aware of the writeback
code doing those need_resched() checks to explicitly work around not
flushing on preemption - that seems like a somewhat nasty work-around...

So as far as I can tell, we really have two options:

1) Don't preempt a task that has a plug active
2) Flush for any schedule out, not just going to sleep

1 may not be feasible if we're queueing lots of IO, which then leaves 2
(rough sketch below). Linus, do you remember what your original patch
here was motivated by? I'm assuming it was an efficiency thing, but do
we really have a lot of cases of IO submissions being preempted a lot
and hence making the plug less efficient than it should be at merging
IO? Seems unlikely, but I could be wrong.
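
To make 2 concrete, something along these lines - a rough, untested
sketch (the helper name is made up), not a proposed patch:

	/*
	 * Rough, untested sketch of option 2: flush whenever a task with
	 * plugged IO is scheduled out, even if it only got preempted and
	 * stays runnable. The preemption entry points (preempt_schedule()
	 * and friends) go straight to __schedule() today, so they would
	 * need to grow a call like this too. from_schedule == true punts
	 * the actual dispatch to kblockd, so nothing is submitted in-line
	 * from inside the scheduler.
	 */
	static inline void sched_flush_plug(struct task_struct *tsk)
	{
		if (tsk->plug)
			blk_flush_plug(tsk->plug, true);
	}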

-- 
Jens Axboe



