Re: [PATCH 3/7] multipathd: refresh multipath before calling set_no_path_retry()

On Thu, Jun 27, 2024 at 11:31:56PM +0200, Martin Wilck wrote:
> On Thu, 2024-06-27 at 14:39 -0400, Benjamin Marzinski wrote:
> > On Thu, Jun 27, 2024 at 02:33:00PM -0400, Benjamin Marzinski wrote:
> > > On Thu, Jun 27, 2024 at 09:47:33AM +0200, Martin Wilck wrote:
> > > > On Wed, 2024-06-05 at 19:22 -0400, Benjamin Marzinski wrote:


> > > Normally, check_path() will find out and resolve the issue within
> > > seconds. However, if the multipath device lost all its path devices,
> > > then check_path() won't ever call set_no_path_retry() on it
> > 
> > Err... It won't ever call update_multipath_strings(), which is
> > what will sync the mpp->features string with the kernel and make
> > set_no_path_retry() do the right thing.
> 
> Yeah, check_path() is not the right place for handling maps with no
> paths.
> 
> I have thought for a long time that the periodic checker, instead of
> blindly going through the paths vec, should follow the multipath vec,
> handle each path of the mpp in turn, and when that's done, act on the
> map. If we did it this way, we could also handle the special case of an
> empty map in the checker loop.

Not handling the multipath update in every check_path() call makes a lot
of sense. I personally think it's fine, and even a good thing, to
periodically resync the daemon's mpp state with the kernel state, just
to make sure we didn't hit a corner case, since we know that they are
possible. However, doing it once per checker interval makes more sense
than doing it once for every path per checker interval; I don't think
that buys us anything. We could even do it once every max checker
interval or something.
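
To put a rough shape on that, here's a toy sketch (made-up struct, field,
and helper names, not the real multipathd code):

/* Toy sketch only -- made-up names, not actual multipathd structures. */
struct map_stub {
	unsigned int ticks_since_sync;
};

/* stand-in for whatever re-reads the table/features from the kernel */
static void resync_map_with_kernel(struct map_stub *mpp)
{
	mpp->ticks_since_sync = 0;
}

/* run once per map per checker tick, instead of once per path check */
static void maybe_resync_map(struct map_stub *mpp, unsigned int sync_ticks)
{
	if (++mpp->ticks_since_sync >= sync_ticks)
		resync_map_with_kernel(mpp);
}

With sync_ticks set to 1 that's once per checker interval; raising it
would give the "once every max checker interval" behavior.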

The only real thing that resyncing on every path check changes is that
it lowers the chance that we could think the path is up while we check
it, when in reality the kernel has already marked it down but we haven't
gotten the event yet. Most of the time, when the kernel finds that a
path is down and immediately afterwards the path checker finds that it's
up, it's a problem with the path checker not correctly capturing the
actual usability of the path, and we're just going to ping-pong or mark
it as marginal. So there's rarely any advantage in doing this for every
path.


> Are you suggesting that we call refresh_multipath() (or
> update_multipath_table()) from set_no_path_retry()?

Not exactly. check_path() already calls update_multipath_strings(), which
is refresh_multipath() but without the unnecessary call to
dm_get_info(). We are probably already calling dm_get_info() more often
than we need to. But if we could remove that call from
refresh_multipath(), and only make it when necessary, we could probably
switch all the places where we sync the state and then call
set_no_path_retry() over to a renamed setup_multipath().
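
In other words, something with this shape (stand-in names and signatures
only, not the actual functions):

/* Stand-in types and names; the real multipathd signatures differ. */
struct map_state {
	int synced;	/* stand-in for the cached features/table state */
};

/* ~ update_multipath_strings(): refresh cached state, no dm_get_info() */
static int sync_state_from_kernel(struct map_state *mpp)
{
	mpp->synced = 1;
	return 0;
}

/* ~ set_no_path_retry(): decide queueing from the now-current state */
static void apply_no_path_retry(struct map_state *mpp)
{
	(void)mpp;
}

/* the "renamed setup_multipath()" idea: always resync, then apply */
static int resync_and_set_no_path_retry(struct map_state *mpp)
{
	if (sync_state_from_kernel(mpp) != 0)
		return 1;
	apply_no_path_retry(mpp);
	return 0;
}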


> Before 0f850db ("multipathd: clean up set_no_path_retry"), multipathd
> would always set queue_if_no_path to the value it deemed correct, and I
> think this was ok, and generally race-free. IIUC, the main purpose of
> 0f850db was to avoid unnecessary message ioctls (which would cause
> locking and flag manipulation in the kernel).
> 
> 0f850db introduced the check of the current features string as cached
> by multipathd. Later we added some patches to be more confident that
> this cached string correctly reflects the kernel state, but we can
> never be 100% sure, of course.
> 
> But if we now call refresh_multipath() every time in
> set_no_path_retry(), we'll be replacing one DM ioctl (the original
> dm_message()) with multiple ioctls, while still maintaining a race
> window. That wouldn't make sense to me.

My purpose in writing 0f850db wasn't to cut down on ioctls. It was to
keep set_no_path_retry() from incorrectly handling queueing in cases
where no_path_retry was set to a number, and the device had no usable
paths.  My purpose in this patch is also to make sure
set_no_path_retry() does the right thing. Obviously, this one is more of
a corner case, and if instead we were occasionally resyncing with the
kernel even when a multipath device had no paths, that would solve this
issue, and I'd be fine with dropping this patch.

So unless it conflicts with work you're already doing, I'll send a new
patchset that changes checkerloop() to pull the multipath device
updating work out of check_path(). I'm not sure that it makes a lot of
sense to change how we loop through the paths, since check_path() does
work on some paths that aren't assigned to a multipath device. Plus
different paths in a multipath device will need checking at different
times depending on their states anyway, so you can't just go through all
the paths at once.
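
Roughly, I'm picturing the loop ending up shaped like this (again just a
sketch with made-up names, not the real checkerloop()):

/* Made-up skeleton; not the real checkerloop(). */
struct mpp_stub { int dummy; };
struct path_stub {
	int check_due;		/* this path's own next-check time has come */
	struct mpp_stub *mpp;	/* NULL for paths not in any map */
};

static void check_one_path(struct path_stub *pp) { (void)pp; }  /* path checking only */
static void update_one_map(struct mpp_stub *mpp) { (void)mpp; } /* map-level resync/queueing */

static void checker_tick(struct path_stub **paths, int npaths,
			 struct mpp_stub **maps, int nmaps)
{
	int i;

	/* keep walking the pathvec: orphan paths still need checking,
	 * and each path checks on its own schedule */
	for (i = 0; i < npaths; i++)
		if (paths[i]->check_due)
			check_one_path(paths[i]);

	/* map-level work happens once per tick, even for maps that
	 * currently have no usable paths */
	for (i = 0; i < nmaps; i++)
		update_one_map(maps[i]);
}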

-Ben
 
> > > 
> > > Granted, there is an unavoidable race here. It's always possible
> > > that after multipathd updates the features string, it gets changed
> > > outside of its control. But this cuts the window down to something
> > > small instead of anytime after you lost all your paths. And if you
> > > get unlucky enough to hit that window, you can just rerun the
> > > command and it will work.
> > > 
> > > My main reason for wanting this is that cli_disable_queueing() (and
> > > friends) is not a command you run all the time. It's a command to
> > > run when you are cleaning up a mess. It doesn't have to be fast. It
> > > has to work when things are stuck.
> > > 
> > > Thoughts?
> 
> I remain somewhat confused how this should be handled in general; see
> above. But I won't insist on rejecting your patch.
> 
> Martin




