Re: [PATCH v3 15/15] multipathd: enable pathgroups in checker_finished()

On Fri, Jan 17, 2025 at 09:27:38PM +0100, Martin Wilck wrote:
> multipathd calls enable_group() from update_path_state() if a path in a
> previously disabled pathgroup is reinstated. This call may be mistakenly
> skipped if the path group status wasn't up-to-date while update_path_state()
> was executed. This can happen after applying the previous patch "multipathd:
> sync maps at end of checkerloop", if the kernel has disabled the group during
> the last checker interval.
> 
> Therefore add another check in checker_finished() after calling sync_mpp(),
> and enable groups if necessary. This step can be skipped if the map was
> reloaded, because after a reload, all pathgroups are enabled by default.

The logic running in checker_finished() isn't the same as the logic
running in update_path_state(). In update_path_state(), we only enable a
pathgroup when a path switches to PATH_UP. In checker_finished(), we
enable it whenever a path is in PATH_UP. I worry that this might not
always be correct. Perhaps instead of just checking if the path is in
PATH_UP, we should also make sure that pp->is_checked isn't set to
CHECK_PATH_SKIPPED, which would mean that the checker is pending or
messed up, and the path is still reporting its old state. This still
assumes that the path switched to some other state before the pathgroup
got delayed in re-initializing again, but that seems pretty safe. I just
don't want to go re-enabling pathgroups where the controller is actually
busy, since I'm pretty sure that the kernel usually does the right thing
without multipathd's help.
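
Concretely, the path loop in enable_pathgroups() would just gain one
extra condition. Something like this (untested sketch; I'm assuming
pp->is_checked is set to CHECK_PATH_SKIPPED whenever the checker didn't
actually run for the path in this interval):

		vector_foreach_slot(pgp->paths, pp, j) {
			/*
			 * Only trust PATH_UP if the checker actually ran
			 * this tick; a skipped or pending check means
			 * pp->state may be stale.
			 */
			if (pp->state != PATH_UP ||
			    pp->is_checked == CHECK_PATH_SKIPPED)
				continue;

			if (dm_enablegroup(mpp->alias, i + 1) == 0) {
				condlog(2, "%s: enabled pathgroup #%i",
					mpp->alias, i + 1);
				pgp->status = PGSTATE_ENABLED;
			} else
				condlog(2, "%s: failed to enable pathgroup #%i",
					mpp->alias, i + 1);
			break;
		}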

Alternatively, we could make the checker_finished() logic match the
update_path_state() logic exactly by just setting a flag in the path's
pathgroup during enable_group() (and possibly not call dm_enablegroup()
at all, but I suppose that there could be a benefit to re-enabling the
group as soon as possible. I'm still kinda fuzzy on whether the kernel's
own pathgroup re-enabling code makes all this redundant). Then, in
enable_pathgroups(), instead of checking each path, we would just need
to check whether pgp->need_reenable is set for the pathgroup.
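
Sketched out (untested; need_reenable would be a new field on struct
pathgroup that I'm making up here), enable_group() would just do
something like

	/* remember that this group needs to be re-enabled */
	pgp->need_reenable = 1;

for the path's pathgroup, and enable_pathgroups() would no longer need
to look at the paths at all:

static void enable_pathgroups(struct multipath *mpp)
{
	struct pathgroup *pgp;
	int i;

	vector_foreach_slot(mpp->pg, pgp, i) {
		if (!pgp->need_reenable || pgp->status != PGSTATE_DISABLED)
			continue;
		pgp->need_reenable = 0;
		if (dm_enablegroup(mpp->alias, i + 1) == 0) {
			condlog(2, "%s: enabled pathgroup #%i",
				mpp->alias, i + 1);
			pgp->status = PGSTATE_ENABLED;
		} else
			condlog(2, "%s: failed to enable pathgroup #%i",
				mpp->alias, i + 1);
	}
}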

The other benefit to using a flag like pgp->need_reenable is that we
could clear it on all pathgroups if we called switch_pathgroups(), since
that will cause the kernel to enable all the pathgroups anyways, which
makes our pgp->status invalid. Although we probably should also update
pgp->status to be PGSTATE_ENABLED (or at least PGSTATE_UNDEF) if we
were messing with the pathgroups in switch_pathgroups(). And if we
update pgp->status, that would avoid unnecessary re-enables with my
first idea as well (since no pathgroups would be disabled anymore). 
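
I.e., wherever we actually issue the switch, something along these lines
(same hypothetical field as above):

	/*
	 * After a successful switch the kernel re-enables every pathgroup,
	 * so our cached per-group state is no longer meaningful.
	 */
	vector_foreach_slot(mpp->pg, pgp, i) {
		pgp->need_reenable = 0;
		if (pgp->status == PGSTATE_DISABLED)
			pgp->status = PGSTATE_ENABLED;
	}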

But perhaps the best answer is just to say that this is a corner case
where we skip a multipathd action that might be totally unnecessary. And
even if it is a problem, it will be fixed if multipathd ever decides
that the kernel is using the wrong pathgroup and should switch, or
whenever the table gets reloaded. Maybe the whole patch is unnecessary.

Thoughts? I'm clearly thinking too much.
-Ben

> 
> Signed-off-by: Martin Wilck <mwilck@xxxxxxxx>
> ---
>  multipathd/main.c | 34 +++++++++++++++++++++++++++++++++-
>  1 file changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/multipathd/main.c b/multipathd/main.c
> index 310d7ef..98abadc 100644
> --- a/multipathd/main.c
> +++ b/multipathd/main.c
> @@ -2917,6 +2917,34 @@ update_paths(struct vectors *vecs, int *num_paths_p, time_t start_secs)
>  	return CHECKER_FINISHED;
>  }
>  
> +static void enable_pathgroups(struct multipath *mpp)
> +{
> +	struct pathgroup *pgp;
> +	int i;
> +
> +	vector_foreach_slot(mpp->pg, pgp, i) {
> +		struct path *pp;
> +		int j;
> +
> +		if (pgp->status != PGSTATE_DISABLED)
> +			continue;
> +
> +		vector_foreach_slot(pgp->paths, pp, j) {
> +			if (pp->state != PATH_UP)
> +				continue;
> +
> +			if (dm_enablegroup(mpp->alias, i + 1) == 0) {
> +				condlog(2, "%s: enabled pathgroup #%i",
> +					mpp->alias, i + 1);
> +				pgp->status = PGSTATE_ENABLED;
> +			} else
> +				condlog(2, "%s: failed to enable pathgroup #%i",
> +					mpp->alias, i + 1);
> +			break;
> +		}
> +	}
> +}
> +
>  static void checker_finished(struct vectors *vecs, unsigned int ticks)
>  {
>  	struct multipath *mpp;
> @@ -2943,12 +2971,16 @@ static void checker_finished(struct vectors *vecs, unsigned int ticks)
>  				i--;
>  				continue;
>  			}
> -		} else if (prio_reload || failback_reload || ghost_reload || inconsistent)
> +		} else if (prio_reload || failback_reload || ghost_reload || inconsistent) {
>  			if (reload_and_sync_map(mpp, vecs) == 2) {
>  				/* multipath device deleted */
>  				i--;
>  				continue;
>  			}
> +		} else
> +			/* not necessary after map reloads */
> +			enable_pathgroups(mpp);
> +
>  		/* need_reload was cleared in dm_addmap and then set again */
>  		if (inconsistent && mpp->need_reload)
>  			condlog(1, "BUG: %s; map remained in inconsistent state after reload",
> -- 
> 2.47.1
