On Wed, Apr 27, 2016 at 01:10:29PM +0200, Hannes Reinecke wrote:
> Since v214, udev has been placing a shared lock on the device node
> whenever it processes an event. This introduces a race condition
> with multipathd, as multipathd processes the event for the block
> device at the same time as udev is processing the events for the
> partitions. A lock on the partitions will also be visible on the
> block device itself, so multipathd won't be able to lock the device.
> When multipath does manage to take a lock on the device, udev will
> fail and consequently ignore the entire event, which in turn might
> cause the system to malfunction, as it might have been a crucial
> event like 'remove' or 'link down'.
>
> So we should use LOCK_SH here; with that, the flock call in
> multipathd _and_ udev will succeed and the events can be processed.

If we switch this to a shared lock, then what's the point in having it
at all? The whole point of lock_multipath is to keep multipath and
multipathd (or two concurrent calls to multipath) from trying to create
a device at the same time and both failing. Without an exclusive lock,
this won't stop that. We can either decide that this is an unlikely
scenario and drop the locking entirely, or we can have multipath create
its own lockfiles to prevent this issue without interfering with udev.
But unless I'm missing something, this patch won't actually do anything.
(Two sketches at the end of this message illustrate both points.)

-Ben

> Signed-off-by: Hannes Reinecke <hare@xxxxxxx>
> ---
>  libmultipath/configure.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/libmultipath/configure.c b/libmultipath/configure.c
> index 30c7259..ca20ba5 100644
> --- a/libmultipath/configure.c
> +++ b/libmultipath/configure.c
> @@ -546,7 +546,7 @@ lock_multipath (struct multipath * mpp, int lock)
>  		if (!pgp->paths)
>  			continue;
>  		vector_foreach_slot(pgp->paths, pp, j) {
> -			if (lock && flock(pp->fd, LOCK_EX | LOCK_NB) &&
> +			if (lock && flock(pp->fd, LOCK_SH | LOCK_NB) &&
>  			    errno == EWOULDBLOCK)
>  				goto fail;
>  			else if (!lock)
> --
> 2.6.6

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
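
[Editor's note: a minimal standalone sketch of the first point above.
Per flock(2), shared locks never conflict with one another, so after
this patch two concurrent multipath invocations would both be granted
the lock and the EWOULDBLOCK check degenerates into a no-op. The
temp-file path is invented for the demo; nothing below is from the
multipath-tools source.]

#include <sys/file.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Two independent open file descriptions on the same file;
	 * flock() treats them as separate lock holders. */
	int fd1 = open("/tmp/flock-demo", O_RDWR | O_CREAT, 0600);
	int fd2 = open("/tmp/flock-demo", O_RDWR | O_CREAT, 0600);

	if (fd1 < 0 || fd2 < 0)
		return 1;

	/* First holder takes a shared lock, as udev (and the patched
	 * lock_multipath) would. */
	flock(fd1, LOCK_SH | LOCK_NB);

	/* A second shared lock is granted: no mutual exclusion. */
	if (flock(fd2, LOCK_SH | LOCK_NB) == 0)
		printf("LOCK_SH vs LOCK_SH: both granted\n");
	flock(fd2, LOCK_UN);

	/* An exclusive request against the shared holder is refused
	 * with EWOULDBLOCK when LOCK_NB is set. */
	if (flock(fd2, LOCK_EX | LOCK_NB) < 0)
		printf("LOCK_EX vs LOCK_SH: EWOULDBLOCK\n");

	close(fd1);
	close(fd2);
	return 0;
}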
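
[Editor's note: a rough sketch of the lockfile alternative Ben
suggests: take an exclusive flock on a private file rather than on
the path devices themselves, so multipath instances serialize against
each other without ever contending with udev's shared lock on the
device nodes. The path /run/multipath.lock and the helper name are
illustrative assumptions, not code from the patch.]

#include <sys/file.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

/* Returns the lock fd on success, or -1 if another instance holds
 * the lock (or on error); the caller drops the lock with close().
 * The lockfile path is a made-up example. */
static int take_private_lock(void)
{
	int fd = open("/run/multipath.lock",
		      O_RDWR | O_CREAT | O_CLOEXEC, 0600);

	if (fd < 0)
		return -1;

	/* An exclusive lock here serializes concurrent multipath and
	 * multipathd device creation without touching the device
	 * nodes that udev also flock()s. */
	if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

Because this lock lives on a file udev never opens, udev's per-device
LOCK_SH and multipath's LOCK_EX can no longer interfere with each
other.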