Re: multipath breaks with recent udev/systemd

On Wed, Dec 17, 2014 at 1:04 PM, Hannes Reinecke <hare@xxxxxxx> wrote:
> On 12/16/2014 11:18 PM, Benjamin Marzinski wrote:
>> On Tue, Dec 16, 2014 at 04:10:44PM -0600, Benjamin Marzinski wrote:
>>> On Mon, Dec 15, 2014 at 10:31:44AM +0100, Hannes Reinecke wrote:
> [ .. ]
>>>> So during bootup it's anyone's guess who's first, multipath or udev.
>>>> And depending on the timing either multipath will fail to setup
>>>> the device-mapper device or udev will simply ignore the device.
>>>> Neither of those is good, but the first is an absolute killer for
>>>> modern systems which _rely_ on udev to configure devices.
>>>>
>>>> So how is this supposed to work?
>>>> Why does udev ignore the entire event if it can't get the lock?
>>>> Shouldn't it rather be retried?
>>>> What is the supposed recovery here?
>>>
>>> Hannes, are you against the idea that Alexander mentioned in his first
>>> email, of just locking a file in /var/lock?  Multipathd doesn't create
>>> devices in parallel. Multipath doesn't create files in parallel.  We are
>>> explicitly trying to avoid multipath and multipathd creating files at
>>> the same time. So, we should only need a single file to lock, and
>>> /run/lock should always be there.
>>
>> O.k. So if we want to keep our current nonblocking behavior, we'll need
>> more lockfiles, either one per path or one per wwid.  This still seems
>> like a reasonable idea, if there is a good reason for systemd doing what
>> it's doing.
>>
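
Ben's lockfile idea can be sketched with flock(2); in Python it comes out roughly as below. The /run/lock location and the per-wwid file naming are assumptions for illustration, not a scheme multipath-tools has agreed on:

```python
import fcntl
import os

def lock_wwid(wwid, lockdir="/run/lock"):
    """Take an exclusive flock() on a per-wwid lockfile.

    Blocks until the peer (multipath vs. multipathd) releases the lock;
    returns the open fd, and closing that fd drops the lock.  The path
    scheme is hypothetical.
    """
    path = os.path.join(lockdir, "multipath-%s.lock" % wwid)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    fcntl.flock(fd, fcntl.LOCK_EX)
    return fd

def unlock_wwid(fd):
    """Release the lock and close the lockfile descriptor."""
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```

Since flock() locks belong to the open file description, the lock is also released automatically if the process dies, which avoids stale-lockfile cleanup.
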
> The problem is as follows:
>
> When multipathd is running we simply _cannot_ guarantee that no udev
> events are currently running.

All running block device events take a shared lock on the device
node. Could that be used?
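
For reference, the shared-vs-exclusive semantics can be demonstrated with flock(2) on ordinary descriptors; this sketch uses a plain temp file in place of a real device node:

```python
import fcntl
import os
import tempfile

def try_lock(fd, op):
    """Attempt a nonblocking flock(); return True if it was granted."""
    try:
        fcntl.flock(fd, op | fcntl.LOCK_NB)
        return True
    except BlockingIOError:
        return False

# A plain temp file stands in for the device node here.
node = tempfile.NamedTemporaryFile(delete=False).name
a = os.open(node, os.O_RDONLY)  # e.g. one udev worker
b = os.open(node, os.O_RDONLY)  # e.g. a second event for the same device
c = os.open(node, os.O_RDONLY)  # e.g. multipath wanting exclusive access

assert try_lock(a, fcntl.LOCK_SH)      # shared locks...
assert try_lock(b, fcntl.LOCK_SH)      # ...stack freely
assert not try_lock(c, fcntl.LOCK_EX)  # exclusive conflicts with any of them
```
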

> This currently hits us especially hard
> during system startup, when device probing is still running while
> multipathd starts up.
> Multipathd will then enumerate all block devices to setup the
> initial topology.
> But in doing so it might trip over devices which are still being
> processed by udev (or, worse still, _not yet_ processed by udev).
> (Yes, I know, libudev_enumerate should protect against this.
>  But it doesn't. )

There is udev_enumerate_match_is_initialized(); could that help here?

> So it's anyone's guess what'll happen now: either multipath trips over
> the lock from udev when calling 'lock_multipath' (and consequently
> fails to set up the multipath device), or udev trips over the lock
> from multipath and ignores the event,
> leaving us with a non-functioning device.
>
> We can fix up the startup sequence (which we need to do anyway, given
> the libudev enumerate bug) to just re-trigger all block device
> events, but this still doesn't fix the actual issue.
> Point is, there might be _several_ events for the same device being
> queued (think of a flaky path with several PATH_FAILED /
> PATH_REINSTATED events in a row), and so multipathd might be
> processing one event for the device while udev is processing the
> next event for the same device.
>
> For this to work we need some synchronization with udev; _if_ there
> would be a libudev callout for 'is there an event for this device
> running' we can easily fail the 'lock_multipath' operation, knowing
> that we'll be getting another event shortly for the same device.
>
> Alternatively we can call flock(LOCK_EX) on that device, but that
> will only work if udev does _not_ abort event handling for that
> device, but rather retries it.

Exclusive locks signify "device ownership" and udev will suppress all
events to not disturb the owner's operations. There can't be any udev
dependencies or expectations of udev doing something during the time
an exclusive lock is held.

> After all, there _is_ a device timeout in udev. It should be
> relatively easy to retry the event and let it run into a timeout if
> the lock won't be released.

We should avoid any complex rules here, and if possible just reduce
the scheme to a simple ownership model.
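
Under that ownership model, the retry-into-timeout idea quoted above might look roughly like this. The timeout and polling interval are invented values, and a real udev would retry as part of its event handling rather than by polling:

```python
import fcntl
import os
import time

def claim_device(path, timeout=30.0, interval=0.5):
    """Retry a nonblocking exclusive flock() until granted or timed out.

    A sketch of 'retry the event and let it run into a timeout'; on
    success the caller "owns" the device via the returned fd.
    """
    fd = os.open(path, os.O_RDONLY)
    deadline = time.monotonic() + timeout
    while True:
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd  # lock granted: we now own the device
        except BlockingIOError:
            if time.monotonic() >= deadline:
                os.close(fd)
                raise TimeoutError("device %s still locked" % path)
            time.sleep(interval)
```
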

Thanks,
Kay

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



