On Thu, Nov 24, 2016 at 10:21:10AM +0100, Martin Wilck wrote:
> On Fri, 2016-11-18 at 16:26 -0600, Benjamin Marzinski wrote:
> > The issue is that we simply need to work on processing the whole
> > list at a time. Martin's filtering definitely has a place here. He
> > is correct that if we get a delete uevent for a device, we don't
> > need to process any of the earlier ones. We do need to look at both
> > the add and change uevents individually. For instance, one of the
> > change uevents out of a group could be for changing the read-only
> > status of a device. If we just took the most recent, we would miss
> > that information. The good news is that we don't need to actually
> > call uev_add_path and uev_update_path once for each of these
> > events. All we need to do is read through the uev environment
> > variables for all of them to find the information we need (like a
> > change in ro status), and then call the function once with that
> > information.
>
> Do we need to implement the environment-merging ourselves? AFAICS this
> logic is implemented in udev itself. So if the last event for a given
> device is a change event, we'd just need to fetch the device properties
> from the udev db. Therefore IMO we just have to look at the last
> received event for a given path:
>
> - DELETE => discard the device
> - ADD => use the udev properties coming with the event
> - CHANGE => query udev db for current set of properties

The issue is that when we get a change event, there may be udev
environment variables that tell us what that specific event is for,
for instance a change in the read-only status of the path device.
Future change events will not have those environment variables set.
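A rough sketch of this filtering-plus-merging idea (all of the names
here, such as struct uevent, merge_uevents, and the DISK_RO handling,
are illustrative and not taken from the actual multipath-tools source):

```c
/* Hypothetical sketch: filter and merge a queue of uevents for one
 * path device, so the caller can act once on the combined result. */
#include <stdlib.h>
#include <string.h>

enum uev_action { UEV_ADD, UEV_CHANGE, UEV_DELETE };

struct uevent {
	enum uev_action action;
	const char *envp[8];	/* NULL-terminated KEY=VALUE strings */
};

struct merged_info {
	int deleted;	/* a DELETE arrived: drop everything earlier     */
	int added;	/* an ADD arrived: the path must be (re)added    */
	int ro_changed;	/* some CHANGE event carried a DISK_RO variable  */
	int ro;		/* most recent read-only value seen              */
};

/* Look up KEY in the event's environment; return its value or NULL. */
static const char *uev_getenv(const struct uevent *uev, const char *key)
{
	size_t len = strlen(key);

	for (int i = 0; uev->envp[i]; i++)
		if (!strncmp(uev->envp[i], key, len) &&
		    uev->envp[i][len] == '=')
			return uev->envp[i] + len + 1;
	return NULL;
}

/* Walk the whole queue once, oldest event first.  A DELETE makes all
 * earlier events moot; ADD and CHANGE events are merged, scanning each
 * CHANGE for per-event variables (like DISK_RO) that later events will
 * not carry again. */
static struct merged_info merge_uevents(const struct uevent *uevs, int n)
{
	struct merged_info info = { 0 };

	for (int i = 0; i < n; i++) {
		const char *val;

		switch (uevs[i].action) {
		case UEV_DELETE:
			memset(&info, 0, sizeof(info));
			info.deleted = 1;
			break;
		case UEV_ADD:
			info.deleted = 0;
			info.added = 1;
			break;
		case UEV_CHANGE:
			info.deleted = 0;
			val = uev_getenv(&uevs[i], "DISK_RO");
			if (val) {
				info.ro_changed = 1;
				info.ro = atoi(val);
			}
			break;
		}
	}
	return info;
}
```

With something like this, the ro transition from an early change event
survives even when later change events in the queue do not mention it,
and the update function only needs to run once on the merged result.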
However, if we want to call uev_update_path only once, we need to know
everything that changed, so we have to examine the appropriate udev
environment variables for all of the change events. That way, when we
do call uev_update_path, we can take all the necessary actions (in this
instance, changing the read-only status of the multipath device).

> > At any rate, I'd rather get rid of the gazillion waiter threads
> > first.
>
> Hm, I thought the threads are good because this avoids one unresponsive
> device to stall everything?

There is work underway to make dm events pollable, so that you can wait
for any number of them with a single thread. At the moment, once we get
an event, we grab the vecs lock, which pretty much keeps everything
else from running anyway, so this doesn't really change that.

-Ben

> Martin
>
> --
> Dr. Martin Wilck <mwilck@xxxxxxxx>, Tel. +49 (0)911 74053 2107
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg)

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel