On 12/26/2016 1:10 AM, Alex Williamson wrote:
> On Sun, 25 Dec 2016 22:39:47 +0530
> Kirti Wankhede <kwankhede@xxxxxxxxxx> wrote:
>
>> On 12/23/2016 1:51 AM, Alex Williamson wrote:
>>> Using the mtty mdev sample driver we can generate a remove race by
>>> starting one shell that continuously creates mtty devices and several
>>> other shells all attempting to remove devices, in my case four remove
>>> shells.  The fault occurs in mdev_remove_sysfs_files() where the
>>> passed type arg is NULL, which suggests we've received a struct device
>>> in mdev_device_remove() but it's in some sort of teardown state.  The
>>> solution here is to make use of the accidentally unused list_head on
>>> the mdev_device such that the mdev core keeps a list of all the mdev
>>> devices.  This allows us to validate that we have a valid mdev before
>>> we start removal, remove it from the list to prevent others from
>>> working on it, and if the vendor driver refuses to remove, we can
>>> re-add it to the list.
>>>
>>
>> Alex,
>>
>> Writing 1 to 'remove' first removes the attribute itself, i.e. it calls
>> device_remove_file_self(dev, attr). So if the file has already been
>> removed, shouldn't device_remove_file_self() return false?
>> kernfs_remove_self() holds a mutex that should handle this condition.
>
> In theory, I agree.  In practice I was able to generate the race
> described.  We're getting through to call mdev_device_remove with
> a struct device that resolves to an mdev where the type_kobj is
> NULL, presumably it's been freed.  Maybe there's a better fix
> within kernfs, but this sanitizes the mdev on our end to resolve
> it.  To see the issue, simply run 'while true; do uuidgen >
> create; done', then from a few other shells loop finding mdev
> devices and remove any that are found.  Set dmesg to only print
> critical messages or else it'll slow create and delete to the
> point where it'll be difficult to get the race.  Thanks,
>

I see. pci-sysfs also uses a mutex around its remove function even after
device_remove_file_self() returns true. Yes, kernfs might have a better
fix. This change looks good to me.

Thanks,
Kirti
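
For reference, a rough sketch of the list-based validation Alex describes
above. The names here (mdev_list, mdev_list_lock, the 'next' list_head
field, to_mdev_device() and mdev_device_remove_ops()) are illustrative
assumptions, not necessarily what the final patch uses:

	#include <linux/device.h>
	#include <linux/list.h>
	#include <linux/mutex.h>

	static LIST_HEAD(mdev_list);		/* all known mdev devices */
	static DEFINE_MUTEX(mdev_list_lock);	/* protects mdev_list */

	static int mdev_device_remove(struct device *dev, bool force_remove)
	{
		struct mdev_device *mdev, *tmp;
		bool found = false;
		int ret;

		mdev = to_mdev_device(dev);

		mutex_lock(&mdev_list_lock);
		list_for_each_entry(tmp, &mdev_list, next) {
			if (tmp == mdev) {
				found = true;
				break;
			}
		}

		/*
		 * Take the device off the list while holding the lock so a
		 * concurrent remover can no longer find it.
		 */
		if (found)
			list_del(&mdev->next);
		mutex_unlock(&mdev_list_lock);

		/* Not on the list: another remover already won the race. */
		if (!found)
			return -ENODEV;

		ret = mdev_device_remove_ops(mdev, force_remove);
		if (ret) {
			/* Vendor driver refused; make the mdev visible again. */
			mutex_lock(&mdev_list_lock);
			list_add(&mdev->next, &mdev_list);
			mutex_unlock(&mdev_list_lock);
			return ret;
		}

		/* ... proceed with sysfs cleanup and device_unregister() ... */
		return 0;
	}

mdev_device_create() would take the same mutex and add the new device to
mdev_list, so creation and removal always agree on which devices exist.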
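For context on the sysfs side, the "remove itself first" pattern Kirti
refers to looks roughly like the following, loosely modeled on what
pci-sysfs does; remove_lock and do_device_teardown() are hypothetical
placeholders for the driver's own lock and teardown path:

	static DEFINE_MUTEX(remove_lock);	/* serializes teardown */

	static ssize_t remove_store(struct device *dev,
				    struct device_attribute *attr,
				    const char *buf, size_t count)
	{
		unsigned long val;

		if (kstrtoul(buf, 0, &val) < 0)
			return -EINVAL;

		/*
		 * device_remove_file_self() removes this attribute and
		 * returns true only for the writer that actually removed
		 * it; later writers should see false.
		 */
		if (val && device_remove_file_self(dev, attr)) {
			/* Still serialize the real teardown, as pci-sysfs does. */
			mutex_lock(&remove_lock);
			do_device_teardown(dev);
			mutex_unlock(&remove_lock);
		}

		return count;
	}

The discussion above is about why the self-removal alone did not prevent
the race for mdev, hence the extra list-based validation in the mdev core.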