On Tue, May 03, 2016 at 09:39:48PM -0500, Josh Poimboeuf wrote:
> On Wed, May 04, 2016 at 12:31:12AM +0200, Jiri Kosina wrote:
> > On Tue, 3 May 2016, Josh Poimboeuf wrote:
> >
> > > > 1. Do we really need a completion? If I am not missing something,
> > > > kobject_del() always waits for sysfs callers to leave thanks to kernfs
> > > > active protection.
> > >
> > > What do you mean by "kernfs active protection"? I see that
> > > kernfs_remove() gets the kernfs_mutex lock, but I can't find anywhere
> > > that a write to a sysfs file uses that lock.
> > >
> > > I'm probably missing something...
> >
> > I don't want to speak on Miroslav's behalf, but I'm pretty sure that what
> > he has in mind is the per-kernfs_node active refcounting kernfs does (see
> > kernfs_node->active, and especially its usage in __kernfs_remove()).
> >
> > More specifically, execution of the store() and show() sysfs callbacks is
> > guaranteed (by kernfs) to happen with that particular attribute's active
> > reference held for reading (and that makes it impossible for the
> > attribute to vanish prematurely).
>
> Thanks, that makes sense.
>
> So what exactly is the problem the completion is trying to solve? Is it
> to ensure that the kobject has been cleaned up before it returns to the
> caller, in case the user wants to call klp_register() again after
> unregistering?
>
> If so, that's quite an unusual use case which I think we should just
> consider unsupported. In fact, if you try to do it, kobject_init()
> complains loudly because kobj->state_initialized is still 1, since
> kobjects aren't meant to be reused like that.

... and now I realize the point is actually to prevent the caller from
freeing klp_patch before kobject_cleanup() runs.

So yeah, it looks like we need the completion in case
CONFIG_DEBUG_KOBJECT_RELEASE is enabled. Or alternatively we could
convert patch->kobj to be dynamically allocated instead of embedded in
klp_patch.

--
Josh
--
To unsubscribe from this list: send the line "unsubscribe live-patching" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
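
As background, the completion-based pattern discussed above might be
sketched roughly like this (names such as example_patch,
example_patch_release and the "finish" field are illustrative
placeholders, not the actual livepatch code): the release callback of
the embedded kobject fires a completion, and the unregister path waits
on it before returning, so the caller cannot free the containing
structure while kobject_cleanup() is still pending (which can be
deferred when CONFIG_DEBUG_KOBJECT_RELEASE is enabled).

	#include <linux/kobject.h>
	#include <linux/completion.h>

	/* Illustrative stand-in for a structure with an embedded kobject,
	 * roughly in the shape of struct klp_patch. */
	struct example_patch {
		struct kobject kobj;		/* embedded: shares lifetime with the patch */
		struct completion finish;	/* completed once the kobject is released */
	};

	/*
	 * kobject release callback: runs from kobject_cleanup(), which may be
	 * deferred when CONFIG_DEBUG_KOBJECT_RELEASE is enabled.
	 */
	static void example_patch_release(struct kobject *kobj)
	{
		struct example_patch *patch =
			container_of(kobj, struct example_patch, kobj);

		complete(&patch->finish);
	}

	static struct kobj_type example_patch_ktype = {
		.release	= example_patch_release,
		.sysfs_ops	= &kobj_sysfs_ops,
	};

	/* Error handling on registration failure is elided for brevity. */
	static int example_patch_register(struct example_patch *patch,
					  struct kobject *parent)
	{
		init_completion(&patch->finish);
		return kobject_init_and_add(&patch->kobj, &example_patch_ktype,
					    parent, "example_patch");
	}

	/*
	 * Unregister path: drop the reference and block until the release
	 * callback has actually run, so the caller may then free 'patch'.
	 */
	static void example_patch_unregister(struct example_patch *patch)
	{
		kobject_put(&patch->kobj);
		wait_for_completion(&patch->finish);
	}

The alternative mentioned above, allocating the kobject dynamically
instead of embedding it, would avoid the wait: the containing structure
no longer shares its memory with the kobject, so the release callback
can simply free the separately allocated kobject whenever
kobject_cleanup() eventually runs.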