Mike Anderson <andmike@xxxxxxxxxx> writes:

> Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>> I may be a little off, but looking at the event types defined
>> (device down, device up), defining a completely new interface for
>> this looks absolutely absurd.
>>
>> This is device hotplug, isn't it? As such we should be using the
>> hotplug infrastructure and not reinventing the wheel here.
>
> I assume device hotplug means kobject_uevent and KOBJ_* events. The
> original intent was to have a little more structure in the data format
> than the env strings. I also wanted to reduce the number of allocations
> that were happening with GFP_KERNEL to send an event.
>
> Currently the patch is only supporting a couple of events, with the
> intent of adding more over time. I see that I could map most events to
> KOBJ_CHANGE; previously it did not seem like the correct fit.

Yes, kobject_uevent is what I was thinking of. Even if it means adding
a few more types to handle the situation, at first glance it seems to
fit well. I know that if I was looking for a notification that something
was added to or removed from a raid array, I would look at the generic
hotplug events first.

Possibly KOBJ_CHANGE isn't the proper fit; possibly you have something
that is sufficiently unique that it needs its own enumeration type. If
so we can fix the generic code. At the moment, though, KOBJ_CHANGE does
seem to describe what you are doing, and the event data you have doesn't
look like it would actually be a problem to fit into a user space
environment (see the rough sketch at the end of this mail).

As for the worry about kmallocs: do these events happen often? I would
not expect any of them in the normal course of operation of a system.
Worst case, you handle the extra kmallocs with a library function. It's
not like you are using GFP_ATOMIC.

Plus there are other little bonuses if these events are reported with
kobject_uevent, such as already existing user space tools to look at
them and do something, and a scripting infrastructure to take action
when things happen.

>> If it isn't hotplug it looks like something that inotify should
>> handle.
>>
>> If that isn't the case I am fairly certain that md already has a
>> mechanism to handle this, and those two should stay in sync
>> if at all possible on this kind of thing.
>
> Device mapper does have an "event happened" interface today, but after
> the event the user must determine the context of the event (dm also
> sends a kobject_uevent KOBJ_CHANGE only for a resume event). This patch
> was only affecting dm, but I know that md has similar infrastructure.
> This patch was passing out through netlink the event context that
> already existed but was lost through the current generic event
> interface.
>
> The existing event interfaces were left in place so as not to affect
> existing users, allowing migration over to a netlink interface over
> time.

I know sending some of the context with an event is important so that
you can filter out the events you don't need. Is the lack of context
actually a problem today?

>> So this appears to be a gratuitous user interface addition.
>> Why do we need a new user interface for this?
>
> While I understand Evgeniy's and David's comments about utilizing the
> genetlink interface, I guess I am not seeing how utilizing a netlink
> channel for a subsystem is a gratuitous user interface addition vs.
> running everything through kobject_uevent.
Gratuitous in the sense that if we already have a mechanism for
performing a function, we should not invent a new mechanism for doing
that same function for each different subsystem. The fewer unique types
of user interfaces we have, the easier it is to maintain the kernel, the
easier it is to write programs that work with several kernel subsystems,
and the easier it is for other developers to understand and use the
kernel.

This of course assumes things are a good fit, or that the other pieces
can be made to be a good fit. Currently I don't see what makes device
mapper's events not fit the existing models, and I don't see in the
description for this patch why we need to invent something new.

Eric
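
P.S. To make concrete what I mean by fitting the event data into a user
space environment, here is a rough, untested sketch of reporting a
device mapper path failure as a KOBJ_CHANGE uevent. The DM_ACTION,
DM_SEQNUM, and DM_PATH variable names (and the function itself) are made
up for illustration and are not taken from the patch; the only real
interface used is kobject_uevent_env().

	/* Hypothetical example, not taken from the patch under discussion. */
	#include <linux/kobject.h>
	#include <linux/kernel.h>

	static int dm_report_path_failed(struct kobject *kobj,
					 const char *path, u64 seqnum)
	{
		char action[32], seq[32], dev_path[128];
		char *envp[] = { action, seq, dev_path, NULL };

		/* Carry the event context as uevent environment strings. */
		snprintf(action, sizeof(action), "DM_ACTION=PATH_FAILED");
		snprintf(seq, sizeof(seq), "DM_SEQNUM=%llu",
			 (unsigned long long)seqnum);
		snprintf(dev_path, sizeof(dev_path), "DM_PATH=%s", path);

		/* kobject_uevent_env() does its allocations with GFP_KERNEL. */
		return kobject_uevent_env(kobj, KOBJ_CHANGE, envp);
	}

User space then sees the same context in the uevent environment and can
act on it with the existing udev/hotplug machinery rather than a new
netlink listener.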