Re: Replaying event for a libudev monitor

On Fri, Jan 2, 2009 at 19:02, Marcel Holtmann <marcel@xxxxxxxxxxxx> wrote:
>> >> > I think you get it pretty much. You could describe it as "daemon
>> >> > coldplug" for events for a specific RUN+="socket:*".
>> >> >
>> >> > Something similar to what you have with "udevadm test" at the moment,
>> >> > but with the limitation that only this one socket gets the events.
>> >>
>> >> You mean the "trigger", not the "test", right?
>> >
>> > I think that I meant a combination of both. The "test" nicely shows which
>> > RUN operations are meant to be executed.
>> >
>> >> > As mentioned before, the reason behind this is that without some kind of
>> >> > support I have to put matching rules into a *.rules file for runtime
>> >> > detection and another set of matching logic into the client using
>> >> > libudev enumeration. I prefer to have both pieces in the *.rules files
>> >> > since then it is easily changeable.
>> >>
>> >> That sounds nice, sure.
>> >>
>> >> > So I do see your point with the matching rules that run external
>> >> > programs. I wasn't thinking about them since so far the matching rules
>> >> > are kinda simple. I do want to avoid just sending all udev events to the
>> >> > monitor (like HAL and DeviceKit do), since that is just overhead, and
>> >> > re-implementing the matching code and scripts is just not a good idea.
>> >> > The things that udev provides right now are perfect.
>> >> >
>> >> > My current simple idea to solve this would be to add another
>> >> > udev_ctrl_msg_type that libudev then can use to trigger this.
>> >> >
>> >> > Looking at the code it seems that you identify the socket already using
>> >> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
>> >> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then we wrap this
>> >> > low-level command in udev_monitor_replay_events() for libudev. And
>> >> > then udevd is responsible for the threading, the invoking of programs, and
>> >> > making sure no other RUN+="socket:*" rules are executed.
>> >>
>> >> Maybe we could do something like:
>> >>   UDEV_CTRL_EVENT(socket-match, devpath, action)
>> >> to inject events into the daemon.
>> >>
>> >> We probably do not want the sysfs crawling logic running in the daemon.
>> >> The daemon would execute the single event, but ignore all RUN keys
>> >> without a matching socket string. We may use the enumerator to pass all
>> >> needed events to the daemon. One argument for udev_ctrl_send_event() is
>> >> the match for the RUN keys specified in the rules; only matching RUN
>> >> sockets would be executed.
>> >>
>> >> In many cases we need to limit the triggers to certain subsystems.
>> >> For example, you may want to ignore the "block" subsystem, if you don't
>> >> need it, with its possible 10,000+ block devices. :)
>> >>
>> >> In general I'm scared that people will use that and cause
>> >> hundreds/thousands of processes/threads with every daemon that needs to
>> >> initialize that way. It looks like the most correct solution from the
>> >> API/config side, because you have only a single rule that filters and
>> >> sends events, into which you hook your daemon code. But on the other
>> >> hand, it also sounds like a very wrong, and _very_ expensive, way to do a
>> >> "daemon initialization".
>> >>
>> >> People try to limit the current udev coldplug cost, and now we would
>> >> introduce the same thing for every daemon. :) We may not want to provide
>> >> such infrastructure; just imagine a system bootup where several daemons
>> >> trigger all devices, with a process/thread for every device on the
>> >> system.
>> >
>> > I started looking through the code and realized that there is potential
>> > for abuse (even if we limit it to UID 0). So I really think that we need
>> > some kind of facility to make this work, because, as explained, splitting
>> > matching rules between configuration files and code is bad.
>> >
>> > Maybe this would make it possible to have this functionality without the
>> > nasty overhead of the coldplug mess. The main assumption is that we have
>> > a rules file to begin with that defines which devices we are interested
>> > in and want to be able to monitor via libudev.
>> >
>> >        SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
>> >
>> >        TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
>> >
>> > Let's introduce another key (call it TAG for now) that allows us to tag
>> > certain matching rules and then only have these sent to a socket. Then
>> > we could write a daemon like this:
>> >
>> >        ctx = udev_new();
>> >        mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
>> >        udev_monitor_enable_receiving(mon);
>> >
>> >        /* setup watch etc. */
>> >
>> >        udev_monitor_replay_devices(mon, "MyDaemon");
>> >
>> > This would limit the replayed devices to the actual monitor socket and
>> > also to certain details inside the rules file. It is still possible to
>> > exploit this for global RUN actions, but that could be just forbidden.
>> >
>> > We might need to store the tag in the udev database, but it would be a
>> > minimal overhead. At least I assume that.
>> >
>> > In addition we could add an add_match helper to the enumeration API that
>> > allows applications that don't care about runtime monitoring to just list
>> > the devices with such a defined tag.
>> >
>> > Would this work?
>>
>> I think you can do all that already. You "tag" all your devices by
>> setting an ENV key, and use the API David mentioned in the other mail:
>>   http://git.kernel.org/?p=linux/hotplug/udev.git;a=commitdiff;h=f089350234e39b868a5e3df71a8f8c036aaae4fd
>>
>> The test program shows the usage:
>>   $ udev/lib/test-libudev
>>  ...
>>  enumerate 'property IF_FS_*=filesystem'
>>  device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10' (block)
>>  device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5' (block)
>>  device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6' (block)
>>  device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7' (block)
>>  device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9' (block)
>>  found 5 devices
>>  ...
>>
>> That way you use the enumeration API and get your devices. Isn't
>> that what you need?
>
> if that works then that would be good enough. I was under the assumption
> that the ENV settings are only temporary and used only during the rule
> matching itself. I will test it.

It should work. Or we will make it work. The API is just for that
exact use case. David and I will port the HAL ACL stuff over to udev,
and use that kind of "tags":
  http://git.kernel.org/?p=linux/hotplug/udev-extras.git;a=blob;f=udev-acl/70-acl.rules;hb=HEAD
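
For illustration, a rough sketch of the daemon side with the existing
enumeration API. The rule, the MY_DAEMON property name, and its value are
just placeholders for this mail, not anything udev defines:

  /* sketch only: MY_DAEMON is a placeholder property, set by a rule like
   *   SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ENV{MY_DAEMON}="1"
   */
  #include <stdio.h>
  #include <libudev.h>

  int main(void)
  {
          struct udev *udev = udev_new();
          struct udev_enumerate *enumerate;
          struct udev_list_entry *entry;

          if (udev == NULL)
                  return 1;

          /* collect all devices carrying the property set by the rule */
          enumerate = udev_enumerate_new(udev);
          udev_enumerate_add_match_property(enumerate, "MY_DAEMON", "1");
          udev_enumerate_scan_devices(enumerate);

          udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(enumerate)) {
                  struct udev_device *device;

                  /* each list entry carries a syspath; build the device from it */
                  device = udev_device_new_from_syspath(udev, udev_list_entry_get_name(entry));
                  if (device == NULL)
                          continue;
                  printf("device: '%s' (%s)\n",
                         udev_device_get_syspath(device),
                         udev_device_get_subsystem(device));
                  udev_device_unref(device);
          }

          udev_enumerate_unref(enumerate);
          udev_unref(udev);
          return 0;
  }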

> What do you think about still adding a:
>
>        udev_monitor_replay_devices(struct udev_monitor *, match_rule);
>
> That could be a shortcut for the enumeration API in case you are using a
> monitor anyway.

Do you really want to pass that over the socket, in most cases inside
the same process?

Why would we want to serialize the device data, send it, receive it,
de-serialize it? The monitor gives you a "struct udev_device"; we
could just make the enumerator also give you a list of "struct
udev_device" (instead of only a list of syspaths), so you would get
ready-to-use data without any socket operations. Isn't that easier and
more efficient in your use case?
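
To make that concrete, a rough sketch of how the two pieces could be
combined in the daemon (reusing the placeholder MY_DAEMON property from the
sketch above and the "@mydaemon_socket" name from your example): the initial
device set is built in-process from the enumerator's syspaths, and only the
runtime events arrive over the monitor socket:

  #include <poll.h>
  #include <stdio.h>
  #include <libudev.h>

  static void handle_device(struct udev_device *device)
  {
          printf("device: '%s'\n", udev_device_get_syspath(device));
  }

  int main(void)
  {
          struct udev *udev = udev_new();
          struct udev_monitor *mon;
          struct udev_enumerate *enumerate;
          struct udev_list_entry *entry;
          struct pollfd pfd;

          if (udev == NULL)
                  return 1;

          /* open the monitor first, so no events are lost while enumerating */
          mon = udev_monitor_new_from_socket(udev, "@mydaemon_socket");
          udev_monitor_enable_receiving(mon);

          /* existing devices: plain in-process enumeration, no serialization */
          enumerate = udev_enumerate_new(udev);
          udev_enumerate_add_match_property(enumerate, "MY_DAEMON", "1");
          udev_enumerate_scan_devices(enumerate);
          udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(enumerate)) {
                  struct udev_device *device;

                  device = udev_device_new_from_syspath(udev, udev_list_entry_get_name(entry));
                  if (device == NULL)
                          continue;
                  handle_device(device);
                  udev_device_unref(device);
          }
          udev_enumerate_unref(enumerate);

          /* future devices: delivered by udevd via RUN+="socket:@mydaemon_socket" */
          pfd.fd = udev_monitor_get_fd(mon);
          pfd.events = POLLIN;
          for (;;) {
                  struct udev_device *device;

                  if (poll(&pfd, 1, -1) <= 0)
                          continue;
                  device = udev_monitor_receive_device(mon);
                  if (device == NULL)
                          continue;
                  handle_device(device);
                  udev_device_unref(device);
          }
  }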

Kay
