On Monday 19 October 2015 10:59:07 Andy Grover wrote:
> On 02/04/2015 03:31 PM, Andy Grover wrote:
> > On 02/04/2015 12:21 PM, Alex Elsayed wrote:
> >>> Maybe we step back and define a DBus interface
> >>
> >> I'd be quite happy with that - I considered suggesting it, in fact, but
> >> wasn't sure of the prevailing opinion re: dbus around here.
> >>
> >> What would you do re: discovery, though?
> >>
> >> (Explaining some DBus things, which readers may already be aware of)
> >>
> >> In DBus, there's a two-level hierarchy of busnames holding objects.
> >> Busnames are either the inherent, connection-level one of the :\d+\.\d+
> >> form, or the human-readable well-known form (reverse DNS). Objects,
> >> then, implement interfaces.
> >>
> >> However, a well-known name can only have a single owner - so
> >> discovering which busnames have objects that implement an interface is
> >> non-trivial.
> >>
> >> The approach taken by KDE is to suffix the well-known name with the PID
> >> (org.kde.StatusNotifierItem-2055 or whatever), call ListNames, and
> >> filter in the client. This has the drawback of making DBus activation
> >> impossible.
> >>
> >> Another approach is for every implementor to try to claim the
> >> well-known name, and on failure contact the existing owner to republish
> >> their objects (possibly under a namespaced object path). This has the
> >> drawback of complicating the implementation somewhat, as well as making
> >> bus activation only able to activate a single 'default' implementation.
> >>
> >> A third approach would be to explicitly define a multiplexor, which
> >> backends ask to republish their objects. This simplifies
> >> implementations, and it could also provide its own API that requests a
> >> backend by name and ensures that backend's object is available.
> >> This could be driven by something as simple as a key-value mapping
> >> from backend name to a well-known DBus name specific to that backend,
> >> which the multiplexor calls to trigger service activation.
> >>
> >> Thoughts?
> >
> > It really seems to come down to: will multiple independent user-handler
> > daemons be needed? Because I'm trying really hard to make tcmu-runner
> > good enough so that the answer is no :-)
>
> Hi Alex,
>
> Given the interest that Rancher and QEMU have expressed for integrating
> processing of user handlers into their own event loops, it's clear now
> that we *do* need to support multiple implementors. The third approach
> you describe above is looking very desirable, to present users with a
> unified view of what handlers are available.

In that case, one example of successfully using that pattern may be of
interest: the Telepathy instant-messaging framework. Specifically, the
"mission-control-5" daemon acts as a multiplexer in almost exactly that
manner for the various connection managers, each of which handles one or
more protocols.

> One interesting wrinkle is that Oleg from Rancher is already making
> progress libifying bits of tcmu-runner. This gives us the opportunity to
> implement backend-to-multiplexor coordination in a library that backends
> just have to link to and use.

That would certainly be nice!

> So...this is turning into a whole big thing after all. I just wanted to
> ping you since we started discussing this back in February, and see if
> you had any more advice on how best to proceed, either about the design
> or the implementation.
>
> Thoughts?
>
> Are there any other projects we should be using as a model?

As mentioned above, Telepathy is likely the most relevant - especially
seeing as there are libraries, such as telepathy-qt and telepathy-glib,
which similarly abstract over the multiplexing concern.
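As an aside, a toy sketch of the client-side filtering in the KDE-style
discovery approach might help make the drawback concrete. The name list here
is hard-coded for illustration; a real client would obtain it by calling
org.freedesktop.DBus.ListNames on the bus:

```python
import re

# KDE-style discovery: a well-known prefix with the implementor's PID
# appended, e.g. org.kde.StatusNotifierItem-2055.
PID_SUFFIXED = re.compile(r"^org\.kde\.StatusNotifierItem-\d+$")

def discover(bus_names):
    """Filter a ListNames() result down to matching implementors."""
    return [name for name in bus_names if PID_SUFFIXED.match(name)]

# Stand-in for the result of org.freedesktop.DBus.ListNames.
names = [
    ":1.42",
    "org.freedesktop.DBus",
    "org.kde.StatusNotifierItem-2055",
    "org.kde.StatusNotifierItem-3100",
]
print(discover(names))
```

Note that since the full name is not known in advance, nothing can be
registered for bus activation - which is exactly the drawback described
above.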
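For the third approach, the multiplexor's key-value mapping really can be
that simple. A rough sketch, with all backend and bus names made up purely
for illustration (the `activate` callback stands in for a
StartServiceByName-style call to the bus):

```python
# Hypothetical mapping from backend name to that backend's well-known
# DBus name; in practice this could be loaded from configuration.
BACKENDS = {
    "qcow": "org.example.tcmu.backend.qcow",
    "glfs": "org.example.tcmu.backend.glfs",
}

def ensure_backend(name, activate):
    """Look up a backend's well-known name and ask the bus to activate it.

    `activate` is a placeholder for a real service-activation call such
    as org.freedesktop.DBus.StartServiceByName.
    """
    try:
        busname = BACKENDS[name]
    except KeyError:
        raise LookupError(f"no such backend: {name}") from None
    activate(busname)
    return busname

started = []
print(ensure_backend("qcow", started.append))
print(started)
```

Because each backend's well-known name is fixed and listed up front, bus
activation works per-backend, which neither of the first two approaches can
offer.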
--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html