Re: plans for user-backed device support

Andy Grover wrote:

> On 02/04/2015 09:50 AM, Alex Elsayed wrote:
>>>> Here's a proof-of-concept of what targetcli integration might look
>>>> like: https://github.com/agrover/targetcli-fb/tree/user-backstore-poc
>>>
>>> I have a couple questions about this.
>>>
>>> 1.) What about alternate frontend implementations that still want to use
>>> TCMU? By making the config module python, not only do you impose a
>>> presort on the implementations, you also likely have duplicated logic
>>> (the TCMU backend will need to validate parameters anyway, after all.)
>>>
>>> 2.) Why not have a dynbs_tcmu.py that talks to TCMU somehow, and have a
>>> few additional functions (explicit param validation, etc) added to the
>>> API that TCMU expects a backend to expose? That saves the backend
>>> implementer from having to care about rtslib's API; they just worry
>>> about the TCMU API they already were working with.
>>
>> (Or heck, even a tiny python lib that dlopen's the handler_*.so and calls
>> the TCMU-defined API bits via FFI)
> 
> Very good points. So you're saying we don't want to tie the user-handler
> discovery mechanism to our current configtool or its language.
> 
> It would also be nice to allow an alternate implementation of the TCMU
> handler daemon, which dictates a degree of abstraction going the other
> way. The current user-kernel interface is of course not dependent on
> tcmu-runner, but also lets unrelated processes handle different sets of
> user-backed backstores.
> 
> Maybe we step back and define a DBus interface that
> $tcmu-handler-daemons would implement, which would allow $configtools to
> enumerate handlers and pre-validate parameters. This would allow tight
> integration of user-handled backstores in targetcli, but also keep
> things loosely coupled enough to allow alternate implementations of
> either side.

I'd be quite happy with that - I considered suggesting it, in fact, but 
wasn't sure of the prevailing opinion re: dbus around here.
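
Just to make that concrete, here's a very rough sketch of what the daemon 
side could export via dbus-python - every interface, bus, and handler name 
below is invented for illustration, not anything tcmu-runner exposes today:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

IFACE = 'org.kernel.TCMUService.HandlerManager'   # invented interface name

class HandlerManager(dbus.service.Object):
    """One object per handler daemon; this is what a configtool talks to."""

    def __init__(self, bus, handlers):
        super(HandlerManager, self).__init__(bus, '/HandlerManager')
        self._handlers = handlers   # handler name -> validation callable

    @dbus.service.method(IFACE, out_signature='as')
    def ListHandlers(self):
        # Lets targetcli enumerate the handlers this daemon provides.
        return list(self._handlers)

    @dbus.service.method(IFACE, in_signature='ss', out_signature='bs')
    def CheckConfig(self, handler, config_str):
        # Pre-validate a config string before the backstore is created;
        # returns (ok, error message) without touching the kernel.
        if handler not in self._handlers:
            return (False, 'unknown handler: %s' % handler)
        return self._handlers[handler](config_str)

if __name__ == '__main__':
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SystemBus()
    name = dbus.service.BusName('org.kernel.TCMUService.Example', bus)  # invented
    HandlerManager(bus, {'file': lambda cfg: (True, '')})
    GLib.MainLoop().run()

The exact shape doesn't matter much; the point is that enumeration and 
parameter validation live behind a bus interface, rather than in a python 
module the configtool has to import.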

What would you do re: discovery, though?

(Explaining some DBus things, which readers may already be aware of)

In DBus, there's a two-level hierarchy: bus names hold objects (at object 
paths), and objects implement interfaces. Bus names are either the inherent, 
connection-level unique names of the :\d+\.\d+ form (e.g. :1.42), or 
human-readable well-known names (reverse DNS).

However, a well-known name can only have a single owner at a time - so 
discovering which bus names have objects that implement a given interface is 
non-trivial.

The approach taken by KDE is to suffix the well-known name with the PID
(org.kde.StatusNotifierItem-2055 or whatever), call ListNames, and filter in 
the client. This has the drawback of making DBus activation impossible.
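
The client side of that dance ends up looking roughly like this with 
dbus-python (the name prefix is made up):

import dbus

bus = dbus.SystemBus()
dbus_iface = dbus.Interface(
    bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus'),
    'org.freedesktop.DBus')

# ListNames() returns every name on the bus, unique and well-known alike,
# so the client has to pick out the ones matching the agreed-upon prefix.
handlers = [name for name in dbus_iface.ListNames()
            if name.startswith('org.kernel.TCMUHandler-')]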

Another approach is for every implementor to try to claim the well-known 
name and, on failure, contact the existing owner and ask it to republish 
their objects (possibly under a namespaced object path). This has the 
drawback of complicating the implementations somewhat, as well as limiting 
bus activation to a single 'default' implementation.
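
Roughly, each implementor would then do something like this on startup (the 
shared well-known name and the registrar interface are both made up):

import dbus
import dbus.bus

WELL_KNOWN = 'org.kernel.TCMUService'   # hypothetical shared name

def publish_objects(bus):
    # Stand-in for exporting this daemon's own dbus.service.Object instances.
    pass

bus = dbus.SystemBus()
reply = bus.request_name(WELL_KNOWN, dbus.bus.NAME_FLAG_DO_NOT_QUEUE)

if reply == dbus.bus.REQUEST_NAME_REPLY_PRIMARY_OWNER:
    # We won the name: export our objects directly.
    publish_objects(bus)
elif reply == dbus.bus.REQUEST_NAME_REPLY_EXISTS:
    # Someone else already owns it: ask them to republish our objects under
    # a namespaced path, e.g. /org/kernel/TCMUService/<backend>.
    registrar = dbus.Interface(
        bus.get_object(WELL_KNOWN, '/'),
        'org.kernel.TCMUService.Registrar')   # invented interface
    registrar.RegisterBackend('mybackend', bus.get_unique_name())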

A third approach would be to explicitly define a multiplexor, which backends 
ask to republish their objects. This simplifies the implementations, and the 
multiplexor could also provide its own API for requesting a backend by name, 
ensuring that backend's object is available. That could be driven by 
something as simple as a key-value mapping from backend name to a well-known 
DBus name specific to that backend, which the multiplexor calls to trigger 
service activation.
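
Internally the multiplexor wouldn't need much more than this (all names are 
made up; a real implementation might load the map from drop-in config files):

import dbus

# backend name -> the well-known name that backend's daemon claims
BACKENDS = {
    'glfs':  'org.kernel.TCMUHandler.glfs',
    'qcow2': 'org.kernel.TCMUHandler.qcow2',
}

def ensure_backend(bus, name):
    """Activate the named backend's daemon if needed, and return a proxy to
    its object (the object path is invented for the example)."""
    well_known = BACKENDS[name]
    # Requires a .service file installed for that well-known name.
    bus.start_service_by_name(well_known)
    return bus.get_object(well_known, '/org/kernel/TCMUHandler')

proxy = ensure_backend(dbus.SystemBus(), 'glfs')

That keeps each backend daemon trivial (claim your own name, export one 
object) and keeps the knowledge of which backends exist in exactly one place.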

Thoughts?




