Hi all, I just wanted to share my current thoughts on support for
user-backed backstores in rtslib and targetcli, which I hope can be
developed and adopted in common across the -fb and Datera versions.
As background, user-backed backstores (also known as TCMU) allow the
processing of a LUN's commands to be passed through to a user process,
instead of being handled by one of LIO's kernel backstore modules. This
might be needed to work with a userspace-only API, or to implement
less-common SCSI command sets, such as streaming (tape) emulation, that
are not supported by the kernel backstores.
I think we should strive to make user backstores appear as much as
possible like the built-in kernel ones, and hide as much of their
added complexity as we can. We can make targetcli and/or rtslib
extensible so that installing a new userspace handler results in that
handler being listed as a backstore in targetcli.
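
One way targetcli could implement that discovery is to scan a
well-known plugin directory at startup and exec whatever dynbs_*.py
files it finds, so that dropping a file in place is all a handler
package needs to do. A minimal sketch -- the directory path and
function name below are placeholders, not anything agreed on:

import glob
import os

# Hypothetical plugin directory and naming convention -- placeholders
# only, not an agreed-on interface.
DYNBS_DIR = "/usr/share/targetcli/dynbs"

def load_dynamic_backstores(ui_backstores_node):
    # Exec each dynbs_*.py file so it can register its UI class under
    # the /backstores node, which we pass in as 'parent'.
    for path in sorted(glob.glob(os.path.join(DYNBS_DIR, "dynbs_*.py"))):
        namespace = {"parent": ui_backstores_node}
        with open(path) as f:
            exec(compile(f.read(), path, "exec"), namespace)

Each plugin then only has to define and instantiate its UI class, as in
the Gluster example below.
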
For example, a Gluster-backed handler could consist of two parts:
1) handler_gluster.so, part of the tcmu-runner daemon. This would
actually handle converting the SCSI commands received for a
userspace-backed LUN into Gluster API calls.
2) dynbs_gluster.py. This would be packaged along with
handler_gluster.so, but installed to a different directory, where
targetcli rather than tcmu-runner would find it. Targetcli discovers
and execs the file, which defines a UIGlusterBackstore class and
instantiates it. This puts
'gluster' in targetcli's tree, right alongside the built-in kernel-based
backstores. Its ui_command_create() does arg validation specific to
Gluster. If the args are valid, it then creates an instance of rtslib
UserBackedStorageObject, and this starts the chain of events that
results in handler_gluster.so being ready to accept commands for the new
storage object.
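
To make that concrete, here's a rough sketch of what dynbs_gluster.py
itself could contain. The base classes, constructor signatures, and the
UserBackedStorageObject config string shown here are assumptions about
interfaces we'd still need to settle, not the actual targetcli/rtslib
API:

# dynbs_gluster.py -- sketch only.  Assumes UIBackstore/UIStorageObject
# base classes similar to targetcli's kernel backstore UI classes, and
# a UserBackedStorageObject taking a name, size, and handler-specific
# config string; none of these interfaces are settled yet.
from targetcli.ui_backstore import UIBackstore, UIStorageObject
from rtslib import UserBackedStorageObject

class UIGlusterBackstore(UIBackstore):
    def __init__(self, parent):
        # 'gluster' becomes the node name under /backstores
        UIBackstore.__init__(self, 'gluster', parent)

    def ui_command_create(self, name, size, volume, path):
        '''
        Create a gluster-backed storage object of the given size,
        stored at 'path' on gluster volume 'volume'.
        '''
        # Gluster-specific argument validation goes here; real code
        # would use targetcli's usual error reporting.
        if not volume or '/' in volume:
            raise ValueError("invalid gluster volume name")
        size = int(size)  # real code would accept '2G', '512M', etc.

        # Creating the rtslib storage object kicks off the chain of
        # events that ends with handler_gluster.so accepting commands
        # for it.  The config string format is handler-defined; the
        # one below is only illustrative.
        so = UserBackedStorageObject(name, size=size,
                                     config="gluster/%s@%s" % (volume, path))
        UIStorageObject(so, self)

# targetcli execs this file with 'parent' bound to the /backstores
# node, so instantiating the class registers the new backstore type:
UIGlusterBackstore(parent)

Everything Gluster-specific stays in this one file plus the .so;
targetcli itself doesn't need to know anything about Gluster.
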
Here's a proof-of-concept of what targetcli integration might look like:
https://github.com/agrover/targetcli-fb/tree/user-backstore-poc
Regards -- Andy