On Thu, 2012-04-19 at 07:30 -0500, Anthony Liguori wrote:
> Hi,

Hi Anthony,

> As I've mentioned before in the past, I will not apply vhost-* without an
> extremely compelling argument for it.
>
> The reason we ultimately settled on vhost-net is that in the absence of a
> fundamental change in the userspace networking interface (with something
> like VJ channels), it's extremely unlikely that we would ever get the
> right interfaces to do zero-copy transmit/receive in userspace.
>
> However, for storage, be it SCSI or direct access, the same problem really
> doesn't exist. There isn't an obvious benefit to being in the kernel.

In the modern Linux v3.x tree, it was decided that there is an obvious
benefit to fabric driver developers in putting proper SCSI target logic
directly into the kernel.. ;)

> There are many downsides though. It's a big security risk. SCSI is a
> complex protocol and I'd rather the piece of code that's parsing and
> emulating SCSI CDBs was unprivileged and sandboxed than running within
> the kernel context.

It has historically been a security risk to do raw SCSI passthrough of
complex CDBs to underlying SCSI LLD code, because CDB emulation support
within specific LLDs had an easy and sure-fire chance of getting said
complex CDB emulation wrong (e.g., there was no generic library in the
kernel for LLDs until libata was created).

With Linux v3.x hosts we now have universal target mode support in the
upstream kernel for BLOCK+FILEIO backends with full SPC-3 (NAA IEEE
Registered Extended WWN naming, Explicit/Implicit ALUA multipath, and
Persistent Reservations), using a control plane based on ConfigFS that
represents parent/child and intra-module kernel data structure
relationships as objects at the VFS layer.

We also have the userspace rtslib library (supported in downstream
distros), which exposes the ConfigFS layout of tcm_vhost fabric endpoints
as easily scriptable Python library objects for higher level application
code.
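As a concrete illustration, here is a minimal sketch of creating a BLOCK
backend by hand through that ConfigFS control plane (the "iblock_0" HBA
name, "my_dev" object name, and /dev/sdb device path are hypothetical
examples; in practice rtslib wraps these steps):

```shell
# Load the generic target core and make sure ConfigFS is mounted
modprobe target_core_mod
mount -t configfs none /sys/kernel/config 2>/dev/null || true

# Create an IBLOCK backstore object under the generic target core
# ("iblock_0" and "my_dev" are example names; /dev/sdb is an assumed device)
mkdir -p /sys/kernel/config/target/core/iblock_0/my_dev

# Point the backstore at the underlying block device, then enable it
echo "udev_path=/dev/sdb" > /sys/kernel/config/target/core/iblock_0/my_dev/control
echo 1 > /sys/kernel/config/target/core/iblock_0/my_dev/enable
```

A fabric module such as tcm_vhost then exposes that object by creating a
LUN directory under its own ConfigFS tree and symlinking it back to
/sys/kernel/config/target/core/iblock_0/my_dev, which is the parent/child
relationship the VFS layout above is describing.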
> So before we get too deep in patches, we need some solid justification
> first.

So the potential performance benefit is one thing in vhost-scsi's favor.
I also think the ability to utilize the underlying TCM fabric and run
concurrent ALUA multipath using multiple virtio-scsi LUNs to the same
/sys/kernel/config/target/core/$HBA/$DEV/ backend can potentially give us
some nice flexibility when dynamically managing paths into the
virtio-scsi guest.

Also, since client-side ALUA has been supported across pretty much all
server-class SCSI clients in recent years, it ends up getting a lot of
usage in the SCSI world. It's a client-side SCSI multipath feature that
is fabric independent, and one that we know is already supported for free
across all Linux flavours + other modern server-class guests.

--nab