Re: [LSF/VM TOPIC] Handling of invalid requests in virtual HBAs

On Thu, 2010-04-01 at 10:15 +0200, Hannes Reinecke wrote:
> Hi all,
> 

Greetings Hannes,

Just a few comments on your proposal..

> [Topic]
> Handling of invalid requests in virtual HBAs
> 
> [Abstract]
> This discussion will focus on the problem of correct request handling with virtual HBAs.
> For KVM I have implemented a 'megasas' HBA emulation which serves as a backend for the
> megaraid_sas linux driver.
> It is now possible to connect several disks from different (physical) HBAs to that
> HBA emulation, each having different logical capabilities wrt transfer size,
> sgl size, sgl length, etc.
> 
> The goal of this discussion is how to determine the 'best' capability setting for the
> virtual HBA, and how to handle hotplug scenarios where a disk might be plugged in
> whose settings are incompatible with those the virtual HBA is currently using.
> 

Most of what you are describing here in terms of having a kernel target
enforce underlying LLD limitations for LUNs is already available in TCM
v3.x.  Current TCM code will automatically handle the processing of a
single DATA_SG_IO CDB generated by KVM Guest + megasas emulation that
exceeds the underlying LLD max_sectors, and generate multiple internal
se_task_t's in order to complete the original I/O generated by KVM
Guest + megasas.
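
To make the splitting concrete, here is a rough userspace sketch of the
arithmetic involved (not the actual TCM code paths; submit_se_task() is
a hypothetical stand-in for dispatching one internal se_task_t):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for dispatching one internal se_task_t */
static void submit_se_task(uint64_t lba, uint32_t sectors)
{
        printf("se_task: lba=%llu sectors=%u\n",
               (unsigned long long)lba, sectors);
}

/*
 * Split one guest-generated I/O into chunks that respect the
 * underlying LLD max_sectors, queueing one internal task per chunk.
 */
static void split_guest_io(uint64_t lba, uint32_t sectors,
                           uint32_t lld_max_sectors)
{
        while (sectors) {
                uint32_t chunk = sectors < lld_max_sectors ?
                                 sectors : lld_max_sectors;

                submit_se_task(lba, chunk);
                lba += chunk;
                sectors -= chunk;
        }
}

int main(void)
{
        /* e.g. a 1024-sector I/O against an LLD with max_sectors=128 */
        split_guest_io(0, 1024, 128);
        return 0;
}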

This is one example, but the main underlying question wrt TCM and its
interaction with Linux subsystems has historically been:

What values should be enforced by TCM based on metadata presented by TCM
subsystem plugins (pSCSI, IBLOCK, FILEIO) for struct block_device, and
what is expected to be enforced by underlying Linux subsystems
presenting struct block_device..?

For the virtual TCM subsystem plugin cases (IBLOCK, FILEIO, RAMDISK),
can_queue is a completely arbitrary value and is enforced by the
underlying Linux subsystem.  There are a couple of special cases:

*) For TCM/pSCSI, can_queue is enforced from struct scsi_device->queue_depth
   and max_sectors from the smaller of the two values from struct Scsi_Host->max_sectors
   and struct scsi_device->request_queue->limits.max_sectors (see the
   sketch after this list).

*) For TCM/IBLOCK, max_sectors is enforced based on struct request_queue->limits.max_sectors.

*) For TCM/FILEIO and TCM/RAMDISK, both can_queue and max_sectors are
   set to arbitrarily high values.
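
To sketch the TCM/pSCSI case from the list above (stub struct
definitions standing in for the kernel ones so the snippet compiles on
its own; this is illustrative, not verbatim kernel code):

#include <stdint.h>

/* Minimal stand-ins for the kernel structs named above */
struct queue_limits  { uint32_t max_sectors; };
struct request_queue { struct queue_limits limits; };
struct scsi_device   { int queue_depth; struct request_queue *request_queue; };
struct Scsi_Host     { uint32_t max_sectors; };

/* TCM/pSCSI: max_sectors is the smaller of the host and queue limits */
static uint32_t pscsi_max_sectors(struct Scsi_Host *sh,
                                  struct scsi_device *sdev)
{
        uint32_t q_max = sdev->request_queue->limits.max_sectors;

        return sh->max_sectors < q_max ? sh->max_sectors : q_max;
}

/* TCM/pSCSI: can_queue follows the device's current queue depth */
static int pscsi_can_queue(struct scsi_device *sdev)
{
        return sdev->queue_depth;
}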

Also, I should mention that TCM_Loop code currently uses a hardcoded
struct scsi_host_template->can_queue=1 and ->max_sectors=128, but will
work fine with larger values.  Being able to change the Linux/SCSI
queue depth on the fly for TCM_Loop virtual SAS ports used by a KVM
guest could be quite useful for managing KVM Guest megasas emulation
I/O traffic on a larger scale..
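
For reference, those defaults would look roughly like the following in
the TCM_Loop host template (an illustrative fragment assuming
<scsi/scsi_host.h>, with a sketched name; not the verbatim upstream
initializer):

#include <scsi/scsi_host.h>

static struct scsi_host_template tcm_loop_sht_sketch = {
        .name           = "TCM_Loopback",
        .can_queue      = 1,    /* hardcoded default mentioned above */
        .max_sectors    = 128,  /* hardcoded default mentioned above */
        /* ... remaining fields omitted ... */
};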

The other big advantage of using TCM_Loop with your megasas guest
emulation is that existing TCM logic for >= SPC-3 T10 NAA naming, PR,
and ALUA emulation is immediately available to the KVM guest, and does
not have to be reproduced in QEMU code.

Who knows, it might be interesting to be able to control KVM Guest disks
using ALUA primary and secondary access states, or even share a single
TCM_Loop virtual SAS port across multiple KVM Guests for cluster
purposes using persistent reservations..!

Best,

--nab
