Hi Krishna,

On 08/14/2012 12:48 AM, Krishna Gudipati wrote:
From: Steffen Maier [mailto:maier@xxxxxxxxxxxxxxxxxx]
On 08/11/2012 04:35 AM, kgudipat@xxxxxxxxxxx wrote:
[KRISHNA]: Steffen, yes, you are right: currently in this proposal we only have three interfaces exported from the SCSI mid-layer for an LLD to configure LUN masking. I believe that with the interfaces provided we should be able to enable LUN masking even without persistent storage or a vendor-specific BSG-based interface. The interfaces scsi_target_mask_lun() to mask a LUN and scsi_target_unmask_lun() to unmask a LUN can be called dynamically, even from your current sysfs implementation of LUN masking.
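For illustration, roughly something like the following could sit behind an LLD's sysfs attribute; the helper signatures are assumed here (scsi_target pointer plus a 64-bit LUN), so adjust to whatever the final patch actually exports:

/*
 * Sketch only: assumes the proposal exports something along the lines of
 *   int scsi_target_mask_lun(struct scsi_target *starget, u64 lun);
 *   int scsi_target_unmask_lun(struct scsi_target *starget, u64 lun);
 * and that the LLD hangs a "lun_mask" attribute off the scsi_target device.
 */
#include <linux/device.h>
#include <linux/kernel.h>
#include <scsi/scsi_device.h>

static ssize_t lld_lun_mask_store(struct device *dev,
                                  struct device_attribute *attr,
                                  const char *buf, size_t count)
{
        struct scsi_target *starget = to_scsi_target(dev);
        bool unmask = (buf[0] == '-');  /* "<lun>" masks, "-<lun>" unmasks */
        u64 lun;
        int ret;

        ret = kstrtou64(unmask ? buf + 1 : buf, 0, &lun);
        if (ret)
                return ret;

        ret = unmask ? scsi_target_unmask_lun(starget, lun)
                     : scsi_target_mask_lun(starget, lun);
        return ret ? ret : count;
}
static DEVICE_ATTR(lun_mask, S_IWUSR, NULL, lld_lun_mask_store);

Writing "5" to the attribute would mask LUN 5 on that target, and writing "-5" would unmask it again.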
Right, but this would mean unnecessary code duplication because each LLD would have to implement its own user space interface, whereas common code in the midlayer could handle all this transparently for any (unmodified) LLD.
A common user space interface would also ease establishing a common persistency mechanism in user space that can be used transparently by any LLD.
The advantage of having persistent storage in the LLD is that LUN masking can be enabled and configured during driver load: we can call the APIs to configure LUN masking from the target_alloc() entry point and avoid having to configure LUN masking from user space on every module load and on every target offline/online event, as described above.
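As a rough sketch of that idea (lld_get_masked_luns(), LLD_MAX_MASKED_LUNS and the mask helper's signature are made up for illustration):

/*
 * Sketch: apply an LLD-private, persisted mask when the midlayer allocates
 * the target.  lld_get_masked_luns() stands in for however the driver reads
 * its flash/config area; scsi_target_mask_lun() is the proposed midlayer API.
 */
#include <linux/kernel.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int lld_target_alloc(struct scsi_target *starget)
{
        u64 luns[LLD_MAX_MASKED_LUNS];
        int n, i;

        n = lld_get_masked_luns(starget, luns, ARRAY_SIZE(luns));
        for (i = 0; i < n; i++)
                scsi_target_mask_lun(starget, luns[i]);

        return 0;
}

static struct scsi_host_template lld_sht = {
        .target_alloc   = lld_target_alloc,
        /* ... remaining callbacks ... */
};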
I understand this makes sense for an LLD that already has a persistency layer. However, in general, this is probably not the case. Therefore, it seems to me as if your approach focuses on the specific case rather than the generic one.
[KRISHNA]: The disadvantage of implementing LUN masking via the SCSI slave callouts is that we cannot do a REPORT_LUNS based SCSI scan and need to fall back to a sequential LUN scan. The reason is that returning -ENXIO from slave_alloc() for any LUN that is part of the REPORT_LUNS response payload results in the scan being aborted. So we need to do a sequential LUN scan, which is not so good because we end up scanning up to 16K LUNs in the sparse LUN case. That is why we came up with this proposal.
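For context, the slave-callout variant being described would look roughly like this (lld_lun_is_masked() is a made-up predicate); the -ENXIO return is what breaks the REPORT_LUNS based scan:

/*
 * Sketch of LUN masking in the slave callout: it works, but as described
 * above, returning -ENXIO here for a LUN that REPORT_LUNS listed makes the
 * midlayer abort the report-based scan, forcing the sequential fallback.
 */
#include <linux/errno.h>
#include <scsi/scsi_device.h>

static int lld_slave_alloc(struct scsi_device *sdev)
{
        if (lld_lun_is_masked(sdev->sdev_target, sdev->lun))
                return -ENXIO;  /* hide the LUN from the midlayer */

        return 0;
}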
Good point, thanks for the explanation. This is definitely a big pro for midlayer LUN masking.
In zfcp, we haven't had this issue so far, because we either did no LUN scanning at all or allowed all LUNs to be scanned by the midlayer. With midlayer LUN masking, we could even let the user filter/select LUNs in the latter case, which is very useful for big storage sites with relaxed target LUN masking.
Speaking of big SANs, this could also extend to initiator-based "zoning" one day, i.e. a common mechanism in scsi_transport_fc that lets the user specify which remote target port WWPNs she would like to use and logs in only to those. But I guess this is just wishful thinking right now. We have users with >30 Linux images sharing each HBA (of multiple HBAs, for multipathing), which adds up to a lot of initiators that are often configured into the same FC zone. This can cause all kinds of trouble, especially if rezoning the fabric (preferably into single-initiator zones, of which there can be many) is not an option.
In addition, the design can be enhanced with something like udev rules to maintain persistency, as you mentioned; I will think it through. Please let me know if you have some ideas.
Have a look at how SuSE configures s390 devices by means of udev rules (/etc/udev/rules.d/51-....rules) written by /sbin/zfcp_disk_configure, which is part of the s390-tools rpm package. Instead of writing to zfcp-specific sysfs attributes, such a rule could write to midlayer sysfs attributes provided for LUN masking; a rough sketch follows below.
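A sketch of what such a rule could look like, assuming the midlayer exposed a writable lun_mask attribute on the scsi_target device (the attribute name, value format, and the target/LUN numbers are all made up for illustration):

# /etc/udev/rules.d/51-lun-mask-example.rules -- sketch only
# Mask LUN 0x401040a600000000 on target0:0:1 as soon as the target appears.
ACTION=="add", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_target", \
  KERNEL=="target0:0:1", ATTR{lun_mask}="0x401040a600000000"

A tool like zfcp_disk_configure could then regenerate such rules whenever the user changes the mask, giving persistency across reboots without any LLD-private storage.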
The zfcp_disk_configure script itself, for reference:
https://build.opensuse.org/package/view_file?file=zfcp_disk_configure&package=s390-tools&project=Base%3ASystem

Regards,
Steffen

Linux on System z Development
IBM Deutschland Research & Development GmbH