On Thu, Feb 11, 2016 at 6:57 PM, Schlacta, Christ <aarcane@xxxxxxxxxxx> wrote:
> On Thu, Feb 11, 2016 at 3:58 PM, Nicholas A. Bellinger
> <nab@xxxxxxxxxxxxxxx> wrote:
>> On Thu, 2016-02-11 at 09:44 -0800, Schlacta, Christ wrote:
>>> I apologize, as last night I accidentally replied off-list to this
>>> thread due to a mis-configured list. I've recreated the messages
>>> below, updated the time, and corrected a typo.
>>>
>>>
>>> On Thu, Feb 11, 2016 at 12:00 AM, Nicholas A. Bellinger
>>> <nab@xxxxxxxxxxxxxxx> wrote:
>>> > On Wed, 2016-02-10 at 18:36 -0800, Schlacta, Christ wrote:
>>> >> I'm trying to run LIO with targetcli and tcm_qla2xxx (and of course
>>> >> qla2xxx) on Debian jessie with the Debian 3.16.0-4 kernel, and I'm
>>> >> running into the strangest issue. Everything appears correct in
>>> >> targetcli, with no unexpected errors in dmesg or elsewhere. The
>>> >> card itself works fine, as it functioned correctly with SCST, but I
>>> >> now get only one device on the initiator, and it's not any of the
>>> >> devices I've configured. It appears as follows on the client in
>>> >> QConvergeConsole (the management software for the Windows driver):
>>> >>
>>> >> Product Vendor:   LIO-ORG
>>> >> Product ID:       RAMDISK-MCP
>>> >> Product Revision: 4.0
>>> >> LUN:              0
>>> >> Size:             Unknown
>>> >> Type:             Unknown
>>> >> WWULN:            4C-49-4F-2D-4F-52-47-00-00
>>> >>
>>> >
>>> > The LUN=0 above is a virtual LUN=0 provided by LIO to all demo-mode
>>> > target_core_mod sessions, when no matching explicit NodeACL + MappedLUN
>>> > groups for LUN=0 exist within the tcm_qla2xxx endpoint + port configfs.
>>>
>>> I figured it was something like that. For the record, SCST provides no
>>> such nodes, and entire loops can have no nodes present, or even nodes
>>> that start above zero. It's useful for maintaining (virtual) disk ->
>>> LUN mapping.
>>
>> Not sure how that relates to NodeACL + MappedLUNs here.
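As a quick sanity check, the WWULN reported above is just printable ASCII: its non-NUL bytes decode to the LIO vendor string, which is consistent with this being LIO's built-in demo-mode LUN rather than one of the configured fileio backstores. A small bash sketch (the WWULN string is copied from the QConvergeConsole report above; bash's printf is assumed for the \xHH escape):

```shell
# WWULN as reported by QConvergeConsole for the mystery LUN 0
wwuln="4C-49-4F-2D-4F-52-47-00-00"

# Decode each non-NUL hex byte to its ASCII character
decoded=""
for b in ${wwuln//-/ }; do
    [ "$b" = "00" ] && continue
    decoded+=$(printf "\x$b")
done
echo "$decoded"   # LIO-ORG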
>>
>>>
>>> >
>>> >> I've got no idea what to try, but I'm thinking about switching back
>>> >> to SCST if I can't figure out what's wrong here, even though I
>>> >> upgraded to Debian jessie specifically to get in-kernel LIO. Tell me
>>> >> what you'd like to see, and I'll share it. Below are some log
>>> >> snippets that I doubt will be useful, but may be.
>>> >>
>>> >> [11717.118353] qla2xxx [0000:05:00.0]-505f:5: Link is operational (4 Gbps).
>>> >> [11717.527716] qla2xxx [0000:05:00.0]-1020:5: **** Failed mbx[0]=4005,
>>> >> mb[1]=4, mb[2]=8807, mb[3]=ffff, cmd=70 ****.
>>> >> [11717.528120] qla2xxx [0000:05:00.0]-0121:5: Failed to enable
>>> >> receiving of RSCN requests: 0x2.
>>> >>
>>> >> /qla2xxx/21:0...9d:12:dd/acls> ls /
>>> >> o- / .............................................. [...]
>>> >>   o- backstores ................................... [...]
>>> >>   | o- fileio ....................... [3 Storage Objects]
>>> >>   | | o- izanami01 . [256.0G, /vdisk02/izanami01, in use]
>>> >>   | | o- test01 ....... [256.0G, /vdisk02/test01, in use]
>>> >>   | | o- test02 ....... [256.0G, /vdisk02/test02, in use]
>>> >>   | o- iblock ........................ [0 Storage Object]
>>> >>   | o- pscsi ......................... [0 Storage Object]
>>> >>   | o- rd_mcp ........................ [0 Storage Object]
>>> >>   o- iscsi .................................. [0 Targets]
>>> >>   o- loopback ............................... [0 Targets]
>>> >>   o- qla2xxx ................................. [1 Target]
>>> >>   | o- 21:00:00:1b:32:9d:12:dd ................ [enabled]
>>> >>   |   o- acls ................................... [1 ACL]
>>> >>   |   | o- 20:00:00:1b:32:9d:77:da ...... [3 Mapped LUNs]
>>> >>   |   |   o- mapped_lun0 .................... [lun0 (rw)]
>>> >>   |   |   o- mapped_lun1 .................... [lun1 (rw)]
>>> >>   |   |   o- mapped_lun2 .................... [lun2 (rw)]
>>> >>   |   o- luns .................................. [3 LUNs]
>>> >>   |     o- lun0 . [fileio/izanami01 (/vdisk02/izanami01)]
>>> >>   |     o- lun1 ....... [fileio/test01 (/vdisk02/test01)]
>>> >>   |     o- lun2 ....... [fileio/test02 (/vdisk02/test02)]
>>> >>   o- vhost .................................. [0 Targets]
>>> >
>>> > Looks fine as a NodeACL + MappedLUN for 20:00:00:1b:32:9d:77:da.
>>> >
>>> > Please confirm your FC WWPN initiator port (non-NPIV) value.

Also, insofar as I can verify, the FC WWPN initiator port (non-NPIV)
value is as follows:

Hostname:  localhost
HBA Model: QLE2462
Node Name: 20-00-00-1B-32-9D-77-DA
Port Name: 21-00-00-1B-32-9D-77-DA
HBA Port:  1
Port ID:   00-00-02

>>> >
>>>
>>> I'm not at the initiator now, but I'll check later.
>>
>> In recent versions of tcm_qla2xxx, there is a configfs attribute to list
>> which 'demo-mode' sessions are active:
>>
>> cat /sys/kernel/config/target/qla2xxx/$WWPN/tpgt_1/dynamic_sessions
>>
>> This will show which $INITIATOR_WWPN ports have logged in, vs. what's
>> currently configured in /qla2xxx/$WWPN/acls/$INITIATOR_WWPN/ for
>> explicit NodeACLs.
>
> I'm on 3.16.0-4 on Debian wheezy, so I'm guessing this isn't a new
> enough version. Is there a DKMS package I can install newer modules
> with? That kinda defeats the purpose of switching to LIO, though.
>
>>
>>> In the meantime, can I somehow disable ACLs? There are only the two
>>> endpoints on the physical loop, and that's enough access control for
>>> testing purposes. (No switch.)
>>
>> Yes, in targetcli under each /qla2xxx/$WWPN/ context, do:
>>
>> /> cd /qla2xxx/21:00:00:24:ff:48:97:7e/
>> /qla2xxx/21:0...4:ff:48:97:7e> set attribute generate_node_acls=1
>> cache_dynamic_acls=1 demo_mode_write_protect=1
>> Parameter generate_node_acls is now '1'.
>> Parameter cache_dynamic_acls is now '1'.
>> Parameter demo_mode_write_protect is now '1'.
>
> Those are the default values already set. Are you sure some of them
> shouldn't be 0s? I can't find documentation for them anywhere.
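The dynamic_sessions comparison suggested above (logged-in initiators vs. configured explicit NodeACLs) can be scripted. This is a hedged sketch, not a tested tool: it assumes dynamic_sessions lists one initiator WWPN per line, and that explicit NodeACLs appear as directories under the TPG's acls/ group; the exact output format may differ by kernel version.

```shell
# Print initiator WWPNs that show up as demo-mode sessions but have no
# explicit NodeACL directory configured under the TPG.
# $1: path to the tpgt directory, e.g.
#     /sys/kernel/config/target/qla2xxx/<TARGET_WWPN>/tpgt_1
list_demo_mode_only() {
    local tpg=$1
    local wwpn
    while read -r wwpn; do
        [ -n "$wwpn" ] || continue                 # skip blank lines
        [ -d "$tpg/acls/$wwpn" ] || echo "$wwpn"   # no matching NodeACL
    done < "$tpg/dynamic_sessions"
}
```

If the initiator WWPN from the thread (20:00:00:1b:32:9d:77:da... vs. the port name 21:00:00:1b:32:9d:77:da) shows up in this list, the ACL was created for the wrong WWPN and the session is falling back to demo mode with the virtual LUN 0.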
>
>>
>> Keep in mind that if your initiator WWPN has already logged in to the
>> individual target WWPN, the session instance is resident in memory.
>>
>> In order for the demo-mode changes to take effect on current sessions
>> without an explicit NodeACL, you'll want to 'saveconfig' and then
>> /etc/init.d/target restart.
>>
--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
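The verify-then-restart step above can be sketched as a small helper. This is an illustrative sketch only: it assumes the demo-mode attributes live under attrib/ in the TPG's configfs directory, and it echoes the persist/restart commands (the Debian sysvinit paths from the thread) rather than executing them, so the plan can be reviewed before touching a live target.

```shell
# Show the three demo-mode attributes for a TPG, then print the
# persist-and-restart steps from the thread as a reviewable dry run.
# $1: path to the tpgt directory, e.g.
#     /sys/kernel/config/target/qla2xxx/<TARGET_WWPN>/tpgt_1
show_demo_mode_plan() {
    local tpg=$1
    local attr
    for attr in generate_node_acls cache_dynamic_acls demo_mode_write_protect; do
        printf '%s=%s\n' "$attr" "$(cat "$tpg/attrib/$attr")"
    done
    # Steps suggested in the thread; echoed rather than executed.
    echo "next: targetcli saveconfig"
    echo "next: /etc/init.d/target restart"
}
```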