I apologize: last night I accidentally replied off-list to this
thread due to a misconfigured list.  I've recreated the messages
below, updated the time, and corrected a typo.


On Thu, Feb 11, 2016 at 12:00 AM, Nicholas A. Bellinger
<nab@xxxxxxxxxxxxxxx> wrote:
> On Wed, 2016-02-10 at 18:36 -0800, Schlacta, Christ wrote:
>> I'm trying to run LIO with targetcli and tcm_qla2xxx (and of course
>> qla2xxx) on Debian jessie with the Debian 3.16.0-4 kernel, and I'm
>> running into the strangest issue.  Everything looks correct in
>> targetcli, with no unexpected errors in dmesg or elsewhere.  The card
>> itself works fine, as it functioned correctly with scst, but the
>> initiator now sees only one device, and it's not any of the devices
>> I've configured.  It appears as follows on the client in
>> QConvergeConsole (the management software for the Windows driver):
>>
>> Product Vendor:    LIO-ORG
>> Product ID:        RAMDISK-MCP
>> Product Revision:  4.0
>> LUN:               0
>> Size:              Unknown
>> Type:              Unknown
>> WWULN:             4C-49-4F-2D-4F-52-47-00-00
>>
>
> The LUN=0 above is a virtual LUN=0 that LIO provides to all demo-mode
> target_core_mod sessions when no explicit NodeACL + MappedLUN group
> matching LUN=0 exists within the tcm_qla2xxx endpoint + port configfs.
> (Note that the WWULN above is just ASCII for "LIO-ORG".)
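>
> You can check which mode is in effect from the target side by reading
> the TPG attributes in configfs.  A quick sketch, assuming the default
> tpgt_1 for your endpoint (exact paths can vary by kernel version):
>
>   # 1 = demo-mode (dynamic ACLs generated), 0 = explicit NodeACLs only
>   cat /sys/kernel/config/target/qla2xxx/21:00:00:1b:32:9d:12:dd/tpgt_1/attrib/generate_node_acls
>
>   # 1 = demo-mode sessions see their LUNs read-only
>   cat /sys/kernel/config/target/qla2xxx/21:00:00:1b:32:9d:12:dd/tpgt_1/attrib/demo_mode_write_protect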

I figured it was something like that.  For the record, scst provides no
such virtual LUN: an entire loop can have no LUNs present at all, or
LUNs that start above zero.  That's useful for maintaining a (virtual)
disk -> LUN mapping.
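
As a sketch of what I mean, in scstadmin's /etc/scst.conf terms (device
and file names here are just illustrative), a loop can export a single
device at LUN 5 with nothing at LUN 0 at all:

  HANDLER vdisk_fileio {
      DEVICE test01 {
          filename /vdisk02/test01
      }
  }

  TARGET_DRIVER qla2x00t {
      TARGET 21:00:00:1b:32:9d:12:dd {
          LUN 5 test01
          enabled 1
      }
  }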

>
>> I've got no idea what to try, but if I can't figure out what's wrong
>> here I'm thinking about switching back to scst, even though I upgraded
>> to Debian jessie specifically to get in-kernel LIO.  Tell me what
>> you'd like to see, and I'll share it.  Below are some log snippets
>> that I doubt will be useful, but they may be.
>>
>> [11717.118353] qla2xxx [0000:05:00.0]-505f:5: Link is operational (4 Gbps).
>> [11717.527716] qla2xxx [0000:05:00.0]-1020:5: **** Failed mbx[0]=4005,
>> mb[1]=4, mb[2]=8807, mb[3]=ffff, cmd=70 ****.
>> [11717.528120] qla2xxx [0000:05:00.0]-0121:5: Failed to enable
>> receiving of RSCN requests: 0x2.
>>
>> /qla2xxx/21:0...9d:12:dd/acls> ls /
>> o- / .............................................. [...]
>>   o- backstores ................................... [...]
>>   | o- fileio ....................... [3 Storage Objects]
>>   | | o- izanami01 . [256.0G, /vdisk02/izanami01, in use]
>>   | | o- test01 ....... [256.0G, /vdisk02/test01, in use]
>>   | | o- test02 ....... [256.0G, /vdisk02/test02, in use]
>>   | o- iblock ........................ [0 Storage Object]
>>   | o- pscsi ......................... [0 Storage Object]
>>   | o- rd_mcp ........................ [0 Storage Object]
>>   o- iscsi .................................. [0 Targets]
>>   o- loopback ............................... [0 Targets]
>>   o- qla2xxx ................................. [1 Target]
>>   | o- 21:00:00:1b:32:9d:12:dd ................ [enabled]
>>   |   o- acls ................................... [1 ACL]
>>   |   | o- 20:00:00:1b:32:9d:77:da ...... [3 Mapped LUNs]
>>   |   |   o- mapped_lun0 .................... [lun0 (rw)]
>>   |   |   o- mapped_lun1 .................... [lun1 (rw)]
>>   |   |   o- mapped_lun2 .................... [lun2 (rw)]
>>   |   o- luns .................................. [3 LUNs]
>>   |     o- lun0 . [fileio/izanami01 (/vdisk02/izanami01)]
>>   |     o- lun1 ....... [fileio/test01 (/vdisk02/test01)]
>>   |     o- lun2 ....... [fileio/test02 (/vdisk02/test02)]
>>   o- vhost .................................. [0 Targets]
>
> Looks fine as a NodeACL + MappedLUN configuration for 20:00:00:1b:32:9d:77:da.
>
> Please confirm your FC initiator port WWPN (non-NPIV) value.
>
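> (On a Linux host you could read it straight from sysfs, e.g.:
>
>   # WWPNs of the local FC ports, one line per HBA port
>   cat /sys/class/fc_host/host*/port_name
>
> On your Windows initiator, QConvergeConsole should show the WWPN in
> the port properties.  Note that QLogic port names typically begin
> 21:00:..., while node names begin 20:00:...)
>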

I'm not at the initiator now, but I'll check later.  In the meantime,
can I somehow disable ACLs?  There are only the two endpoints on the
physical loop (no switch), and that's enough access control for
testing purposes.
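
If I've read the wiki right, demo mode is controlled by TPG attributes,
so something like this would do it (untested on my setup):

  # in targetcli, from the /qla2xxx/21:00:00:1b:32:9d:12:dd node:
  set attribute generate_node_acls=1 cache_dynamic_acls=1 \
      demo_mode_write_protect=0

  # or the same thing directly via configfs:
  echo 1 > /sys/kernel/config/target/qla2xxx/21:00:00:1b:32:9d:12:dd/tpgt_1/attrib/generate_node_acls
  echo 1 > /sys/kernel/config/target/qla2xxx/21:00:00:1b:32:9d:12:dd/tpgt_1/attrib/cache_dynamic_acls
  echo 0 > /sys/kernel/config/target/qla2xxx/21:00:00:1b:32:9d:12:dd/tpgt_1/attrib/demo_mode_write_protect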