Re: Reviving Ibmvscsi target/Questions

Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:

On Wed, 2016-04-06 at 11:30 -0400, Bryant G Ly wrote:
Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:


<SNIP>

I think mode_sense can also be scrapped in favor of the common spc_emulate
code. For INQUIRY we can either use the existing emulation or try to make
spc_emulate_inquiry account for this case.

For INQUIRY, it would probably be easier to just provide ibmvscsis with
its own caller for populating the INQUIRY payload, separate from the
existing spc_emulate_inquiry_std() + spc_emulate_evpd_83().

Reason being that mixing and matching these two for what ibmvscsis needs
for INQUIRY is likely not going to be useful to other drivers, and I
assume VIOS initiators want to avoid being returned INQUIRY EVPD=0x83
identifiers / designators.
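Something along these lines, i.e. a small driver-local helper that builds the
standard INQUIRY payload directly (completely untested sketch; the helper
name, byte values, and vendor/model/revision strings are placeholders, not
taken from any existing code):

static void ibmvscsis_build_std_inquiry(unsigned char *buf)
{
	/*
	 * Hypothetical driver-local standard INQUIRY builder, kept separate
	 * from spc_emulate_inquiry_std() + spc_emulate_evpd_83().  Adjust
	 * the fields to whatever the VIOS initiator actually expects.
	 */
	memset(buf, 0, 36);

	buf[0] = 0x00;			/* peripheral: direct-access, LU connected */
	buf[2] = 0x05;			/* version: SPC-3 */
	buf[3] = 0x02;			/* response data format */
	buf[4] = 31;			/* additional length = 36 - 5 */

	/* left-aligned, space-padded T10 vendor / product / revision */
	memcpy(&buf[8],  "IBM     ", 8);
	memcpy(&buf[16], "VSCSI Target    ", 16);
	memcpy(&buf[32], "0001", 4);
}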


Okay, that makes sense. I'll just fix up what is currently in the driver,
remove the REPORT LUNS and MODE SENSE handling, and use the emulation in
target_core_spc.


But on a side note, I think I have a SCSI scan that starts after the
login request is complete, which is good; I think that is the REPORT LUNS
request that I'm looking for. Do you know how to make the target init go first?

As in, having transport_init_session, core_tpg_check_initiator_node_acl,
and transport_register_session all occur before ibmvscsis_probe is
called? That way I can ensure the target has mapped backstores/LUNs before
this driver starts. I think that would fix the whole issue with the
client adapter not seeing the LUNs.


Mmmmm.  AFAIK in the original code, vio_register_driver() and the subsequent
ibmvscsis_probe() were done at ibmvscsis module load time.

It would be possible to do the vio_register_driver() -> probe() after
TFO->make_tpg() and after /sys/kernel/config/target/ibmvscsis/$WWN/$TPGT/ has
been enabled, but that would certainly break support for multiple endpoints.
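That would look something like registering the vio driver from a TPG enable
attribute store instead of at module load (very rough sketch; struct
ibmvscsis_tpg and its ->enabled flag are hypothetical), which is exactly why
it only really works for a single endpoint:

static ssize_t ibmvscsis_tpg_enable_store(struct config_item *item,
					  const char *page, size_t count)
{
	/*
	 * Hypothetical: defer vio_register_driver() -> ibmvscsis_probe()
	 * until the TPG has been enabled via configfs.  A second enabled
	 * endpoint would have nothing left to register, hence the
	 * single-endpoint limitation.
	 */
	struct se_portal_group *se_tpg = container_of(to_config_group(item),
					struct se_portal_group, tpg_group);
	struct ibmvscsis_tpg *tpg = container_of(se_tpg,
					struct ibmvscsis_tpg, se_tpg);
	bool enable;
	int rc;

	rc = strtobool(page, &enable);
	if (rc)
		return rc;

	if (enable && !tpg->enabled) {
		/* probe() runs from here instead of at module load time */
		rc = vio_register_driver(&ibmvscsis_driver);
		if (rc)
			return rc;
		tpg->enabled = true;
	}
	return count;
}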

Looking at the original code, I don't see how it signaled to the VIOS
initiator to perform the rescan after the endpoint came up, but AFAIK
that was working at some point.

So it sounds like there is still some other issue going on here.


I have found the issue, but I have changed some things... I made the
probe function allocate everything and then basically just wait until there is
something dumped into its queue (i.e., whenever the client side is ready).
That is the common case, since the server is usually up and ready before the
client. By that point everything is already allocated, including the
userspace mapping, i.e. the target-mapped backstores from targetcli. Then
a client comes online and initiates. I'll include a sketch of the flow below,
followed by a log.
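Roughly (sketch only, not the exact code; ibmvscsis_adapter,
ibmvscsis_crq_queue_create() and crq_work are approximations of what I have):

/*
 * Approximate shape of the reworked flow: probe() allocates everything up
 * front and then just waits for the client to post something on the CRQ.
 */
static int ibmvscsis_probe(struct vio_dev *vdev,
			   const struct vio_device_id *id)
{
	struct ibmvscsis_adapter *adapter;

	adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
	if (!adapter)
		return -ENOMEM;
	adapter->vdev = vdev;

	/*
	 * Allocate the CRQ queue and register the interrupt handler.  If the
	 * partner (client) LPAR is not up yet, this just arms the queue and
	 * returns; see "Partner adapter not ready" in the log below.
	 */
	if (ibmvscsis_crq_queue_create(adapter))
		dev_info(&vdev->dev, "partner adapter not ready yet\n");

	dev_set_drvdata(&vdev->dev, adapter);
	return 0;
}

/*
 * Nothing else happens until the client posts an init CRQ; by then the
 * backstores/LUNs have already been configured from targetcli.
 */
static irqreturn_t ibmvscsis_interrupt(int irq, void *data)
{
	struct ibmvscsis_adapter *adapter = data;

	vio_disable_interrupts(adapter->vdev);
	schedule_work(&adapter->crq_work);
	return IRQ_HANDLED;
}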

[   70.846395] ibmvscsis: getsysteminfo
[   70.846414] ibmvscsis: start register template
[   70.846418] Setup generic discovery
[   70.846420] Setup generic wwn
[   70.846421] Setup generic wwn_fabric_stats
[   70.846422] Setup generic tpg
[   70.846423] Setup generic tpg_base
[   70.846424] Setup generic tpg_port
[   70.846426] Setup generic tpg_port_stat
[   70.846427] Setup generic tpg_lun
[   70.846428] Setup generic tpg_np
[   70.846429] Setup generic tpg_np_base
[   70.846430] Setup generic tpg_attrib
[   70.846431] Setup generic tpg_auth
[   70.846432] Setup generic tpg_param
[   70.846433] Setup generic tpg_nacl
[   70.846434] Setup generic tpg_nacl_base
[   70.846436] Setup generic tpg_nacl_attrib
[   70.846437] Setup generic tpg_nacl_auth
[   70.846438] Setup generic tpg_nacl_param
[   70.846439] Setup generic tpg_nacl_stat
[   70.846440] Setup generic tpg_mappedlun
[   70.846442] Setup generic tpg_mappedlun_stat
[   70.846458] ibmvscsis: Probe for UA 0x3000000e
[   70.846461] ibmvscsis: probe- adapter: c0000001f37cf000,
target:c0000001f776ca00, tpg:c0000001f39fb800, tport:c0000001f387c400
tport_name:3000000e
[   70.846463] ibmvscsis: Probe: liobn 0x1000000e, riobn 0x1300000e
[   70.851769] ibmvscsis: Partner adapter not ready
[   70.851921] ibmvscsis: failed crq_queue_create ret: 0
[   70.851923] ibmvscsis: ibmvscsis_send_crq(0x3000000e, 0xc001000000000000,
0x0000000000000000)
[   70.851927] ibmvscsis: ibmvcsis_send_crq rc = 0x2
[   70.851928] ibmvscsis: Failed to send CRQ message
[   93.768456] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: usb_gadget
[   93.768460] target_core_register_fabric() trying autoload for usb_gadget
[   93.768462] target_core_get_fabric() failed for usb_gadget
[   93.768688] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: qla2xxx
[   93.768689] target_core_register_fabric() trying autoload for qla2xxx
[   93.768691] target_core_get_fabric() failed for qla2xxx
[   93.769021] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: srpt
[   93.769023] target_core_register_fabric() trying autoload for srpt
[   93.769024] target_core_get_fabric() failed for srpt
[   93.775123] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: ibmvscsis
[   93.775128] Target_Core_ConfigFS: REGISTER -> Located fabric: ibmvscsis
[   93.775129] Target_Core_ConfigFS: REGISTER tfc_wwn_cit -> c0000001f39fc138
[   93.775131] Target_Core_ConfigFS: REGISTER -> Allocated Fabric: ibmvscsis
[   93.776042] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: vhost
[   93.776044] target_core_register_fabric() trying autoload for vhost
[   93.776045] target_core_get_fabric() failed for vhost
[   93.776290] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: fc
[   93.776291] target_core_register_fabric() trying autoload for fc
[   93.776292] target_core_get_fabric() failed for fc
[  102.739709] Setup generic dev
[  102.739712] Setup generic dev_attrib
[  102.739713] Setup generic dev_pr
[  102.739714] Setup generic dev_wwn
[  102.739716] Setup generic dev_alua_tg_pt_gps
[  102.739717] Setup generic dev_stat
[  102.739718] TCM: Registered subsystem plugin: user struct module:
d0000000034a3d80
[  102.739836] CORE_HBA[1] - Attached HBA to Generic Target Core
[  105.506137] CORE_HBA[2] - Attached HBA to Generic Target Core
[  105.527482] CORE_HBA[3] - Attached HBA to Generic Target Core
[  105.528177] Target_Core_ConfigFS: fileio_2/LUN_0 set udev_path:
/tmp/tmp.img
[  105.528391] fileio: Adding to default ALUA LU Group:
core/alua/lu_gps/default_lu_gp
[  105.528481]   Vendor: LIO-ORG
[  105.528482]   Model: FILEIO
[  105.528483]   Revision: 4.0
[  105.528485]   Type:   Direct-Access
[  105.528712] Target_Core_ConfigFS: Set emulated VPD Unit Serial:
f1bf54c6-0963-4d03-9786-a17fb2f03b20
[  109.705145] ibmvscsis: make_tport(3000000e),
pointer:c0000001f387c400 tport_id:4
[  109.705257] ibmvscsis: maketpg
[  109.705263] ibmvscsis: make_tpg name:3000000e, tport_proto_id:4,
tpgt:1
[  109.705276] TARGET_CORE[ibmvscsis]: Allocated portal_group
for endpoint: 3000000e, Proto: 4, Portal Tag: 1
[  113.752442] ibmvscsis_TPG[1]_LUN[0] - Activated ibmvscsis
Logical Unit from CORE HBA: 3
[  116.129901] Target_Core_ConfigFS: REGISTER ->
group: d000000002e754f8 name: usb_gadget
[  116.129906] target_core_register_fabric() trying autoload for usb_gadget
[  116.129908] target_core_get_fabric() failed for usb_gadget
[  116.130133] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: qla2xxx
[  116.130134] target_core_register_fabric() trying autoload for qla2xxx
[  116.130136] target_core_get_fabric() failed for qla2xxx
[  116.130398] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: srpt
[  116.130399] target_core_register_fabric() trying autoload for srpt
[  116.130401] target_core_get_fabric() failed for srpt
[  116.142162] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: vhost
[  116.142166] target_core_register_fabric() trying autoload for vhost
[  116.142168] target_core_get_fabric() failed for vhost
[  116.142442] Target_Core_ConfigFS: REGISTER -> group: d000000002e754f8
name: fc
[  116.142444] target_core_register_fabric() trying autoload for fc
[  116.142445] target_core_get_fabric() failed for fc
[  127.135866] ibmvscsis: there is an interrupt
[  127.135877] ibmvscsis: ibmvscsis_send_crq(0x3000000e, 0xc002000000000000,
0x0000000000000000)
[  127.135883] ibmvscsis: ibmvcsis_send_crq rc = 0x0
[  127.137877] ibmvscsis: there is an interrupt
[  127.137885] ibmvscsis: case viosrp mad crq: 0x80, 0x1, 0x0, 0x0,
0x0, 0x40, 0xe000
[  127.137888] libsrp: srp_iu_get
[  127.137898] ibmvscsis: process srpiu
[  127.137900] ibmvscsis: srploginreq
[  127.137902] ibmvscsis: make nexus
[  127.137904] ibmvscsis: make_nexus: se_sess:c0000001f7ef0c80,
tpg(c0000001f39fb800)
[  127.137906] ibmvsciss: initiator name:(null), se_tpg:          (null)
[  127.137915] TARGET_CORE[ibmvscsis]->TPG[1]_LUN[0] - Adding READ-WRITE
access for LUN in Demo Mode
[  127.137920] ibmvscsis_TPG[1] - Added DYNAMIC ACL with TCQ Depth: 1 for
ibmvscsis Initiator Node: 3000000e
[  127.137923] TARGET_CORE[ibmvscsis]: Registered fabric_sess_ptr:
c0000001f39fb800
[  127.137926] ibmvscsis: process_login, tag:1585267068834414592
[  127.137929] ibmvscsis: send_iu: 0x34 0x1000000e 0xf210000 0x1300000e 0xe000
[  127.137934] ibmvscsis: crq pre cooked: 0x1, 0x34, 0x1600000000000000
[  127.137936] ibmvscsis: send crq: 0x3000000e, 0x8001009900000034, 0x16
[  127.137938] libsrp: srp_iu_put
[  127.137941] ibmvscsis: ibmvscsis_send_crq(0x3000000e, 0x8001009900000034,
0x0000000000000016)
[  127.137945] ibmvscsis: ibmvcsis_send_crq rc = 0x0
[  128.180515] ibmvscsis: there is an interrupt
[  128.180523] ibmvscsis: case viosrp mad crq: 0x80, 0x1, 0x0, 0x0, 0x0,
0x40, 0xe000
[  128.180525] libsrp: srp_iu_get
[  128.180531] ibmvscsis: process srpiu <--- this is actually the cdb of report_luns
[  128.180532] ibmvscsis: srpcmd
[  128.180534] ibmvscsis: process_srp_iu, iu_entry: c0000001f1c10000
[  128.180536] ibmvscsis: ibmvscsis_queuecommand
[  128.180538] ibmvscsis: tcm_queuecommand

So now my question is: to get LIO to emulate REPORT LUNS and MODE SENSE,
do you just call target_submit_cmd() with the CDB and payload of the request
from the client, and then have the target core handle it and call back into
the fabric module to send the response back to VIOS via an hcall?
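To make the question concrete, here is roughly what I have in mind (a sketch
only; struct ibmvscsis_cmd, its sense_buf field, and the helper names are
placeholders, and I'm going off what I believe the current in-tree
target_submit_cmd() signature is):

/*
 * Sketch of the flow in question: pass the raw CDB to the target core and
 * let target_core_spc emulate REPORT LUNS / MODE SENSE; the core then calls
 * back into the fabric ops, where the SRP_RSP is built and pushed over the
 * CRQ via an hcall.
 */
static int ibmvscsis_submit(struct se_session *se_sess,
			    struct ibmvscsis_cmd *cmd, unsigned char *cdb,
			    u64 unpacked_lun, u32 data_len, int data_dir)
{
	/* the core parses the CDB; emulated opcodes (REPORT LUNS,
	 * MODE SENSE, INQUIRY, ...) never reach the backstore */
	return target_submit_cmd(&cmd->se_cmd, se_sess, cdb, cmd->sense_buf,
				 unpacked_lun, data_len, TCM_SIMPLE_TAG,
				 data_dir, 0);
}

/* invoked by the core once the emulated payload is ready */
static int ibmvscsis_queue_data_in(struct se_cmd *se_cmd)
{
	/* copy/DMA se_cmd->t_data_sg back to the client, then send the
	 * SRP_RSP over the CRQ (h_send_crq) */
	return 0;
}

/* invoked by the core for status-only completions and check conditions */
static int ibmvscsis_queue_status(struct se_cmd *se_cmd)
{
	/* build the SRP_RSP with status/sense and send it over the CRQ */
	return 0;
}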

>> Off-topic: I am going to be in North Carolina for the VAULT conference
>> on the 18th and flying out the 21st after the last session. Let me
>> know a good time we can meet with a colleague of mine who is the VIOS
>> architect. We are in the process of open-sourcing the current VIOS
>> driver that is IBM proprietary, and plan on using it to further
>> develop this driver/add to LIO. For example, we would like to add
>> virtual disks into LIO as a backstore (which already exists in our
>> own proprietary code).
>
> Neat.  8-)
>
>>  It's also the reason for the slower development
>> and progress of this driver, since there is still an internal debate
>> about whether we want to work off the internal driver or add to this
>> one that you have seen.
>>
>
> I'll be around the entire week, so let me know a time off-list that
> works for you and the VIOS folks, and will plan accordingly.
>

I'll be free anytime on the 19th, but Bob, the VIOS architect, won't be there
until the 20th. We can either meet first and talk about technical matters
relating to LIO, or on the 20th we can meet and talk about our level of
commitment and goals for VSCSI/LIO enhancements, along with technical
questions relating to LIO. On the 20th, Bob and I can accommodate any time
that you are free.


Both days work for me, and I'm sure we'll have plenty of time for
discussion.

Let's exchange contact info off-list.  :)

Will do, I will send a separate email to you with my contact info.


--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


