On Sun, 2016-02-28 at 03:19 +0000, Bryant G Ly wrote:
> Hi Nick,

Btw, please avoid top-posting in responses, as it makes the thread more
difficult for others to follow. ;)

> This is the current tree I have with my current level of code:
>
> /sys/kernel/config/
> ├── target
> │   ├── core

<SNIP>

> │   ├── ibmvscsis
> │   │   ├── discovery_auth
> │   │   ├── naa.600140530916541d
> │   │   │   ├── fabric_statistics
> │   │   │   └── tpgt_1
> │   │   │       ├── acls
> │   │   │       ├── attrib
> │   │   │       │   └── fabric_prot_type
> │   │   │       ├── auth
> │   │   │       ├── lun
> │   │   │       │   └── lun_0
> │   │   │       │       ├── 89926afce -> ../../../../../../target/core/fileio_1/test
> │   │   │       │       ├── alua_tg_pt_gp
> │   │   │       │       ├── alua_tg_pt_offline
> │   │   │       │       ├── alua_tg_pt_status
> │   │   │       │       ├── alua_tg_pt_write_md
> │   │   │       │       └── statistics
> │   │   │       │           ├── scsi_port
> │   │   │       │           │   ├── busy_count
> │   │   │       │           │   ├── dev
> │   │   │       │           │   ├── indx
> │   │   │       │           │   ├── inst
> │   │   │       │           │   └── role
> │   │   │       │           ├── scsi_tgt_port
> │   │   │       │           │   ├── dev
> │   │   │       │           │   ├── hs_in_cmds
> │   │   │       │           │   ├── in_cmds
> │   │   │       │           │   ├── indx
> │   │   │       │           │   ├── inst
> │   │   │       │           │   ├── name
> │   │   │       │           │   ├── port_index
> │   │   │       │           │   ├── read_mbytes
> │   │   │       │           │   └── write_mbytes
> │   │   │       │           └── scsi_transport
> │   │   │       │               ├── device
> │   │   │       │               ├── dev_name
> │   │   │       │               ├── indx
> │   │   │       │               └── inst
> │   │   │       ├── nexus
> │   │   │       ├── np
> │   │   │       └── param

Ok, so you've got a new ../ibmvscsis/$WWPN/$TPGT/nexus attribute.

Did you add an ibmvscsis per-endpoint 'nexus' configfs attribute handler
for driving se_session creation, via user-space 'echo 1 > ../nexus'..?

Note that Tomo-san's original code actually drove session creation ->
se_node_acl lookup -> transport_register_session() directly from the
configfs TFO->fabric_make_tpg -> ibmvscsis_make_tpg() callback here:

https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/tree/drivers/scsi/ibmvscsi/ibmvscsis.c?h=for-next-vscsi#n318

and did not use a separate 'nexus' attribute for this.
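For reference, the user-space side of that flow would look something like
this (a sketch only: it assumes ibmvscsis adopts tcm_loop-style nexus
semantics, the WWPN/TPGT are from your tree above, and the initiator WWN
is made up):

```shell
# Sketch of driving a per-TPG I_T nexus from user-space, assuming an
# ibmvscsis nexus attribute that follows the tcm_loop conventions.
cd /sys/kernel/config/target/ibmvscsis/naa.600140530916541d/tpgt_1

# The attribute ->store handler would allocate the se_session, look up
# (or demo-mode create) the se_node_acl, and register it with
# transport_register_session().  The initiator WWN here is hypothetical:
echo naa.60014053cafe0001 > nexus

# Reading it back shows the active I_T nexus:
cat nexus

# tcm_loop convention: writing "NULL" shuts the nexus down again.
echo NULL > nexus
```

The nice part of this approach is that session setup/teardown becomes
explicitly user-space driven, instead of being a side effect of mkdir on
the tpgt_$TPGT group.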
Doing this via a 'nexus' attribute (which existing user-space supports, a
la tcm_loop, vhost_scsi, and xen-scsiback) does make more sense, but
off-hand I don't recall if there was a specific reason it was not done
this way originally.

> And the Dmesg to go along:
>

<SNIP> drop the various non ibmvscsis output:

> [ 3.963121] IBMVSCSIS fabric module v0.1 on Linux/ppc64le on 4.5.0-rc1
> [ 3.963124] ibmvscsis: getsysteminfo
> [ 3.963135] ibmvscsis: start register template
> [ 3.963138] Setup generic discovery
> [ 3.963140] Setup generic wwn
> [ 3.963141] Setup generic wwn_fabric_stats
> [ 3.963143] Setup generic tpg
> [ 3.963144] Setup generic tpg_base
> [ 3.963145] Setup generic tpg_port
> [ 3.963147] Setup generic tpg_port_stat
> [ 3.963148] Setup generic tpg_lun
> [ 3.963149] Setup generic tpg_np
> [ 3.963150] Setup generic tpg_np_base
> [ 3.963152] Setup generic tpg_attrib
> [ 3.963153] Setup generic tpg_auth
> [ 3.963154] Setup generic tpg_param
> [ 3.963155] Setup generic tpg_nacl
> [ 3.963156] Setup generic tpg_nacl_base
> [ 3.963157] Setup generic tpg_nacl_attrib
> [ 3.963158] Setup generic tpg_nacl_auth
> [ 3.963160] Setup generic tpg_nacl_param
> [ 3.963161] Setup generic tpg_nacl_stat
> [ 3.963162] Setup generic tpg_mappedlun
> [ 3.963164] Setup generic tpg_mappedlun_stat
> [ 3.963166] ibmvscsis: end register template
> [ 3.963184] ibmvscsis: Probe for UA 0x3000000c
> [ 3.963188] ibmvscsis: Probe: liobn 0x1000000c, riobn 0x1300000c
> [ 3.963599] ibmvscsi 3000000b: partner initialized
> [ 3.963626] ibmvscsis: case viosrp mad crq: 0x80, 0x2, 0x0, 0x0, 0x1e, 0x100, 0x18010100
> [ 3.963648] ibmvscsis: get_remote_info: 0x94 0x1000000c 0xc110000 0x1300000c 0xd008010000000000
> [ 3.963654] ibmvscsis: Client connect: z2434ev2 (50331648)
> [ 3.963658] ibmvscsis: send info to remote: 0x94 0x1000000c 0xc110000 0x1300000c 0xd008010000000000
> [ 3.963670] ibmvscsis: send_iu: 0x18 0x1000000c 0xc000000 0x1300000c 0x18010100
> [ 3.963675] ibmvscsis: crq pre cooked: 0x2,
> 0x18, 0xc0000001f90001d8
> [ 3.963677] ibmvscsis: send crq: 0x3000000c, 0x8002009900000018, 0xd80100f9010000c0
> [ 3.963684] ibmvscsis: finished handle crq now handle cmd
> [ 3.963701] ibmvscsi 3000000b: host srp version: 16.a, host partition z2434ev2 (3), OS 33554432, max io 512
> [ 3.963715] ibmvscsi 3000000b: sent SRP login
> [ 3.963731] ibmvscsis: case viosrp mad crq: 0x80, 0x1, 0x0, 0x0, 0x3c, 0x100, 0x18010200
> [ 3.963745] ibmvscsis: send_iu: 0x34 0x1000000c 0xc010000 0x1300000c 0x18010200
> [ 3.963752] ibmvscsis: crq pre cooked: 0x1, 0x34, 0xc0000001f90003b0
> [ 3.963756] ibmvscsis: send crq: 0x3000000c, 0x8001009900000034, 0xb00300f9010000c0
> [ 3.963766] ibmvscsis: finished handle crq now handle cmd
> [ 3.963782] ibmvscsi 3000000b: SRP_LOGIN succeeded
> [ 7.865152] Target_Core_ConfigFS: REGISTER -> group: d0000000026245f0 name: ibmvscsis
> [ 7.865156] Target_Core_ConfigFS: REGISTER -> Located fabric: ibmvscsis
> [ 7.865158] Target_Core_ConfigFS: REGISTER tfc_wwn_cit -> c0000001ec8d3138
> [ 7.865161] Target_Core_ConfigFS: REGISTER -> Allocated Fabric: ibmvscsis
>
> <RESTART>
>
> [ 8.374865] TARGET_CORE[0]: Loading Generic Kernel Storage Engine: v5.0 on Linux/ppc64le on 4.5.0-rc1
> [ 8.375215] TARGET_CORE[0]: Initialized ConfigFS Fabric Infrastructure: v5.0 on Linux/ppc64le on 4.5.0-rc1
> [ 9.229389] IBMVSCSIS fabric module v0.1 on Linux/ppc64le on 4.5.0-rc1
> [ 9.229392] ibmvscsis: getsysteminfo
> [ 9.229402] ibmvscsis: start register template
> [ 9.229430] ibmvscsis: end register template
> [ 9.229447] ibmvscsis: Probe for UA 0x3000000c
> [ 9.229450] ibmvscsis: Probe: liobn 0x1000000c, riobn 0x1300000c
> [ 9.229794] ibmvscsis: couldn't register crq--rc 0xfffffff0
> [ 9.229816] ibmvscsis: Error 0xfffffff0 opening virtual adapter
> [ 9.229831] ibmvscsis: failed crq_queue_create ret: -1
> [ 9.229903] ibmvscsis: probe of 3000000c failed with error -1
> [ 9.910876] Target_Core_ConfigFS: REGISTER -> group: d0000000051d45f0 name: ibmvscsis
> [ 9.910878]
> Target_Core_ConfigFS: REGISTER -> Located fabric: ibmvscsis
> [ 9.910879] Target_Core_ConfigFS: REGISTER tfc_wwn_cit -> c0000001ec6f6938
> [ 9.910880] Target_Core_ConfigFS: REGISTER -> Allocated Fabric: ibmvscsis
> [ 150.261823] TARGET_CORE[ibmvscsis]: Allocated portal_group for endpoint: naa.600140530916541d, Proto: 4, Portal Tag: 1
> [34016.730043] Setup generic dev
> [34016.730048] Setup generic dev_attrib
> [34016.730050] Setup generic dev_pr
> [34016.730051] Setup generic dev_wwn
> [34016.730053] Setup generic dev_alua_tg_pt_gps
> [34016.730055] Setup generic dev_stat
> [34016.730057] TCM: Registered subsystem plugin: user struct module: d000000008ab3d00
> [34016.730236] CORE_HBA[1] - Attached HBA to Generic Target Core
> [34016.754179] CORE_HBA[2] - Attached HBA to Generic Target Core
> [34016.754961] Target_Core_ConfigFS: fileio_1/test set udev_path: /tmp/test1.img
> [34016.755188] fileio: Adding to default ALUA LU Group: core/alua/lu_gps/default_lu_gp
> [34016.755243] Vendor: LIO-ORG
> [34016.755245] Model: FILEIO
> [34016.755246] Revision: 4.0
> [34016.755249] Type: Direct-Access
> [34016.755571] Target_Core_ConfigFS: Set emulated VPD Unit Serial: 6cf8b125-0f83-4bdb-aa96-34da120942e8
> [34057.245997] ibmvscsis_TPG[1]_LUN[0] - Activated ibmvscsis Logical Unit from CORE HBA: 2
>
> One big thing I noticed was that adding the ibmvscsis.conf file for
> some reason caused the target to de-register and reregister... which
> then caused another init of the ibmvscsis module, hence the failure in
> probing on the second one.

Keep in mind the /var/target/fabric/ibmvscsis.spec from our off-list
email only works for non rtslib-fb based code.

Can you confirm what user-space you're currently using..?

> Also, in regards to having the code on a public git, it's still in the
> works. I have sent a proposal to our legal team, hopefully it will be
> approved soon.

Yes, please.
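On the user-space question, a quick check along these lines should tell
the two apart (a sketch; it assumes the stock module names — the original
rtslib imports as 'rtslib' and reads /var/target/fabric/*.spec, while
rtslib-fb imports as 'rtslib_fb' and carries its fabric definitions in
code instead of spec files):

```shell
#!/bin/sh
# Distinguish original rtslib from rtslib-fb on the running system.
# Assumes the stock Python module names for each flavor.

if python -c "import rtslib" 2>/dev/null; then
    echo "found original (non-fb) rtslib"
fi

if python -c "import rtslib_fb" 2>/dev/null; then
    echo "found rtslib-fb"
fi

# Spec files such as ibmvscsis.spec are only consumed by non-fb rtslib:
ls /var/target/fabric/*.spec 2>/dev/null || echo "no fabric spec files"
```

That also explains why the .spec file would have no effect (or confuse
things) if the box is actually running rtslib-fb based tooling.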
> Lastly, would you like everything hosted on your end, or should we (as
> ibm) provide the full kernel on github? I think having something like
> your old repo
> (git://git.kernel.org/pub/scm/linux/kernel/git/nab/lio-core.git),
> would be best? Let me know what you think.

A repo on github is useful for grokking early WIP code. As the code gets
up and running, and posted to target-devel for review,
target-pending.git/for-next-merge is the location for new fabric drivers
going upstream.

--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html