Re: Reviving Ibmvscsi target/Questions

Hello Bryant,

(Adding target-devel to CC)

On Thu, 2016-02-25 at 21:30 -0800, Nicholas A. Bellinger wrote:
> On Wed, 2016-02-24 at 19:02 +0000, Bryant G Ly wrote:
> > Hi Nick,
> >
> > Thanks for getting back to me! I am currently working on getting
> > approval to put it on my own local GitHub; it should be done by the
> > end of today.
> >
> 
> Thanks.  :)
> 
> Also, let's move further discussion to target-devel@xxxxxxxxxxxxxxx
> mailing list.
> 
> > I was wondering if you use IRC or anything of that nature so that
> > I can contact you during the day.
> >
> > Lastly, can you explain what in target_register_template() actually
> > sets up the initial process of telling TCM that there is something
> > to register? I'd like to add some print statements so that I can
> > determine which parts of the template I haven't written correctly,
> > but I can't get anything to print.
> >
> 
> Keep in mind that configfs is completely driven by userspace syscalls,
> eg:
> 
>   mkdir /sys/kernel/config/target/$FABRIC_NAME/
> 
> will invoke target_core_register_fabric() to register the top-level
> /sys/kernel/config/target/ibmvscsis/ config_group for use with
> the fabric-independent target_core_fabric_configfs.c logic.
> 
> Take a look at Tomo-san's original 'tree' output here:
> 
> http://linux-iscsi.org/index.php?title=IBM_vSCSI&oldid=10359#Object_tree
> 
> So subsequently:
> 
>    mkdir ../target/$FABRIC_NAME/$WWPN/
> 
> invokes target_core_fabric_ops->fabric_make_wwn(), and
> 
>    mkdir ../target/$FABRIC_NAME/$WWPN/$TPGT/
> 
> invokes target_core_fabric_ops->fabric_make_tpg(), etc.
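> 
> To make that concrete, here's a rough, hypothetical skeleton of the
> registration side (the ibmvscsis_* names are made up, and the callback
> signatures below are the v4.5-era ones, so double-check them against
> your tree).  target_register_template() just publishes the ops to
> configfs; the two make callbacks are where the mkdirs above end up:
> 
>   /* needs target_core_base.h + target_core_fabric.h */
> 
>   /* mkdir ../target/ibmvscsis/$WWPN/ lands here via configfs */
>   static struct se_wwn *ibmvscsis_make_wwn(struct target_fabric_configfs *tf,
>                                            struct config_group *group,
>                                            const char *name)
>   {
>           pr_debug("ibmvscsis: make_wwn %s\n", name);
>           return ERR_PTR(-ENOSYS);   /* allocate + return your se_wwn here */
>   }
> 
>   /* mkdir ../target/ibmvscsis/$WWPN/$TPGT/ lands here */
>   static struct se_portal_group *ibmvscsis_make_tpg(struct se_wwn *wwn,
>                                                     struct config_group *group,
>                                                     const char *name)
>   {
>           pr_debug("ibmvscsis: make_tpg %s\n", name);
>           return ERR_PTR(-ENOSYS);   /* core_tpg_register() goes here */
>   }
> 
>   static const struct target_core_fabric_ops ibmvscsis_ops = {
>           .module          = THIS_MODULE,
>           .name            = "ibmvscsis",
>           .fabric_make_wwn = ibmvscsis_make_wwn,
>           .fabric_make_tpg = ibmvscsis_make_tpg,
>           /* plus get_fabric_name, tpg_get_tag, I/O callbacks, etc. */
>   };
> 
>   static int __init ibmvscsis_init(void)
>   {
>           /* this call is what makes ../config/target/ibmvscsis/ appear */
>           return target_register_template(&ibmvscsis_ops);
>   }
> 
> So a pr_debug() in each of those callbacks (and one next to the
> target_register_template() call in your module init) is the quickest
> way to see which configfs step isn't getting wired up.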
> 
> Depending on what rtslib userspace you're using, you'll need to add the
> following /var/target/fabric/ibmvscsis.spec to automatically drive this
> for targetcli + friends:
> 
> # cat /var/target/fabric/ibmvscsis.spec
> # WARNING: This is a draft specfile supplied for testing only.
> 
> # The fabric module feature set
> features = none
> 
> # Use free-form WWNs.
> #wwn_type = naa
> 
> # Non-standard module naming scheme
> kernel_module = ibmvscsis
> 
> # The configfs group
> configfs_group = ibmvscsis
> 
> If you're using rtslib-fb, this actually needs to be added into the
> Python code itself (i.e., it doesn't use external $FABRIC.spec files).
> 
> > Here is a dmesg:
> > 
> > [    4.046252] TARGET_CORE[0]: Loading Generic Kernel Storage Engine: v5.0 on Linux/ppc64le on 4.5.0-rc1
> > [    4.046547] TARGET_CORE[0]: Initialized ConfigFS Fabric Infrastructure: v5.0 on Linux/ppc64le on 4.5.0-rc1
> > [    4.046550] Setup generic dev
> > [    4.046552] Setup generic dev_attrib
> > [    4.046554] Setup generic dev_pr
> > [    4.046556] Setup generic dev_wwn
> > [    4.046559] Setup generic dev_alua_tg_pt_gps
> > [    4.046561] Setup generic dev_stat
> > [    4.046568] Rounding down aligned max_sectors from 4294967295 to 4294967168
> > [    4.061129] ibmvscsis: module verification failed: signature and/or required key missing - tainting kernel
> > [    4.061336] ibmvscsis: Register VSCSI Target Driver
> > [    4.061338] ibmvscsis: getsysteminfo
> > [    4.061357] Setup generic discovery
> > [    4.061359] Setup generic wwn
> > [    4.061361] Setup generic wwn_fabric_stats
> > [    4.061363] Setup generic tpg
> > [    4.061365] Setup generic tpg_base
> > [    4.061367] Setup generic tpg_port
> > [    4.061370] Setup generic tpg_port_stat
> > [    4.061374] Setup generic tpg_lun
> > [    4.061377] Setup generic tpg_np
> > [    4.061381] Setup generic tpg_np_base
> > [    4.061385] Setup generic tpg_attrib
> > [    4.061389] Setup generic tpg_auth
> > [    4.061392] Setup generic tpg_param
> > [    4.061397] Setup generic tpg_nacl
> > [    4.061400] Setup generic tpg_nacl_base
> > [    4.061404] Setup generic tpg_nacl_attrib
> > [    4.061408] Setup generic tpg_nacl_auth
> > [    4.061411] Setup generic tpg_nacl_param
> > [    4.061421] Setup generic tpg_nacl_stat
> > [    4.061425] Setup generic tpg_mappedlun
> > [    4.061428] Setup generic tpg_mappedlun_stat
> > [    4.061454] ibmvscsis: Probe for UA 0x3000000c
> > [    4.061458] ibmvscsis: Probe: liobn 0x1000000c, riobn 0x1300000c
> > [    4.061460] ibmvscsis: passed Init
> > [    4.061666] ibmvscsis: passed srp_target_alloc
> > [    4.061909] ibmvscsis: 0x0 h_send_crq
> > [    4.061922] ibmvscsi 3000000b: partner initialized
> > [    4.061944] ibmvscsis: case viosrp mad crq: 0x80, 0x2,                     0x0, 0x0, 0x1e, 0x100, 0x18010100
> > [    4.061949] ibmvscsis: IU from pool is c0000001fe1f4000
> > [    4.061953] ibmvscsis: send_open: 0x100 0x1000000c 0xc000000 0x1300000c 0x18010100
> > [    4.061957] ibmvmc: h_copy_rdma(0x100, 0x1300000c, 0x18010100, 0x1000000c, 0xc000000
> > [    4.061979] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.061986] ibmvscsis: mad common type: 0x3
> > [    4.061989] ibmvscsis: viosrp adapter info type
> > [    4.062005] ibmvscsis: get_remote_info: 0x94 0x1000000c 0xc110000 0x1300000c 0xd008010000000000
> > [    4.062007] ibmvmc: h_copy_rdma(0x94, 0x1300000c, 0x108d0, 0x1000000c, 0xc110000
> > [    4.062011] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.062014] ibmvscsis: Client connect: z2434ev2 (50331648)
> > [    4.062017] ibmvscsis: send info to remote: 0x94 0x1000000c 0xc110000             0x1300000c 0xd008010000000000
> > [    4.062020] ibmvmc: h_copy_rdma(0x94, 0x1000000c, 0xc110000, 0x1300000c, 0x108d0
> > [    4.062024] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.062031] ibmvscsis: send_iu: 0x18 0x1000000c 0xc000000 0x1300000c 0x18010100
> > [    4.062033] ibmvmc: h_copy_rdma(0x18, 0x1000000c, 0xc000000, 0x1300000c, 0x18010100
> > [    4.062036] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.062038] ibmvscsis: crq pre cooked: 0x2, 0x18, 0xc0000001ecb901d8
> > [    4.062040] ibmvscsis: send crq: 0x3000000c, 0x8002009900000018, 0xd801b9ec010000c0
> > [    4.062045] ibmvscsis: process_mad_io
> > [    4.062048] ibmvscsis: finished process of crq
> > [    4.062051] ibmvscsis: finished handle crq now handle cmd
> > [    4.062055] ibmvscsi 3000000b: host srp version: 16.a, host partition z2434ev2 (3), OS 33554432, max io 512
> > [    4.062062] ibmvscsi 3000000b: sent SRP login
> > [    4.062072] ibmvscsis: case viosrp mad crq: 0x80, 0x1,                     0x0, 0x0, 0x3c, 0x100, 0x18010200
> > [    4.062074] ibmvscsis: IU from pool is c0000001fe1f4030
> > [    4.062076] ibmvscsis: send_open: 0x100 0x1000000c 0xc010000 0x1300000c 0x18010200
> > [    4.062077] ibmvmc: h_copy_rdma(0x100, 0x1300000c, 0x18010200, 0x1000000c, 0xc010000
> > [    4.062081] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.062082] ibmvscsis: process_srp_io
> > [    4.062084] ibmvscsis: send_iu: 0x34 0x1000000c 0xc010000 0x1300000c 0x18010200
> > [    4.062086] ibmvmc: h_copy_rdma(0x34, 0x1000000c, 0xc010000, 0x1300000c, 0x18010200
> > [    4.062089] ibmvmc: h_copy_rdma rc = 0x0
> > [    4.062091] ibmvscsis: crq pre cooked: 0x1, 0x34, 0xc0000001ecb903b0
> > [    4.062092] ibmvscsis: send crq: 0x3000000c, 0x8001009900000034, 0xb003b9ec010000c0
> > [    4.062096] ibmvscsis: finished process of crq
> > [    4.062098] ibmvscsis: finished handle crq now handle cmd
> > [    4.062103] ibmvscsi 3000000b: SRP_LOGIN succeeded
> 
> Neat.  :)

Any luck getting /sys/kernel/config/target/ibmvscsis/ endpoints
configured for Linux/ppc64le on v4.5 code..?



