Re: Reviving Ibmvscsi target/Questions

On Wed, 2016-04-06 at 11:30 -0400, Bryant G Ly wrote:
> Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:
> 
> > On Mon, 2016-04-04 at 13:59 -0400, Bryant G Ly wrote:
> >> Hi Nick,
> >>
> >> Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:
> >> > On Wed, 2016-03-16 at 14:29 -0400, Bryant G Ly wrote:
> >> >> Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:
> 
> <SNIP>
> 
> >> >> Also, do you know if the host (aka ibmvscsi) is supposed to add
> >> >> something into our queue asking for what we have attached (aka LUNs)
> >> >> in this scenario? I would think after login, it should queue a
> >> >> request asking the target what we have.
> >> >>
> >> >
> >> > Mmmm, AFAIK this is done using the REPORT_LUNS payload with the special
> >> > make_lun() encoding in ibmvscsis_report_luns().
> >> >
> >>
> >> You had mentioned using spc_emulate_report_luns(). By this do you
> >> mean checking the CDB for REPORT_LUNS and, if so, modifying
> >> deve->mapped_lun so that it is in the make_lun() encoding, then
> >> calling spc_emulate_report_luns()?
> >
> > deve->mapped_lun should not be modified directly.  ;)
> >
> > I was thinking to add an optional target_core_fabric_ops->encode_lun()
> > function pointer, that allows drivers to provide a fabric dependent
> > method invoked by spc_emulate_report_luns() to handle the special case.
> >
> > In the case of ibmvscsis, the existing make_lun() code would be invoked
> > by the new ->encode_lun() caller from generic code.
> >
> > Beyond that, I'm not sure if ibmvscsis_report_luns() has any other
> > limitations wrt buffer size, maximum number of luns, etc., that
> > spc_emulate_report_luns() would also need to honor.
> >
> >>
> >> Afterwards, probably call ibmvscsis_queue_data_in()?
> >>
> >
> > Correct, target_core_fabric_ops->queue_data_in() is already invoked to
> > queue the response after spc_emulate_report_luns() has populated the
> > payload.
> >
> 
> I had learned from the VIOS folks that REPORT_LUNS doesn't need any
> special encoding, so we should be able to just use
> spc_emulate_report_luns() directly. Therefore, I won't be making any
> changes to do the encoding, unless you think it'll be useful for other
> drivers?

In that case, for REPORT_LUNS it's likely not useful to add atm.
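For the archives, here is roughly the shape the optional hook discussed
above could have taken. This is a minimal sketch only; the ->encode_lun()
name and its call site inside spc_emulate_report_luns() are hypothetical,
not existing target core API:

/*
 * HYPOTHETICAL SKETCH, not existing target core API: an optional
 * fabric hook letting a driver supply its own LUN encoding for the
 * REPORT_LUNS payload, e.g. wiring in the old ibmvscsis make_lun().
 *
 * Addition to struct target_core_fabric_ops:
 *
 *	u64 (*encode_lun)(struct se_session *se_sess, u64 mapped_lun);
 *
 * Per-entry use inside spc_emulate_report_luns(), falling back to the
 * standard flat encoding when a fabric driver leaves the hook unset:
 */
	if (tfo->encode_lun)
		put_unaligned_be64(tfo->encode_lun(sess, deve->mapped_lun),
				   buf + offset);
	else
		int_to_scsilun(deve->mapped_lun,
			       (struct scsi_lun *)(buf + offset));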

> I think mode_sense can also be scrapped in favor of the common
> spc_emulate code. For INQUIRY we can either use the existing code or
> try to make spc_emulate_inquiry account for this emulation.

For INQUIRY, it would probably be easier to just provide ibmvscsis with
its own caller for populating the inquiry payload, separate from the
existing spc_emulate_inquiry_std() + spc_emulate_evpd_83().

Reason being that mixing and matching these two for what ibmvscsis needs
for INQUIRY is likely not going to be useful to other drivers, and I
assume VIOS initiators want to avoid being returned INQUIRY EVPD=0x83
identifiers / designators.
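To make that concrete, a rough sketch of what such a driver-private
INQUIRY caller could look like; the function name and payload bytes are
illustrative assumptions only, and the real payload would follow
whatever the VIOS initiator expects:

static sense_reason_t ibmvscsis_emulate_inquiry(struct se_cmd *cmd)
{
	unsigned char *cdb = cmd->t_task_cdb;
	unsigned char buf[36] = { };
	void *payload;

	/* Per the note above, skip EVPD (including 0x83 designators) */
	if (cdb[1] & 0x1)
		return TCM_INVALID_CDB_FIELD;

	buf[0] = TYPE_DISK;		/* peripheral device type */
	buf[2] = 0x05;			/* claims SPC-3 */
	buf[3] = 0x02;			/* response data format */
	buf[4] = sizeof(buf) - 5;	/* additional length */
	memcpy(&buf[8], "IBM     ", 8);	/* T10 vendor id, illustrative */

	payload = transport_kmap_data_sg(cmd);
	if (!payload)
		return TCM_OUT_OF_RESOURCES;
	memcpy(payload, buf, min_t(u32, cmd->data_length, sizeof(buf)));
	transport_kunmap_data_sg(cmd);

	target_complete_cmd(cmd, SAM_STAT_GOOD);
	return TCM_NO_SENSE;
}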

> 
> But on a side note, I think I have a SCSI scan that starts after the
> login request completes, which is good; I think that is the REPORT_LUNS
> request that I'm seeking. Do you know how to make the target init go
> first?
>
> As in, having transport_init_session, core_tpg_check_initiator_node_acl,
> and transport_register_session all occur prior to ibmvscsis_probe being
> called? This way I can ensure the target has mapped backstores/LUNs
> prior to this driver starting. I think this will fix the whole issue
> with the client adapter not seeing the LUNs.
> 

Mmmmm.  AFAIK in the original code, vio_register_driver() and the
subsequent ibmvscsis_probe() were done at ibmvscsis module load time.

It would be possible to do the vio_register_driver() -> probe() after
TFO->make_tpg() and /sys/kernel/config/target/ibmvscsis/$WWN/$TPGT/ has
been enabled, but that would certainly break support for multiple
endpoints.
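To make the trade-off concrete, a rough sketch of what deferring
registration until configfs enable could look like; the names here are
hypothetical and the configfs attribute plumbing is elided:

/* Works for exactly one endpoint, which is the limitation above. */
static atomic_t ibmvscsis_vio_registered = ATOMIC_INIT(0);

static ssize_t ibmvscsis_tpg_enable_store(struct config_item *item,
					  const char *page, size_t count)
{
	bool enable;
	int ret;

	ret = strtobool(page, &enable);
	if (ret)
		return ret;

	if (enable && !atomic_xchg(&ibmvscsis_vio_registered, 1)) {
		/* triggers ibmvscsis_probe() for each matching VIO device */
		ret = vio_register_driver(&ibmvscsis_driver);
		if (ret) {
			atomic_set(&ibmvscsis_vio_registered, 0);
			return ret;
		}
	}
	return count;
}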

Looking at the original code, I don't see how it signaled the VIOS
initiator to perform the rescan after the endpoint came up, but AFAIK
that was working at some point.

So it sounds like there is still some other issue going on here.

> >> Offtopic: I am going to be in North Carolina for the VAULT conference
> >> on the 18th and flying out the 21st after the last session. Let me
> >> know of a good time we can meet with a colleague of mine who is the
> >> VIOS architect. We are in the process of open-sourcing the existing
> >> VIOS driver that is IBM proprietary, and plan on using it to further
> >> develop this driver / add to LIO. For example, we would like to add
> >> virtual disks into LIO as a backstore (which already exists in our
> >> own proprietary code).
> >
> > Neat.  8-)
> >
> >> It's also the reason for the slower development and progress of this
> >> driver, since there is still an internal debate about whether we want
> >> to work off the internal driver or add to the one that you have seen.
> >>
> >
> > I'll be around the entire week, so let me know a time off-list that
> > works for you and the VIOS folks, and will plan accordingly.
> >
> 
> I'll be free anytime on the 19th, but Bob, the VIOS architect, won't be
> there until the 20th. We can either meet first and talk about technical
> matters in relation to LIO, or on the 20th we can meet and talk about
> our level of commitment and mission towards vSCSI/LIO enhancements,
> along with technical questions in relation to LIO. On the 20th, Bob and
> I can accommodate any time that you are free.
> 

Both days work for me, and I'm sure we'll have plenty of time for
discussion. 

Let's exchange contact info off-list.  :)
