Re: Reviving Ibmvscsi target/Questions

On Mon, 2016-04-04 at 13:59 -0400, Bryant G Ly wrote:
> Hi Nick,
> 
> Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:
> > On Wed, 2016-03-16 at 14:29 -0400, Bryant G Ly wrote:
> >> Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:

<SNIP>

> >
> > Great, thanks for addressing the early feedback.  :)
> >
> > So next you'll want to generate the patch series using:
> >
> >     git format-patch --cover-letter -n HEAD~3
> >
> > where 'HEAD~3' contains the three ibmvscsis patches, along with a git
> > cover-letter from your ibmvscsis/development branch.
> >
> > From there, have a look at 'git-send-email' to post the series as a
> > threaded email to target-devel and linux-scsi lists for initial public
> > review + feedback.
> >
> 
> Did you still want me to generate a patch series even though the code doesn't
> fully work? Basically I'm stuck where I'm expecting the client to request a
> report of the LUNs right after the SRP login request is complete, but in its
> current state nothing happens after the SRP login.
> 

It's really your call.

There's nothing that says things need to work for a first RFC, but the
driver is probably far enough along that it makes sense to wait until
REPORT_LUNS (and I/O) is functioning before posting the patch series
for review.

> >> Also, I have created wikis that contain config files for targetcli and a WIP log.
> >> https://github.com/powervm/ibmvscsis/wiki
> >>
> >
> > Btw, as ibmvscsis is only using tpgt=1 per endpoint you can safely
> > comment out the unnecessary feature bit in ibmvscsis.spec:
> >
> >      features = tpgts
> >
> >> Lastly, I was wondering if we can remove handle_cmd_queue()? I think it
> >> would be better to rewrite some of the queue handling so that we aren't
> >> spinning.
> >> https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/tree/drivers/scsi/ibmvscsi/ibmvscsis.c?h=for-next-vscsi#n1428
> >>
> >
> > Looks like a reasonable optimization to consider.
> >
> > It might be easier to just list_splice_init the active target->cmd_queue
> > with the lock held in handle_cmd_queue(), and then drain the outstanding
> > *iue without releasing+acquiring the lock each time, re-adding to
> > target->cmd_queue as necessary during exception status.
> >
> > Or even better, we may be able to use a llist_del_all() here.  For an
> > example, check out how drivers/vhost/scsi.c handles completions using a
> > lock-less list.
> >
> 
> Thanks, I'll take a look at this.
> 
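For reference, here is a rough sketch of the llist_del_all() approach.  The
type and member names below (cmd_llist, llnode, ibmvscsis_dispatch_iue) are
placeholders I'm making up for illustration, not the actual ibmvscsis code:

#include <linux/llist.h>

struct iu_entry {
	struct llist_node llnode;	/* hypothetical new member for the lock-less list */
	/* ... existing libsrp members elided ... */
};

struct ibmvscsis_tport {
	struct llist_head cmd_llist;	/* replaces the spinlock-protected cmd_queue */
	/* ... */
};

static void ibmvscsis_dispatch_iue(struct ibmvscsis_tport *tport,
				   struct iu_entry *iue)
{
	/* hand the IU to LIO here; on exception status it can simply be
	 * llist_add()'ed back onto tport->cmd_llist */
}

/* producer side (CRQ interrupt path): no lock needed to queue work */
static void ibmvscsis_queue_iue(struct ibmvscsis_tport *tport,
				struct iu_entry *iue)
{
	llist_add(&iue->llnode, &tport->cmd_llist);
}

/* consumer side: drain the whole list in one shot instead of spinning */
static void handle_cmd_queue(struct ibmvscsis_tport *tport)
{
	struct llist_node *node;
	struct iu_entry *iue, *tmp;

	node = llist_del_all(&tport->cmd_llist);
	/* llist_del_all() returns entries newest-first; restore FIFO order */
	node = llist_reverse_order(node);

	llist_for_each_entry_safe(iue, tmp, node, llnode)
		ibmvscsis_dispatch_iue(tport, iue);
}
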
> >> Also, do you know if the host (aka ibmvscsi) is supposed to add something
> >> to our queue asking what LUNs we have attached in this scenario? I would
> >> think that after login it should queue a request to ask the target what
> >> we have.
> >>
> >
> > Mmmm, AFAIK this is done using the REPORT_LUNS payload with the special
> > make_lun() encoding in ibmvscsis_report_luns().
> >
> 
> You had mentioned to use spc_emulate_report_luns(). By this do you mean
> checking the CDB for REPORT_LUNS and, if so, modifying deve->mapped_lun so
> that it is in the make_lun() encoding before calling
> spc_emulate_report_luns()?

deve->mapped_lun should not be modified directly.  ;)

I was thinking to add an optional target_core_fabric_ops->encode_lun()
function pointer, that allows drivers to provide a fabric dependent
method invoked by spc_emulate_report_luns() to handle the special case.

In the case of ibmvscsis, the existing make_lun() code would be invoked
by the new ->encode_lun() caller from generic code.

Beyond that, I'm not sure if ibmvscsis_report_luns() has any other
limitations wrt buffer size, maximum number of LUNs, etc., that
spc_emulate_report_luns() would also need to honor.
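
To make the idea concrete, here is a rough sketch of what the hook could look
like.  Note that ->encode_lun() does not exist upstream today, the
spc_emulate_report_luns() fragment is heavily trimmed, and the make_lun()
arguments are illustrative only:

/* 1) proposed optional callback in include/target/target_core_fabric.h */
struct target_core_fabric_ops {
	/* ... existing callbacks ... */
	/* let a fabric module rewrite the LUN value placed into the
	 * REPORT LUNS parameter data (e.g. ibmvscsis' make_lun() format) */
	u64 (*encode_lun)(struct se_cmd *cmd, u64 mapped_lun);
};

/* 2) inside the per-deve loop of spc_emulate_report_luns() */
	if (cmd->se_tfo->encode_lun) {
		/* fabric provides the final 8-byte wire format itself */
		put_unaligned_be64(cmd->se_tfo->encode_lun(cmd, deve->mapped_lun),
				   buf + offset);
	} else {
		int_to_scsilun(deve->mapped_lun, &slun);
		memcpy(buf + offset, &slun, sizeof(slun));
	}

/* 3) fabric side: ibmvscsis reuses its existing encoding */
static u64 ibmvscsis_encode_lun(struct se_cmd *se_cmd, u64 mapped_lun)
{
	return make_lun(0, 0, mapped_lun);	/* bus/target values illustrative */
}

static const struct target_core_fabric_ops ibmvscsis_ops = {
	/* ... */
	.encode_lun	= ibmvscsis_encode_lun,
};

That keeps deve->mapped_lun untouched and confines the SRP-specific flat
addressing to the fabric module.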

>   
> Afterwards, probably call ibmvscsis_queue_data_in()?
> 

Correct, target_core_fabric_ops->queue_data_in() is already invoked to
queue the response after spc_emulate_report_luns() has populated the
payload.
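
For completeness, the fabric side of that completion path is just the
->queue_data_in() entry in the driver's fabric ops; a minimal skeleton, with
the body trimmed and the function name following the WIP tree linked above
(treat the details as assumptions):

/* registered via .queue_data_in = ibmvscsis_queue_data_in in the ops table */
static int ibmvscsis_queue_data_in(struct se_cmd *se_cmd)
{
	/* copy se_cmd's data-in payload (and any sense) into the SRP
	 * response IU and send it back to the client over the CRQ */
	return 0;
}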

> Off topic: I am going to be in North Carolina for the VAULT conference on
> the 18th and flying out on the 21st after the last session. Let me know a
> good time we can meet, along with a colleague of mine who is the VIOS
> architect. We are in the process of open-sourcing the current VIOS driver,
> which is IBM proprietary, and plan on using it to further develop this
> driver and add to LIO. For example, we would like to add virtual disks into
> LIO as a backstore (which already exists in our own proprietary code).

Neat.  8-)

> It's also the reason for the slower development and progress of this driver,
> since there is still an internal debate about whether we want to work off
> the internal driver or add to the one you have seen.
> 
> 

I'll be around the entire week, so let me know a time off-list that
works for you and the VIOS folks, and I'll plan accordingly.



