Re: Reviving Ibmvscsi target/Questions

Hi Nick,

Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:

Hey Bryant,

On Wed, 2016-03-16 at 14:29 -0400, Bryant G Ly wrote:
Hello Nick,


Apologies for the delayed follow-up here.  Comments below.

Quoting "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>:

> Hi Bryant,
>
<SNIP>
>> >> Yes, it finally has been approved for me to post code on a public
>> >> github.
>> >> The link is: github.com/powervm/ibmvscsis
>> >>
>> >
>> > Great.
>> >
>> > I noticed https://github.com/powervm/ibmvscsis contains stand-alone
>> > files, and is not a clone of upstream linux.git.
>> >
>> > So you'll want to go ahead and create a fork from:
>> >
>> >     https://github.com/torvalds/linux
>> >
>> > and then use target-pending/for-next (eg: the target code that will
>> > become v4.6-rc1) as a starting point for further ibmvscsis development.
>> >
>> > Also, please let me know if you have any questions wrt git.
>> >
>>
>> I will make these changes, but to clarify: do you want us to carry a full
>> kernel repo upstream, or can we just carry target-pending?
>>
>
> It's useful to have a tree based on linux.git to PULL from for early
> development, but it's just for WIP code ahead of an initial upstream
> driver merge.
>

I was wondering how you would like WIP code reviewed.  I have created a pull
request against my own development branch, which is branched from for-next
with some initial changes: https://github.com/powervm/ibmvscsis/pull/5
Let me know if that is how you want code reviewed.


Great, thanks for addressing the early feedback.  :)

So next you'll want to generate the patch series using:

    git format-patch --cover-letter -n HEAD~3

where the 'HEAD~3..HEAD' range contains the three ibmvscsis patches, along
with a git cover-letter, from your ibmvscsis/development branch.

From there, have a look at 'git-send-email' to post the series as a
threaded email to target-devel and linux-scsi lists for initial public
review + feedback.
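
For example, a hypothetical invocation would look something like this
(adjust the patch glob to match the files format-patch generated):

    git send-email --to=target-devel@vger.kernel.org \
        --cc=linux-scsi@vger.kernel.org 000*.patch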


Did you still want me to generate a patch series even though the code doesn't
fully work?  Basically, I'm stuck at the point where I expect the client to
request a report of the LUNs right after the SRP login request completes, but
in its current state nothing occurs after the SRP login.

Also, I have created wiki pages that contain config files for targetcli and a
WIP log: https://github.com/powervm/ibmvscsis/wiki


Btw, as ibmvscsis is only using tpgt=1 per endpoint, you can safely
comment out the unnecessary feature bit in ibmvscsis.spec:

     features = tpgts
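
That is, assuming the spec-file parser accepts '#' comments (worth
double-checking against rtslib), the line would simply become:

     # features = tpgts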

Lastly, I was wondering if we can remove handle_cmd_queue()?  I think it
would be better to rewrite some of the queue handling so that we aren't
spinning:
https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/tree/drivers/scsi/ibmvscsi/ibmvscsis.c?h=for-next-vscsi#n1428


Looks like a reasonable optimization to consider.

It might be easier to just list_splice_init() the active target->cmd_queue
with the lock held in handle_cmd_queue(), then drain the outstanding *iue
without releasing+acquiring the lock each time, re-adding entries to
target->cmd_queue as necessary during exception status.
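
Something along these lines, as an untested sketch; 'lock', 'cmd_queue',
'ilist', and process_iu() are assumed names to be checked against the actual
ibmvscsis/libsrp definitions:

    static void example_handle_cmd_queue(struct srp_target *target)
    {
            LIST_HEAD(pending);
            struct iu_entry *iue, *tmp;
            unsigned long flags;

            /* Grab everything queued so far in one critical section. */
            spin_lock_irqsave(&target->lock, flags);
            list_splice_init(&target->cmd_queue, &pending);
            spin_unlock_irqrestore(&target->lock, flags);

            /* Drain without bouncing the lock once per command. */
            list_for_each_entry_safe(iue, tmp, &pending, ilist) {
                    list_del(&iue->ilist);
                    if (process_iu(iue) < 0) {
                            /* Exception status: re-add for a retry. */
                            spin_lock_irqsave(&target->lock, flags);
                            list_add_tail(&iue->ilist, &target->cmd_queue);
                            spin_unlock_irqrestore(&target->lock, flags);
                    }
            }
    }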

Or even better, we may be able to use llist_del_all() here.  For an
example, check out how drivers/vhost/scsi.c handles completions using a
lock-less list.
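
For reference, an untested sketch of that pattern; 'example_iue' and
'example_target' are made-up stand-ins for the real ibmvscsis structures:

    #include <linux/llist.h>

    struct example_iue {
            struct llist_node llnode;       /* replaces list_head + lock */
            /* ... per-command state ... */
    };

    struct example_target {
            struct llist_head cmd_queue;    /* producers use llist_add() */
    };

    static void example_handle_cmd_queue(struct example_target *target)
    {
            struct llist_node *node;
            struct example_iue *iue, *tmp;

            /* Atomically detach the whole pending list: no spinlock,
             * and no release/re-acquire per command. */
            node = llist_del_all(&target->cmd_queue);

            /* Entries come back most-recently-added first, the same as
             * in the vhost/scsi completion path. */
            llist_for_each_entry_safe(iue, tmp, node, llnode) {
                    /* submit iue to the target core here */
            }
    }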


Thanks, I'll take a look at this.

Also, do you know if the host (aka ibmvscsi) is supposed to add something to
our queue asking for what we have attached (aka LUNs) in this scenario?  I
would think that after login, it should queue a request asking the target
what it has.


Mmmm, AFAIK this is done using the REPORT_LUNS payload with the special
make_lun() encoding in ibmvscsis_report_luns().


You had mentioned using spc_emulate_report_luns().  By this, do you mean
checking the CDB for REPORT_LUNS and, if so, modifying deve->mapped_lun so
that it is in the make_lun() encoding before calling
spc_emulate_report_luns()?  And afterwards, probably call
ibmvscsis_queue_data_in()?
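
To make the question concrete, this is the rough, untested shape I had in
mind; everything except spc_emulate_report_luns(), REPORT_LUNS, and the
transport_kmap_data_sg() helpers is a made-up name:

    static int example_queue_data_in(struct se_cmd *se_cmd)
    {
            unsigned char *buf;
            u32 len, i;

            if (se_cmd->t_task_cdb[0] == REPORT_LUNS) {
                    buf = transport_kmap_data_sg(se_cmd);
                    if (buf) {
                            /* Bytes 0-3 of the REPORT_LUNS payload hold
                             * the LUN list length in bytes (SPC-4);
                             * entries start at byte 8. */
                            len = get_unaligned_be32(&buf[0]);
                            for (i = 0; i < len / 8; i++) {
                                    /* rewrite buf[8 + i * 8] into the
                                     * make_lun() encoding here */
                            }
                            transport_kunmap_data_sg(se_cmd);
                    }
            }
            /* ...then hand off to the normal ibmvscsis data-in path. */
            return 0;
    }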

Offtopic: I am going to be in North Carolina for the VAULT conference on the
18th and flying out on the 21st after the last session.  Let me know a good
time to meet with a colleague of mine who is the VIOS architect.  We are in
the process of open sourcing the existing VIOS driver, which is IBM
proprietary, and plan on using it to further develop this driver and add to
LIO.  For example, we would like to add virtual disks into LIO as a backstore
(which already exists in our own proprietary code).  This is also the reason
for the slower development and progress of this driver: there is still an
internal debate about whether we want to work off the internal driver or add
to the one you have seen.

Thanks,

Bryant Ly

