Re: [PATCH 04/10] drivers: convert fc drivers calling scsi_track_queue_full

On Fri, 2009-09-11 at 11:18 -0500, Mike Christie wrote:
> On 09/04/2009 04:43 PM, Vasu Dev wrote:
> > On Fri, 2009-09-04 at 06:47 -0700, Alex.Iannicelli@xxxxxxxxxx wrote:
> >> It looks like you moved the ramp up functionality into the scsi layer,
> >> but did not move the ramp up code from the lpfc driver in the
> >
> > Correct.
> >
> >>   lpfc_scsi_cmd_iocb_cmpl routine (just above the code that was removed
> >> for the ramp down in this patch) to the new lpfc_change_queue_depth
> >> routine. I think that this new routine should handle both ramp up and
> >> ramp down but you have it only handling the ramp down case.
> >>
> >
> > I agree all FC HBAs should handle both ramp down and up, as per the new
> > change_queue_depth interface added by this series. I did this for
> > libfc/fcoe and Christof did this for the zfcp driver, but lpfc & qla2xxx
> > got only ramp down changes from Mike. Now that Mike is busy with other
> > stuff, I don't know how to complete them in this series since I don't
> > understand lpfc and qla2xxx well enough, and I have no way to test
> > changes to these drivers.
> >
> > So I'm going to update this series to have just the libfc and zfcp
> > driver changes for now; lpfc and qla2xxx can be updated later by someone
> > familiar with them, and their ramp down changes can be collected from
> > this series' posting.
> >
> 
> I think it is fine not to convert a driver immediately and let the 
> driver maintainer handle it. I normally like to take a stab at it to try 
> and give the driver maintainer some more info on how I think it 
> should work.
> 
> I think at the very least you want to make sure your code will work for 
> other drivers, so sometimes doing a pseudo patch is useful for another 
> reason.
> 

I'll try to come up with compile-tested code for lpfc; you already did
that for qla2xxx ramp up today. As you said, "it is fine not to convert
a driver immediately," so I'll provide this separately and will try to
do it as soon as possible.

The change_queue_depth interface modifications in this series should be
sufficient to do the lpfc and qla2xxx changes later.
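
For reference, here is roughly what I expect an LLD .change_queue_depth
handler to look like against the new interface. The handler name is made
up and the reason codes are from my reading of this series, so treat it
as an untested sketch rather than final code:

static int my_fc_change_queue_depth(struct scsi_device *sdev, int qdepth,
				    int reason)
{
	switch (reason) {
	case SCSI_QDEPTH_DEFAULT:
		/* user/sysfs requested depth change */
		scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), qdepth);
		break;
	case SCSI_QDEPTH_QFULL:
		/* scsi-ml saw a QUEUE_FULL, let it track the ramp down */
		scsi_track_queue_full(sdev, qdepth);
		break;
	case SCSI_QDEPTH_RAMP_UP:
		/* scsi-ml asks to ramp up after a quiet period */
		scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), qdepth);
		break;
	default:
		return -EOPNOTSUPP;
	}
	return sdev->queue_depth;
}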
 
> For the case of lpfc and rampup I think we need a little more code. It 
> looks like lpfc will ramp down queues if it gets a reject on its port 
> (when we get an IOSTAT_LOCAL_REJECT we call lpfc_rampdown_queue_depth). 
> When it then tries to ramp up, it also takes that rampdown event into 
> account. The common code being added by Vasu only tracks rampdown 
> events from QUEUE_FULLs.
> 

qla2xxx ramps down only on QUEUE_FULL, and the added zfcp and lpfc code
does the same only on the QUEUE_FULL condition, but an HBA could still
call its own change_queue_depth function to ramp down for other
conditions, e.g. lpfc for IOSTAT_LOCAL_REJECT.
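
For example, such a driver-private trigger could feed the same scsi-ml
tracking that QUEUE_FULL uses, something like this hypothetical
completion-path helper (the function name and placement are illustrative,
not the actual lpfc code):

static void my_fc_reject_ramp_down(struct scsi_cmnd *cmd)
{
	struct scsi_device *sdev = cmd->device;

	/*
	 * Treat the HBA-specific reject like a QUEUE_FULL so the common
	 * tracking, and therefore the later ramp up, sees this event too.
	 */
	scsi_track_queue_full(sdev, sdev->queue_depth - 1);
}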

Both lpfc and qla2xxx call their ramp-up code on a successful IO
completion only after a specified time interval since the last ramp
down/up, and the added common code does the same with a tunable time
interval. I chose 120*HZ from qla2xxx as the default.
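
The time gating in the added common code is essentially a jiffies check
like the sketch below; the helper name and the per-sdev fields
last_queue_ramp_up and queue_ramp_up_period are the names I have in mind
and may not match the final patch exactly (last_queue_full_time already
exists for QUEUE_FULL tracking):

static int my_fc_ramp_up_allowed(struct scsi_device *sdev)
{
	/* tunable quiet period, default 120 * HZ as in qla2xxx */
	if (time_before(jiffies,
			sdev->last_queue_ramp_up + sdev->queue_ramp_up_period))
		return 0;	/* ramped up too recently */
	if (time_before(jiffies,
			sdev->last_queue_full_time + sdev->queue_ramp_up_period))
		return 0;	/* saw a QUEUE_FULL too recently */
	return 1;
}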

So the added common code for ramp down/up is sufficient for the majority
of FC HBAs, and lpfc-specific additional criteria should be limited to
the lpfc HBA code only.
 
> You probably want to make a common fc class function which loops over 
> vports and will ramp down queues. The fc LLD can then call this and that 
> common fc function can make sure the common rampdown tracking is done so 
> later on the ramp up code can take that into account.
> 

The qdepth should be adjusted for all LUNs of a target, and doing so via
the scsi-ml structs is common across all SCSI transport classes. I think
we need to limit qdepth adjustment to all LUNs of a single target/vport
at a time, as qla2xxx currently does, and the added common ramp-up code
does the same.
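
Roughly, the ramp up walks the scsi-ml device list and only touches LUNs
on the same target as the completing command, as in this sketch; the
max_queue_depth field and the SCSI_QDEPTH_RAMP_UP reason code are the
names I'm using in this series and could still change on review:

static void my_fc_ramp_up_target(struct scsi_device *sdev)
{
	struct scsi_host_template *sht = sdev->host->hostt;
	struct scsi_device *tmp_sdev;

	if (!sht->change_queue_depth)
		return;

	/* bump every LUN on the same target, one step at a time */
	shost_for_each_device(tmp_sdev, sdev->host) {
		if (tmp_sdev->channel != sdev->channel ||
		    tmp_sdev->id != sdev->id ||
		    tmp_sdev->queue_depth >= tmp_sdev->max_queue_depth)
			continue;
		sht->change_queue_depth(tmp_sdev, tmp_sdev->queue_depth + 1,
					SCSI_QDEPTH_RAMP_UP);
	}
}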

> Reviewing lpfc also brings up the question of why it was not doing the 
> ramp up in the main IO path. It looks like it is done in a worker 
> thread. The ramp down for a QUEUE_FULL is probably done in the main IO 
> path because we are already getting errors and we want to prevent new 
> ones. But for the ramp up, did Emulex experience performance hits when 
> ramping up in the main IO path?

