Re: [RFC PATCH 9/9] libfc: adds queue_depth ramp up to libfc

On Thu, Aug 27, 2009 at 01:56:31PM -0700, Vasu Dev wrote:
> On Thu, 2009-08-27 at 12:19 +0200, Christof Schmitt wrote:
> > On Wed, Aug 26, 2009 at 11:04:03AM -0700, Vasu Dev wrote:
> > > Increases queue_depth by one on fc_change_queue_depth call back
> > > with reason SCSI_QDEPTH_RAMP_UP.
> > > 
> > > Signed-off-by: Vasu Dev <vasu.dev@xxxxxxxxx>
> > > ---
> > > 
> > >  drivers/scsi/libfc/fc_fcp.c |    5 +++++
> > >  1 files changed, 5 insertions(+), 0 deletions(-)
> > > 
> > > diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
> > > index dda4162..92e8a1b 100644
> > > --- a/drivers/scsi/libfc/fc_fcp.c
> > > +++ b/drivers/scsi/libfc/fc_fcp.c
> > > @@ -2054,6 +2054,11 @@ int fc_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason)
> > >  	case SCSI_QDEPTH_QFULL:
> > >  		scsi_track_queue_full(sdev, qdepth);
> > >  		break;
> > > +	case SCSI_QDEPTH_RAMP_UP:
> > > +		if (qdepth + 1 <= FC_FCP_DFLT_QUEUE_DEPTH)
> > > +			scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev),
> > > +						qdepth + 1);
> > > +		break;
> > >  	default:
> > >  		return -EOPNOTSUPP;
> > >  	}
> > 
> > Overall the approach looks good to me.
> > 
> > I am trying to find out how this applies to the zfcp driver. Is the
> > approach in fc_change_queue_depth a good example for a driver that
> > does not have to adjust internal resources when changing the queue
> > depth?
> > 
> 
> Yes, the same is true for libfc: it makes no additional resource
> adjustments in this callback. If a driver does need to adjust internal
> resources, it can do that from this callback as well, as the lpfc
> driver does in lpfc_change_queue_depth.

There are also no resource adjustments necessary inside the zfcp
driver; I have attached a first patch adapting the zfcp
change_queue_depth callback.

I reused the default_depth setting as the check for the maximum queue
depth, but I am wondering whether the check should work differently.
Would it make more sense to have an adjustable maximum_depth attribute
per SCSI device? Or could we simply keep increasing the queue depth
until the storage returns QUEUE_FULL again?

Christof

---
zfcp: Adapt change_queue_depth for queue full tracking

From: Christof Schmitt <christof.schmitt@xxxxxxxxxx>

Adapt the change_queue_depth callback in zfcp for the new reason
parameter. Simply pass each call back to the SCSI midlayer, there are
no resource adjustments necessary for zfcp.

Signed-off-by: Christof Schmitt <christof.schmitt@xxxxxxxxxx>
---
 drivers/s390/scsi/zfcp_scsi.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

--- a/drivers/s390/scsi/zfcp_scsi.c	2009-08-28 12:00:12.000000000 +0200
+++ b/drivers/s390/scsi/zfcp_scsi.c	2009-08-28 12:01:27.000000000 +0200
@@ -28,9 +28,25 @@ char *zfcp_get_fcp_sns_info_ptr(struct f
 	return fcp_sns_info_ptr;
 }
 
-static int zfcp_scsi_change_queue_depth(struct scsi_device *sdev, int depth)
+static int zfcp_scsi_change_queue_depth(struct scsi_device *sdev, int depth,
+					int reason)
 {
-	scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), depth);
+	switch (reason) {
+	case SCSI_QDEPTH_SYSFS_REQ:
+		scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), depth);
+		break;
+	case SCSI_QDEPTH_QFULL:
+		scsi_track_queue_full(sdev, depth);
+		break;
+	case SCSI_QDEPTH_RAMP_UP:
+		depth++;
+		if (depth <= default_depth)
+			scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev),
+						depth);
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
 	return sdev->queue_depth;
 }
 