RE: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of the staging area

> -----Original Message-----
> From: James Bottomley [mailto:James.Bottomley@xxxxxxxxxxxxxxxxxxxxx]
> Sent: Thursday, November 17, 2011 1:27 PM
> To: KY Srinivasan
> Cc: gregkh@xxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> devel@xxxxxxxxxxxxxxxxxxxxxx; virtualization@xxxxxxxxxxxxxx; linux-
> scsi@xxxxxxxxxxxxxxx; ohering@xxxxxxxx; hch@xxxxxxxxxxxxx
> Subject: Re: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of the
> staging area
> 
> On Tue, 2011-11-08 at 10:13 -0800, K. Y. Srinivasan wrote:
> > The storage driver (storvsc_drv.c) handles all block storage devices
> > assigned to Linux guests hosted on Hyper-V. This driver has been in the
> > staging tree for a while and this patch moves it out of the staging area.
> > As per Greg's recommendation, this patch makes no changes to the staging/hv
> > directory. Once the driver moves out of staging, we will cleanup the
> > staging/hv directory.
> >
> > This patch includes all the patches that I have sent against the staging/hv
> > tree to address the comments I have gotten to date on this storage driver.
> 
> First comment is that it would have been easier to see the individual
> patches for comment before you committed them.

I am not sure the patches have been committed yet. All of the patches were sent
to the various mailing lists, and you were copied as well. In the future, I will also
include the scsi mailing list when I send out staging patches.
 
> 
> The way you did mempool isn't entirely right: the problem is that to
> prevent a memory to I/O deadlock we need to ensure forward progress on
> the drain device.  Just having 64 commands available to the host doesn't
> necessarily achieve this because LUN1 could consume them all and starve
> LUN0 which is the drain device leading to the deadlock, so the mempool
> really needs to be per device using slave_alloc.

I will do this per LUN.
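
A per-LUN reserve could be set up from the slave_alloc/slave_destroy hooks
along these lines. This is only a sketch of the approach James describes, not
the actual driver change; the names storvsc_cmd_cache, STORVSC_MIN_CMDS and
struct storvsc_device_info are placeholders, not existing storvsc symbols:

```c
/*
 * Sketch: give each scsi_device its own mempool so the drain
 * device can always make forward progress, instead of sharing
 * one 64-entry pool across all LUNs on the host.
 */
static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	struct storvsc_device_info *dev_info;

	dev_info = kzalloc(sizeof(*dev_info), GFP_KERNEL);
	if (!dev_info)
		return -ENOMEM;

	/* Reserve a minimum number of request packets per LUN. */
	dev_info->request_pool =
		mempool_create_slab_pool(STORVSC_MIN_CMDS,
					 storvsc_cmd_cache);
	if (!dev_info->request_pool) {
		kfree(dev_info);
		return -ENOMEM;
	}

	sdevice->hostdata = dev_info;
	return 0;
}

static void storvsc_device_destroy(struct scsi_device *sdevice)
{
	struct storvsc_device_info *dev_info = sdevice->hostdata;

	mempool_destroy(dev_info->request_pool);
	kfree(dev_info);
	sdevice->hostdata = NULL;
}
```

With this shape, an allocation for one LUN can never starve the reserve of
another, which is the forward-progress guarantee the drain device needs.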

> 
> +static int storvsc_device_alloc(struct scsi_device *sdevice)
> +{
> +	/*
> +	 * This enables LUNs to be located sparsely. Otherwise, we may not
> +	 * discover them.
> +	 */
> +	sdevice->sdev_bflags |= BLIST_SPARSELUN | BLIST_LARGELUN;
> +	return 0;
> +}
> 
> Looks bogus ... this should happen automatically for SCSI-3 devices ...
> unless your hypervisor has some strange (and wrong) identification?  I
> really think you want to use SCSI-3 because it will do report LUN
> scanning, which consumes far fewer resources.

I will see if I can clean this up.
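
If the cleanup goes the way James suggests, the BLIST flags would simply go
away once the host identifies itself as SCSI-3, since the midlayer then uses
REPORT LUNS scanning on its own. A possible interim sketch (untested, and
only if the hypervisor's INQUIRY data cannot be fixed) would be to raise the
scan level instead of blacklisting:

```c
/*
 * Sketch: prefer REPORT LUNS scanning over the sparse-LUN
 * blacklist flags.  If the host's INQUIRY data already reports
 * SCSI-3 or later, none of this is needed.
 */
static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	if (sdevice->sdev_target->scsi_level < SCSI_3)
		sdevice->sdev_target->scsi_level = SCSI_3;

	return 0;
}
```

REPORT LUNS discovers all LUNs in a single command, which is where the
resource saving James mentions comes from, versus probing every possible LUN
number individually.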

> 
> I still think you need to disable clustering and junk the bvec merge
> function.  Your object seems to be to accumulate in page size multiples
> (and not aggregate over this) ... that's what clustering is designed to
> do.

As part of addressing your first round of comments, I experimented with your
suggestions, but I could not get rid of the code that does the bounce buffer
handling: even with your suggestions in place, I could still generate I/O
patterns that required bounce buffers.
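
For reference, the change James is proposing amounts to the following sketch
of the scsi_host_template (field names other than use_clustering are
illustrative; only the clustering setting is the point here):

```c
/*
 * Sketch: with clustering disabled, the block layer will not
 * merge adjacent bio_vecs into multi-page segments, so every
 * scatter-gather element stays within a single page and the
 * custom bvec merge callback can be dropped.
 */
static struct scsi_host_template scsi_driver = {
	.module			= THIS_MODULE,
	.name			= "storvsc_host_t",
	.queuecommand		= storvsc_queuecommand,
	/* ... other fields unchanged ... */
	.use_clustering		= DISABLE_CLUSTERING,
};
```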

Regards,

K. Y  
