> -----Original Message-----
> From: KY Srinivasan
> Sent: Thursday, November 17, 2011 11:53 PM
> To: 'James Bottomley'
> Cc: gregkh@xxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; devel@xxxxxxxxxxxxxxxxxxxxxx;
> virtualization@xxxxxxxxxxxxxx; linux-scsi@xxxxxxxxxxxxxxx; ohering@xxxxxxxx;
> hch@xxxxxxxxxxxxx
> Subject: RE: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of
> the staging area
>
> > -----Original Message-----
> > From: James Bottomley [mailto:James.Bottomley@xxxxxxxxxxxxxxxxxxxxx]
> > Sent: Thursday, November 17, 2011 1:27 PM
> > To: KY Srinivasan
> > Cc: gregkh@xxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; devel@xxxxxxxxxxxxxxxxxxxxxx;
> > virtualization@xxxxxxxxxxxxxx; linux-scsi@xxxxxxxxxxxxxxx; ohering@xxxxxxxx;
> > hch@xxxxxxxxxxxxx
> > Subject: Re: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of
> > the staging area
> >
> > On Tue, 2011-11-08 at 10:13 -0800, K. Y. Srinivasan wrote:
> > > The storage driver (storvsc_drv.c) handles all block storage devices
> > > assigned to Linux guests hosted on Hyper-V. This driver has been in the
> > > staging tree for a while and this patch moves it out of the staging area.
> > > As per Greg's recommendation, this patch makes no changes to the staging/hv
> > > directory. Once the driver moves out of staging, we will clean up the
> > > staging/hv directory.
> > >
> > > This patch includes all the patches that I have sent against the staging/hv
> > > tree to address the comments I have received to date on this storage driver.
> >
> > First comment is that it would have been easier to see the individual
> > patches for comment before you committed them.
>
> I am not sure whether the patches have been committed yet. All patches were
> sent to the various mailing lists, and you were copied as well. In the future,
> I will include the scsi mailing list in the set of lists I copy on the staging
> patches.

Greg has checked in these patches now.

> >
> > The way you did mempool isn't entirely right: the problem is that to
> > prevent a memory-to-I/O deadlock we need to ensure forward progress on
> > the drain device. Just having 64 commands available to the host doesn't
> > necessarily achieve this because LUN1 could consume them all and starve
> > LUN0, which is the drain device, leading to the deadlock, so the mempool
> > really needs to be per device using slave_alloc.

Presently, Linux on Hyper-V can only boot from IDE devices. While IDE devices
are handled by this storage driver, the way we handle them is a little
different from block devices configured as SCSI devices: for IDE devices, we
have an HBA per LUN. Given that for a long time the drain device will be an
IDE device, my current implementation does address the concern you raised
about deadlock avoidance.

> >
> > +static int storvsc_device_alloc(struct scsi_device *sdevice)
> > +{
> > +	/*
> > +	 * This enables luns to be located sparsely. Otherwise, we may not
> > +	 * discover them.
> > +	 */
> > +	sdevice->sdev_bflags |= BLIST_SPARSELUN | BLIST_LARGELUN;
> > +	return 0;
> > +}
> >
> > Looks bogus ... this should happen automatically for SCSI-3 devices ...
> > unless your hypervisor has some strange (and wrong) identification? I
> > really think you want to use SCSI-3 because it will do report LUN
> > scanning, which consumes far fewer resources.

Done.
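[On the mempool point above, a minimal sketch of the per-device approach James
describes, with the pool created in slave_alloc and torn down in slave_destroy.
The names (example_slave_alloc, storvsc_cmd_cache, STORVSC_MIN_REQUESTS) and
the choice of a slab-backed pool are assumptions for illustration only, not the
actual storvsc code.]

/*
 * Illustrative only: give each LUN its own reserved request pool so
 * that one busy LUN cannot consume every request and starve the drain
 * device.  The slab cache and pool size below are assumed values.
 */
#include <linux/errno.h>
#include <linux/mempool.h>
#include <linux/slab.h>
#include <scsi/scsi_device.h>

#define STORVSC_MIN_REQUESTS	64	/* assumed per-LUN reserve */

static struct kmem_cache *storvsc_cmd_cache;	/* assumed, created at module init */

static int example_slave_alloc(struct scsi_device *sdevice)
{
	mempool_t *pool;

	pool = mempool_create_slab_pool(STORVSC_MIN_REQUESTS,
					storvsc_cmd_cache);
	if (!pool)
		return -ENOMEM;

	/* Stash the per-device pool for the command submission path. */
	sdevice->hostdata = pool;
	return 0;
}

static void example_slave_destroy(struct scsi_device *sdevice)
{
	mempool_destroy(sdevice->hostdata);
	sdevice->hostdata = NULL;
}

[These hooks would be wired up through the .slave_alloc and .slave_destroy
fields of the scsi_host_template, with the I/O path then allocating requests
from sdevice->hostdata rather than from a host-wide pool.]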
Your object seems to be to accumulate in page size multiples > > (and not aggregate over this) ... that's what clustering is designed to > > do. Done. James, I am going to send you (and the scsi mailing list) the patches addressing your comments. I will also send out a consolidated patch for getting the driver out of staging as well. I would like to thank you for your help in cleaning up this driver. Regards, K. Y _______________________________________________ devel mailing list devel@xxxxxxxxxxxxxxxxxxxxxx http://driverdev.linuxdriverproject.org/mailman/listinfo/devel