Re: [PATCH v13 0/3] scsi: ufs: Add Host Performance Booster Support

On Mon, Dec 07, 2020 at 10:54:58AM -0800, James Bottomley wrote:
> On Mon, 2020-12-07 at 19:35 +0100, Greg KH wrote:
> > On Mon, Dec 07, 2020 at 06:26:03PM +0000, Christoph Hellwig wrote:
> > > On Mon, Dec 07, 2020 at 07:23:12PM +0100, Greg KH wrote:
> > > > What "real workload" test can be run on this to help show if it
> > > > is useful or not?  These vendors seem to think it helps for some
> > > > reason, otherwise they wouldn't have added it to their silicon :)
> > > > 
> > > > Should they run fio?  If so, any hints on a config that would be
> > > > good to show any performance increases?
> > > 
> > > A real, actual workload that matters.  Then again, that was Martin's
> > > request to even justify it.  I think the broken addressing, which
> > > tears a hole in the SCSI addressing scheme, has absolutely no
> > > business ever being supported in Linux.  The vendors should have
> > > thought about the design before committing transistors to something
> > > that fundamentally does not make sense.
> 
> Actually, that's not the way it works: vendors add commands because the
> standards mandate them.  That's why people who want weird commands go
> and join standards committees.  Unfortunately, this means that a lot of
> the commands the standards mandate end up not being very useful in
> practice.  For instance, in SCSI we really only implement a fraction of
> the commands in the standard.
> 
> In this case, the industry already tried a very similar approach with
> GEN 1 hybrid drives and it turned into a complete disaster, which is
> why the mode became optional in shingled drives and was superseded by
> much better modes that didn't have the huge shared-state problem.
> Plus, truncating the LBA of a READ 16 to 4 bytes is asking for
> capacity problems down the line, so even the actual implementation
> seems problematic.
> 
> All in all, this looks like a short-term fix that will go away as
> drive capacity improves, so all the effort spent changing the driver
> will eventually be wasted.
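
To put rough numbers on that READ 16 truncation point: a 4-byte LBA
gives 2^32 addressable blocks, i.e. 2 TiB with 512-byte logical blocks
or 16 TiB with 4 KiB ones.  A quick back-of-the-envelope check (the
block sizes are just the usual assumptions, nothing UFS-specific):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t lbas = 1ULL << 32;  /* 4-byte (32-bit) LBA field */

            /* capacity = addressable LBAs * logical block size */
            printf("512-byte blocks: %llu TiB\n",
                   (unsigned long long)(lbas * 512 >> 40));
            printf("4 KiB blocks:    %llu TiB\n",
                   (unsigned long long)(lbas * 4096 >> 40));
            return 0;
    }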

"short term" in the embedded world means "this device is stuck with this
chip for the next 8 years", it's not like a storage device you can
replace, so this might be different than the shingle drive mess.  Also,
I see many old SoCs still showing up in brand new devices many many
years after they were first introduced, on-chip storage controllers is
something we need to support well if we don't want to see huge
out-of-tree patchsets like UFS traditionally has been lugging around for
many years.

> > So "time to boot an android system with this enabled and disabled"
> > would be a valid workload, right?  I'm guessing that's what the
> > vendors here actually care about, otherwise there is no real stress-
> > test on a UFS system that I know of.
> 
> Um, does it?  I don't believe even the UFS people have claimed this. 
> The problem is that HPB creates a shared state between the driver and
> the device.  That shared state has to be populated, which has to happen
> at start of day, so it's entirely unclear whether this is a win or a
> slowdown for boot.
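
For anyone not following the HPB spec: the shared state in question is,
roughly, a host-side cache of the device's logical-to-physical map,
pulled over in chunks and kept in sync with the device.  A purely
illustrative sketch of that idea (the names below are made up, not the
actual ufshpb structures):

    #include <stdbool.h>
    #include <stdint.h>

    /* One cached chunk of the device's L2P map. */
    struct hpb_map_chunk {
            uint64_t *entries;  /* map entries fetched from the device */
            bool      valid;    /* must be (re)populated before reads
                                   can use it, e.g. early during boot */
    };

    /* Host-side view of the shared state: an array of such chunks,
     * each of which the device can ask the host to load or drop. */
    struct hpb_host_state {
            struct hpb_map_chunk *chunks;
            unsigned int          nr_chunks;
    };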

Ok, showing that this actually matters is a good rule.  Daejun, can you
provide that data if you resubmit this patchset?
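
On the earlier fio question: I don't have a blessed config, but a job
along these lines ought to show a difference if HPB helps at all:
random reads spread over more LBA space than the device can keep mapped
internally, run once with HPB enabled and once with it disabled.  The
device path and sizes below are guesses; adjust them for the actual LU.

    ; whichever block device sits on the UFS LU
    [global]
    filename=/dev/block/sda
    direct=1
    ioengine=libaio
    runtime=60
    time_based

    [hpb-randread]
    rw=randread
    bs=4k
    iodepth=32
    ; span bigger than the device-side map cache
    size=16g

Numbers from a run like that, plus the boot-time comparison mentioned
above, are the kind of data I mean.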

thanks,

greg k-h


