Re: [Lsf-pc] [LSF/MM TOPIC] really large storage sectors - going beyond 4096 bytes

On Wed, Jan 22, 2014 at 10:47:01AM -0800, James Bottomley wrote:
> On Wed, 2014-01-22 at 18:37 +0000, Chris Mason wrote:
> > On Wed, 2014-01-22 at 10:13 -0800, James Bottomley wrote:
> > > On Wed, 2014-01-22 at 18:02 +0000, Chris Mason wrote:
> [agreement cut because it's boring for the reader]
> > > Realistically, if you look at what the I/O schedulers output on a
> > > standard (spinning rust) workload, it's mostly large transfers.
> > > Obviously these are misaligned at the ends, but we can fix some of that
> > > in the scheduler.  Particularly if the FS helps us with layout.  My
> > > instinct tells me that we can fix 99% of this with layout on the FS + io
> > > schedulers ... the remaining 1% goes to the drive as needing to do RMW
> > > in the device, but the net impact to our throughput shouldn't be that
> > > great.
> > 
> > There are a few workloads where the VM and the FS would team up to make
> > this fairly miserable:
> > 
> > Small files.  Delayed allocation fixes a lot of this, but the VM doesn't
> > realize that fileA, fileB, fileC, and fileD all need to be written at
> > the same time to avoid RMW.  Btrfs and MD have set up plugging callbacks
> > to accumulate full stripes as much as possible, but it still hurts.
> > 
> > Metadata.  These writes are very latency sensitive and we'll gain a lot
> > if the FS is explicitly trying to build full sector IOs.
> 
> OK, so these two cases I buy ... the question is can we do something
> about them today without increasing the block size?
> 
> The metadata problem, in particular, might be block-size independent: we
> still have a lot of small chunks to write out at fractured locations.
> With a large block size, the FS knows it's been bad and can expect the
> rolled up newspaper, but it's not clear what it could do about it.
> 
> The small files issue looks like something we should be tackling today
> since writing out adjacent files would actually help us get bigger
> transfers.

ocfs2 can actually take significant advantage here, because we store
small file data in-inode.  This would grow our in-inode size from ~3K to
~15K or ~63K.  We'd have to do more work to start putting more than
one inode in a block (though that would be a promising avenue too once
the coordination is solved generically).
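
(The arithmetic behind those numbers is just the block size minus the
inode metadata at the front of the block -- roughly 1K in the sketch
below, which is a round figure I'm assuming for illustration, not the
exact ocfs2 on-disk overhead:)

/* Rough arithmetic for the ~3K / ~15K / ~63K in-inode figures above. */
#include <stdio.h>

int main(void)
{
	/* assumed ~1K of inode metadata per block; not the exact layout */
	const unsigned int inode_overhead = 1024;
	const unsigned int block_sizes[] = { 4096, 16384, 65536 };

	for (int i = 0; i < 3; i++)
		printf("%5u byte block -> ~%u bytes of in-inode data\n",
		       block_sizes[i], block_sizes[i] - inode_overhead);
	return 0;
}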

Joel


-- 

"One of the symptoms of an approaching nervous breakdown is the
 belief that one's work is terribly important."
         - Bertrand Russell 

			http://www.jlbec.org/
			jlbec@xxxxxxxxxxxx