Re: [LSF/MM/BPF TOPIC] Cloud storage optimizations

On Sat, Mar 04, 2023 at 08:41:04AM -0500, James Bottomley wrote:
> On Sat, 2023-03-04 at 07:34 +0000, Matthew Wilcox wrote:
> > On Fri, Mar 03, 2023 at 08:11:47AM -0500, James Bottomley wrote:
> > > On Fri, 2023-03-03 at 03:49 +0000, Matthew Wilcox wrote:
> > > > On Thu, Mar 02, 2023 at 06:58:58PM -0700, Keith Busch wrote:
> > > > > That said, I was hoping you were going to suggest supporting
> > > > > 16k logical block sizes. Not a problem on some arches, but
> > > > > still problematic when PAGE_SIZE is 4k. :)
> > > > 
> > > > I was hoping Luis was going to propose a session on LBA size >
> > > > PAGE_SIZE. Funnily, while the pressure is coming from the storage
> > > > vendors, I don't think there's any work to be done in the storage
> > > > layers.  It's purely a FS+MM problem.
> > > 
> > > Heh, I can do the fools-rush-in bit, especially if what we're
> > > interested in is the minimum it would take to support this ...
> > > 
> > > The FS problem could be solved simply by saying FS block size must
> > > equal device block size, then it becomes purely a MM issue.
> > 
> > Spoken like somebody who's never converted a filesystem to
> > supporting large folios.  There are a number of issues:
> > 
> > 1. The obvious; use of PAGE_SIZE and/or PAGE_SHIFT
> 
> Well, yes, a filesystem has to be aware it's using a block size larger
> than page size.
> 
> > 2. Use of the kmap family to access, e.g., directories.  You can't kmap
> >    an entire folio, only one page at a time.  And if a dentry is
> >    split across a page boundary ...
> 
> Is kmap relevant?  It's only used for reading user pages in the kernel,
> and I can't see why a filesystem would use it unless it wants to pack
> inodes into pages that also contain user data.  That's an optimization,
> not a fundamental issue (although I grant that as the block size grows
> it becomes more useful), so it doesn't have to be part of the minimum
> viable prototype.

Filesystems often choose to store their metadata in HIGHMEM.  This wasn't
an entirely crazy idea back in, say, 2005, when you might be running
an ext2 filesystem on a machine with 32GB of RAM, and only 800MB of
address space for it.

Now it's silly.  Buy a real computer.  I'm getting more and more
comfortable with the idea that "Linux doesn't support block sizes >
PAGE_SIZE on 32-bit machines" is an acceptable answer.
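A rough sketch of the kmap limitation above, since it keeps coming up
(illustrative helper only, not code from any filesystem; folio_nr_pages(),
folio_page(), kmap_local_page() and kunmap_local() are the real interfaces,
the loop is just to show the per-page shape of the access):

    #include <linux/highmem.h>
    #include <linux/pagemap.h>

    /* Copy out a whole folio through the kmap family: only one page can
     * be mapped at a time, so anything that straddles a page boundary
     * (a dentry, say) has to be stitched together by the caller. */
    static void copy_folio_contents(struct folio *folio, void *dst)
    {
            unsigned int i;

            for (i = 0; i < folio_nr_pages(folio); i++) {
                    void *src = kmap_local_page(folio_page(folio, i));

                    memcpy(dst + i * PAGE_SIZE, src, PAGE_SIZE);
                    kunmap_local(src);
            }
    }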

> > 3. buffer_heads do not currently support large folios.  Working on
> > it.
> 
> Yes, I always forget filesystems still use the buffer cache.  But
> fundamentally the buffer_head structure can cope with buffers that span
> pages, so most of the logic changes would be around grow_dev_page().  It
> seems somewhat messy but not too hard.

I forgot one particularly nasty case; we have filesystems (including the
mpage code used by a number of filesystems) which put an array of block
numbers on the stack.  Not a big deal when that's 8 entries (4kB/512 * 8
bytes = 64 bytes), but it starts to get noticeable at 64kB PAGE_SIZE (1kB
is a little large for a stack allocation) and downright unreasonable
if you try to do something to a 2MB allocation (32kB).
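For reference, the pattern looks roughly like this (sketch only, mirroring
the shape of the mpage code rather than quoting it; MAX_BUF_PER_PAGE is
PAGE_SIZE / 512, from buffer_head.h):

    #include <linux/buffer_head.h>

    /* 4kB page:     8 entries * 8 bytes =  64 bytes on the stack
     * 64kB page:  128 entries * 8 bytes =   1kB
     * 2MB folio: 4096 entries * 8 bytes =  32kB */
    static void block_array_on_stack(void)
    {
            sector_t blocks[MAX_BUF_PER_PAGE];

            blocks[0] = 0;      /* placeholder for the real mapping work */
    }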

> > Probably a few other things I forget.  But look through the recent
> > patches to AFS, CIFS, NFS, XFS, iomap that do folio conversions.
> > A lot of it is pretty mechanical, but some of it takes hard thought.
> > And if you have ideas about how to handle ext2 directories, I'm all
> > ears.
> 
> OK, so I can see you were waiting for someone to touch a nerve, but if
> I can go back to the stated goal, I never really thought *every*
> filesystem would be suitable for block size > page size, so simply
> getting a few of the modern ones working would be good enough for the
> minimum viable prototype.

XFS already works with arbitrary-order folios.  The only needed piece is
specifying to the VFS that there's a minimum order for this particular
inode, and having the VFS honour that everywhere.
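To be clear about how small that piece is, something like the following is
all I mean (the name and bit position are entirely made up; this is a
sketch of the missing interface, not an existing one):

    #include <linux/fs.h>
    #include <linux/log2.h>

    #define AS_MIN_FOLIO_ORDER_SHIFT 16     /* made-up flag position */

    /* Record "never cache this inode in chunks smaller than 1 << order
     * pages" on the mapping; the page cache would then have to honour it
     * in every path that creates folios. */
    static inline void mapping_set_min_folio_order(struct address_space *mapping,
                                                   unsigned int order)
    {
            mapping->flags |= (unsigned long)order << AS_MIN_FOLIO_ORDER_SHIFT;
    }

A filesystem with (say) 16kB blocks on a 4kB PAGE_SIZE machine would call
it from its inode setup path with order ilog2(16384 / 4096) == 2.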

What "touches a nerve" is people who clearly haven't been paying attention
to the problem making sweeping assertions about what the easy and hard
parts are.

> I fully understand that eventually we'll need to get a single large
> buffer to span discontiguous pages ... I noted that in the bit you cut,
> but I don't see why the prototype shouldn't start with contiguous
> pages.

I disagree that this is a desirable goal.  To solve the scalability
issues we have in the VFS, we need to manage memory in larger chunks
than PAGE_SIZE.  That makes the concerns expressed in previous years moot.
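To make "larger chunks" concrete: the page cache side of this already
exists.  A rough sketch of creating an order-2 (16kB with 4kB pages) folio
in a mapping, using the current filemap APIs with error handling trimmed
to the minimum:

    #include <linux/pagemap.h>
    #include <linux/gfp.h>

    /* Allocate one physically contiguous 4-page folio and insert it in
     * the page cache at @index; the cache then manages those 16kB as a
     * single object. */
    static int add_order2_folio(struct address_space *mapping, pgoff_t index)
    {
            struct folio *folio = filemap_alloc_folio(GFP_KERNEL, 2);
            int err;

            if (!folio)
                    return -ENOMEM;

            err = filemap_add_folio(mapping, folio, index, GFP_KERNEL);
            if (err)
                    folio_put(folio);
            return err;
    }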


