Re: [PATCH] smb3: add rasize mount parameter to improve performance of readahead

Forgot to mention another obvious point ... the number of 'channels'
is dynamic for some filesystems.  For example, clustered SMB3 servers
can notify clients asynchronously when more 'channels' (network
connections) are added - e.g. in cloud environments or clustered high
performance server environments you can temporarily add high
performance ethernet or RDMA adapters to the host - so it is quite
possible for the server to indicate to the client that more network
throughput is now available.  (Windows takes advantage of this, and
even polls for new interfaces every 10 minutes; the Linux client does
not yet, but it is something we will likely add soon.)
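
To be concrete, here is a minimal sketch (illustrative names only, not
the actual cifs.ko code) of how a client could rescale its readahead
window when the server advertises additional channels:

        /* Sketch only: rescale the readahead window when the server
         * reports more channels.  bdi->ra_pages (in PAGE_SIZE units)
         * is what the generic readahead code consults when sizing
         * readahead requests. */
        static void example_rescale_readahead(struct super_block *sb,
                                              unsigned int num_channels,
                                              unsigned int rasize_per_chan)
        {
                sb->s_bdi->ra_pages =
                        ((unsigned long)num_channels * rasize_per_chan)
                                / PAGE_SIZE;
        }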

On Sat, May 1, 2021 at 1:47 PM Steve French <smfrench@xxxxxxxxx> wrote:
>
> On Sat, May 1, 2021 at 1:35 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > On Fri, Apr 30, 2021 at 02:22:20PM -0500, Steve French wrote:
> > > On Fri, Apr 30, 2021 at 7:00 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Apr 30, 2021 at 04:19:27PM +0530, Shyam Prasad N wrote:
> > > > > Although ideally, I feel that we (cifs.ko) should be able to read in
> > > > > larger "chunks" even for small reads, in the expectation that
> > > > > surrounding offsets will be read soon.
> > > >
> > > > Why?  How is CIFS special and different from every other filesystem that
> > > > means you know what the access pattern of userspace is going to be better
> > > > than the generic VFS?
> > >
> > > In general small chunks are bad for network file systems since the
> > > 'cost' of sending a large read or write on the network (and in the
> > > call stack on the client and server, with various task switches
> > > etc.) is not much more than that of a small one.  This can be
> > > different on a local file system, with less latency between request
> > > and response and fewer task switches involved on client and server.
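> > >
> > > As a back-of-envelope illustration (made-up numbers, purely for
> > > scale): with a 1ms round trip and ~100MB/s of throughput, a 4KB
> > > read takes about 1.04ms while a 1MB read takes about 11ms - 256
> > > times the data for roughly 10 times the wall-clock time, so the
> > > per-byte cost of the small read is over 20 times higher.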
> >
> > Block-based filesystems are often, but not always, local.  For example,
> > we might be using nbd, iSCSI, FCoE or something similar to include
> > network latency between the filesystem and its storage.  Even without
> > those possibilities, a NAND SSD looks pretty similar.  Look at the
> > graphic titled "Idle Average Random Read Latency" on this page:
> >
> > https://www.intel.ca/content/www/ca/en/architecture-and-technology/optane-technology/balancing-bandwidth-and-latency-article-brief.html
> >
> > That seems to be showing 5us software latency for an SSD with 80us of
> > hardware latency.  That says to me we should have 16 outstanding reads
> > to a NAND SSD in order to keep the pipeline full.
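> >
> > (That is just the latency ratio: if issuing a read costs ~5us of
> > software time and the device takes ~80us to complete it, the CPU
> > can issue roughly 80/5 = 16 reads before the first one returns.)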
> >
> > Conversely, a network filesystem might be talking to localhost,
> > and seeing much lower latency compared to going across the data
> > center, between data centres or across the Pacific.
> >
> > So, my point is that Linux's readahead is pretty poor.  Adding
> > hacks in for individual filesystems isn't a good route to fixing it,
> > and reading larger chunks has already passed the point of diminishing
> > returns for many workloads.
> >
> > I laid it out in a bit more detail here:
> > https://lore.kernel.org/linux-fsdevel/20210224155121.GQ2858050@xxxxxxxxxxxxxxxxxxxx/
>
> Yes - those are good points.  Latencies vary the most for
> network/cluster filesystems - by a factor of more than a million,
> from localhost and RDMA (aka smbdirect), which can have very low
> latency, to some cloud workloads with longer latency but high
> throughput, to servers where the files are 'offline' (archived or in
> the cloud), where I have seen examples that took minutes - so in the
> long run it is especially important for this to be better tunable.
> In the short term, at least having some tuneables on the file system
> mount (like Ceph's "rasize") makes sense.
>
> Seems like there are three problems to solve:
> - getting the core readahead code to ramp up a 'reasonable' number
> of I/Os, as your note describes - very important, but also:
> - letting a filesystem signal the readahead code to slow down, or to
> partially fulfill readahead requests (in the SMB3 case this could be
> done when 'credits' on the connection - one 'credit' is needed for
> each 64K of I/O - start to run low; see the sketch below)
> - letting a filesystem signal the readahead code to temporarily stop
> readahead (or cap it at one I/O of size = readsize).  This could
> happen e.g. when the filesystem gets an "out of resources" error
> from the server, or when reconnect is triggered
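>
> No such hook exists in the readahead code today, but as a purely
> hypothetical sketch of the second idea (all of the names below are
> invented for illustration):
>
>         /* Clamp the readahead window to what the transport can
>          * afford: one SMB3 credit covers 64K of I/O, so shrink the
>          * window as credits run low rather than stalling. */
>         static unsigned long example_ra_limit(unsigned int credits,
>                                               unsigned long requested_pages)
>         {
>                 unsigned long credit_pages =
>                         (unsigned long)credits * (65536 / PAGE_SIZE);
>
>                 /* keep half the credits in reserve for other I/O */
>                 return min(requested_pages, credit_pages / 2);
>         }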
>
>
> --
> Thanks,
>
> Steve



-- 
Thanks,

Steve


