On Fri, Apr 30, 2021 at 7:00 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Fri, Apr 30, 2021 at 04:19:27PM +0530, Shyam Prasad N wrote:
> > Although ideally, I feel that we (cifs.ko) should be able to read in
> > larger granular "chunks" even for small reads, in expectation that
> > surrounding offsets will be read soon.
>
> Why? How is CIFS special and different from every other filesystem that
> means you know what the access pattern of userspace is going to be better
> than the generic VFS?

In general, small chunks are bad for network file systems, since the
"cost" of sending a large read or write on the network (and in the call
stack on the client and server, with the various task switches involved)
is not much more than that of a small one. This can be different on a
local file system, with less latency between request and response and
fewer task switches involved.

There are tradeoffs between having multiple small chunks in flight
vs. fewer large chunks in flight, but the general idea is that, where
possible, it can be much faster to keep some requests in flight and so
keep some activity:
- on the network
- on the server side
- on the client side
to avoid "dead time" where nothing is happening on the network due to
latency, decryption on the client or server, etc.

For this reason it makes sense that having four 1MB reads in flight
(e.g. copying a file with the new "rasize" mount parm set to 4MB for
cifs.ko) can be much faster than having only one 1MB read in flight at
a time, and much, much faster than using direct i/o, where some tools
use quite small i/o sizes (cp uses 1MB i/o for uncached i/o when
mounted to cifs or nfs, but rsync uses a much smaller size, which hurts
uncached performance greatly).

--
Thanks,

Steve
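The "keep requests in flight" argument in the reply above can be sketched with a toy latency model. This is purely illustrative, not cifs.ko code: the round-trip latency and bandwidth figures are made-up assumptions, and the model simply charges each batch of overlapped requests one network round trip plus its transfer time.

```python
# Toy model (illustrative only, not cifs.ko code): each read request
# costs a fixed round-trip latency plus a per-byte transfer cost.
# With several requests in flight, their latencies overlap, so the
# wire stays busy instead of going idle between replies.

def transfer_time(total_bytes, chunk_bytes, in_flight,
                  latency=0.010, bytes_per_sec=100e6):
    """Rough time to read total_bytes in chunk_bytes pieces, with up
    to in_flight requests outstanding at once.  latency (10ms) and
    bytes_per_sec (100 MB/s) are assumed example numbers."""
    chunks = total_bytes / chunk_bytes
    rounds = chunks / in_flight            # batches of overlapped requests
    per_chunk_xfer = chunk_bytes / bytes_per_sec
    # each round pays one round-trip latency but transfers in_flight chunks
    return rounds * (latency + in_flight * per_chunk_xfer)

one_mb = 1024 * 1024
serial    = transfer_time(100 * one_mb, one_mb, in_flight=1)   # 1 x 1MB
pipelined = transfer_time(100 * one_mb, one_mb, in_flight=4)   # 4 x 1MB
tiny_dio  = transfer_time(100 * one_mb, 128 * 1024, in_flight=1)  # small i/o

print(f"1 x 1MB in flight:   {serial:.2f}s")
print(f"4 x 1MB in flight:   {pipelined:.2f}s")
print(f"1 x 128KB in flight: {tiny_dio:.2f}s")
```

Even this crude model shows the ordering the mail describes: four 1MB reads in flight beat one at a time, and small serial direct i/o requests are far worse, because every small request pays the full round-trip latency with nothing overlapping it.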