I feel this is not just a problem with cifs. Any filesystem
(particularly a network filesystem, where fetching data from the
server involves higher-latency I/O) will have trouble coping with a
large number of small random reads. My point here is that if every
read sent to the server were of a minimum "chunk" size (a contiguous
range of pages), the page cache could be populated in chunks. Any
future read to other pages in the same chunk could then be satisfied
from the page cache, improving overall performance for this kind of
workload. (A rough sketch of the chunk-rounding idea follows at the
end of this mail.)

Regards,
Shyam

On Fri, Apr 30, 2021 at 5:30 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Fri, Apr 30, 2021 at 04:19:27PM +0530, Shyam Prasad N wrote:
> > Although ideally, I feel that we (cifs.ko) should be able to read in
> > larger granular "chunks" even for small reads, in expectation that
> > surrounding offsets will be read soon.
>
> Why? How is CIFS special and different from every other filesystem that
> means you know what the access pattern of userspace is going to be better
> than the generic VFS?
>
> There are definitely shortcomings in the readahead code that should
> be addressed, but in almost no circumstances is "read bigger chunks"
> the right answer.

--
Regards,
Shyam
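
A rough userspace sketch of what I mean by rounding a small read out to
a chunk boundary, so the whole chunk lands in the page cache and nearby
reads become cache hits. This is not actual cifs.ko or VFS code;
CHUNK_SIZE, round_to_chunk() and struct read_range are made-up names
for illustration, assuming a fixed minimum wire-read size:

/*
 * Hypothetical sketch: expand a small read request to the enclosing
 * chunk-aligned range before sending it to the server.
 */
#include <stdio.h>

#define PAGE_SIZE   4096UL
#define CHUNK_SIZE  (64 * PAGE_SIZE)   /* hypothetical minimum wire read: 256 KiB */

struct read_range {
	unsigned long start;   /* byte offset of first page to request */
	unsigned long len;     /* number of bytes to request from the server */
};

/* Expand [offset, offset + count) to the enclosing CHUNK_SIZE-aligned range. */
static struct read_range round_to_chunk(unsigned long offset, unsigned long count)
{
	struct read_range r;

	r.start = offset & ~(CHUNK_SIZE - 1);
	r.len = ((offset + count + CHUNK_SIZE - 1) & ~(CHUNK_SIZE - 1)) - r.start;
	return r;
}

int main(void)
{
	/* A 4 KiB read at offset 300 KiB would fetch the whole 256 KiB..512 KiB chunk. */
	struct read_range r = round_to_chunk(300 * 1024, 4096);

	printf("request %lu bytes starting at offset %lu\n", r.len, r.start);
	return 0;
}

With the example above, the 4 KiB read pulls in 262144 bytes starting
at offset 262144, so any later read within that 256 KiB chunk is served
from the page cache instead of going back to the server.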