Hi Matthew,

Sorry for the late reply; I'm still catching up on my emails.

No, I did not see read-aheads ramping up with random reads, so I think
we're okay there with or without this patch.

Ideally, though, I feel that we (cifs.ko) should be able to read in
larger granular "chunks" even for small reads, in the expectation that
surrounding offsets will be read soon. This is especially useful when
the reads come from something like a loop-device-backed file.

Is there a way for a filesystem to indicate to the mm/readahead layer
that it should read in chunks of N bytes, even for random workloads,
and even if the actual read is much smaller?

I did some reading of mm/readahead.c and understand that if the file
has been marked with the fadvise hint POSIX_FADV_RANDOM, there is some
logic to read in chunks. But that seems to apply only when the actual
read size is bigger.

Regards,
Shyam

On Mon, Apr 26, 2021 at 5:25 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Mon, Apr 26, 2021 at 10:22:27AM +0530, Shyam Prasad N wrote:
> > Agree with this. Was experimenting on the similar lines on Friday.
> > Does show good improvements with sequential workload.
> > For random read/write workload, the user can use the default value.
>
> For a random access workload, Linux's readahead shouldn't kick in.
> Do you see a slowdown when using this patch with a random I/O workload?

--
Regards,
Shyam
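
P.S. For concreteness, here is a minimal userspace sketch of the fadvise
hint I was referring to above. This is illustrative only and not part of
the patch under discussion:

/*
 * Hint the kernel that access through this fd will be random.
 * mm/readahead.c then skips the usual readahead ramp-up for a file
 * marked FMODE_RANDOM and reads just the requested pages, split into
 * bounded chunks, which is why small random reads stay small.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096];
	int err, fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* posix_fadvise() returns an error number rather than -1/errno. */
	err = posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
	if (err)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

	/* With the random hint set, this read pulls in about one page
	 * instead of growing a readahead window. */
	if (pread(fd, buf, sizeof(buf), 0) < 0)
		perror("pread");

	close(fd);
	return 0;
}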