On Mon, Apr 26, 2021 at 6:55 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Mon, Apr 26, 2021 at 10:22:27AM +0530, Shyam Prasad N wrote:
> > Agree with this. Was experimenting on the similar lines on Friday.
> > Does show good improvements with sequential workload.
> > For random read/write workload, the user can use the default value.
>
> For a random access workload, Linux's readahead shouldn't kick in.
> Do you see a slowdown when using this patch with a random I/O workload?

I see few slowdowns in the 20 or so xfstests I have tried, but the
best value for rasize varies a lot with server type, network type and
number of channels, so I don't have enough data yet to set it well (I
have experimented with values from 4MB to 12MB but need more data).

Running 20+ typical xfstests, so far I see really good improvement
with multichannel to Azure when setting rasize to 4 times the
negotiated read size. I saw less than a 1% gain, though, to a slower
Windows server running in a VM (without multichannel), and no gain to
localhost Samba in the simple examples I tried. To a typical Azure
share without multichannel I saw about an 11% gain.

See some example perf data below. The numbers on the right are run
times with rasize set to 4MB, i.e. 4 times the negotiated read size;
the numbers on the left are results using the default of
ra_pages = rsize = 1MB.

generic/001 113s ... 117s
generic/005 35s ... 34s
generic/006 567s ... 503s
generic/010 1s ... 1s
generic/011 620s ... 594s
generic/024 10s ... 10s
generic/028 5s ... 5s
generic/029 2s ... 2s
generic/030 3s ... 2s
generic/036 11s ... 11s
generic/069 7s ... 7s
generic/070 287s ... 270s
generic/080 3s ... 2s
generic/084 6s ... 6s
generic/086 1s ... 1s
generic/095 25s ... 23s
generic/098 1s ... 1s
generic/109 469s ... 328s
generic/117 219s ... 201s
generic/124 21s ... 20s
generic/125 63s ... 62s
generic/130 27s ... 24s
generic/132 24s ... 25s
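For anyone who wants to experiment with this, a mount invocation would
look something like the line below. This is only a sketch: the server,
share, mount point and username are placeholders, and it assumes a
negotiated read size of 1MB, so rasize (given in bytes) is set to 4
times that, i.e. 4MB:

    # hypothetical example: rasize = 4 * 1MB negotiated read size
    mount -t cifs //server/share /mnt/test -o username=testuser,rasize=4194304

--
Thanks,

Steve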