Re: [PATCH] smb3: add rasize mount parameter to improve performance of readahead

Agree with this. I was experimenting along similar lines on Friday.
It does show good improvements with sequential workloads.
For random read/write workloads, the user can keep the default value.
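For anyone wanting to try this, a hypothetical invocation sketch follows; the server, share, and credential values are placeholders, not taken from this thread. Since rasize takes a byte count, computing it explicitly avoids MB/MiB unit mistakes:

```shell
# rasize is a byte count; 6 MiB expressed in bytes.
rasize=$((6 * 1024 * 1024))
echo "$rasize"     # prints 6291456
# Hypothetical mount line (commented out: needs root and a reachable SMB share;
# //server/share and /mnt/test are placeholders):
# mount -t cifs //server/share /mnt/test -o vers=3.1.1,rasize=$rasize
```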

Reviewed-by: Shyam Prasad N <sprasad@xxxxxxxxxxxxx>

On Sun, Apr 25, 2021 at 10:20 PM Steve French <smfrench@xxxxxxxxx> wrote:
>
> Updated patch attached. It does seem to help - just tried an experiment
>
>       dd if=/mnt/test/1GBfile of=/dev/null bs=1M count=1024
>
> to the same server, same share and compared mounting with rasize=6MB
> vs. default (1MB to Azure)
>
> (rw,relatime,vers=3.1.1,cache=strict,username=linuxsmb3testsharesmc,uid=0,noforceuid,gid=0,noforcegid,addr=20.150.70.104,file_mode=0777,dir_mode=0777,soft,persistenthandles,nounix,serverino,mapposix,mfsymlinks,nostrictsync,rsize=1048576,wsize=1048576,bsize=1048576,echo_interval=60,actimeo=1,multichannel,max_channels=2)
>
> Got 391 MB/s with rasize=6MB, much faster than the 163 MB/s with the
> default (which ends up as 1MB with the current code).
>
>
> # dd if=/mnt/test/394.29520 of=/dev/null bs=1M count=1024 ;
> # dd if=/mnt/scratch/394.29520 of=/mnt/test/junk1 bs=1M count=1024 ;
> # dd if=/mnt/test/394.29520 of=/dev/null bs=1M count=1024 ;
> # dd if=/mnt/scratch/394.29520 of=/mnt/test/junk1 bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.06764 s, 264 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.5912 s, 85.3 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.0573 s, 351 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.58283 s, 125 MB/s
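As a sanity check on the quoted numbers: dd's MB/s figure is simply bytes divided by seconds, in decimal megabytes (10^6 bytes). Re-deriving two of the figures above:

```shell
# dd reports throughput as bytes / seconds / 10^6 (decimal MB, not MiB).
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.06764 / 1e6 }'   # prints 264 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 1073741824 / 12.5912 / 1e6 }'   # prints 85.3 MB/s
```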
>
> On Sat, Apr 24, 2021 at 9:36 PM Steve French <smfrench@xxxxxxxxx> wrote:
> >
> > Yep - good catch.  It is missing part of my patch :(
> >
> > Ugh
> >
> > Will need to rerun and get real numbers
> >
> > On Sat, Apr 24, 2021 at 9:10 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > >
> > > On Sat, Apr 24, 2021 at 02:27:11PM -0500, Steve French wrote:
> > > > Using the buildbot test systems, this resulted in an average improvement
> > > > of 14% to the Windows server test target for the first 12 tests I
> > > > tried (no multichannel)
> > > > changing to 12MB rasize (read ahead size).   Similarly increasing the
> > > > rasize to 12MB to Azure (this time with multichannel, 4 channels)
> > > > improved performance 37%
> > > >
> > > > Note that Ceph had already introduced a mount parameter "rasize"
> > > > to allow controlling this.  Add a mount parameter "rasize" to
> > > > cifs.ko to allow control of read ahead (rasize defaults to 4MB,
> > > > which is typically what readahead used to default to for the many
> > > > servers whose rsize was 4MB).
> > >
> > > I think something was missing from this patch -- I see you parse it and
> > > set it in the mount context, but I don't see where it then gets used to
> > > actually affect readahead.
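To illustrate Matthew's point: parsing the option is only half the job; the value still has to reach the readahead machinery, which works in PAGE_SIZE units rather than bytes. A user-space sketch of that unit conversion, assuming 4 KiB pages (the actual kernel-side wiring is not shown in this thread):

```shell
# Sketch (assumption): a byte-count rasize would be converted into
# PAGE_SIZE units before the readahead code can use it.
page_size=4096                      # assuming 4 KiB pages
rasize=$((4 * 1024 * 1024))         # the 4MB default mentioned above
ra_pages=$((rasize / page_size))
echo "$ra_pages"                    # prints 1024
```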
> >
> >
> >
> > --
> > Thanks,
> >
> > Steve
>
>
>
> --
> Thanks,
>
> Steve



-- 
Regards,
Shyam



