On Tue, May 21, 2024 at 07:49:39PM +0800, Xu Yang wrote:
> Since commit 5d8edfb900d5 ("iomap: Copy larger chunks from userspace"),
> iomap tries to copy in chunks larger than PAGE_SIZE. However, if the
> mapping doesn't support large folios, only a single page of at most 4KB
> is created, and 4KB of data is written to the page cache each time. The
> next 4KB is then handled in the next iteration. This causes a potential
> write performance problem.
>
> If the chunk is 2MB, a total of 512 pages must be handled in the end.
> During this period, fault_in_iov_iter_readable() is called to check
> that the iov_iter is readable. Since only 4KB is handled per iteration,
> the address ranges below are checked over and over again:
>
>   start       end
>   --------------------
>   buf         buf+2MB
>   buf+4KB     buf+2MB
>   buf+8KB     buf+2MB
>   ...
>   buf+2044KB  buf+2MB
>
> The checked size is obviously wrong, since only 4KB is handled each
> time. So compute a correct chunk to let iomap work well in the
> non-large-folio case too.
>
> With this change, the write speed becomes stable. Tested on an ARM64
> device.
>
> Before:
>
>  - dd if=/dev/zero of=/dev/sda bs=400K  count=10485 (334 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=800K  count=5242  (278 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=1600K count=2621  (204 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=2200K count=1906  (170 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=3000K count=1398  (150 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=4500K count=932   (139 MB/s)
>
> After:
>
>  - dd if=/dev/zero of=/dev/sda bs=400K  count=10485 (339 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=800K  count=5242  (330 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=1600K count=2621  (332 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=2200K count=1906  (333 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=3000K count=1398  (333 MB/s)
>  - dd if=/dev/zero of=/dev/sda bs=4500K count=932   (333 MB/s)
>
> Fixes: 5d8edfb900d5 ("iomap: Copy larger chunks from userspace")
> Cc: stable@xxxxxxxxxxxxxxx
> Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> Signed-off-by: Xu Yang <xu.yang_2@xxxxxxx>

Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
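
For context, here is a minimal sketch of the clamping the commit message
describes, in the style of fs/iomap/buffered-io.c. The helper name
mapping_max_folio_size() and its exact placement are assumptions made
for illustration; the actual diff is not quoted in this message:

/*
 * Illustrative helper (name assumed): the largest folio the page cache
 * will create for this mapping. Without large folio support, only
 * single-page folios are created, so the write loop should fault in at
 * most one page per iteration as well.
 */
static inline size_t mapping_max_folio_size(struct address_space *mapping)
{
	if (mapping_large_folio_support(mapping))
		return PAGE_SIZE << MAX_PAGECACHE_ORDER;
	return PAGE_SIZE;
}

/*
 * Inside the write loop (sketch of iomap_write_iter()): derive the
 * per-iteration byte count from the clamped chunk, so that
 * fault_in_iov_iter_readable() only checks the range this iteration
 * will actually copy.
 */
	size_t chunk = mapping_max_folio_size(mapping);
	size_t offset = pos & (chunk - 1);
	size_t bytes = min(chunk - offset, iov_iter_count(i));

	if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
		status = -EFAULT;
		break;
	}

With a 2MB write and no large folio support, chunk is PAGE_SIZE, so
each iteration faults in a single page instead of re-checking the
entire remaining buffer, matching the stable throughput shown above.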