On Fri, May 14, 2021 at 03:32:41PM +0900, Changheun Lee wrote:
> I tested a 512MB file read with direct I/O, using a chunk size of 64MB.
> - on SCSI disk, with no limit on bio max size (4GB): avg. 630 MB/s
> - on SCSI disk, with bio max size limited to 1MB: avg. 645 MB/s
> - on ramdisk, with no limit on bio max size (4GB): avg. 2749 MB/s
> - on ramdisk, with bio max size limited to 1MB: avg. 3068 MB/s
>
> I set up the ramdisk environment as below.
> - dd if=/dev/zero of=/mnt/ramdisk.img bs=$((1024*1024)) count=1024
> - mkfs.ext4 /mnt/ramdisk.img
> - mkdir /mnt/ext4ramdisk
> - mount -o loop /mnt/ramdisk.img /mnt/ext4ramdisk
>
> With a low-performance disk, the bio submit delay caused by a large bio
> size is only a small proportion of the total time, so the effect is hard
> to see. But it shows up on a high-performance disk.

So let's attack the problem properly:

 1) switch f2fs to a direct I/O implementation that does not suck
 2) look into optimizing the iomap code to e.g. submit the bio once it
    is larger than queue_io_opt(), without failing to add to a bio,
    which would be annoying for things like huge pages