Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io

On Wed, Nov 15, 2023 at 02:20:02PM +0000, Niklas Cassel wrote:
> On Wed, Nov 15, 2023 at 05:19:28PM +0800, Ming Lei wrote:
> > On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> > > Hi,
> > > 
> > > We are currently conducting performance tests on an application that
> > > involves writing/reading data to/from ext4 or a raw block device.
> > > Specifically, for raw block device access, we have implemented a
> > > simple "userspace filesystem" directly on top of it.
> > > 
> > > All write/read operations are being tested using buffer_io. However,
> > > we have observed that the ext4+buffer_io performance significantly
> > > outperforms raw_block_device+buffer_io:
> > > 
> > > ext4: write 18G/s, read 40G/s
> > > raw block device: write 18G/s, read 21G/s
> > 
> > Can you share your exact test case?
> > 
> > I tried the following fio test on both ext4 over nvme and raw nvme, and the
> > result is the opposite: raw block device throughput is 2X ext4, and it
> > can be observed in both VM and real hardware.
> > 
> > 1) raw NVMe
> > 
> > fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
> >     --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
> > 
> > 2) ext4
> > 
> > fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
> > 	--ioengine=psync --directory=$DIR --group_reporting=1 \
> > 	--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
> 
> Hello Ming,
> 
> 1) uses bs=64k, 2) uses bs=4k, was this intentional?

It is a typo; both actually use bs=64k.
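
For reference, the ext4 invocation with the intended block size is:

fio --size=1G --time_based --bs=64k --runtime=20 --numjobs=8 \
	--ioengine=psync --directory=$DIR --group_reporting=1 \
	--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read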

> 
> 2) uses stonewall, but 1) doesn't, was this intentional?

To be honest, the two tests come from two different pre-existing
scripts. I re-ran the raw block test with --stonewall added (as shown
below) and saw no difference.
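
That is, the same raw block command with --stonewall appended:

fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
    --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read \
    --stonewall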

> 
> For fairness, you might want to use the same size (1G vs 128G).

For the fs test, each IO job creates one file and runs IO against its
own file, but there is only one 'file' in the raw block test: all 8
jobs run IO on the same block device.

I also started a quick randread test (sketched below), and a similar
gap can be observed there compared with the read test.
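
A minimal sketch of that randread run, assuming the same parameters as
the raw block read test above:

fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
    --group_reporting=1 --filename=/dev/nvme0n1 --name=test-randread --rw=randread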

> 
> And perhaps clear the page cache before each fio invocation:
> # echo 1 > /proc/sys/vm/drop_caches

Yes, it is always done before running the two buffered IO tests.
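
For completeness, the sequence is roughly as below; the sync first is
needed because drop_caches does not write back dirty pages:

sync
echo 1 > /proc/sys/vm/drop_caches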


thanks,
Ming




