Re: RBD huge diff between random vs non-random IOPs - all flash

On Wed, Sep 30, 2020 at 8:28 AM <tri@xxxxxxxxxx> wrote:
>
> Hi all,
>
> I'm trying to troubleshoot an interesting problem with RBD performance for VMs. Tests done using fio both outside and inside the VMs show that random read/write is 20-30% slower than bulk read/write at QD=1. However, at QD=16/32/64, random read/write is sometimes 3X faster than bulk read/write. Inside the VMs, tests were done with -direct=1 -sync=1 using libaio. Outside the VMs, tests were done with -direct=1 -sync=1 using both librbd and libaio.
>
> The gap between random and bulk I/O narrows as QD increases toward 128, but there is always a 20-30% difference, with random I/O being faster. Read and write tests show similar results both inside and outside the VMs.
>
> Typically, random I/O performance is lower (or much lower) than bulk. Any idea as to what I should be looking at? Thanks.

It's expected behavior that random I/O is faster than sequential I/O
under RBD for QD > 1, since the random requests can scale out across
multiple OSDs in parallel.
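
To make the scale-out effect concrete, here is a minimal sketch (plain
arithmetic, not Ceph code, assuming the default 4 MiB RBD object size and
no custom striping) of which backing RADOS object each in-flight 64k
request maps to:

    import random

    OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size (assumption; check `rbd info`)
    BLOCK_SIZE  = 64 * 1024        # 64k block size used in the fio runs
    IMAGE_SIZE  = 4 * 1024 ** 3    # 4 GiB test file/image
    QD          = 8                # in-flight requests

    # Eight sequential 64k reads starting at offset 0 all fall inside object 0,
    # so the whole queue depth is concentrated on a single object (one PG and
    # one primary OSD).
    seq_offsets = [i * BLOCK_SIZE for i in range(QD)]
    print({off // OBJECT_SIZE for off in seq_offsets})    # -> {0}

    # Eight random 64k reads almost always land in different objects, so the
    # same queue depth fans out across many PGs / OSDs in parallel.
    rand_offsets = [random.randrange(0, IMAGE_SIZE, BLOCK_SIZE) for _ in range(QD)]
    print({off // OBJECT_SIZE for off in rand_offsets})   # -> typically 8 distinct objects

In the QD=8 sequential case the whole in-flight window sits in a single
object at any given moment, while the random case usually spreads the same
eight requests across eight different objects, which is where the extra
parallelism comes from.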

> Tri Hoang
> Inside VM
> ========
> At QD=8, randread is around 2.8X read @ 64k
> ---------------------------------------------------------------
>
> tri@ansible:~$ fio -name=read -ioengine=libaio -iodepth=8 -direct=1 -sync=1 -rw=randread -bs=64k -size=4G -runtime=120 --filename=test.fio
> read: (g=0): rw=randread, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=8
> fio-3.12
> Starting 1 process
> Jobs: 1 (f=1): [r(1)][100.0%][r=762MiB/s][r=12.2k IOPS][eta 00m:00s]
> read: (groupid=0, jobs=1): err= 0: pid=1551: Wed Sep 30 08:19:56 2020
>  read: IOPS=11.9k, BW=743MiB/s (779MB/s)(4096MiB/5515msec)
>
> tri@ansible:~$ fio -name=read -ioengine=libaio -iodepth=8 -direct=1 -sync=1 -rw=read -bs=64k -size=4G -runtime=120 --filename=test.fio
> read: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=8
> fio-3.12
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=268MiB/s][r=4289 IOPS][eta 00m:00s]
> read: (groupid=0, jobs=1): err= 0: pid=1554: Wed Sep 30 08:22:03 2020
>  read: IOPS=4374, BW=273MiB/s (287MB/s)(4096MiB/14981msec)
>
> At QD=128, randread is around 1.4X read
> ---------------------------------------------------------
> tri@ansible:~$ fio -name=read -ioengine=libaio -iodepth=128 -direct=1 -sync=1 -rw=randread -bs=64k -size=4G -runtime=120 --filename=test.fio
> read: (g=0): rw=randread, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=128
> fio-3.12
> Starting 1 process
> Jobs: 1 (f=1)
> read: (groupid=0, jobs=1): err= 0: pid=1548: Wed Sep 30 08:18:59 2020
>  read: IOPS=23.1k, BW=1441MiB/s (1511MB/s)(4096MiB/2843msec)
> tri@ansible:~$ fio -name=read -ioengine=libaio -iodepth=128 -direct=1 -sync=1 -rw=read -bs=64k -size=4G -runtime=120 --filename=test.fio
> read: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=128
> fio-3.12
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=974MiB/s][r=15.6k IOPS][eta 00m:00s]
> read: (groupid=0, jobs=1): err= 0: pid=1545: Wed Sep 30 08:17:38 2020
>  read: IOPS=15.9k, BW=997MiB/s (1045MB/s)(4096MiB/4110msec)


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


