Re: librbd 4k read/write?

> I have the following scenario:
> An RBD pool with 3x replication
> 5 hosts with 12 SAS spinning disks each
> 
> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> -iodepth=16 -rw=write -filename=./test.img
> 
> If I increase the blocksize I can easily reach 1.5 GBps or more.
> 
> But when I use a 4K blocksize I get a measly 12 megabytes per second,
> which is quite annoying. I get the same rate with rw=read.
> 
> If I use librbd's cache I get a considerable improvement in writing, but
> reading remains the same.
> 
> I already tested with rbd_read_from_replica_policy=balance but I didn't
> notice any difference. I tried to leave readahead enabled by setting
> rbd_readahead_disable_after_bytes=0 but I didn't see any difference in
> sequential reading either.
> 
> Note: I tested on another, smaller cluster with 36 SAS disks and got the
> same result.
> 
> I don't know exactly what to look for or configure to have any improvement.

What are you expecting?

This is what I get on a VM with an RBD image from an HDD pool:


<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
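
That cache='writeback' attribute is what enables the librbd writeback cache for the guest disk. For reference, the full <disk> definition looks roughly like this (pool/image name, monitor hosts and the auth secret below are placeholders for whatever your setup uses):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <auth username='libvirt'>
    <secret type='ceph' usage='client.libvirt secret'/>
  </auth>
  <source protocol='rbd' name='rbd-hdd/vm-disk'>
    <host name='mon1' port='6789'/>
    <host name='mon2' port='6789'/>
    <host name='mon3' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>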

[@~]# fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -size=1G -iodepth=16 -rw=write -filename=./test.img
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=57.5MiB/s][r=0,w=14.7k IOPS][eta 00m:00s]


[@~]# fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -size=1G -iodepth=1 -rw=write -filename=./test.img
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=19.9MiB/s][r=0,w=5090 IOPS][eta 00m:00s]
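
To put the original numbers in perspective (rough back-of-the-envelope, not a measurement):

12 MB/s at 4K    ->  12,000,000 / 4096  ≈  3,000 IOPS
57.5 MiB/s at 4K ->  57.5 * 1024 / 4    ≈ 14,700 IOPS

With 3x replication every write has to land on three spinning disks, and a single SAS HDD only manages on the order of 100-200 random IOPS, so a few thousand 4K IOPS from 60 HDDs is not far from what the hardware can do. Large blocks look fast simply because each of those IOPS carries ~1000x more data.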
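
If you still want to experiment with the read-side options you mentioned, they are client-side librbd settings, so they need to be visible to the client that opens the image (ceph.conf on the hypervisor, or set per image). A minimal sketch, assuming the client reads /etc/ceph/ceph.conf:

[client]
rbd cache = true
rbd cache policy = writeback
rbd read from replica policy = balance
rbd readahead disable after bytes = 0

Don't expect much from either for small random reads though: balance only spreads reads across replicas (each read still pays one HDD seek), and readahead only helps genuinely sequential streams.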
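
If you want to take the VM, the filesystem and qemu out of the picture, fio can also drive librbd directly (assuming fio was built with rbd support and that an image called fio-test exists in the rbd pool; both names are placeholders):

fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=fio-test -bs=4k -iodepth=16 -rw=randwrite -name=test

If that gives you the same numbers, the limit is the OSDs/HDDs themselves and not anything inside the guest.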


