Re: Very low 4k randread performance ~1000iops

I created a fio job file with the following parameters:


[random-read]
rw=randread
size=128m
directory=/root/asd
ioengine=libaio
bs=4k
#numjobs=8
iodepth=64
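
For reference, a job file like this is typically run by pointing fio at it (a sketch; the file name random-read.fio is a placeholder). Note that without direct=1 fio does buffered I/O, so reads can be served from the client page cache rather than the disks:

# save the job file as random-read.fio, then run:
fio random-read.fio

# equivalent one-liner, with direct I/O to bypass the page cache:
fio --name=random-read --rw=randread --size=128m --directory=/root/asd \
    --ioengine=libaio --bs=4k --iodepth=64 --direct=1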


Br, T
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Mark Nelson
Sent: 30 June 2015 20:55
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Very low 4k randread performance ~1000iops

Hi Tuomos,

Can you paste the command you ran to do the test?

Thanks,
Mark

On 06/30/2015 12:18 PM, Tuomas Juntunen wrote:
> Hi
>
> It's probably not hitting the disks, but that really doesn't matter.
> The point is that we have very responsive VMs while writing, and that
> is what the users will see.
>
> The iops we get with sequential reads are good, but random reads are
> way too low.
>
> Is using SSDs as OSDs the only way to get it up, or is there some
> tunable that would enhance it? I would assume Linux caches reads in
> memory and serves them from there, but at least for now we don't see it.
>
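> One way to test that caching assumption (a sketch; run on the client
> between fio runs): drop the page cache and compare cold vs. warm
> results; if both give the same iops, the reads are not being served
> from memory.
>
> sync; echo 3 > /proc/sys/vm/drop_caches
>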
> Br,
>
> Tuomas
>
> *From:* Somnath Roy [mailto:Somnath.Roy@xxxxxxxxxxx]
> *Sent:* 30 June 2015 19:24
> *To:* Tuomas Juntunen; 'ceph-users'
> *Subject:* RE:  Very low 4k randread performance ~1000iops
>
> Break it down; try fio-rbd to see what performance you are getting.
>
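> A minimal job file for fio's rbd engine might look like the following
> (a sketch; the pool, image, and client names are placeholders, and fio
> must be built with rbd support):
>
> [rbd-randread]
> ioengine=rbd
> clientname=admin
> pool=rbd
> rbdname=fio_test
> rw=randread
> bs=4k
> iodepth=64
>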
> But I am really surprised you are getting >100k iops for writes; did
> you check whether it is hitting the disks?
>
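> One way to check is to watch the OSD data disks during the test, e.g.
> with iostat from the sysstat package; if r/s, w/s, and %util stay near
> zero on those disks, the I/O is being absorbed by caches instead:
>
> iostat -x 1
>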
> Thanks & Regards
>
> Somnath
>
> *From:* ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] *On
> Behalf Of* Tuomas Juntunen
> *Sent:* Tuesday, June 30, 2015 8:33 AM
> *To:* 'ceph-users'
> *Subject:*  Very low 4k randread performance ~1000iops
>
> Hi
>
> I have been trying to figure out why our 4k random reads in VMs are
> so bad. I am using fio to test this.
>
> Write: 170k iops
>
> Random write: 109k iops
>
> Read: 64k iops
>
> Random read: 1k iops
>
> Our setup is:
>
> 3 nodes with 36 OSDs and 18 SSDs (one SSD for every two OSDs); each
> node has 64 GB of memory and 2x 6-core CPUs
>
> 4 monitors running on other servers
>
> 40 Gbit InfiniBand with IPoIB
>
> OpenStack: QEMU/KVM for the virtual machines
>
> Any help would be appreciated.
>
> Thank you in advance.
>
> Br,
>
> Tuomas
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
