Re: slow fio random read benchmark, need help


 



On Wed, 31 Oct 2012, Alexandre DERUMIER wrote:
> Hello,
> 
> I'm doing some tests with fio from a qemu 1.2 guest (virtio disk, cache=none): random reads with a 4K block size over a small 1G area (so it can be handled entirely by the buffer cache on the ceph cluster)
> 
> 
> fio --filename=/dev/vdb --rw=randread --bs=4K --size=1000M --iodepth=40  --group_reporting --name=file1 --ioengine=libaio --direct=1
> 
> 
> I can't get more than 5000 iops.

Have you tried increasing the iodepth?

sage
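As a sketch of what is being suggested (the values below are illustrative, not a recommendation from the thread): a deeper queue, optionally combined with several parallel jobs, keeps more 4K requests in flight at once, which is what hides per-request network latency on a distributed store like RBD:

```shell
# Hypothetical variation of the original benchmark command.
# --iodepth=128 and --numjobs=4 are example values to increase
# outstanding I/O; tune them for your own cluster.
fio --filename=/dev/vdb --rw=randread --bs=4K --size=1000M \
    --iodepth=128 --numjobs=4 --group_reporting --name=file1 \
    --ioengine=libaio --direct=1
```

With --group_reporting, fio aggregates the per-job results, so the reported iops can be compared directly against the single-job run.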

> 
> 
> RBD cluster is :
> ---------------
> 3 nodes, with each node:
> - 6 x osd 15k drives (xfs), journal on tmpfs, 1 mon
> - cpu: 2 x 4-core intel xeon E5420 @ 2.5GHz
> - rbd 0.53
> 
> ceph.conf
> 
>         journal dio = false
>         filestore fiemap = false
>         filestore flusher = false
>         osd op threads = 24
>         osd disk threads = 24
>         filestore op threads = 6
> 
> kvm host is: 4 x 12-core opteron
> ------------
> 
> 
> During the bench:
> 
> on ceph nodes:
> - cpu is around 10% used
> - iostat shows no disk activity on the osds (so I think the 1G file is handled in the linux buffer cache)
> 
> 
> on kvm host:
> 
> - cpu is around 20% used
> 
> 
> I really don't see where the bottleneck is...
> 
> Any ideas or hints?
> 
> 
> Regards,
> 
> Alexandre
> --
> 
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

