Slow RBD performance bs=4k

I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Client1
All nodes are Cisco UCS servers running RHEL 6.5 + kernel 3.18.0 + Ceph 0.88.

A "dd bs=4k oflag=direct" test directly on a OSD disk shows me:
Node1 = 60MB/s
Node2 = 30MB/s
Node2 = 30MB/s
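
For reference, the raw-disk test was roughly the following; the target path and count are placeholders, not my exact values:

    # 4k direct-I/O write test against a file on an OSD disk's filesystem
    # (path and count are placeholders, adjusted per node/disk)
    dd if=/dev/zero of=/mnt/osd-disk/ddtest bs=4k count=100000 oflag=direct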

I've created 2 pools, each with size=1 and pg_num=1024.
I've created an RBD image and formatted it with ext4 (4k block size); I also tried xfs. The commands were roughly as sketched below the results.
A "dd bs=4k oflag=direct"  test on that image shows me   5 MB/s.
A "dd bs=4M oflag=direct"  test on that image shows me 150 MB/s.
A "dd bs=32M oflag=direct" test on that image shows me 260 MB/s.
A "rados bench write"      test on that pool  shows me 560 MB/s.

What am I doing wrong?
Why is a 4 KB block size write so slow?

Thanks for any help...


Samuel Terburg







