Hello,

There have been many, many threads about this. Google is your friend, as is keeping an eye on the threads in this ML.

On Mon, 15 Dec 2014 05:44:24 +0100 ceph.com@xxxxxxxxxxxxx wrote:

> I have the following setup:
> Node1 = 8 x SSD
> Node2 = 6 x SATA
> Node3 = 6 x SATA
> Client1
> All Cisco UCS running RHEL6.5 + kernel 3.18.0 + ceph 0.88.
>
> A "dd bs=4k oflag=direct" test directly on an OSD disk shows me:
> Node1 = 60MB/s
> Node2 = 30MB/s
> Node3 = 30MB/s
>
> I've created 2 pools, each size=1, pg_num=1024.
> I've created an rbd image and formatted it ext4 (bs=4k), but also xfs.

You're not telling us how you mounted it, but since you're not mentioning VMs anywhere, let's assume the kernel RBD client.

> A "dd bs=4k oflag=direct" test on that image shows me 5 MB/s.

Looking at the CPU utilization (and other things) of your storage nodes during that test with atop or similar should be educational. Or maybe not, as you're missing one major item (aside from the less-than-stellar kernel-space performance); see below.

> A "dd bs=4M oflag=direct" test on that image shows me 150 MB/s.

This is the same block size as "rados bench", but...

> A "dd bs=32M oflag=direct" test on that image shows me 260 MB/s.
> A "rados bench write" test on that pool shows me 560 MB/s.
>
> What am I doing wrong?

"rados bench" uses a default object size of 4MB (which is optimal for the default Ceph settings) _AND_ 16 concurrent threads. Ceph excels at parallel workloads; single threads will suck in comparison (as they tend to keep hitting the same target OSDs for the time it takes to write each 4MB object).

> Why is a 4kb block size write so slow?
>

See above. With 4KB blocks, direct I/O and a single thread, each write has to complete its full round trip (client, network, OSD commit) before the next one is issued, so you're measuring latency rather than bandwidth. And once you use a larger number of threads with 4KB blocks, your CPUs will melt. Try "rados -p poolname bench 30 write -t 64 -b 4096" for some fireworks.

Regards,

Christian

> Thanks for any help...
>
>
> Samuel Terburg
>

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
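
P.S.: As a rough illustration of the parallelism point above (fio itself, the target file /mnt/rbd/fio-test and its size are my assumptions, not something from the thread), a run like the following keeps 64 4KB writes in flight against the mounted RBD image, roughly matching the "-t 64 -b 4096" rados bench suggestion:

    # 4 jobs x queue depth 16 = 64 concurrent 4KB direct writes
    fio --name=rbd-4k-parallel \
        --filename=/mnt/rbd/fio-test --size=1G \
        --rw=randwrite --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=16 --numjobs=4 \
        --runtime=30 --time_based --group_reporting

A single dd with oflag=direct is effectively the iodepth=1, numjobs=1 case of the same test, which is why its 4KB result looks so poor next to the rados bench numbers.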