rbd bench-write vs dd performance confusion

Hi all,

I'm trying to get comfortable with managing and benchmarking Ceph clusters, and I'm struggling to understand rbd bench-write results versus using dd against mounted rbd images.

I have a 6-node test cluster running version 0.94.5, 2 nodes per rack, 20 OSDs per node. Write journals are on the same disks as their OSDs. My rbd pool is set for 3 replicas, with 2 on different hosts in a given rack and the 3rd on some host in a different rack.


I created a 100 GB test image with a 4 MB object size, created a VM client, and mapped the image at /dev/rbd1.

In a shell on one of my 6 storage nodes I have 'iostat 2' running.

Now my confusion. If I run this on the client:

'sudo dd if=/dev/zero of=/dev/rbd1 bs=4M count=1000 iflag=fullblock oflag=direct'

I see '4194304000 bytes (4.2 GB) copied, 18.5798 s, 226 MB/s', and the iostat on the storage node shows almost all 20 disks sustaining 4-16 MB/s writes.

However, if I run:

'rbd --cluster <clustername> bench-write test-4m-image --io-size 4000000 --io-threads 1 --io-total 40000000000 --io-pattern rand'

I see 'elapsed:    12  ops:    10000  ops/sec:   805.86  bytes/sec: 3223441447.72', but iostat shows the disks basically all at 0.00 kB_wrtn/s for the duration of the run.

So that's bench-write reporting 3.2 GB/s while iostat shows *nothing* happening, whereas dd writes at 226 MB/s and iostat lights up. Am I misunderstanding what rbd bench-write is supposed to do?
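One thing I plan to try next, in case it's relevant: since dd against /dev/rbd1 goes through the kernel rbd client (which has no librbd cache), while bench-write runs through librbd, I wonder whether the librbd write-back cache is absorbing the benchmark I/O before it ever reaches the OSDs. A minimal ceph.conf fragment to rule that out for the next run, assuming the default [client] section applies to the bench-write process:

```ini
# Hypothetical check: disable the librbd write-back cache so that
# bench-write I/O has to hit the OSDs instead of being absorbed
# client-side. (The dd path via the kernel client never uses this cache.)
[client]
    rbd cache = false
```

I believe the same override can also be passed inline on the rbd command line (something like --rbd-cache=false), but I haven't verified that on 0.94.5.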

Thanks,
-Emile
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


