Re: dd testing from within the VM

I'm a lurker here and don't know much about ceph, but:

If fdatasync hardly makes a difference, then either it's not being honoured (which would be a major problem), or something else in your test is the bottleneck (more likely).
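As a rough cross-check (just a sketch, not something I've run against your setup; oflag=direct and oflag=dsync are standard GNU dd flags), you could compare:

dd if=/dev/zero of=test.file bs=4M count=1024 oflag=direct   # O_DIRECT, bypasses the page cache
dd if=/dev/zero of=test.file bs=4M count=1024 oflag=dsync    # O_DSYNC, syncs every block

If the dsync run collapses while the direct run stays fast, that would suggest syncing is being honoured and the earlier numbers were mostly measuring the page cache.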

It's not uncommon for a poor choice of block size (bs) to have a big effect on dd's speed. Try something much bigger. What is the record size written to your Ceph cluster? According to http://docs.ceph.com/docs/master/man/8/rbd/ the default object size is 4 MB. On a normal RAID you'd often see a record size of something like 64 KB or 128 KB.
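You can check what an existing image actually uses with rbd info, run from a host with client access to the cluster (the pool and image names below are just placeholders):

rbd info rbd/vm-disk   # the "order" line shows the object size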

A too-big bs usually isn't a problem as long as it's a multiple of the record size. You could try again with bs=4M, or even bigger, so something like:

dd if=/dev/zero of=test.file bs=4M count=1024
dd if=/dev/zero of=test.file bs=4M count=1024 conv=fdatasync

to see if this affects your performance. You might also want to write more than 4 GB to make sure the data gets spread out, though that may not change the result for a single sequential write. You could also try running several dd processes in parallel (see the sketch below) to see if the total throughput is higher.
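Something along these lines, just as a rough sketch (four writers to separate files, each with fdatasync; add up the reported rates by hand):

for i in 1 2 3 4; do
  dd if=/dev/zero of=test.file.$i bs=4M count=256 conv=fdatasync &   # 1 GB per writer, in the background
done
wait   # wait for all four dd processes to finish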

Proper benchmarking can be difficult, with dd or anything else. Consider a dedicated benchmarking tool like bonnie++ instead, though I'm not sure it does concurrent writes.
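For what it's worth, fio does handle concurrent jobs; a sequential-write test with four parallel writers might look like the sketch below (fio isn't mentioned above, and the job parameters here are just assumptions you'd tune for your setup):

fio --name=seqwrite --rw=write --bs=4M --size=4G \
    --numjobs=4 --ioengine=libaio --iodepth=8 --direct=1 \
    --group_reporting   # aggregate the four jobs into one result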

Cheers, Ketil

On 19 May 2016 at 04:40, Ken Peng <pyh@xxxxxxxxxxxxxxx> wrote:
Hi,

Our VM has been using Ceph as block storage for both the system and data partitions.

This is what dd shows,

# dd if=/dev/zero of=test.file bs=4k count=1024k
1048576+0 records in
1048576+0 records out
4294967296 bytes (4.3 GB) copied, 16.7969 s, 256 MB/s

When running dd again with conv=fdatasync, the result is similar.

# dd if=/dev/zero of=test.file bs=4k count=1024k conv=fdatasync
1048576+0 records in
1048576+0 records out
4294967296 bytes (4.3 GB) copied, 17.6878 s, 243 MB/s


My questions include,

1. For a cluster with more than 200 disks as OSD storage (SATA only), and both the cluster and data networks at 10 Gbps, is the performance from within the VM shown above reasonable?

2. Is "dd" suitable for testing block storage from within the VM?

3. Why does "fdatasync" make no difference in the test?

Thank you.



--
-Ketil
