Re: Ceph performance improvement


 



On 22/08/12 22:24, David McBride wrote:
On 22/08/12 09:54, Denis Fondras wrote:

* Test with "dd" from the client using CephFS :
# dd if=/dev/zero of=testdd bs=4k count=4M
17179869184 bytes (17 GB) written, 338.29 s, 50.8 MB/s

Again, the synchronous nature of 'dd' is probably severely affecting apparent performance. I'd suggest looking at some other tools, like fio, bonnie++, or iozone, which might generate a more representative load.

(Or, if you have a specific use-case in mind, something that generates an IO pattern like what you'll be using in production would be ideal!)
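As a sketch of the fio route, a minimal job file approximating the sequential dd write above might look like this (the directory, size, and job name are my own assumptions, not from the thread):

```ini
; sequential 4k buffered write, fsync once at the end
; directory/size are placeholders - point at your CephFS mount
[global]
directory=/mnt/cephfs
bs=4k
size=1g
direct=0

[seq-write]
rw=write
end_fsync=1
```

Run with `fio seq-write.fio`; fio also reports latency percentiles, which dd cannot give you.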



Appending conv=fsync to the dd invocation will make the comparison fair enough. Looking at the Ceph code, it does


sync_file_range(fd, offset, blocksz, SYNC_FILE_RANGE_WRITE);

which is very fast - way faster than fdatasync() and friends (I have tested this; see my previous posting on random write performance, with the file writetest.c attached).

I am not convinced that these sorts of tests are in any way 'unfair'. For instance, I would like to use rbd for postgres or mysql data volumes, and many database actions involve a stream of block writes similar enough to doing dd (e.g. bulk row loads, appends to transaction log journals).
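For completeness, the fsync-fair variant of the dd test quoted at the top would be (the output path and the smaller count here are my own choices, not from the thread):

```shell
# Same 4k sequential write, but fsync before dd reports its rate,
# so the number includes flushing to stable storage.
# 1024 blocks * 4 KiB = 4 MiB (scaled down from the original 16 GiB run).
dd if=/dev/zero of=/tmp/testdd bs=4k count=1024 conv=fsync
```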

Cheers

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

