Re: how to debug slow rbd block device

On 05/22/2012 03:30 PM, Stefan Priebe wrote:
> On 22.05.2012 21:52, Greg Farnum wrote:
>> On Tuesday, May 22, 2012 at 12:40 PM, Stefan Priebe wrote:
>> Huh. That's less than I would expect, especially since it ought to be going through the page cache.
>> What version of RBD is KVM using here?
> v0.47.1
>
>> Can you (from the KVM host) run
>> "rados -p data bench seq 60 -t 1"
>> "rados -p data bench seq 60 -t 16"
>> and paste the final output from both?
> OK, here it is, first with write and then with seq read.
>
> # rados -p data bench 60 write -t 1
> # rados -p data bench 60 write -t 16
> # rados -p data bench 60 seq -t 1
> # rados -p data bench 60 seq -t 16
>
> Output is here:
> http://pastebin.com/iFy8GS7i
>
> Thanks!
>
> Stefan

Hi Stefan,

Can you use something like iostat or collectl to check whether the write throughput to each SSD is roughly equal during your tests? Also, which filesystem are you using, and how did you format and mount it? I've been doing some tests internally with 2 nodes of 5 OSDs each, backed by SSDs for both data and journal, and I'm seeing about 600MB/s from the client (over 10GbE) on a fresh Ceph filesystem.
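
For example (just a sketch; adjust the interval and device list to your setup, and the "osd" path pattern is only a guess at where your OSD data directories are mounted), you could run something like this on each node while the benchmark is going:

# iostat -xm 2
# collectl -sD -i 2
# grep osd /proc/mounts

iostat -xm prints extended per-device stats (MB/s, queue size, utilization) every 2 seconds, collectl -sD gives similar per-disk detail, and /proc/mounts shows the filesystem type and mount options for each OSD data directory.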

Mark



