Re: 1MB/s throughput to 33-ssd test cluster

What SSDs are you using, and is there any under-provisioning on them?

On 2013-12-09 16:06, Greg Poirier wrote:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood
<mark.kirkwood@xxxxxxxxxxxxxxx> wrote:

I'd suggest testing the components separately - try to rule out NIC (and switch) issues and SSD performance issues; then, when you are sure the bits all go fast individually, test how ceph performs again.
What make and model of SSD? I'd check that the firmware is up to
date (sometimes makes a huge difference). I'm also wondering if you
might get better performance by having (say) 7 OSDs and using 4 of the
SSDs as journals for them.
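
(Just to sketch that journal split concretely: a minimal ceph.conf fragment, assuming hypothetical device paths and that the journal SSDs are pre-partitioned - the real paths depend on the box.)

    # 7 data OSDs, each pointing its journal at a partition on one of the
    # 4 journal SSDs (partition labels below are made up for illustration)
    [osd.0]
        osd journal = /dev/disk/by-partlabel/journal-ssd0-p1
    [osd.1]
        osd journal = /dev/disk/by-partlabel/journal-ssd0-p2
    # ... and so on for osd.2 through osd.6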
Thanks, Mark.

In my haste, I left out part of a paragraph... probably really a
whole paragraph... that contains a pretty crucial detail.

I had previously run rados bench on this hardware with some success
(24-26 MB/s throughput with 4 KB blocks).
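
(For reference, that earlier run was along these lines; the pool name and thread count below are placeholders, not the exact values used.)

    # 60-second 4 KB write benchmark; -b is block size in bytes, -t concurrent ops
    rados bench -p testpool 60 write -b 4096 -t 16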

ceph osd bench looks great.
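
(By which I mean the per-OSD write benchmark, something like the following; the OSD id is just an example, and depending on the Ceph version the results either come back directly or show up in the cluster log via ceph -w.)

    # ask a single OSD to run its built-in write benchmark
    ceph tell osd.0 bench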

iperf on the network looks great.
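
(A plain point-to-point check between two of the OSD hosts, roughly like the following; the hostname and stream count are placeholders.)

    # on the receiving host
    iperf -s
    # on the sending host, 4 parallel TCP streams
    iperf -c osd-host-1 -P 4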

After my last round of testing (with a few aborted rados bench
tests), I deleted the pool and recreated it (same name, CRUSH ruleset,
pg num, size, etc.). That is when I started to notice the degraded
performance.
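
(Roughly the usual sequence, shown here with a placeholder pool name, pg counts, size, and ruleset id rather than the exact values I used.)

    ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
    ceph osd pool create testpool 4096 4096
    ceph osd pool set testpool size 3
    ceph osd pool set testpool crush_ruleset 0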


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




