Re: 1MB/s throughput to 33-ssd test cluster


 



On 12/09/2013 10:06 AM, Greg Poirier wrote:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:

    I'd suggest testing the components separately - try to rule out NIC
    (and switch) issues and SSD performance issues, then when you are
    sure the bits all go fast individually test how ceph performs again.

    What make and model of SSD? I'd check that the firmware is up to
    date (sometimes makes a huge difference). I'm also wondering if you
    might get better performance by having (say) 7 osds and using 4 of
    the SSDs as journals for them.


Thanks, Mark.

In my haste, I left out part of a paragraph... probably really a whole
paragraph... that contains a pretty crucial detail.

I had previously run rados bench on this hardware with some success
(24-26 MB/s throughput with 4k blocks).

ceph osd bench looks great.

iperf on the network looks great.
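For reference, the three component checks above could be run like this (the pool name `testpool` and the host name `osd-host-1` are placeholders for your own):

```shell
# RADOS-level write benchmark: 4 KiB objects, 32 concurrent ops, 60 seconds
rados bench -p testpool 60 write -b 4096 -t 32

# Per-OSD backing-store benchmark (writes 1 GiB by default)
ceph tell osd.0 bench

# Raw network throughput between two cluster nodes:
iperf -s              # on the server node
iperf -c osd-host-1   # on the client node
```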

After my last round of testing (with a few aborted rados bench tests), I
deleted the pool and recreated it (same name, crush ruleset, pg num,
size, etc). That is when I started to notice the degraded performance.

Definitely sounds like something is mucked up! With 32 concurrent threads you aren't going to be saturating 33 SSDs, but you should be doing far better than 1MB/s. Basically you should expect to see something like 30-80MB/s of throughput (maybe higher with reads), all of the CPU cores consumed, and CPU being the limiting factor (at least for now; this is an area we are actively working on). Completely disabling logging usually helps, but it sounds like you've got something else going on for sure.
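One way to disable the noisiest logging at runtime, as a quick sketch (the same settings can go in the `[global]` section of ceph.conf as e.g. `debug osd = 0/0`):

```shell
# Turn off the most verbose debug subsystems on all OSDs at runtime
ceph tell osd.* injectargs '--debug-osd 0/0 --debug-ms 0/0 --debug-filestore 0/0 --debug-journal 0/0'
```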

Certainly fixing the clock skew mentioned in your original email wouldn't hurt. Also, with 33 SSDs I'd try to shoot for something like 4096 or maybe 8192 PGs. I'd suggest testing a pool with no replication to start out.
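Putting those suggestions together, something like the following (pool name `benchpool` is a placeholder; `size 1` means no replication, for testing only):

```shell
# Check whether the monitors still report clock skew, and verify NTP sync
ceph health detail
ntpq -p

# Unreplicated test pool with 4096 PGs
ceph osd pool create benchpool 4096 4096
ceph osd pool set benchpool size 1
rados bench -p benchpool 60 write -b 4096 -t 32
```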




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





