Re: SSD journal suggestion


On 11/07/2012 10:35 AM, Atchley, Scott wrote:
On Nov 7, 2012, at 11:20 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:

Right now I'm doing 3 journals per SSD, but topping out at about
1.2-1.4 GB/s from the client perspective for the node with 15+ drives and
5 SSDs.  Newer versions of the code and further tuning may increase that.
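(For readers unfamiliar with the layout: a "3 journals per SSD" setup like the one described might look roughly like the ceph.conf fragment below. This is a hypothetical sketch, not Mark's actual config; the device name /dev/sdf and the OSD numbering are assumptions.)

; hypothetical ceph.conf fragment: three OSD journals sharing one SSD,
; one journal per partition (sdf1-sdf3 are assumed partition names)
[osd.0]
    osd journal = /dev/sdf1
[osd.1]
    osd journal = /dev/sdf2
[osd.2]
    osd journal = /dev/sdf3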

What interconnect is this? 10G Ethernet is 1.25 GB/s line rate and I would expect your Sockets and Ceph overhead to eat into that. Or is it dual 10G Ethernet?
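(The line-rate arithmetic behind that question, spelled out as a quick sketch; the observed numbers are from the thread, everything else is just unit conversion:)

```python
# 10 Gb/s Ethernet line rate converted to GB/s: 10e9 bits / 8 bits-per-byte
line_rate_gBps = 10e9 / 8 / 1e9
print(line_rate_gBps)  # 1.25

# Observed aggregate from the thread: 1.2-1.4 GB/s.  The upper end already
# exceeds a single 10GbE link's raw line rate, before any Sockets/Ceph
# overhead is subtracted -- hence the dual-10GbE question.
observed_low, observed_high = 1.2, 1.4
print(observed_high > line_rate_gBps)  # True
```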

This is 8 concurrent instances of rados bench running on localhost.
Ceph is configured with 1x replication.  1.2-1.4GB/s is the aggregate
throughput of all of the rados bench instances.

Ok, all local with no communication. Given this level of local performance, what does that translate into when talking over the network?

Scott


Well, local, but still over TCP. Right now I'm focusing on pushing the OSDs/filestores as far as I can; after that I'm going to set up a bonded 10GbE network to see what kind of messenger bottlenecks I run into. Sadly, the testing is going slower than I would like.
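(For context, a bonded-10GbE setup of that era on Linux might be configured roughly as below. This is an illustrative sketch only: the interface names, addresses, and the choice of 802.3ad/LACP mode are assumptions, and LACP requires switch support.)

# /etc/modprobe.d/bonding.conf (hypothetical)
options bonding mode=802.3ad miimon=100

# /etc/network/interfaces (Debian-style, hypothetical; needs ifenslave)
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100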

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

