Re: Hardware Sizing

On 05/20/2013 08:01 AM, Bjorn Mork wrote:
Hi Team,

This is my first post to this community.

I have some basic queries as I get started with the Ceph software. I found that
http://www.supermicro.com.tw/products/system/2U/6027/SSG-6027R-E1R12T.cfm is
being recommended as a starting storage server.

This is a reasonable server for a basic Ceph proof of concept using spinning disks with no SSD journals. Since it uses on-board Ethernet and RAID it should be relatively inexpensive, but if any of the on-board components fail, the whole motherboard has to be replaced. It's a good starting point, though.


My target is to start with a 12 TB solution (production environment,
high performance) keeping three copies of my data. I am confused about:

1.   How many servers will be required, i.e. OSD, MON, MDS (using the
above-mentioned chassis)?

For production you should have at least 3 MONs (an odd number, so the cluster can keep quorum if one fails). You only need an MDS if you plan to use CephFS. We tend to recommend 1 OSD per disk for most configurations.
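As a rough illustration of the replica math for the 12 TB target (the 4 TB disk size is an assumption for the sake of the example, not a figure from this thread):

```python
# Rough capacity sizing for a replicated Ceph pool.
usable_tb = 12                 # target usable capacity from the question
replicas = 3                   # three copies of the data
raw_tb_needed = usable_tb * replicas   # 36 TB of raw disk

# With hypothetical 4 TB spinners and 1 OSD per disk:
disk_tb = 4
disks_needed = -(-raw_tb_needed // disk_tb)   # ceiling division -> 9 disks

print(raw_tb_needed, disks_needed)
```

Nine disks fit easily in a single 12-bay chassis, but note the separate recommendation below to spread OSDs across several servers rather than filling one box.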


2.   Should I give each server a separate role, or will a single server
be good enough?

You want each MON on a different server, and for a production deployment I really don't like seeing fewer than 5 servers for OSDs. You can technically run a single MON and all of your OSDs on one server, but that's not really what Ceph was designed for.
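For three MONs on separate hosts, the relevant part of a 2013-era ceph.conf might look something like the sketch below (hostnames and addresses are placeholders, not from this thread):

```ini
[global]
        mon initial members = mon-a, mon-b, mon-c
        mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

[mon.mon-a]
        host = mon-a
        mon addr = 10.0.0.1:6789

; [mon.mon-b] and [mon.mon-c] follow the same pattern
; on their own hosts and addresses.
```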


3.     How many RAID cards will be required in each server?
3.1   I mean, can separate cards be configured for reads and writes? I
need the best performance and throughput.

There are a lot of different ways you can configure Ceph servers, with various trade-offs. A general rule of thumb: use at least 3-5 servers for OSDs (preferably more), and for high performance use SSD journals, or at the very least a controller with writeback cache, with 1 OSD per disk.
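If you do put journals on an SSD, each OSD's journal can be pointed at its own partition on that SSD. A sketch (device paths and hostnames are hypothetical):

```ini
[osd]
        osd journal size = 10240        ; 10 GB journal per OSD

[osd.0]
        host = osd-node-1
        osd journal = /dev/sda5         ; journal partition on the SSD
        devs = /dev/sdb                 ; data disk, 1 OSD per disk
```

A single SSD typically journals several spinners, but it also becomes a shared failure point for those OSDs, so don't put too many journals on one device.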

You may be interested in some of our performance comparison tests:

http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/

Mark


Can anyone advise? Thanks in advance...

B~Mork


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





