Re: New Ceph cluster design

Hi,

As I understand it, you'll have one RAID 1 of two SSDs serving as the WAL
for 12 HDDs, so all writes on the host go through it. Good SATA SSDs can
handle roughly 450-550 MB/s each. Your 12 SATA HDDs can handle 12 x 100
MB/s, that is to say 1200 MB/s, so the RAID 1 pair will be the bottleneck
with this design. A better design is one SSD for every 4 or 5 HDDs. In
your case, the best option would be to start with 3 SSDs for 12 HDDs to
have a balanced node. Don't forget to choose SSDs with a high DWPD
rating (>10).
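The sizing above is simple arithmetic; here is a small sketch of it,
assuming the same rough figures quoted in this thread (~100 MB/s per SATA
HDD, ~500 MB/s per SATA SSD — adjust for your actual hardware):

```python
# Back-of-envelope WAL SSD sizing (throughput figures are assumptions
# from this thread, not measured values).
HDD_MBPS = 100   # sequential throughput of one SATA HDD
SSD_MBPS = 500   # sequential throughput of one SATA SSD

def ssds_needed(num_hdds, hdd_mbps=HDD_MBPS, ssd_mbps=SSD_MBPS):
    """Smallest number of WAL SSDs whose combined throughput
    matches the aggregate throughput of the HDDs behind them."""
    hdd_total = num_hdds * hdd_mbps
    return -(-hdd_total // ssd_mbps)  # ceiling division

print(ssds_needed(12))  # 12 HDDs -> 1200 MB/s -> 3 SSDs
```

With 12 HDDs this gives 3 SSDs, i.e. one SSD per 4 HDDs, matching the
ratio suggested above.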

The network needs of your node depend on the bandwidth of your disks.
As explained above, your 12 HDDs can deliver 1200 MB/s, so you need a
public and a cluster network that can each handle it. In your case, a
minimum of two 10 Gbps networks per node is needed. If you need
redundancy, use two LACP bonds, each with two 10 Gbps links. Scrub and
deep-scrub operations will not have a significant impact on your
network, but they will on your disk utilization, so schedule them
during periods of low client usage.
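Converting the aggregate disk throughput into line rate shows why 10 Gbps
links are the minimum here (same assumed 100 MB/s per HDD as above;
1 byte = 8 bits):

```python
# Network sizing sketch: aggregate disk throughput -> required Gbps.
def required_gbps(num_hdds, hdd_mbps=100):
    """Line rate (Gbps) needed to carry the full aggregate HDD
    throughput of one node (hdd_mbps is an assumed per-disk figure)."""
    return num_hdds * hdd_mbps * 8 / 1000  # MB/s -> Mbps -> Gbps

print(required_gbps(12))  # 9.6 Gbps -> just fits one 10 Gbps link
```

At 9.6 Gbps per network, a single 10 Gbps link is nearly saturated, which
is why adding a second link per LACP bond also buys you headroom, not
just redundancy.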
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



