Re: Best layout for SSD & SAS OSDs

Hello,

On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote:

> Hi cephers,
> 
>    I have the following scheme:
> 
> 7x OSD servers with:
>
Is this a new cluster, i.e. a complete initial deployment?

What else are these nodes made of, CPU/RAM/network?
While uniform nodes have some appeal (interchangeability, and a single node
going down impacts the cluster uniformly), they tend to be compromise
solutions. I would personally go with separate, optimized HDD and SSD nodes.

>     4x 800GB SSD Intel DC S3510 (OSD-SSD)
Only 0.3 DWPD, about 450TB of total endurance over 5 years.
If you can correctly predict your write volume and it stays below that per
SSD, fine. I'd use 3610s, with internal journals.
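
As a back-of-the-envelope check, a small Python sketch; the drive specs
are the ones quoted above, and the daily write volume is a made-up
placeholder to substitute with your own:

# Rated write endurance of an 800GB Intel DC S3510 at 0.3 DWPD over
# its 5-year warranty, and a check against an assumed write volume.
capacity_gb = 800
dwpd = 0.3           # drive writes per day
years = 5

endurance_tb = capacity_gb * dwpd * 365 * years / 1000
print(f"rated endurance: ~{endurance_tb:.0f} TB")  # ~438 TB, the ~450TB above

daily_gb = 150       # hypothetical writes per SSD per day -- substitute yours
projected_tb = daily_gb * 365 * years / 1000
print(f"projected 5-year writes: ~{projected_tb:.0f} TB; "
      + ("fits the rating" if projected_tb <= endurance_tb
         else "exceeds the rating"))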

>     3x 120GB SSD Intel DC S3500 (Journals)
Here the S3500 is an even worse choice: 3x 135MB/s aggregate is nowhere
near your likely network speed of 10Gb/s.
 
You will get vastly superior performance and endurance with two 200GB
S3610s (2x 230MB/s) or S3700s (2x 365MB/s).
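
To put numbers on that, a quick sketch (Python; the per-drive sequential
write speeds are the figures quoted above, and the 10Gb/s link is the
assumed network):

# Aggregate journal write bandwidth per node vs. a 10Gb/s network link.
options = {
    "3x S3500 120GB": 3 * 135,  # MB/s per drive, as quoted above
    "2x S3610 200GB": 2 * 230,
    "2x S3700 200GB": 2 * 365,
}
line_rate_mbs = 10_000 / 8  # 10Gb/s in MB/s, ignoring protocol overhead

for name, mbs in options.items():
    print(f"{name}: {mbs} MB/s, {mbs / line_rate_mbs:.0%} of the 10Gb/s link")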

Why the uneven number of journal SSDs?
You want uniform utilization and wear; see the sketch below. Two journal
SSDs for 6 HDDs would be a good ratio.
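
A trivial illustration of the uniformity point (Python; the drive counts
are the ones from this thread):

# With an uneven HDD-to-journal-SSD ratio, some SSDs carry more journals
# and therefore see more writes and wear than the others.
def journals_per_ssd(hdds: int, ssds: int) -> None:
    per, extra = divmod(hdds, ssds)
    if extra == 0:
        print(f"{hdds} HDDs / {ssds} SSDs: {per} journals each -- uniform")
    else:
        print(f"{hdds} HDDs / {ssds} SSDs: {extra} SSD(s) get {per + 1} "
              f"journals, the rest {per} -- uneven wear")

journals_per_ssd(5, 3)  # the layout above: uneven
journals_per_ssd(6, 2)  # suggested: 3 journals per SSD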

>     5x 3TB SAS disks (OSD-SAS)
>
See above; even numbers make a lot more sense.

> 
> The OSD servers are located on two separate Racks with two power circuits
> each.
> 
>    I would like to know the best way to implement this: use the
> 4x 800GB SSDs as an SSD pool, use them as a cache pool, or something
> else? Also, any advice for the CRUSH design?
> 
Nick touched on that already; right now, SSD pools would definitely be
better.
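
For the CRUSH part of the question: on current (pre-device-class) releases,
separating the SSDs into their own pool usually means a dedicated CRUSH
root plus a rule that draws from it. A minimal sketch in decompiled CRUSH
map form, with all bucket names, ids, and weights being placeholders for
your actual map:

root ssd {
        id -10                        # placeholder bucket id
        alg straw
        hash 0                        # rjenkins1
        item node1-ssd weight 3.200   # one pseudo-host per node's SSD OSDs
        item node2-ssd weight 3.200
        # ...
}

rule ssd_pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

A pool pointed at that rule ("ceph osd pool set <pool> crush_ruleset 1")
then places its data only on the SSD OSDs, while the SAS disks stay under
the default root.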

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/