Re: Best layout for SSD & SAS OSDs

Hi Christian,

    OK, so you would say it's better to rearrange the nodes so I don't mix
the HDD and SSD disks, right? That is, create high-performance nodes with
SSDs and other nodes with HDDs; that's fine, since it's a new deployment.
   Also, the nodes have different CPU and RAM configurations: 4 have more
CPU and 384GB of memory, and the other 3 have less CPU and 128GB of RAM,
so maybe I can put the SSDs in the nodes with more CPU and leave the HDDs
for the other nodes. The network will be InfiniBand FDR at 56Gb/s on all
nodes, for both the public network and the cluster network.
   Any other suggestions/comments?

Thanks a lot!

Best regards

German


On Saturday, September 5, 2015, Christian Balzer <chibi@xxxxxxx> wrote:

Hello,

On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote:

> Hi cephers,
>
>    I have the following scheme:
>
> 7x OSD servers with:
>
Is this a new cluster, total initial deployment?

What else are these nodes made of, CPU/RAM/network?
While uniform nodes have some appeal (interchangeability, and one node
going down impacts the cluster uniformly), they tend to be compromise
solutions.
I personally would go with optimized HDD and SSD nodes.

>     4x 800GB SSD Intel DC S3510 (OSD-SSD)
Only 0.3 DWPD, about 450TB total over 5 years.
If you can correctly predict your write volume and it is below that per
SSD, fine. I'd use S3610s, with internal journals.
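
To put a rough number on that (quick Python, back-of-the-envelope, with
the 0.3 DWPD rating as the assumption):

# Endurance estimate for an 800GB SSD rated at ~0.3 drive writes/day.
capacity_gb = 800
dwpd = 0.3                # drive writes per day
years = 5
tb_written = capacity_gb * dwpd * 365 * years / 1000.0
print("~%.0f TB of writes over %d years" % (tb_written, years))
# -> ~438 TB, i.e. only about 240GB of writes per SSD per day.

Keep in mind that with the journal on the same SSD every client write is
written twice, so the budget for actual client data is roughly half of
that.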

>     3x 120GB SSD Intel DC S3500 (Journals)
In this case the S3500 is an even worse choice: 3x 135MB/s is nowhere
near your likely network speed of 10Gb/s.

You will get vastly superior performance and endurance with two 200GB
S3610s (2x 230MB/s) or S3700s (2x 365MB/s).

Why the odd number of journal SSDs?
You want uniform utilization and wear. Two journal SSDs for 6 HDDs would
be a good ratio.
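
To put those numbers side by side (quick Python, sequential write
figures as above, 10Gb/s taken as the reference link speed):

# Aggregate journal write bandwidth vs. a 10Gb/s network link.
network_mb_s = 10 * 1000 / 8.0          # roughly 1250 MB/s
options = {
    "3x S3500 120GB": 3 * 135,
    "2x S3610 200GB": 2 * 230,
    "2x S3700 200GB": 2 * 365,
}
for name, mb_s in sorted(options.items()):
    print("%-15s %4d MB/s aggregate (%2.0f%% of the link)"
          % (name, mb_s, 100.0 * mb_s / network_mb_s))

Since every write lands in a journal before it hits an OSD data disk,
the aggregate journal bandwidth caps the node's sustained write
throughput, and a ratio that divides evenly (3 HDDs per journal SSD)
keeps utilization and wear uniform across the journal devices.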

>     5x 3TB SAS disks (OSD-SAS)
>
See above; even numbers make a lot more sense.

>
> The OSD servers are located on two separate Racks with two power circuits
> each.
>
>    I would like to know what is the best way to implement this: use the
> 4x 800GB SSDs as an SSD pool, or use them as a cache pool? Or any other
> suggestion? Also, any advice for the CRUSH design?
>
Nick touched on that already; for right now, SSD pools would definitely
be better.
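
For the CRUSH side, the usual way to do that is separate roots (or at
least separate host buckets) for the SSD and SAS OSDs plus one ruleset
per root. A minimal sketch of a decompiled CRUSH map excerpt; bucket
names, ids and weights are purely illustrative and need to match your
actual hosts and OSDs:

root ssd {
        id -20
        alg straw
        hash 0  # rjenkins1
        item node1-ssd weight 3.200
        item node2-ssd weight 3.200
}

rule ssd-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

Add an equivalent "sas" root and ruleset for the HDD OSDs, then point
each pool at the right ruleset with "ceph osd pool set <pool>
crush_ruleset <n>".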

Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx        Global OnLine Japan/Fusion Communications
http://www.gol.com/


--

German Anders
Storage Engineer Manager
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail ganders@xxxxxxxxxxxx

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
