Re: How to think about an architecture with two different disk technologies

On Fri, Mar 24, 2017 at 10:04 AM Alejandro Comisario <alejandro@xxxxxxxxxxx> wrote:
thanks for the recommendations so far.
anyone with more experience or thoughts?

best

On the network side, 25, 40, 56 and perhaps soon 100 Gbps networking is now fairly affordable and can simplify the architecture for the high-throughput nodes.


On Mar 23, 2017 16:36, "Maxime Guyot" <Maxime.Guyot@xxxxxxxxx> wrote:
Hi Alejandro,

As I understand it, you are planning NVMe journals for the SATA HDDs and collocated journals for the SATA SSDs?

Option 1:
- With 24x SATA SSDs per server you will hit a bottleneck on the storage bus/controller. Also consider the network capacity: 24x SSDs will deliver more performance than 24x HDDs with journals, yet both types of nodes have the same network capacity.
- This option is a little easier to implement: just move the nodes into different CRUSH map roots.
- Failure of a server (assuming size = 3 and only 3 servers per pool) will impact all PGs of that pool.
Option 2:
- You may see a noisy-neighbour effect between HDDs and SSDs if the HDDs are able to saturate your NICs or storage controller, so be mindful of this in the hardware design.
- To configure the CRUSH map for this you need to split each server in two; I usually use “server1-hdd” and “server1-ssd” buckets and map each OSD into the right bucket. That is a little extra work, but you can easily handle it with a “crush location hook” script (see the example at http://www.root314.com/2017/01/15/Ceph-storage-tiers/ and the sketch after this list).
- In case of a server failure, recovery will be faster than with option 1 and will impact fewer PGs.
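
A minimal sketch of such a crush location hook (my own illustration, not the script from the link above): Ceph invokes the hook with --cluster, --id and --type when the OSD starts, and uses the printed key=value pairs as that OSD's CRUSH location. The device detection via /sys/block/*/queue/rotational, the BlueStore-style "block" symlink and the "-hdd"/"-ssd" bucket naming below are all assumptions to adapt to your own deployment:

    #!/usr/bin/env python
    # Hypothetical crush location hook: prints "host=<host>-ssd root=ssd" for
    # OSDs on non-rotational devices and "host=<host>-hdd root=hdd" otherwise.
    import argparse
    import os
    import socket

    def osd_device(osd_id):
        # Assumes the default data path and a "block" symlink to the data
        # device; FileStore or LVM-backed layouts would need different logic.
        path = "/var/lib/ceph/osd/ceph-%s/block" % osd_id
        return os.path.basename(os.path.realpath(path))

    def is_rotational(device):
        # Crude partition-name handling (sdb1 -> sdb, nvme0n1p1 -> nvme0n1).
        base = (device.split("p")[0] if device.startswith("nvme")
                else device.rstrip("0123456789"))
        with open("/sys/block/%s/queue/rotational" % base) as f:
            return f.read().strip() == "1"

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--cluster", default="ceph")
        parser.add_argument("--id", required=True)
        parser.add_argument("--type", default="osd")
        # Tolerate any extra arguments other Ceph versions might pass.
        args = parser.parse_known_args()[0]

        host = socket.gethostname().split(".")[0]
        suffix = "hdd" if is_rotational(osd_device(args.id)) else "ssd"
        # Ceph reads this line and uses it as the OSD's CRUSH location.
        print("host=%s-%s root=%s" % (host, suffix, suffix))

    if __name__ == "__main__":
        main()

Point the OSDs at the script with the "crush location hook" option in ceph.conf and each OSD will be placed in its per-type host bucket on start.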

Some general notes:
- SSD pools perform better with higher frequency CPUs
- The 1 GB of RAM per TB rule is a little outdated; the current consensus for HDD OSDs is around 2 GB per OSD (see https://www.redhat.com/cms/managed-files/st-rhcs-config-guide-technology-detail-inc0387897-201604-en.pdf)
- Network-wise, if the SSD OSDs are rated for 500 MB/s and use collocated journals, you could generate up to 250 MB/s of traffic per SSD OSD (24 Gbps for 12x or 48 Gbps for 24x), so I would consider doing 4x10G and consolidating both the client and cluster networks on it (see the quick calculation after this list).
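
As a quick sanity check of that arithmetic, here is my own back-of-the-envelope sketch, reusing the assumed 500 MB/s per-SSD rating and the write doubling caused by collocated journals:

    # Rough network sizing for SSD OSD nodes with collocated journals:
    # every client byte is written twice (journal + data), so the usable
    # network ingest per OSD is about half the device's write rating.
    SSD_WRITE_MB_S = 500.0                     # assumed per-SSD write rating
    NET_PER_OSD_MB_S = SSD_WRITE_MB_S / 2.0    # collocated journal halves it

    for osds_per_node in (12, 24):
        gbps = osds_per_node * NET_PER_OSD_MB_S * 8 / 1000.0
        print("%d SSD OSDs -> ~%.0f Gbps of potential client traffic"
              % (osds_per_node, gbps))
    # 12 OSDs -> ~24 Gbps, 24 OSDs -> ~48 Gbps, hence the 4x10G suggestion.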

Cheers,
Maxime

On 23/03/17 18:55, "ceph-users on behalf of Alejandro Comisario" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of alejandro@xxxxxxxxxxx> wrote:

    Hi everyone!
    I have to install a Ceph cluster (6 nodes) with two "flavors" of
    disks: 3 servers with SSDs and 3 servers with SATA HDDs.

    I will purchase 24-disk servers (the SATA ones with NVMe SSDs for
    the SATA journals).
    Processors will be 2 x E5-2620v4 with HT, and RAM will be 20GB for the
    OS plus 1.3GB of RAM per TB of storage.

    The servers will have 2 x 10Gb bonded for the public network and 2 x 10Gb
    for the cluster network.
    My doubt resides in, and I want to ask the community about, the experiences,
    pains and gains of choosing between:

    Option 1
    3 x servers just for SSD
    3 x servers just for SATA

    Option 2
    6 x servers with 12 SSD and 12 SATA each

    Regarding CRUSH map configuration and rules, everything is clear on how to
    make sure that the two pools (poolSSD and poolSATA) use the right disks.

    But what about performance, maintenance, architecture scalability, etc.?

    Thank you very much!

    --
    Alejandrito


--
Alex Gorbachev
Storcium
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
