Re: Motherboard recommendation?

Hi Mark,

Thanks for your reply and your CPU test report. It really helps us identify appropriate hardware for an EC-based Ceph cluster. Currently we are using the Intel Xeon 2630 v3 (16 cores * 2.4 GHz = 38.4 GHz) processor. I think you tested with the Intel Xeon 2630L v2 (12 cores * 2.4 GHz = 28.8 GHz) processor, so we have roughly 10 GHz of additional CPU capacity per system compared with your testing scenario. Do you think 38.4 GHz of CPU is not enough to hold 36 drives (10+3 EC cluster)? We are looking for a maximum of 200 MB/s parallel read and write performance from this cluster.
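
For reference, a minimal back-of-the-envelope Python sketch of the arithmetic above (an editorial illustration, not part of the original mail; the 1 GHz-per-OSD figure is the usual Ceph rule of thumb and the 20-30% EC overhead is only an assumption taken from Mark's suggestion quoted further down):

sockets, cores, clock_ghz = 2, 8, 2.4   # dual E5-2630 v3, 8 cores @ 2.4 GHz each
osds = 36                               # drives per node, 10+3 EC pool

total_ghz = sockets * cores * clock_ghz          # 38.4 GHz per node
print(f"total: {total_ghz:.1f} GHz, per OSD: {total_ghz / osds:.2f} GHz")

# The plain 1 GHz/OSD replication guideline needs 36 GHz, which just fits;
# if EC costs 20-30% more CPU, the node comes up short:
for overhead in (1.2, 1.3):
    print(f"EC at {overhead:.0%} of baseline: need {osds * overhead:.1f} GHz")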

 We are planning to increase the CPU speed as per your recommendation. Please advise.

Cheers
K.Mohamed Pakkeer

On Fri, Apr 10, 2015 at 7:19 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:


On 04/10/2015 02:56 AM, Mohamed Pakkeer wrote:
Hi Balzer,

Ceph recommends 1 GHz of CPU power per OSD. Is that applicable to both
replication and erasure coding based clusters, or will we require more
CPU power (more than 1 GHz per OSD) for erasure coding?

We are running a test cluster with 15 * 4U servers; each server
contains 36 OSDs, dual Intel 2630 v3 processors and 96 GB RAM. We are
seeing an average CPU load of 4 to 5% with the cluster idle. Is this
normal, or could you advise what average CPU load to expect at idle
for a good erasure coding cluster?


From what we've seen, EC can take quite a bit of CPU.  Assuming no other bottlenecks, it wouldn't be a bad idea to bump up your CPU speed by 20-30% if it doesn't increase cost significantly.  I've included some tests we ran on an SC847a with dual E5-2630L (2 GHz) processors, 30 OSDs, and 6 SSDs for journals.  As you can see, CPU usage was quite a bit higher for EC than for 3X replication.  The E5-2630L is a bit below spec for this configuration, so keep that in mind.
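
As an illustration of what that 20-30% bump works out to in aggregate clock terms (an editorial sketch, not from the original mail; only the 2 GHz clock, the 30 OSDs and the 20-30% figure come from the thread, and the 6-core count per E5-2630L is an assumption):

sockets, cores, clock_ghz = 2, 6, 2.0   # dual E5-2630L (assumed 6 cores each)
osds = 30

base_ghz = sockets * cores * clock_ghz           # 24 GHz total
print(f"baseline: {base_ghz:.1f} GHz -> {base_ghz / osds:.2f} GHz per OSD")

for bump in (0.20, 0.30):                        # the suggested 20-30% bump
    bumped = base_ghz * (1 + bump)
    print(f"+{bump:.0%}: {bumped:.1f} GHz -> {bumped / osds:.2f} GHz per OSD")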

Mark


ceph version 0.87.1
Cluster: erasure coding and CephFS



Thanks in advance

Cheers,
K.Mohamed Pakkeer


On Fri, Apr 10, 2015 at 12:20 PM, Christian Balzer <chibi@xxxxxxx> wrote:


    Hello,

    On Thu, 09 Apr 2015 10:00:37 +0200 Markus Goldberg wrote:

    > Hi,
    > I have a backup storage with Ceph 0.93
    Living on the edge...

    > Like every backup system, it is only ever written to and hopefully never read.
    >
    What and how are you backing up?
    As in, lots of small files copied like with rsync or a stream into a big
    archive file like with bacula?
    Is the Ceph target an RBD image or CephFS?

    > The hardware is 3 Supermicro SC847 cases with 30 SATA HDDs each
    > (2 TB and 4 TB WD disks) = 250 TB
    Uneven disk sizes can make for fun (not) later on.
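
    (Editorial aside, not Christian's text: a quick sketch translating that
    raw figure into usable space under the protection schemes discussed
    elsewhere in this thread; it ignores failure-domain placement
    constraints.)

    raw_tb = 250                      # 3 nodes x 30 mixed 2/4 TB disks

    usable_3x = raw_tb / 3            # ~83 TB with 3x replication
    k, m = 10, 3
    usable_ec = raw_tb * k / (k + m)  # ~192 TB with a 10+3 EC profile

    print(f"3x replication : {usable_3x:.0f} TB usable")
    print(f"{k}+{m} erasure code: {usable_ec:.0f} TB usable")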

    > I have realized that the motherboards and CPUs are totally undersized,
    > so I want to install new boards.
    What's in there now?

    > I'm thinking of the following:
    > 3 Supermicro X10DRH-CT or X10DRC-T4+ with 128 GB of memory each.
    > What do you think about these boards? Will they fit into the SC847?
    They should, but that question, like others, is best asked of
    Supermicro or your vendor, as it will be their problem, not yours,
    if they give you a wrong answer.
    The same goes for the question of whether the onboard controller can
    see the devices behind the backplane expander (I would strongly
    expect the answer to be "yes, of course").

    > They have SAS and 10G-Base-T onboard, so no extra controller seems to be
    > necessary.
    That's an LSI 3108 SAS controller.
    No IT mode is available for it AFAIK, so it is not
    suitable/recommended for hooking up individual JBOD disks.

    > What Xeon v3 should I take, and how many cores?
    http://ark.intel.com/products/family/78583/Intel-Xeon-Processor-E5-v3-Family

    Find the best (price, TDP) combination that gives you at least 30 GHz
    of total CPU power.
    So the E5-2630 v3 comes to mind.
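
    (Editorial aside: a small sketch of that "at least 30 GHz total" check
    for a dual-socket board. The core counts, clocks and TDPs below are
    quoted from memory and should be treated as assumptions to verify
    against Intel ARK before buying.)

    candidates = {                    # per-CPU (cores, base GHz, TDP watts)
        "E5-2620 v3":  (6, 2.4, 85),
        "E5-2630 v3":  (8, 2.4, 85),
        "E5-2630L v3": (8, 1.8, 55),
    }

    for name, (cores, ghz, tdp) in candidates.items():
        total = 2 * cores * ghz       # dual-socket aggregate clock
        verdict = "meets 30 GHz" if total >= 30 else "falls short"
        print(f"{name}: {total:.1f} GHz total, {2 * tdp} W TDP -> {verdict}")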

    > Does anyone know if M.2 SSDs are supported in their PCIe slots?
    >
    One would think so; what SSDs were you thinking about?
    How much data are you backing up per day (TBW, i.e. endurance of the SSDs)?
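
    (Editorial aside: a minimal sketch of that endurance question. The
    daily volume, SSD count and service life below are made-up
    placeholders, not figures from this thread.)

    daily_backup_tb = 5.0          # hypothetical daily backup volume per node
    journal_ssds = 5               # hypothetical journal SSDs per node
    years = 5                      # intended service life

    # With filestore, each byte written to an OSD also passes through its
    # journal, so the journal SSDs absorb roughly the full backup volume.
    tb_per_ssd_per_day = daily_backup_tb / journal_ssds
    lifetime_tbw = tb_per_ssd_per_day * 365 * years

    print(f"~{tb_per_ssd_per_day:.1f} TB/day per SSD, "
          f"~{lifetime_tbw:.0f} TBW needed over {years} years")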

    But realistically, with just 3 nodes and 30 HDDs per node, the best you
    can hope for is probably 10 HDDs per journal SSD, so a single SSD
    failure would impact your cluster significantly.

    However _if_ you plan on using journal SSDs and _if_ your backups
    consist of a lot of small writes, get as much CPU power as you can
    afford.
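
    (Editorial aside: to make that impact concrete, a sketch of how one
    journal SSD failure maps onto OSDs; only the 30 HDDs per node and the
    10:1 ratio come from the mail above, the other SSD counts are
    hypothetical.)

    hdds_per_node, nodes = 30, 3

    for ssds_per_node in (3, 5, 6):
        hdds_per_ssd = hdds_per_node / ssds_per_node
        node_share = hdds_per_ssd / hdds_per_node            # OSDs lost on that node
        cluster_share = hdds_per_ssd / (hdds_per_node * nodes)
        print(f"{ssds_per_node} SSDs/node: {hdds_per_ssd:.0f} HDDs per SSD; "
              f"one SSD failure takes out {node_share:.0%} of the node "
              f"({cluster_share:.1%} of all OSDs)")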

    Christian
    > Thank you very much,
    >    Markus
    >
    > --------------------------------------------------------------------------
    > Markus Goldberg       Universität Hildesheim
    >                        Rechenzentrum
    > Tel +49 5121 88392822 Universitätsplatz 1, D-31141 Hildesheim, Germany
    > Fax +49 5121 88392823 Email goldberg@xxxxxxxxxxxxxxxxx
    > --------------------------------------------------------------------------
    >


    --
    Christian Balzer        Network/Systems Engineer
    chibi@xxxxxxx           Global OnLine
    Japan/Fusion Communications
    http://www.gol.com/




--
Thanks & Regards
K.Mohamed Pakkeer
Mobile- 0091-8754410114







--
Thanks & Regards   
K.Mohamed Pakkeer
Mobile- 0091-8754410114

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
