Re: Motherboard recommendation?

Hi Nick,

Thanks for your reply. The hardware requirement for replication is clear (1 GHz per OSD), but we can't find any document with hardware recommendations for erasure coding. I read Mark Nelson's report, but some of our erasure-coding tests still show 100% CPU utilization. What CPU processing power would be recommended for those tests to avoid hitting 100% CPU utilization?

cheers
K.Mohamed Pakkeer

On Fri, Apr 10, 2015 at 1:40 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
Hi Mohamed,

There was an excellent document posted to the list by Mark Nelson a number of weeks back showing CPU utilisation for both replicated and erasure-coded clusters under different operations (read/write/rebuild, etc.).

If you search for that, it will probably answer quite a few of your questions. One important finding for erasure coding is that increasing the total number of shards increases the CPU requirements, so there is no simple black-and-white answer.
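As a back-of-the-envelope illustration of that scaling, one could sketch the sizing like this. Note the 1 GHz-per-OSD baseline comes from the Ceph docs for replicated pools, but the per-shard overhead factor below is purely an assumption for illustration, not a Ceph-documented figure:

```python
# Rough CPU sizing sketch for a Ceph cluster.
# Baseline: 1 GHz per OSD (Ceph's rule of thumb for replicated pools).
# The per-shard scaling factor for erasure coding is a hypothetical
# illustration of "more shards -> more CPU", not a published constant.

def required_ghz(num_osds, base_ghz_per_osd=1.0, ec_shards=None,
                 per_shard_factor=0.1):
    """Estimate total CPU (GHz) for num_osds OSDs.

    For an erasure-coded pool with k+m shards, scale the per-OSD
    baseline by an assumed per-shard overhead.
    """
    per_osd = base_ghz_per_osd
    if ec_shards is not None:
        per_osd *= 1.0 + per_shard_factor * ec_shards
    return num_osds * per_osd

# 36 OSDs, replicated: the plain 1 GHz/OSD rule.
print(required_ghz(36))                               # 36.0
# 36 OSDs, EC k=10, m=4 (14 shards), assumed 10% overhead per shard.
print(round(required_ghz(36, ec_shards=10 + 4), 1))   # 86.4
```

The point is not the exact numbers but the shape of the curve: per-OSD CPU demand grows with the shard count, so a k=10, m=4 profile needs considerably more headroom than a replicated pool on the same hardware.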

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mohamed Pakkeer
> Sent: 10 April 2015 08:57
> To: Christian Balzer
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Motherboard recommendation?
>
> Hi Balzer,
>
> Ceph recommends 1 GHz of CPU power per OSD. Is this applicable to both
> replication- and erasure-coding-based clusters, or will we require more CPU
> power (more than 1 GHz per OSD) for erasure coding?
>
> We are running a test cluster with 15 4U servers; each server contains
> 36 OSDs, dual Intel E5-2630 v3 processors, and 96 GB RAM. We are seeing an
> average CPU load of 4 to 5% when the cluster is idle. Is this normal, or
> could you advise what average CPU load to expect at idle for a healthy
> erasure-coding cluster?
>
> Ceph version: 0.87.1
> Cluster: erasure coding with CephFS
>
>
>
> Thanks in advance
>
> Cheers,
> K.Mohamed Pakkeer
>
>
> On Fri, Apr 10, 2015 at 12:20 PM, Christian Balzer <chibi@xxxxxxx> wrote:
>
> Hello,
>
> On Thu, 09 Apr 2015 10:00:37 +0200 Markus Goldberg wrote:
>
> > Hi,
> > I have a backup storage with Ceph 0.93
> Living on the edge...
>
> > Like every backup system, it is only written to and hopefully never read.
> >
> What and how are you backing up?
> As in, lots of small files copied like with rsync or a stream into a big
> archive file like with bacula?
> Is the Ceph target a RBD image or CephFS?
>
> > The hardware is 3 Supermicro SC847 cases with 30 SATA HDDs each (2 TB and
> > 4 TB WD disks) = 250 TB
> Uneven disk sizes can make for fun (not) later on.
>
> > I have realized that the motherboards and CPUs are totally undersized,
> > so I want to install new boards.
> What's in there now?
>
> > I'm thinking of the following:
> > 3 Supermicro X10DRH-CT or X10DRC-T4+ with 128GB memory each.
> > What do you think about these boards? Will they fit into the SC847?
> They should, but that question, like the others, is best asked of
> Supermicro or your vendor, as it will be their problem, not yours, if they
> give you a wrong answer.
> Same goes for the question of whether the onboard controller can see the
> devices behind the backplane expander (I would strongly expect the answer
> to be "yes, of course").
>
> > They have SAS and 10G-Base-T onboard, so no extra controller seems to be
> > necessary.
> That's a LSI 3108 SAS controller.
> No IT mode available for it AFAIK.
> Thus it is not suitable/recommended for hooking up individual JBOD disks.
>
> > What Xeon v3 should I take, and how many cores?
> http://ark.intel.com/products/family/78583/Intel-Xeon-Processor-E5-v3-Family
>
> Find the best (price, TDP) combination that gives you at least 30 GHz of
> total CPU power.
> So the E5-2630 v3 comes to mind.
>
> > Does anyone know if M.2 SSDs are supported in their PCIe slots?
> >
> One would think so; what SSDs were you thinking about?
> How much data are you backing up per day (TBW/endurance of the SSDs)?
>
> But realistically, with just 3 nodes and 30 HDDs per node, the best you
> can hope for is probably 10 HDDs per journal SSD, so a single SSD failure
> would impact your cluster significantly.
>
> However _if_ you plan on using journal SSDs and _if_ your backups consist
> of a lot of small writes, get as much CPU power as you can afford.
>
> Christian
> > Thank you very much,
> >    Markus
> >
> > --------------------------------------------------------------------------
> > Markus Goldberg       Universität Hildesheim
> >                        Rechenzentrum
> > Tel +49 5121 88392822 Universitätsplatz 1, D-31141 Hildesheim, Germany
> > Fax +49 5121 88392823 email goldberg@xxxxxxxxxxxxxxxxx
> > --------------------------------------------------------------------------
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
> http://www.gol.com/
>
>
>
>
> --
> Thanks & Regards
> K.Mohamed Pakkeer
> Mobile- 0091-8754410114
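Christian's sizing rule in the quoted thread (at least 1 GHz per OSD, so 30 GHz aggregate for 30 HDDs) can be sanity-checked against his suggested CPU. A small sketch using the E5-2630 v3's published specs (8 cores at a 2.4 GHz base clock per socket):

```python
# Sanity-check the ">= 30 GHz aggregate for 30 OSDs" rule of thumb against
# a dual-socket E5-2630 v3 (8 cores @ 2.4 GHz base clock per socket).

def aggregate_ghz(sockets, cores_per_socket, base_clock_ghz):
    """Total nominal CPU capacity, ignoring turbo and hyper-threading."""
    return sockets * cores_per_socket * base_clock_ghz

osds = 30
capacity = aggregate_ghz(sockets=2, cores_per_socket=8, base_clock_ghz=2.4)
needed = osds * 1.0  # 1 GHz per OSD rule of thumb

print(capacity)            # 38.4
print(capacity >= needed)  # True
```

So a dual E5-2630 v3 clears the replication baseline with some margin, which matches Nick's point that the margin would shrink as erasure-coding shard counts grow.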

--
Thanks & Regards   
K.Mohamed Pakkeer
Mobile- 0091-8754410114

