Re: Motherboard recommendation?

Hello,

On Thu, 09 Apr 2015 10:00:37 +0200 Markus Goldberg wrote:

> Hi,
> i have a backup-storage with ceph 0,93
Living on the edge...

> As every backup-system it is only been written and hopefully never read.
> 
What and how are you backing up?
As in, lots of small files copied (rsync-style), or a stream into one big
archive file (bacula-style)?
Is the Ceph target an RBD image or CephFS?

> The hardware is 3 Supermicro SC847-cases with 30 SATA-HDDS each (2- and 
> 4-TB-WD-disks) = 250TB
Uneven disk sizes can make for fun (not) later on.
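To illustrate the point (the exact 2TB/4TB split isn't stated in the mail, so an even split is assumed here): with the default capacity-proportional CRUSH weights, the 4TB disks end up carrying twice the data, and thus twice the I/O, of the 2TB ones.

```python
# Hypothetical sketch: how capacity-proportional CRUSH weights split data
# across a mixed 2TB/4TB node. The 15/15 split is an assumption; the mail
# only says "2- and 4-TB-WD-disks", 30 per node.
disks_tb = [2] * 15 + [4] * 15

total = sum(disks_tb)
for size in sorted(set(disks_tb)):
    count = disks_tb.count(size)
    share = size * count / total
    print(f"{count}x {size}TB disks carry {share:.0%} of the data (and I/O)")
```

So the 4TB spindles become the hot spot long before the 2TB ones fill up.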

> I have realized, that the motherboards and CPUs are totally undersized, 
> so i want to install new boards.
What's in there now?

> I'm thinking of the following:
> 3 Supermicro X10DRH-CT or X10DRC-T4+ with 128GB memory each.
> What do you think about these boards? Will they fit into the SC847?
They should, but that question, like others, is best asked of Supermicro or
your vendor, as it will be their problem, not yours, if they give you a
wrong answer.
Same goes for the question of whether the onboard controller can see the
devices behind the backplane expander (I would strongly expect the answer
to be "yes, of course").

> They have SAS and 10G-Base-T onboard, so no extra controller seems to be 
> necessary.
That's an LSI 3108 SAS controller.
No IT mode is available for it AFAIK, so it is not suitable/recommended for
hooking up individual JBOD disks.

> What Xeon-v3 should i take, how many cores?
http://ark.intel.com/products/family/78583/Intel-Xeon-Processor-E5-v3-Family

Find the best (price, TDP) combination that gives you at least 30GHz of
total CPU power.
So the E5-2630 v3 comes to mind.
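The 30GHz figure comes from the common rule of thumb of roughly 1GHz of CPU per OSD (an estimate, not a hard Ceph requirement), with 30 OSDs per node. A quick back-of-the-envelope check against that candidate CPU:

```python
# Rough per-node OSD CPU sizing, using the ~1GHz-per-OSD rule of thumb.
# Both the rule of thumb and the single-candidate comparison are
# illustrative, not a definitive sizing method.
osds_per_node = 30
ghz_per_osd = 1.0                      # rule-of-thumb estimate
needed_ghz = osds_per_node * ghz_per_osd

# E5-2630 v3: 8 cores at 2.4GHz base; the X10DRH/X10DRC boards are
# dual-socket, so two of them per node.
cores, base_ghz, sockets = 8, 2.4, 2
available_ghz = cores * base_ghz * sockets

print(f"needed ~{needed_ghz:.0f}GHz, dual E5-2630 v3 gives ~{available_ghz:.1f}GHz")
```

A dual E5-2630 v3 lands comfortably above the 30GHz target with headroom for recovery and scrubbing.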

> Does anyone know if M.2-SSDs are supported in their pci-e-slots?
> 
One would think so; what SSDs were you thinking about?
How much data are you backing up per day (TBW/endurance rating of the SSDs)?

But realistically, with just 3 nodes and 30 HDDs per node, the best you
can hope for is probably 10 HDDs per journal SSD, so a single SSD failure
would impact your cluster significantly. 
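To put numbers on that blast radius (the 10-HDDs-per-SSD ratio is the assumption from the paragraph above):

```python
# Sketch of the impact of losing one journal SSD in this layout.
# Numbers from the thread: 3 nodes, 30 HDDs per node; 10 HDDs per
# journal SSD is the assumed ratio, since a journal SSD failure takes
# down every OSD journaling to it.
nodes, hdds_per_node, hdds_per_ssd = 3, 30, 10

total_osds = nodes * hdds_per_node
osds_lost = hdds_per_ssd
print(f"one SSD failure downs {osds_lost} of {total_osds} OSDs "
      f"({osds_lost / total_osds:.0%} of the cluster)")
```

Losing about a ninth of the cluster at once means a lot of backfill traffic on only three nodes.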

However _if_ you plan on using journal SSDs and _if_ your backups consist
of a lot of small writes, get as much CPU power as you can afford.

Christian
> Thank you very much,
>    Markus
> 
> --------------------------------------------------------------------------
> Markus Goldberg       Universität Hildesheim
>                        Rechenzentrum
> Tel +49 5121 88392822 Universitätsplatz 1, D-31141 Hildesheim, Germany
> Fax +49 5121 88392823 email goldberg@xxxxxxxxxxxxxxxxx
> --------------------------------------------------------------------------
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




