Re: high density machines

Those FatTwins are not blades in the classical sense; they are what are often referred to as "un-blades".

 

They only share power, i.e. about 4-6 pins connected by solid bits of copper to the PSUs. I can't see any way of this going wrong. If you take out all the sleds, you are just left with an empty box; if you took the fans out of the back, you could probably even climb through it.

 

However, since they share power and cooling, they work out cheaper to buy and run than standard servers. As long as you don't mind pulling the whole sled out to swap a disk, I think you would be hard pressed to find a solution that matches them in price/density.
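For anyone weighing the options, the price/density comparison is easy to script. This is only a sketch, and every figure in it is a made-up placeholder, so substitute your own quotes:

# Every number below is a hypothetical placeholder -- plug in real quotes.
chassis = {
    "FatTwin (4 nodes in 4U)": {"price": 24000.0, "nodes": 4, "disks": 12, "tb": 6.0, "ru": 4},
    "standard 1U server":      {"price":  7000.0, "nodes": 1, "disks": 10, "tb": 6.0, "ru": 1},
}
for name, c in chassis.items():
    raw_tb = c["nodes"] * c["disks"] * c["tb"]
    print("%-24s %6.1f TB per rack unit, $%.0f per raw TB"
          % (name, raw_tb / c["ru"], c["price"] / raw_tb))

With those placeholder numbers the shared chassis comes out ahead on both TB per rack unit and cost per raw TB, which is the whole argument for it.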

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Jan Schermer
Sent: 03 September 2015 15:53
To: Paul Evans <paul@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: high density machines

 

 

On 03 Sep 2015, at 16:49, Paul Evans <paul@xxxxxxxxxxxx> wrote:

 

Echoing what Jan said, the 4U FatTwin is the better choice of the two options, as it is very difficult to get reliable and efficient long-term operation of many OSDs when they are serviced by just one or two CPUs.
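One way to see why: a commonly quoted (and rough, much-debated) sizing rule is on the order of 1 GHz of CPU per spinning-disk OSD, and a dense node can fall short of even that before recovery load is considered. A minimal sketch, with the node hardware assumed purely for illustration:

# Hypothetical dense node: 36 HDD OSDs behind two 8-core 2.1 GHz CPUs,
# checked against the rough ~1 GHz-per-HDD-OSD sizing rule.
osds = 36
ghz_available = 2 * 8 * 2.1        # 33.6 GHz across both sockets
ghz_wanted = osds * 1.0            # 36 GHz, before any recovery load
print("have %.1f GHz, want >= %.1f GHz" % (ghz_available, ghz_wanted))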

I don’t believe the FatTwin design has much of a backplane, primarily sharing power and cooling. That said, the cost savings would need to be solid to choose the FatTwin over 1U boxes, especially as (personally) I dislike lots of front-side cabling in the rack.

 

I've never used SuperMicro blades, but with Dell blades there's a single "backplane" board into which the blades plug for power and I/O distribution. We had it go bad in a way where the blades kept working until removed, but wouldn't power on once plugged back in. Restarting the chassis didn't help and we had to replace the backplane.

I can't imagine SuperMicro would be much different; there are some components that just can't be replaced while the chassis is in operation.

 



-- 
Paul Evans


 

On Sep 3, 2015, at 7:01 AM, Gurvinder Singh <gurvindersinghdahiya@xxxxxxxxx> wrote:

 

Hi,

I am wondering if anybody in the community is running a Ceph cluster with
high-density machines, e.g. the Supermicro SYS-F618H-OSD288P (288 TB),
the Supermicro SSG-6048R-OSD432 (432 TB), or some other high-density
machines. I am assuming that such an installation would be of petabyte
scale, as you would want to have at least 3 of these boxes.
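For reference, the rough arithmetic behind that assumption, taking the default 3x replication:

# Raw vs. usable capacity of a minimal 3-box cluster at 3x replication.
for model, raw_tb in (("SYS-F618H-OSD288P", 288), ("SSG-6048R-OSD432", 432)):
    total_raw = 3 * raw_tb
    print("3 x %s: %d TB raw, ~%d TB usable at 3x"
          % (model, total_raw, total_raw // 3))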

It would be good to hear about their experiences in terms of reliability
and performance (especially during node failures). With a 40 Gbit network
connection these machines might be fine, but experience from real users
would be great to hear, particularly since these machines are mentioned in
the reference architecture published by Red Hat and Supermicro.
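To make the node-failure concern concrete, a crude lower bound is data to re-replicate divided by network bandwidth; the 50% sustained-rate figure below is just an assumption, and real recovery is often disk- or CPU-bound rather than network-bound:

# Crude lower bound on re-replicating one failed 288 TB node over a
# 40 Gbit/s link, assuming (optimistically) 50% of line rate sustained.
node_tb, link_gbit, efficiency = 288, 40, 0.5
seconds = node_tb * 8e12 / (link_gbit * 1e9 * efficiency)
print("~%.0f hours" % (seconds / 3600.0))    # about 32 hours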

Thanks for your time.
 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
