Re: high density machines

On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
<gurvindersinghdahiya@xxxxxxxxx> wrote:
>> The density would be higher than the 36 drive units but lower than the
>> 72 drive units (though with shorter rack depth afaik).
> You mean the 1U solution with 12 disks is longer than the 72-disk
> 4U version?

This is a bit old and I apologize for dredging it up, but I wanted to
weigh in that we have used a couple of the 6017R-73THDP+ 12 x 3.5" 1U
chassis and will not be using any more.  The depth is truly obscene;
the 36" is not a misprint.  If you have open racks they may be
acceptable, but in a cabinet they are so long that they have to be
mounted offset (sticking past the rack both front and rear) for the
doors to close, and in doing so they occlude so much space that they
raise concerns about cabinet airflow.

They are also *very* cut down to get that many drives into the space.
They don't even have a physical serial port for console; they depend
entirely on IPMI for management.  (And we have had very mixed success
with SuperMicro IPMI virtual serial consoles.)  Also, of course, no
drive servicing can be done without shutting down the entire node,
which makes a simple drive swap vastly more labor intensive.  If (as
in our case) the cluster is overprovisioned enough to survive the
long-term loss of several drives per unit until it makes sense to take
the whole thing down and replace them all, it may be OK.  In any
situation where you expect most/all platters to be spinning, they're a
non-starter.
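
For what it's worth, the procedure we use for those whole-node drive
swaps is roughly the following; treat it as a sketch rather than a
recipe, since the exact service names vary by distro and release:

  # keep the cluster from rebalancing while the node is down
  ceph osd set noout
  # stop the OSDs, power the node off, swap the drive, boot it back up
  # then let the cluster go back to handling down OSDs as usual
  ceph osd unset noout

And since the box is headless, the console has to come in over
Serial-over-LAN, e.g. "ipmitool -I lanplus -H <bmc-ip> -U <user> sol
activate", which is exactly where we have had mixed luck with the
SuperMicro firmware.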

All in all, the money and rack units saved are not even close to
being worth the extra hassle involved, particularly once you start
counting up how many of those 12 drives you end up treating as spares
just to space out servicing.
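
To put rough numbers on that (illustrative, not our exact figures): a
72-drive 4U box works out to 18 drives per rack unit, while the
12-drive 1U box starts at 12 per rack unit.  Write off, say, two of
those 12 as dead-until-the-next-scheduled-shutdown and you are down to
10 per rack unit, and the density argument gets thin very quickly.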

The 5018A-AR12L looks like a better layout, trimming down to a
"svelte" 32" of depth, but it appears to keep most of the other
downsides of its 36" cousin.  The soldered-in Atom processor also
raises some concerns about CPU overload during any major OSD
rebalance.
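
If anyone does go that route, it is probably worth at least capping
backfill/recovery so a big rebalance cannot saturate that CPU.  These
are standard OSD options (tune the values to taste); something like:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

or the equivalent in ceph.conf:

  [osd]
  osd max backfills = 1
  osd recovery max active = 1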

Anyway, sorry for raising an old issue, but if I can save even one
person from going with these for Ceph, it was worth it.

Thanks!