Re: high density machines

On Wed, Sep 30, 2015 at 8:19 AM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> FWIW, I've mentioned to Supermicro that I would *really* love a version of the
> 5018A-AR12L that replaced the Atom with an embedded Xeon-D 1540. :)

Is even that enough?  (It's a serious question; because of our insatiable
need for IOPS rather than TBs we mostly use all-SSD nodes, where the CPU
requirements are much higher, so I genuinely do not know how much CPU
12 x 3.5" drives would need to keep the node from being CPU bound.)
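
(Back-of-the-envelope, using the oft-quoted rule of thumb of very roughly
one core / ~1 GHz of modern Xeon per HDD-backed OSD, which is an assumption
rather than a benchmark:

    12 HDD OSDs x ~1 GHz each        ~= 12 GHz of aggregate CPU
    Xeon D-1540: 8 cores @ 2.0 GHz   ~= 16 GHz of aggregate CPU

By that crude math the D-1540 should carry 12 spinners with a little
headroom, though erasure coding, recovery load, or an all-SSD build would
change the picture considerably.)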

But yes, definitely.  With enough CPU power the 5018A-AR12L would be a
solid all-around improvement over the (I maintain) ridiculous 6017R.

> I think any time you start talking really high density, you have to talk
> trade-offs.  [...]  I think the 6017Rs are still worth considering so
> long as you know that they aren't really hotswap-drive capable.  They will
> probably work best in an environment where you can either leave broken OSDs
> down for a while or swap out entire nodes at once (ie you need a lot of
> them).

That's exactly what ultimately put me off these.  If the application
demands enough density to require them, having to overprovision enough to
mitigate the loss of hot-swap may be self-defeating.

If you are deploying hundreds of these, the potential cost of custom
racks/cooling arrangements to deal with the form factor may become
background noise.  That's definitely not the scale at which we
operate, but I still wonder about the long-term effect on labor costs.  As
Google has pointed out, once you have enough drives in service, enough of
them are down at any given time that replacing them becomes somebody's
full-time job.  Turning a 30-second, one-tech hot swap into a 10-15 minute
procedure that requires a storage admin standing by to set/unset noout has
the potential to really drive up the operating cost of a cluster.
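
For anyone who hasn't lived it, the non-hot-swap dance goes roughly like
this (a sketch only; the OSD id is a placeholder and the exact service
commands vary by release and init system):

    ceph osd set noout                # keep the cluster from rebalancing while we work
    systemctl stop ceph-osd@<id>      # or 'service ceph stop osd.<id>' on older inits
    # ... power down or slide out the node, swap the drive, reseat ...
    systemctl start ceph-osd@<id>
    ceph osd unset noout              # resume normal failure handling

All of that needs someone with cluster admin credentials in the loop, which
is exactly the cost I'm worried about.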

Certainly a few applications/environments exist where the need for
density outweighs all of that, and on the rare occasions I've come across
such situations in the past they mostly seemed to come down to solving for
the least-worst option.  That (potentially "least worst" for a small set
of special cases) is the bin I'd put the 36"-deep 6017R in.

What we *really* need is something in a 1U high-density drive config that
can slide far enough out of the rack, while operating, that individual
drives can be hot-swapped from above.

This looks like a promising design direction, but with no compute guts of
its own I doubt it's a good fit for Ceph:

http://www.supermicro.com/products/chassis/4U/946/SC946ED-R2KJBOD.cfm

And, even if they did have the guts, for proper redundancy at 90
drives per failure domain you would likely need a *lot* of them, so
that's probably a (future) design direction for only the largest
clusters.
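
(Rough math: with 3x replication and "chassis" as the failure domain you
need at least three of those 90-drive boxes before a rule along the lines
of

    ceph osd crush rule create-simple by-chassis default chassis

can place replicas at all (the rule name there is just an example), and
realistically several more so that a failed chassis' worth of data has
somewhere to re-replicate to.)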

The search for the perfect 1U Ceph building block continues.  Maybe
next year. :-)

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


