Re: high density machines

On 30-09-15 14:19, Mark Nelson wrote:
> On 09/29/2015 04:56 PM, J David wrote:
>> On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
>> <gurvindersinghdahiya@xxxxxxxxx> wrote:
>>>> The density would be higher than the 36-drive units but lower than
>>>> the 72-drive units (though with a shorter rack depth, AFAIK).
>>> You mean the 1U solution with 12 disks is longer than the 72-disk
>>> 4U version?
>>
>> This is a bit old and I apologize for dredging it up, but I wanted to
>> weigh in that we have used a couple of the 6017R-73THDP+ 12 x 3.5" 1U
>> chassis and will not be using any more.  The depth is truly obscene;
>> the 36" is not a misprint.  In open racks they may be acceptable, but
>> in a cabinet they are so long that they have to be mismounted
>> (sticking past the rack both front and rear) for the doors to close,
>> and in doing so they occlude so much space that they raise concerns
>> about cabinet airflow.
>>
>> They are also *very* cut down to fit that many drives into the space.
>> They don't even have a physical serial port for a console; they
>> depend entirely on IPMI for management.  (And we have had very mixed
>> success with SuperMicro IPMI virtual serial consoles.)  Also, of
>> course, no drive can be serviced without shutting down the entire
>> node, making a simple drive swap vastly more labor intensive.  If (as
>> in our case) the cluster is overprovisioned enough to survive the
>> long-term loss of several drives per unit, until it makes sense to
>> take the whole node down and replace them all, it may be OK.  In any
>> situation where you expect most/all platters to be spinning, they are
>> a non-starter.
>>
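A rough back-of-the-envelope for that overprovisioning trade-off, in
Python (all numbers here are illustrative assumptions, not figures from
this thread; 0.85 is Ceph's default nearfull warning ratio):

    # How much usable headroom is left if a few dead drives per node
    # are left in place until a combined swap makes sense.
    nodes = 10
    drives_per_node = 12
    drive_tb = 4.0
    dead_per_node = 3          # failed drives tolerated per node
    nearfull_ratio = 0.85      # Ceph's default warning threshold

    raw = nodes * drives_per_node * drive_tb
    surviving = nodes * (drives_per_node - dead_per_node) * drive_tb
    # Data must still fit under the nearfull ratio of the surviving disks.
    ceiling = surviving * nearfull_ratio
    print("raw: %.0f TB, surviving: %.0f TB, ceiling: %.0f TB (%.0f%% of raw)"
          % (raw, surviving, ceiling, 100 * ceiling / raw))

In other words, planning to ride out three dead drives per 12-bay node
means treating roughly a third of the raw capacity as unusable from
day one.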
>> All in all, the money and rack units saved are not even close to
>> worth the extra hassle involved, particularly once you start counting
>> how many of those 12 drives you are treating as spares to space out
>> servicing.
>>
>> The 5018A-AR12L looks like a better layout, trimming down to a
>> "svelte" 32" of depth, but it appears to keep most of the other
>> downsides of its 36" cousin.  That wired-in Atom processor also
>> raises some concerns about CPU overload during any major OSD
>> rebalance.
> 
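If rebalance CPU load on a small processor is the worry, the usual
mitigation is throttling backfill and recovery.  A minimal ceph.conf
sketch (these option names exist in Hammer-era Ceph, but treat the
values as conservative illustrations, not tested recommendations):

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

This trades longer recovery time for lower CPU and client-latency
impact while the cluster rebalances.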
> FWIW, I've mentioned to Supermicro that I would *really* love a version
> of the 5018A-AR12L that replaced the Atom with an embedded Xeon-D 1540. :)
> 

I'm running a production cluster with the 5018A-AR12L:
- 64GB of memory
- 10x 4TB drives
- 2x SSD for journaling

Currently there are 10 of these machines online, used only for RBD
with low-IOPS workloads (backups).

The hardware has been ordered for 160 machines to store RGW data; no
RBD in that case.

I kept the number of PGs low to reduce CPU load and memory demand, but
for storage they work just fine.
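For anyone sizing similar machines, the usual community rule of thumb
is on the order of 100 PGs per OSD, divided by the replica count and
rounded up to a power of two.  A minimal Python sketch of that
heuristic (the targets are the common guideline, not numbers from this
cluster):

    # Rule-of-thumb PG count for a single dominant pool.
    def pg_count(num_osds, pool_size=3, pgs_per_osd=100):
        target = num_osds * pgs_per_osd / float(pool_size)
        power = 1
        while power < target:
            power *= 2
        return power

    # e.g. 10 nodes x 10 OSDs with 3x replication -> 4096 PGs
    print(pg_count(num_osds=100))

Keeping the per-OSD PG count at or below that target is what keeps the
CPU and memory demand manageable on these small boards.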

>>
>> Anyway, sorry for raising an old issue, but if I can save even one
>> person from going with these for Ceph, it was worth it.
> 
> I think any time you start talking really high density, you have to
> talk trade-offs.  The 72-drive chassis is around the same depth as the
> 6017R, and while drive replacement is easier (though you still have to
> take 2 OSDs down at once), there are plenty of other areas where you
> have to make fairly significant compromises.  I think the 6017Rs are
> still worth considering as long as you know that they aren't really
> hot-swap capable.  They will probably work best in an environment
> where you can either leave broken OSDs down for a while or swap out
> entire nodes at once (i.e., you need a lot of them).
> 
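For reference, the standard way to take paired OSDs down for a planned
swap without triggering a rebalance is the noout flag.  A hedged
sketch (the OSD IDs are placeholders, and the stop/start commands vary
by init system):

    ceph osd set noout          # don't mark down OSDs out and rebalance
    service ceph stop osd.12    # stop both OSDs sharing the drive sled
    service ceph stop osd.13
    # ...swap the drives, restart the OSDs, then:
    ceph osd unset noout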
> For medium-to-large deployments, the SC847 chassis is probably a
> fairly reasonable compromise despite its age: 36 bays in 4U, 27.5"
> depth, all drives hot-swap, lots of PCIe expandability, and a cheap
> price.  I still maintain that you want at least a rack (~10) of them,
> though.  As always, the more nodes you can spread failures over, the
> better.
> 
>>
>> Thanks!
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


