Re: Dear Abby: Why Is Architecting CEPH So Hard?

On Thu, 2020-04-23 at 09:08 +0200, Janne Johansson wrote:
> On Thu, 23 Apr 2020 at 08:49, Darren Soothill <
> darren.soothill@xxxxxxxx> wrote:
> 
> > If you want the lowest cost per TB then you will be going with
> > larger nodes in your cluster, but it does mean your minimum
> > cluster size is going to be many PBs.
> > Now the question is what tax a particular chassis vendor is
> > charging you. I know from the configs we do on a regular basis
> > that a 60-drive chassis will give you the lowest cost per TB.
> > BUT it has implications: your cluster size needs to be up in the
> > order of 10PB minimum. 60 x 18TB gives you around 1PB per node.
> > Oh, did you notice here we are going for the bigger disk drives?
> > Why? Because the more data you can spread your fixed costs
> > across, the lower the overall cost per GB.
> > 
> > 
> 
> I don't know all models, but the computers I've looked at with 60
> drive slots have a small, "crappy" motherboard with few options:
> not many buses/slots/network ports, and few cores, DIMM sockets
> and so on, counting on you to run it as an almost-passive storage
> node. I have a hard time seeing how the CPU and RAM needed to
> recover 60 x 18TB OSDs would be covered in any way by the kinds of
> 60-slot boxes I've seen. Not that I focus on that area, but it
> seems like a common tradeoff: Heavy Duty(tm) motherboards or tons
> of drives.

I would imagine the 60-drive setups described above use separate
SAS-attached (or whatever) JBOD boxes rather than putting everything in
a single chassis. My clusters use 1U servers with decent CPU/memory and
SAS adapter cards hooking up larger JBODs that actually house the disks
(for the spinning-rust OSDs, at least).
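
For what it's worth, here is the rough per-node RAM/CPU sketch I use
when deciding how beefy the head node needs to be for a given drive
count; the 4 GiB figure is the usual BlueStore osd_memory_target
default, while the headroom and overhead numbers are my own
assumptions rather than anything from the Ceph docs:

    # Rough RAM/CPU sizing for a node driving many HDD OSDs, whether the
    # disks are in-chassis or in a SAS-attached JBOD. The headroom and
    # overhead values below are assumptions, not official guidance.

    osds_per_node = 60
    osd_memory_target_gib = 4   # typical BlueStore osd_memory_target
    recovery_headroom = 1.5     # assumed fudge factor for backfill/recovery spikes
    base_overhead_gib = 16      # assumed OS + other daemons

    ram_gib = (osds_per_node * osd_memory_target_gib * recovery_headroom
               + base_overhead_gib)
    print(f"{osds_per_node} OSDs -> plan for roughly {ram_gib:.0f} GiB of RAM")

    # A common rule of thumb is on the order of one core/thread per HDD OSD,
    # which is far more than most 60-bay "appliance" boards offer.
    print(f"and something like {osds_per_node} cores/threads for recovery load")

That is also the appeal of the 1U-plus-JBOD split: you size the head
node's CPU and RAM to the OSD count instead of taking whatever the
60-bay chassis vendor soldered in.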
