Re: Dear Abby: Why Is Architecting CEPH So Hard?

Hi Martin,
How is the performance of the D120-C21 HDD cluster? Can it utilize the
full performance of the 16 HDDs?


linyunfan

Martin Verges <martin.verges@xxxxxxxx> wrote on Thu, 23 Apr 2020 at 18:12:
>
> Hello,
>
> simpler systems tend to be cheaper per TB of storage, not just in
> theory but in actual quotes.
>
> For example, 1U Gigabyte 16-bay D120-C21 systems, at a density of 64 disks
> per 4U, are quite OK for most users. With 40 nodes per rack + 2 switches you
> get 10PB of raw space for around 350k€.
> They come with everything you need, from dual 10G SFP+ to an acceptable
> 8c/16t 45W TDP CPU, and include an M.2 slot if you want a DB/WAL or other
> additional disk.
> Such a system equipped with 16x16TB comes in below 8k€, or ~31€ per TB of
> raw storage.
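>
> As a quick back-of-the-envelope check of those numbers (a minimal Python
> sketch; the ~8k€ node price is taken as an assumption from the quote above):
>
>     node_price_eur = 8000        # assumed price of one populated node
>     tb_per_node = 16 * 16        # 16 disks x 16 TB = 256 TB raw
>     nodes_per_rack = 40
>     print(node_price_eur / tb_per_node)         # ~31.25 EUR per raw TB
>     print(nodes_per_rack * tb_per_node / 1000)  # ~10.2 PB raw per rack
>     print(nodes_per_rack * node_price_eur)      # 320000 EUR, plus switches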
>
> For me this is just an example of a quite cheap but capable HDD node. I
> have never seen a better offer for big, fat systems in terms of price per TB
> and TCO.
>
> Please remember, there is no best node for everyone; this node is not the
> best or fastest on the market, just an example ;)
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges@xxxxxxxx
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
> On Thu, 23 Apr 2020 at 11:21, Darren Soothill <
> darren.soothill@xxxxxxxx> wrote:
>
> > I can think of one vendor who has made some of the compromises that you
> > talk of; although memory and CPU are not among them, they are limited on
> > slots and NVMe capacity.
> >
> > But there are plenty of other vendors out there who use the same model of
> > motherboard across the whole chassis range, so there isn’t a compromise in
> > terms of slots and CPU.
> >
> > The compromise may come with the size of the chassis, in that a lot of
> > these bigger chassis are also deeper in order to avoid those trade-offs.
> >
> > The reality with an OSD node is you don't need that many slots or network
> > ports.
> >
> >
> >
> > From: Janne Johansson <icepic.dz@xxxxxxxxx>
> > Date: Thursday, 23 April 2020 at 08:08
> > To: Darren Soothill <darren.soothill@xxxxxxxx>
> > Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> > Subject: Re:  Re: Dear Abby: Why Is Architecting CEPH So Hard?
> > On Thu, 23 Apr 2020 at 08:49, Darren Soothill <
> > darren.soothill@xxxxxxxx> wrote:
> > If you want the lowest cost per TB then you will be going with larger
> > nodes in your cluster, but it does mean your minimum cluster size is going
> > to be many PBs.
> > Now the question is what tax a particular chassis vendor is charging you.
> > I know from the configs we do on a regular basis that a 60-drive chassis
> > will give you the lowest cost per TB. BUT it has implications: your
> > cluster size needs to be on the order of 10PB minimum, since 60 x 18TB
> > gives you around 1PB per node. Did you notice we are going for the bigger
> > disk drives here? Why? Because the more data you can spread your fixed
> > costs across, the lower the overall cost per GB (see the sketch below).
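> >
> > As a rough illustration of spreading fixed costs (a minimal Python sketch;
> > the chassis and drive prices below are made-up placeholders, not quotes):
> >
> >     chassis_fixed_eur = 10000       # chassis+CPU+RAM+NIC, assumed
> >     slots = 60
> >     for drive_tb, drive_eur in {12: 250, 18: 400}.items():  # assumed prices
> >         raw_tb = slots * drive_tb
> >         total_eur = chassis_fixed_eur + slots * drive_eur
> >         print(drive_tb, "TB drives:", round(total_eur / raw_tb, 2), "EUR/TB")
> >
> > The same fixed cost amortized over 1080 TB instead of 720 TB lowers the
> > per-TB price even though each bigger drive costs more.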
> >
> > I don't know all models, but the computers I've looked at with 60 drive
> > slots have a small and "crappy" motherboard with few options: not many
> > buses/slots/network ports, and low counts of cores, DIMM sockets and so
> > on, counting on you to build an almost passive storage node on it. I
> > have a hard time believing the CPU and RAM requirements of recovering
> > 60x18TB OSDs would be covered in any way by the kinds of 60-slot boxes
> > I've seen. Not that I focus on that area, but it seems like a common
> > trade-off: Heavy Duty(tm) motherboards or tons of drives.
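> >
> > To put rough numbers on that (a minimal Python sketch; 4 GiB is the default
> > osd_memory_target for BlueStore OSDs, and one core per HDD OSD is a common
> > rule of thumb, not a hard requirement):
> >
> >     osds = 60
> >     osd_memory_target_gib = 4   # Ceph BlueStore default
> >     cores_per_hdd_osd = 1       # rule-of-thumb baseline
> >     print(osds * osd_memory_target_gib)  # ~240 GiB RAM, before OS headroom
> >     print(osds * cores_per_hdd_osd)      # ~60 cores wanted under recovery
> >
> > That is a far bigger footprint than the boards I described above typically
> > offer.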
> >
> > --
> > May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



