Re: NVMe and 2x Replica

Taking a month to weight up a drive suggests the cluster doesn't have
enough spare IO capacity.
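
For a rough sense of scale, here is a back-of-envelope sketch in Python
(every number below is an assumption for illustration, not a measurement
from any particular cluster):

# How long does backfilling one large HDD OSD take at a given sustained
# recovery rate? All numbers are illustrative assumptions.
drive_tb = 18          # raw capacity of the OSD being weighted up (TB)
fill_ratio = 0.60      # how full the new OSD will end up
recovery_mb_s = 50     # recovery throughput the cluster can spare (MB/s)

data_to_move_mb = drive_tb * 1_000_000 * fill_ratio
days = data_to_move_mb / recovery_mb_s / 86_400
print(f"~{days:.1f} days to backfill at {recovery_mb_s} MB/s sustained")

At ~50 MB/s of spare recovery bandwidth that is already ~2.5 days; throttle
it down to a few MB/s to protect client I/O and you are quickly at a month.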

And for everyone suggesting EC, I don't understand how anyone considers it
a valid alternative given the min allocation / space amplification bug. No
one in this community, not even the project's top developers, can provide
an accurate space projection for EC usage -- and if you cannot predict how
much space an EC configuration will use in the wild, because of a bug that
is neither well documented nor widely discussed, you cannot begin to talk
about costs and numbers. I understand a 4k min allocation size may fix this
issue, but not everyone can necessarily run Ceph off the latest master.
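
To make the amplification concrete, here is a rough sketch in Python (the
64 KiB min_alloc_size and the 4+2 profile are assumptions for illustration;
it ignores striping details and RGW head objects):

import math

# Rough on-disk usage for one object in an EC k+m pool on BlueStore, where
# each chunk's allocation is rounded up to min_alloc_size. Illustrative only.
def ec_on_disk_bytes(obj_size, k=4, m=2, min_alloc=64 * 1024):
    chunk = math.ceil(obj_size / k)                   # per-shard data
    alloc = math.ceil(chunk / min_alloc) * min_alloc  # rounded-up allocation
    return alloc * (k + m)                            # data + parity shards

for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    used = ec_on_disk_bytes(size)
    print(f"{size // 1024:>5} KiB object -> {used // 1024:>5} KiB on disk "
          f"({used / size:.1f}x, vs 1.5x nominal for 4+2)")

A 4 KiB object comes out to ~96x its logical size under these assumptions,
which is why a fleet of small objects makes EC capacity planning so hard to
predict.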

There are a lot of hidden costs in running Ceph, and they vary with usage
needs -- such as keeping spare I/O capacity for recovery operations, or
ensuring your total cluster utilization stays below roughly 60%.
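
As a concrete example of how the headline $/TB shrinks once those costs are
counted (assumed numbers, not a sizing recommendation):

# Usable capacity after replication and a utilization ceiling. Assumed
# numbers for illustration only.
raw_tb = 10 * 12 * 18      # ten nodes, twelve 18 TB spinners each
replica = 3                # size=3 replicated pool
max_util = 0.60            # keep total cluster usage below ~60%

usable_tb = raw_tb * max_util / replica
print(f"{raw_tb} TB raw -> ~{usable_tb:.0f} TB safely usable")

2160 TB raw shrinks to roughly 432 TB of safely usable space, so the
effective cost per usable TB is about five times the sticker price of the
drives before you even count the spare recovery bandwidth.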


On Thu, Feb 4, 2021 at 2:25 PM Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

>
>
> >> Why would I when I can get an 18TB Seagate IronWolf for <$600, an 18TB
> Seagate Exos for <$500, or an 18TB WD Gold for <$600?
> >
> > IOPS
>
> Some installations don’t care so much about IOPS.
>
> Less-tangible factors include:
>
> * Time to repair and thus to restore redundancy.  When an EC pool of
> spinners takes a *month* to weight up a drive, that’s a significant
> operational and data durability / availability concern.
>
> * RMAs.  They’re a pain, especially if you have to work them through a
> chassis vendor, who will likely be dilatory and demand that you jump
> through unreasonable hoops, like attaching a BMC web interface screenshot
> for every drive.  This
> translates to each RMA being modeled with a certain shipping / person-hour
> cost, which means that for lower unit-value items it may not be worth the
> hassle.  It is not unreasonable to guesstimate a threshold around USD 500.
> So it is not uncommon to just trash failed / DOA spinners, or let them
> stack up indefinitely in a corner, instead of recovering their value.
>
> As I wrote … in 2019 I think it was, with spinners you have some manner of
> HBA in the mix.  If that HBA is a fussy RAID model, you may have
> significant added cost for the RoC, onboard RAM, and supercap/BBU.
> Complexity also comes with never-ending firmware bugs and cache management
> nightmares.  Gas gauge firmware… don’t even get me started on that.
>
> And consider how many TB of 3.5” spinners you can fit into an RU, compared
> to 2.5” or EDSFF flash.  RUs aren’t free, and SATA HBAs will bottleneck a
> relatively dense HDD chassis long before a similar number of NVMe drives
> will bottleneck.  Unless perhaps you have the misfortune of a chassis
> manufacturer who for some reason runs NVMe PCIe lanes *through* an HBA.
>
>
>
>


-- 
Steven Pine

*E * steven.pine@xxxxxxxxxx  |  *P * 516.938.4100 x
*Webair* | 501 Franklin Avenue Suite 200, Garden City NY, 11530
webair.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



