Re: recommendation for barebones server with 8-12 direct attach NVMe?

> 
> NVMe SSDs shouldn’t cost significantly more than SATA SSDs.  Hint:  certain tier-one chassis manufacturers mark both the fsck up.  You can get a better warranty and pricing by buying drives from a VAR.
> 
> We stopped buying “Vendor FW” drives a long time ago.

Groovy.  Vendor-branded drives are IMHO a pain, though in the case of certain manufacturers they can be the only way to get firmware updates.  Vendor-branded drives also often carry only a 3-year warranty, vs. 5 years for generic channel drives.
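
FWIW, if you end up with a mixed fleet, a quick sketch like the one below makes it easy to spot-check firmware revisions across hosts.  Untested, assumes nvme-cli is installed and you're running as root; it parses the older flat JSON layout, and newer nvme-cli versions nest devices under subsystems, so adjust the keys for your version.

import json
import subprocess

# Sketch: report model and firmware revision for every NVMe drive.
# Assumes nvme-cli and root; JSON layout varies by nvme-cli version.
out = subprocess.run(
    ["nvme", "list", "--output-format=json"],
    check=True, capture_output=True, text=True,
).stdout

for dev in json.loads(out).get("Devices", []):
    print(f'{dev["DevicePath"]}: {dev["ModelNumber"].strip()} fw={dev["Firmware"].strip()}')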


> Although when the PowerEdge R750 originally came out they removed the ability for the DRAC to monitor the endurance of non-vendor SSDs to penalize us; it took about 6 months of arguing to get them to put that back in.

I've seen a bug around this on R440s with certain drives as well, where a drive was falsely reported as EOL.  It's a much better idea to monitor endurance yourself than to trust iDRAC or any other BMC to do it.
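
Something cron-able along these lines is all it takes.  A minimal sketch, assuming smartmontools 7+ (for --json); /dev/nvme0 and the 90% threshold are just placeholders:

import json
import subprocess

DEVICE = "/dev/nvme0"   # placeholder -- iterate over your real inventory
ALERT_PCT = 90          # arbitrary example threshold

# smartctl uses a bit-mask exit status, so don't treat nonzero as fatal.
out = subprocess.run(
    ["smartctl", "--json", "-a", DEVICE],
    capture_output=True, text=True,
).stdout

log = json.loads(out)["nvme_smart_health_information_log"]
used = log["percentage_used"]    # drive's own estimate of endurance consumed
spare = log["available_spare"]   # remaining spare capacity, percent

print(f"{DEVICE}: {used}% endurance used, {spare}% spare remaining")
if used >= ALERT_PCT:
    print(f"WARNING: {DEVICE} past {ALERT_PCT}% of rated endurance")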


> It’s a trap!  Which is to say, that the $/GB really isn’t far away, and in fact once you step back to TCO from the unit economics of the drive in isolation, the HDDs often turn out to be *more* expensive.
> 
> I suppose depending on what DWPD/endurance you are assuming on the SSDs, but also in my very specific case we have PBs of HDDs in inventory, so that costs us…no additional money.

Fair enough; my remarks are naturally with respect to net-new acquisitions.  The OpEx of HDDs is still higher, though.
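
To make the TCO point concrete, here's a toy calculation.  Every input below is an invented placeholder (substitute your own quotes, power rate, and RU cost), and it deliberately ignores the IOPS/node-count effects that usually tip the scales further against HDDs; only the arithmetic is the point.

YEARS = 5

def tco_per_tb(price_per_tb, watts_per_tb, tb_per_ru,
               kwh_cost=0.12, ru_cost_per_year=200.0):
    # acquisition + energy over the service life + amortized rack space
    energy = watts_per_tb / 1000 * 24 * 365 * YEARS * kwh_cost
    space = ru_cost_per_year * YEARS / tb_per_ru
    return price_per_tb + energy + space

# All numbers are made-up placeholders, not real quotes.
print(f"HDD: ${tco_per_tb(15, 0.45, 300):.2f}/TB over {YEARS} years")
print(f"SSD: ${tco_per_tb(45, 0.25, 700):.2f}/TB over {YEARS} years")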


> My comment on there being more economical NVMe disks available was simply that if we are all changing over to NVMe but we don’t right now need to be able to move 7GB/s per drive

It's not just about performance; it's about which drives will still be available at all five years from now.

> it would be cool to just stop buying anything with SATA in it and then just change out the drives later.  Which was kind of the vibe with SATA when SSDs were first introduced.  Everyone disagrees with me on this point, but it doesn’t really make sense that you have to choose between SATA and NVMe on a system with a backplane.

There are "universal" (tri-mode, e.g. U.3) backplanes that will accept both, but of course you pay more and still need an HBA, even if it's built into the motherboard.


> 
> But yes, I see all of your points; if I were trying to build a Ceph cluster as primary storage and had a budget for this project, that would indeed change everything about my algebra.
> 
> Thanks for your time and consideration, I appreciate it.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



