Re: Enterprise SSD/NVME

Interesting idea. I'm not sure how well a utility could empirically test this without a hardcoded list of SKUs, but here are some thoughts.

* One difference is PLP - power loss protection.  An enterprise drive must IMHO offer this, and I don't know offhand of one that doesn't.  Client / desktop drives often don't.  I don't know whether this can be programmatically determined - maybe for NVMe, if compliant, but doubtful for SAS/SATA.  An indirect sync-write latency heuristic is sketched below this list.

* Sustained performance.  Some client drives (the industry term) may look okay for the first 60 seconds or so, but then we see what is sometimes called cliffing, where the drive saturates and performance drops precipitously.  I don't know offhand what kind of threshold you might enforce here; this could perhaps be determined by testing against representative SKUs.  A windowed sustained-write probe is sketched below as well.

* Endurance is important.  Client drives are intended for much gentler duty cycles than most enterprise drives.  For general Ceph usage I usually suggest a ~1 DWPD SKU; for archival / object purposes QLC drives with 0.3 - 1.0 DWPD work great.
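
On the PLP point: one indirect heuristic is to time small synchronous writes.  A drive with capacitor-backed cache can acknowledge flushes almost immediately, while a consumer drive without PLP usually collapses to a few hundred fsync'd writes per second or fewer.  Below is a minimal Python sketch along those lines - my own, not an official tool; the file path, block size, runtime and any IOPS threshold you'd apply are all assumptions on my part.

#!/usr/bin/env python3
# Rough sync-write latency probe (a sketch, not a vetted benchmark).
# Drives with PLP typically sustain thousands of 4 KiB fsync'd writes/s;
# consumer drives without it often manage only a few hundred or fewer.
# Usage: python3 syncprobe.py /mount/on/device/under/test/probe.dat
import os, sys, time

PATH = sys.argv[1]          # file on the filesystem backed by the device
BS = 4096                   # 4 KiB, roughly the small sync writes an OSD issues
SECONDS = 10                # short run; lengthen for more stable numbers

buf = os.urandom(BS)
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
ops = 0
t0 = time.monotonic()
try:
    while time.monotonic() - t0 < SECONDS:
        os.write(fd, buf)
        os.fsync(fd)        # force it to stable storage after every write
        ops += 1
finally:
    elapsed = time.monotonic() - t0
    os.close(fd)
    os.unlink(PATH)

print(f"{ops} fsync'd writes in {elapsed:.1f}s "
      f"= {ops/elapsed:.0f} IOPS, {1000*elapsed/ops:.2f} ms avg latency")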

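On the cliffing point: a rough way to see it programmatically is to write large blocks continuously and report throughput per time window; a cliff shows up as a sharp drop once the drive's fast cache is exhausted.  Again just a sketch under assumptions I'm making up here (O_DIRECT writes, 1 MiB blocks, 10 s windows, a 5 minute run, wrapping within a 16 GiB file) - fio does this far better, but it illustrates the idea.

#!/usr/bin/env python3
# Windowed sustained-write probe (a sketch, not a vetted benchmark).
# Writes 1 MiB blocks with O_DIRECT and prints throughput per window;
# a "cliff" appears as a sharp drop partway through the run.
# Usage: python3 cliffprobe.py /mount/on/device/under/test/probe.dat
import mmap, os, sys, time

PATH = sys.argv[1]
BS = 1 << 20                 # 1 MiB blocks
WINDOW = 10                  # seconds per reporting window
RUNTIME = 300                # long enough to exhaust most drive caches
MAX_FILE = 16 << 30          # wrap within 16 GiB so the test file stays bounded

buf = mmap.mmap(-1, BS)      # page-aligned buffer, required for O_DIRECT
buf.write(os.urandom(BS))

flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_DIRECT", 0)
fd = os.open(PATH, flags, 0o600)

start = time.monotonic()
win_start, win_bytes, written = start, 0, 0
try:
    while time.monotonic() - start < RUNTIME:
        os.write(fd, buf)
        written += BS
        win_bytes += BS
        if written >= MAX_FILE:          # rewrite the same region instead of growing
            os.lseek(fd, 0, os.SEEK_SET)
            written = 0
        now = time.monotonic()
        if now - win_start >= WINDOW:
            mbps = win_bytes / (now - win_start) / 1e6
            print(f"t={now - start:5.0f}s  {mbps:8.1f} MB/s")
            win_start, win_bytes = now, 0
finally:
    os.close(fd)
    os.unlink(PATH)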




> On Jan 10, 2025, at 5:11 PM, Martin Konold <martin.konold@xxxxxxxxxx> wrote:
> 
> Hi there,
> 
> it is well documented that Ceph performance is extremely poor with consumer ssd/nvme block devices.
> 
> Recommending enterprise or data center devices is IMHO not sufficient as these terms are not really standardized.
> 
> I propose to write a little system program which determines the properties of a device with regards to the system calls performed by ceph osd.
> 
> Can anyone hint me how such a program should look like? Simply some timing of fsync system calls? 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



