Re: recommendation for barebones server with 8-12 direct attach NVMe?



On 12/1/24 22:32, Drew Weaver wrote:
> So we were going to replace a Ceph cluster with some hardware we had
> laying around using SATA HBAs but I was told that the only right way to
> build Ceph in 2023 is with direct attach NVMe.

These kinds of statements make me at least ask questions. Dozens of 14 TB HDDs have served us reasonably well across four years of RBD for cloud workloads, and hundreds of 16 TB HDDs have met our requirements over two years of RGW operations, to the point that we are deploying 22 TB HDDs in the next batch. It remains to be seen how well 60-disk SAS-attached JBOD chassis will work out, but we believe we have an effective use case for them.
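For what it's worth, HDD and NVMe OSDs can also coexist in one cluster and be steered per pool via CRUSH device classes, so the choice need not be all-or-nothing. A rough sketch (pool and rule names here are just placeholders; assumes the OSDs already report their device class, which recent Ceph releases detect automatically):

```shell
# Check what device class each OSD reports (CLASS column: hdd, ssd, nvme)
ceph osd tree

# Create replicated CRUSH rules restricted to a device class
ceph osd crush rule create-replicated rbd-on-hdd default host hdd
ceph osd crush rule create-replicated meta-on-nvme default host nvme

# Point a pool at the class-specific rule (pool names are illustrative)
ceph osd pool set rbd crush_rule rbd-on-hdd
ceph osd pool set rgw-index crush_rule meta-on-nvme
```

That is how we keep, e.g., RGW index/metadata pools on flash while bulk data stays on spinners.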
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
