Re: recommendation for barebones server with 8-12 direct attach NVMe?

Oh, well, what I was originally going to do was just use SATA HBAs on PowerEdge R740s, because we don't really care about performance; this cluster is just a copy point for backups/archival. The current Ceph cluster we have (HDDs attached to Dell RAID controllers, each disk presented as a single-disk RAID-0 volume, and it works just fine for us) is on EL7, which is going EOL soon. So I thought that on the new cluster it would be better to use HBAs instead of having the OSDs sit on single-disk RAID-0 volumes, because I'm pretty sure that's the least-recommended setup, even though it has been working for us for about 8 years now.
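
To be a bit more concrete about what "use HBAs instead" would look like: with plain JBOD/HBA-attached disks, Ceph can consume the devices directly rather than each one being wrapped in a single-disk RAID-0 volume first. A minimal sketch of an OSD service spec, assuming a cephadm-managed cluster (the service_id and host pattern here are just placeholders, not anything from our setup):

    # osd-spec.yaml -- let cephadm create OSDs on all eligible disks
    service_type: osd
    service_id: backup_osds
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true

    # apply it
    ceph orch apply -i osd-spec.yaml

The nice part versus the RAID-0 approach is that a replacement is just "swap the disk"; there's no controller-side virtual disk to recreate first.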

So I asked the list for recommendations and also read the documentation, and it really sounds like the only "right way" to run Ceph is with disks attached directly to the motherboard. I had thought plain HBAs were fine, but I'm probably confusing that with ZFS/BSD or some other equally hyperspecific requirement. The other takeaway was that NVMe seems to be the only right way now, too.
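
For what it's worth, one rough way to check whether NVMe drives really are direct-attached (no tri-mode HBA or PCIe switch in the path) is to look at the device transport and PCIe topology on a candidate box, e.g.:

    nvme list                    # from nvme-cli; lists the NVMe namespaces
    lsblk -o NAME,TRAN,MODEL     # TRAN should say "nvme", not "sas"/"sata"
    lspci -tv                    # direct-attached controllers hang off root ports, not a switch or HBA

That's only a sanity check, but it will catch a backplane that is quietly sitting behind a bridge.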

I would rather have just stuck with SATA, but I figured that if I'm going to have to buy all-new servers whose backplanes connect straight to the motherboard anyway, I may as well do it with NVMe (even though the media will cost a lot more).

It would be cool if someone made NVMe drives that were cost-competitive with hard drives and had similar performance (meaning not super expensive, but not lightning fast either), because the $/GB on datacenter NVMe drives from Kioxia, etc. is still pretty far from what it is for HDDs (obviously).

Anyway, thanks.
-Drew





-----Original Message-----
From: Robin H. Johnson <robbat2@xxxxxxxxxx> 
Sent: Sunday, January 14, 2024 5:00 PM
To: ceph-users@xxxxxxx
Subject:  Re: recommendation for barebones server with 8-12 direct attach NVMe?

On Fri, Jan 12, 2024 at 02:32:12PM +0000, Drew Weaver wrote:
> Hello,
> 
> So we were going to replace a Ceph cluster with some hardware we had 
> laying around using SATA HBAs but I was told that the only right way 
> to build Ceph in 2023 is with direct attach NVMe.
> 
> Does anyone have any recommendation for a 1U barebones server (we just 
> drop in ram disks and cpus) with 8-10 2.5" NVMe bays that are direct 
> attached to the motherboard without a bridge or HBA for Ceph 
> specifically?
If you're buying new, Supermicro would be my first choice of vendor, based on experience.
https://www.supermicro.com/en/products/nvme

You said 2.5" bays, which makes me think you have existing drives.
There are models to fit that, but if you're also considering new drives, you can get higher density with E1/E3 form factors.

The only caveat is that you will absolutely want to put a better NIC in these systems, because 2x10G is easy to saturate with a pile of NVMe.
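
Rough back-of-envelope on that (my numbers, assuming PCIe 4.0 x4 drives; adjust for whatever you actually buy):

    2 x 10GbE            ~= 2 x 1.25 GB/s = ~2.5 GB/s line rate
    1 x PCIe 4.0 x4 NVMe ~= 5-7 GB/s sequential read

So a single modern drive can already outrun a bonded 2x10G link, never mind 8-10 of them; 25G or 100G NICs are a much better match.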

--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robbat2@xxxxxxxxxx
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


