On 13/1/2024 1:02 am, Drew Weaver wrote:
Hello,
So we were going to replace a Ceph cluster with some hardware we had lying around using SATA HBAs, but I was told that the only right way to build Ceph in 2023 is with direct-attached NVMe.
Does anyone have a recommendation for a 1U barebones server (we just drop in RAM, disks, and CPUs) with 8-10 2.5" NVMe bays that are direct-attached to the motherboard, without a bridge or HBA, for Ceph specifically?
Thanks,
-Drew
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
Hi
You need to use a PCIe card with a PCIe switch; cards with 4 x M.2 NVMe
are cheap enough, around USD $180 on AliExpress.
There are companies selling cards with many more M.2 ports, but the
cost goes up greatly.
We just built a 3 x 1RU HP G9 cluster with 4 x 2TB M.2 NVMe, using dual
40G Ethernet ports and dual 10G Ethernet, and a second-hand Arista
16-port 40G switch.
It works really well.
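If you want to confirm how the drives actually ended up attached (straight
under a PCIe root port vs. behind a switch or bridge), here is a minimal
Python sketch that counts the PCI hops in each controller's sysfs device
path. It assumes a Linux host with the usual /sys/class/nvme layout; treat
it as a rough check, not a definitive tool.

#!/usr/bin/env python3
# Minimal sketch: classify each NVMe controller as direct-attached
# (endpoint straight under a PCIe root port) or behind a switch/bridge,
# by counting PCI addresses in its sysfs device path. Linux-only.
import glob
import os
import re

PCI_ADDR = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # Resolves to something like:
    # /sys/devices/pci0000:00/0000:00:1b.4/0000:01:00.0
    path = os.path.realpath(os.path.join(ctrl, "device"))
    hops = [p for p in path.split("/") if PCI_ADDR.match(p)]
    if not hops:
        continue  # not a locally PCI-attached controller
    # 2 hops = root port + endpoint (direct attach); more hops mean
    # upstream/downstream switch ports or bridges sit in between.
    kind = "direct to root port" if len(hops) <= 2 else "behind a switch/bridge"
    print(f"{os.path.basename(ctrl)} at {hops[-1]}: {kind}")

On a box using one of those 4 x M.2 switch cards you should see the extra
bridge hops show up; on true direct-attach backplanes you should not.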
Cheers
Mike