Re: recommendation for barebones server with 8-12 direct attach NVMe?

The OP is asking about new servers, I think.

> On Jan 13, 2024, at 9:36 PM, Mike O'Connor <mike@xxxxxxxxxx> wrote:
> 
> Because it's almost impossible to purchase the equipment required to convert old drive bays to U.2, etc.
> 
> The M.2s we purchased are enterprise-class.
> 
> Mike
> 
> 
>> On 14/1/2024 12:53 pm, Anthony D'Atri wrote:
>> Why use such a card and M.2 drives that I suspect aren’t enterprise-class, instead of U.2, E1.S, or E3.S?
>> 
>>>> On Jan 13, 2024, at 5:10 AM, Mike O'Connor <mike@xxxxxxxxxx> wrote:
>>> 
>>> On 13/1/2024 1:02 am, Drew Weaver wrote:
>>>> Hello,
>>>> 
>>>> So we were going to replace a Ceph cluster with some hardware we had lying around, using SATA HBAs, but I was told that the only right way to build Ceph in 2023 is with direct-attach NVMe.
>>>> 
>>>> Does anyone have any recommendation for a 1U barebones server (we just drop in RAM, disks, and CPUs) with 8-10 2.5" NVMe bays that are direct-attached to the motherboard without a bridge or HBA, for Ceph specifically?
>>>> 
>>>> Thanks,
>>>> -Drew
>>>> 
>>>> _______________________________________________
>>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>> Hi
>>> 
>>> You need to use a PCIe card with a PCIe switch; cards with 4 x M.2 NVMe are cheap enough, around USD $180 from AliExpress.
>>> 
>>> Some companies offer cards with many more M.2 ports, but the cost goes up greatly.
>>> 
>>> We just built a 3 x 1RU HP G9 cluster with 4 x 2TB M.2 NVMe each, using dual 40G Ethernet ports and dual 10G Ethernet, plus a second-hand Arista 16-port 40G switch.
>>> 
>>> It works really well.
>>> 
>>> Cheers
>>> 
>>> Mike
> 



