Re: recommendation for barebones server with 8-12 direct attach NVMe?



> Now that you say it's just backups/archival, QLC might be excessive for
> you (or a great fit if the backups are churned often).

PLC isn’t out yet, though, and probably won’t have a conventional block interface.

> USD 70/TB is the best public large-capacity NVMe pricing I'm aware of
> presently, for 30 TB QLC drives. Smaller-capacity drives do get down to
> USD 50/TB. 2.5" SATA spinning disk is USD 20-30/TB.

2.5” spinners top out at 5TB last I checked, and a certain chassis vendor only resells half that capacity.

But as I’ve written, *drive* unit economics are myopic.  We don’t run palletloads of drives, we run *servers* with drive bays, admin overhead, switch ports, etc., that take up RUs, eat amps, and fart out watts.

> PCIe bandwidth: this goes for NVMe as well as SATA/SAS.
> I won't name the vendor, but I saw a weird NVMe server with 50+ drive
> slots.  Each drive slot was x4 lane width but had a number of PCIe
> expanders in the path from the motherboard, so if you were trying to max
> it out, simultaneously using all the drives, each drive only got
> ~1.7 usable PCIe 4.0 lanes.

I’ve seen a 2U server with, IIRC, 102 E1.L bays, but it was only Gen3.
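The oversubscription arithmetic in the quoted example is easy to reproduce. The post doesn't give the uplink lane count, so the figures below are hypothetical values chosen to land near the quoted ~1.7-lanes-per-drive result:

```python
# Back-of-envelope PCIe oversubscription for an expander-heavy NVMe chassis.
# Uplink lane count and exact slot count are ASSUMPTIONS (not in the post);
# 88 Gen4 uplink lanes feeding 52 x4 slots reproduces the quoted ~1.7 figure.

def effective_lanes_per_drive(uplink_lanes: float, n_drives: int) -> float:
    """Usable lane share per drive when every drive is busy at once."""
    return uplink_lanes / n_drives

slot_width = 4      # each slot is physically x4
uplink = 88         # assumed Gen4 lanes from the CPU into the expander tree
drives = 52         # assumed slot count ("50+")

share = effective_lanes_per_drive(uplink, drives)
print(f"~{share:.2f} usable Gen4 lanes per x{slot_width} slot")
```

The point being: the physical x4 connector tells you nothing until you trace what's upstream of the expanders.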

> Compare that to the Supermicro servers I suggested: The AMD variants use
> a H13SSF motherboard, which provides 64x PCIe5.0 lanes, split into 32x
> E3.S drive slots, and each drive slot has 4x PCIe 4.0, no
> over-subscription.

Having the lanes and filling them are two different things, though.

ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
