If there's any intent to use this hardware for performance comparisons between releases, I would propose including rotational drive(s) as well. Given the storage costs of all-flash deployments, it will be quite some time before everyone is running pure NVMe/SSD clusters, and the test clusters should reflect that.
On Fri, Oct 8, 2021 at 6:25 PM Dan Mick <dmick@xxxxxxxxxx> wrote:
Ceph has been completely ported to build and run on ARM hardware
(architecture arm64/aarch64), but we're unable to test it due to lack of
hardware. We propose to purchase a significant number of ARM servers
(50+?) to install in our Sepia test lab, for upstream testing of Ceph
alongside the x86 hardware we already own.
This message is meant to start a discussion of what that hardware
should be, and an investigation into what's available and how much it
might cost. The general idea is to build something arm64-based
that is similar to the smithi/gibba nodes:
https://wiki.sepia.ceph.com/doku.php?id=hardware:gibba
Some suggested features:
* base hardware/peripheral support for current releases of RHEL, CentOS,
Ubuntu
* 1 fast and largish (400GB+) NVMe drive for OSDs (it will be
partitioned into 4-5 subdrives for tests; see the sketch below the list)
* 1 large (1TB+) SSD/HDD for boot/system and logs (faster is better but
not as crucial as for cluster storage)
* Remote/headless management (IPMI?)
* At least 1 10G network interface per host
* On the order of 64GB of main memory per host
Density is valuable to the lab; we have space but not an unlimited amount.
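For illustration, here's a rough Python sketch of how one of those NVMe drives might be carved into equal GPT partitions for per-OSD use in tests. The device path, partition count, and partition names are assumptions for the example, not part of the proposal, and in the lab this kind of setup would presumably be driven by the test tooling rather than done by hand:

#!/usr/bin/env python3
# Sketch only: split one NVMe drive into equal GPT partitions so each
# can act as a separate OSD device in tests. Device path and partition
# count are assumed values, not part of the proposal.
import subprocess

DEVICE = "/dev/nvme0n1"  # assumed device node
NPARTS = 4               # the proposal mentions 4-5 subdrives

# Total device size in bytes, via util-linux blockdev.
size = int(subprocess.check_output(["blockdev", "--getsize64", DEVICE]))
gib_per_part = size // NPARTS // (1024 ** 3)

# Destructive: wipe any existing partition table (sgdisk -Z).
subprocess.run(["sgdisk", "-Z", DEVICE], check=True)

for i in range(1, NPARTS + 1):
    # Equal shares; the last partition takes the remaining space
    # (an end position of 0 means "last usable sector" to sgdisk).
    end = "0" if i == NPARTS else f"+{gib_per_part}G"
    subprocess.run(
        ["sgdisk", "-n", f"{i}:0:{end}", "-c", f"{i}:osd-test-{i}", DEVICE],
        check=True,
    )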
Any suggestions on vendors or specific server configurations?
Thanks!
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx