On 10/9/21 01:24, Dan Mick wrote:
> Ceph has been completely ported to build and run on ARM hardware
> (architecture arm64/aarch64), but we're unable to test it due to lack of
> hardware. We propose to purchase a significant number of ARM servers
> (50+?) to install in our upstream Sepia test lab to use for upstream
> testing of Ceph, alongside the x86 hardware we already own.
Nice, great move!
> This message is to start a discussion of what the nature of that
> hardware should be, and an investigation as to what's available and how
> much it might cost. The general idea is to build something arm64-based
> that is similar to the smithi/gibba nodes:
> https://wiki.sepia.ceph.com/doku.php?id=hardware:gibba
> Some suggested features:
> * base hardware/peripheral support for current releases of RHEL, CentOS,
> Ubuntu
> * 1 fast and largish (400GB+) NVMe drive for OSDs (it will be
> partitioned into 4-5 subdrives for tests)
> * 1 large (1TB+) SSD/HDD for boot/system and logs (faster is better but
> not as crucial as for cluster storage)
> * Remote/headless management (IPMI?)
> * At least 1 10G network interface per host
> * Order of 64GB main memory per host
I'm not sure what the lab infra looks like, but personally I would like
to have separate management and out-of-band management interfaces.
> Density is valuable to the lab; we have space but not an unlimited amount.
Is power an issue as well?
> Any suggestions on vendors or specific server configurations?
I'm not sure what kind of workloads the servers have to handle, so based
on just the specs of the other servers it's a bit hard to advise.
Suitable hardware might be Ampere-powered Arm servers, like the Lenovo
ThinkSystem HR330A [1].
Less enterprise-grade, but in line with the objective of this thread,
would be SolidRun's HoneyComb LX2 server [2]. Networking might be a
challenge here, as you would need a recent or a custom-built kernel.
Gr. Stefan
[1]:
https://amperecomputing.com/wp-content/uploads/2019/04/Lenovo_ThinkSystem_HR330A_PB_20190409.pdf
[2]: https://www.solid-run.com/blog/articles/honeycomb-lx2-server/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx