Ignacio,

Personally, I like to use hardware for a proof of concept that I can roll over into the final system, or repurpose if the project is denied. As such, I would recommend these:

Supermicro 5019A-12TN4 Barebones
<https://www.supermicro.com/products/system/1U/5019/SYS-5019A-12TN4.cfm>

I built our PoC around three of them (2 x 12TB Seagate IronWolf drives, plus 1 x 256GB Intel M.2 SSD each), then they were turned into MONs for the production cluster. The production cluster ended up with larger M.2s and smaller spinners, as I was concerned about recovery time for 24 - 36TB per node with only 4 x 1Gb networking.

Just my 2 cents.

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

-----Original Message-----
From: Ignacio Ocampo [mailto:nafiux@xxxxxxxxx]
Sent: Sunday, March 08, 2020 7:00 PM
To: ceph-users@xxxxxxx
Subject: Hardware feedback before purchasing for a PoC

Hi team,

I'm planning to invest in hardware for a PoC and I would like your feedback before the purchase. The goal is to deploy a *16TB* storage cluster with *3 replicas*, thus *3 nodes*.

System configuration: https://pcpartpicker.com/list/cfDpDx ($400 USD per node)

Some notes about the configuration:

- 4-core processor for 4 OSD daemons
- 8GB RAM for the first 4TB of storage, increasing to 16GB of RAM at 16TB of storage
- Motherboard:
  - 4 x SATA 6 Gb/s ports (one per OSD disk)
  - 2 x PCI-E x1 slots (1 will be used for an additional Gigabit Ethernet NIC)
  - 1 x M.2 slot for the host OS
- RAM can increase up to 32GB, and another SATA 6 Gb/s controller can be added on PCI-E x1 for growth up to *32TB*

As noted, the plan is to deploy nodes with *4TB* and gradually add *12TB* as needed; memory should also be increased to *16GB* after the *8TB* threshold.

Questions to validate before the purchase:

1. Do the hardware components make sense for the *16TB* growth projection?
2.
Is it easy to gradually add more capacity to each node (*4TB* each time per node)?

Thanks for your support!

--
Ignacio Ocampo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
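For what it's worth, the replication arithmetic in the original message can be sanity-checked with a quick sketch. This is a rough illustration only: the helper name is made up, and it ignores BlueStore overhead and the near-full headroom you would leave in practice.

```python
# Usable capacity of a replicated Ceph pool: total raw space divided by the
# replica count. Illustrative helper, not part of any Ceph API.
def usable_tb(nodes: int, raw_tb_per_node: float, replicas: int) -> float:
    return nodes * raw_tb_per_node / replicas

# Initial deployment: 3 nodes x 4TB raw each, size=3 pool.
print(usable_tb(3, 4, 3))   # 4.0 TB usable
# Full build-out: 3 nodes x 16TB raw each, matching the 16TB target.
print(usable_tb(3, 16, 3))  # 16.0 TB usable
```

So reaching the stated *16TB* usable goal with 3 replicas requires the full 16TB raw in each of the 3 nodes, which is why the recovery time per node (and the 4 x 1Gb network) matters.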