Sorry, I can't compare; I haven't used Ceph in anger anywhere else.

In our case we were looking for on-premises storage for Kubernetes, and several points led us to the Ambedded solution. I wouldn't expect the appliances to match the throughput of a full-size Xeon server, but for our immediate purposes that is not really an issue.

- It was a turnkey solution with support. We knew very little about either Kubernetes or Ceph, so this was the lowest-risk option for us.

- Size and power. We only have single racks in two datacentres, so space is a serious consideration. Ceph is very machine-hungry, and alternatives such as SoftIron ran to at least 7U. We get 24 micro servers in 3U, drawing only 105 W per unit and producing little heat.

- Cost. These appliances are incredibly inexpensive for the amount of storage they provide. Even the smallest offerings from the likes of Dell/EMC were both a lot larger and astronomically more expensive. The licensing is tied purely to the physical appliances; hard drives and M.2 cache can be upgraded. You can even run on a single appliance, albeit with much reduced resilience and capacity, which makes evaluation really inexpensive.

- Open source standard. Anything we learn from running these appliances is directly transferable to any Ceph install. Anything we learned on Dell/EMC would be yet more lock-in to Dell/EMC.

We intend to experiment with Rook in the near future, but our inexperience with both Kubernetes and Ceph made that option too risky for the initial stages. If we run Rook properly, we think we will be able to co-locate things like databases with their OSDs and storage on the same server, so that performance is optimal while we keep the management control of Ceph (see the rough sketch at the end of this mail). But the bulk of our data is exactly that, bulk storage, and doesn't require massive performance.

On Wed, 25 Nov 2020 at 09:35, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> How does ARM compare to Xeon in latency and cluster utilization?
>
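
A minimal sketch of the co-location idea mentioned above, assuming the OSD-carrying nodes have been given a label such as ceph-osd=true (the label, pod name and image are just examples, not our actual setup), using the Kubernetes Python client to pin a database pod onto an OSD node:

from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config would differ).
config.load_kube_config()

# The "ceph-osd=true" node label is an assumption for this example; you would
# label the OSD-carrying nodes yourself, Rook does not apply it for you.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="postgres-colocated"),
    spec=client.V1PodSpec(
        node_selector={"ceph-osd": "true"},   # only schedule onto OSD nodes
        containers=[
            client.V1Container(
                name="postgres",
                image="postgres:13",
                env=[client.V1EnvVar(name="POSTGRES_PASSWORD", value="changeme")],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

In practice Rook itself is driven by YAML manifests, and you would more likely express this as a nodeSelector or affinity in the database's own manifest; the Python client is used here only to keep the sketch self-contained.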