Re: Ceph on ARM ?

I had hoped to stay out of this, but here I go.

> 4) SATA controller and PCIe throughput

SoftIron claims “wire speed” with their custom hardware, FWIW.

> Unfortunately these are the kinds of things that you can't easily generalize between ARM vs x86.  Some ARM processors are going to do wildly better than others.

Absolutely, just like an Intel Atom will deliver a different experience than a high-end Xeon.  Those of us in the US, though, may see markedly different pricing and product availability than other regions do.

> 
>> Hi guys,
>> 
>> I was looking at some Huawei ARM-based servers and the datasheets are very interesting. The high CPU core numbers and the SoC architecture should be ideal for a distributed storage like Ceph, at least in theory.
>> 
>>  I'm planning to build a new Ceph cluster in the future and my best case scenario right now is to buy 7 servers with 2 x Intel Silver 12-core or 2 x Gold 20-core CPUs, 32 SATA drives each. And of course - SSDs/NVMes for wal/db and the metadata pools, 256GB RAM and so on.

Consider whether a single-socket Epyc would let you move to QLC drives, avoid the external WAL/DB hassle entirely, and potentially skip the HBA altogether.
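For contrast, here's a minimal sketch of the two OSD layouts being weighed, built around `ceph-volume lvm create` (the standard Ceph deployment tool and its real `--data`/`--block.db` flags; the device paths are hypothetical):

```python
# Sketch of the two BlueStore OSD layouts discussed above.
# Device paths are hypothetical examples; "ceph-volume lvm create"
# with --data and --block.db is the standard Ceph OSD deployment interface.

def osd_create_cmd(data_dev, db_dev=None):
    """Build the ceph-volume command line for one OSD."""
    cmd = ["ceph-volume", "lvm", "create", "--bluestore", "--data", data_dev]
    if db_dev is not None:
        # External WAL/DB on a faster device (the SSD/NVMe-assisted layout).
        cmd += ["--block.db", db_dev]
    return cmd

# HDD/TLC layout: WAL/DB offloaded to a shared NVMe partition.
hybrid = osd_create_cmd("/dev/sdb", "/dev/nvme0n1p1")
# QLC-only layout: everything colocated, nothing external to manage or size.
flat = osd_create_cmd("/dev/sdb")

print(" ".join(hybrid))
print(" ".join(flat))
```

The second form is the operational win: no DB partition sizing, no spillover monitoring, and one less device whose failure takes out several OSDs.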

>>  I'm curious however if the ARM servers are better or not for this use case (object-storage only).

Everyone’s needs are unique, but my sense is that object storage can be a reasonable fit for the right ARM gear if your use case cares more about throughput than latency or IOPS.


>>  For example, instead of using 2xSilver/Gold server, I can use a Taishan 5280 server with 2x Kunpeng 920 ARM CPUs with up to 128 cores in total.  So I can have twice as many CPU cores (or even more) per server compared with x86.

More but slower cores work well for some workloads and are miserable for others; cf. Sun’s CMT systems from the last decade.  Conventional wisdom is that the CephFS MDS, for example, wants fewer but faster cores, so that’s probably one application for which such a CPU isn’t a good fit.
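A back-of-the-envelope Amdahl's-law sketch shows why: a mostly-serial daemon like the MDS prefers fewer, faster cores, while embarrassingly parallel work favors core count (the clock-speed ratios and serial fractions below are invented for illustration, not measurements):

```python
# Amdahl's law scaled by per-core speed: relative throughput of a
# workload with a given serial fraction on n cores.
# All numbers are illustrative, not benchmarks of any real CPU.

def throughput(cores, core_speed, serial_fraction):
    """Relative throughput on `cores` cores of relative speed `core_speed`."""
    per_unit_time = serial_fraction + (1 - serial_fraction) / cores
    return core_speed / per_unit_time

# Highly parallel work (e.g. many independent OSD worker threads):
many_slow = throughput(cores=128, core_speed=1.0, serial_fraction=0.01)
few_fast = throughput(cores=40, core_speed=1.6, serial_fraction=0.01)
print(many_slow > few_fast)  # lots of slow cores win

# Mostly serial work (e.g. a hot single-threaded lock path in the MDS):
many_slow = throughput(cores=128, core_speed=1.0, serial_fraction=0.6)
few_fast = throughput(cores=40, core_speed=1.6, serial_fraction=0.6)
print(few_fast > many_slow)  # fewer, faster cores win
```

The crossover point depends entirely on the workload's serial fraction, which is why "128 cores" alone doesn't tell you whether the box is a good Ceph node.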

>>  Probably the price is lower for the ARM servers as well.

That’s partly a function of your haggling skills.  And the power savings (and the resulting cooling reduction) are not imaginary.  Time and again I’ve seen DC racks only partially populated because they ran out of amps before they ran out of RUs.  Racks, RUs, TORs, switch ports: these things all cost money.  It’s like when people think cheap 1TB drives are a bargain; you have to look at TCO, including power, heat, and how many TB you can fit in a chassis and a rack.
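The cheap-small-drive point can be made concrete with a toy TCO-per-usable-TB calculation (every price, wattage, and capacity below is invented purely for illustration):

```python
# Toy TCO model: purchase price plus electricity over the service life,
# spread across usable capacity. All inputs are invented example figures,
# not quotes for any real product.

def tco_per_tb(drive_price, drive_tb, watts, drives_per_chassis,
               chassis_price, years=5, usd_per_kwh=0.12):
    """USD per TB over the service life, including drive power draw."""
    kwh_per_drive = watts * 24 * 365 * years / 1000
    per_drive = drive_price + kwh_per_drive * usd_per_kwh
    total = per_drive * drives_per_chassis + chassis_price
    return total / (drive_tb * drives_per_chassis)

# "Bargain" 1TB drives: a whole chassis buys you only 32 TB.
small = tco_per_tb(drive_price=40, drive_tb=1, watts=7,
                   drives_per_chassis=32, chassis_price=6000)
# Larger drives: pricier each, but far fewer slots, amps, and RUs per TB.
big = tco_per_tb(drive_price=450, drive_tb=18, watts=8,
                 drives_per_chassis=32, chassis_price=6000)
print(f"small drives: ${small:.2f}/TB, big drives: ${big:.2f}/TB")
```

Even before counting switch ports and rack space, the chassis and power overhead dominates the small-drive build; scale the same comparison to racks and the gap widens.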

>>     Has anyone tested Ceph in such scenario ?  Is the Ceph software really optimised for the ARM architecture ?  What do you think about this ?

SoftIron gear has been running production Ceph for a few years.  They tell me the code is unmodified.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
