Anthony,
I just used ordinary HDDs. I plan to test the same HDDs on an x86 cluster and
an ARM cluster to compare the CephFS performance difference.
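For the comparison I would run something like the following fio jobs against a CephFS mount on each cluster (the mount point /mnt/cephfs and the job sizes are just placeholders, not the actual setup):

    # sequential 4M writes through the CephFS kernel mount, 4 parallel jobs
    fio --name=cephfs-seqwrite --directory=/mnt/cephfs \
        --rw=write --bs=4M --size=4G --numjobs=4 \
        --direct=1 --group_reporting

    # small random reads to compare latency/IOPS between the two clusters
    fio --name=cephfs-randread --directory=/mnt/cephfs \
        --rw=randread --bs=4k --size=1G --numjobs=4 \
        --direct=1 --group_reporting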
Best regards,
Norman
On 8/7/2020 11:51 AM, Anthony D'Atri wrote:
Bear in mind that ARM and x86 are architectures, not CPU models. Both are available in a vast variety of core counts, clocks, and implementations.
E.g., an 80-core Ampere Altra will likely smoke an Intel Atom D410 in every way.
That said, what does “performance” mean? For object storage, the focus might be throughput; for block storage attached to VM instances, chances are that latency (IOPS) is the dominant concern.
And if you’re running LFF HDDs, it’s less likely to matter than if your nodes are graced with an abundance of NVMe devices split into multiple OSDs and using dmcrypt.
You might like to visit softiron.com if they aren’t already on your radar. If they fit your use-case, their ARM servers boast really low power usage compared to e.g. a dual Xeon chassis: less power, less cooling, RUs become the limiting factor in your racks instead of amps, etc.
— me
Aaron,
A significant performance difference for the OSDs? Can you tell me the
percentage of the performance drop?
Best regards,
Norman
On 8/7/2020 9:56 AM, Aaron Joue wrote:
Hi Norman,
It works well. We mix Arm and x86. For example, OSDs and MONs are on Arm, and RGWs are on x86. We can also put x86 OSDs in the same cluster. All of the Ceph daemons talk to each other with the same protocol. Just separate the OSDs into different CRUSH roots if the Arm and x86 OSDs have a significant performance difference.
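As a rough sketch of that separation (the bucket, host, rule, and pool names here are just placeholders, adjust them to your own topology):

    # create a separate root for the Arm OSD hosts and move them under it
    ceph osd crush add-bucket arm-root root
    ceph osd crush move arm-host1 root=arm-root
    ceph osd crush move arm-host2 root=arm-root

    # replicated rule that only places data on OSDs under arm-root
    ceph osd crush rule create-replicated arm_rule arm-root host

    # point a pool at that rule so its data stays on the Arm OSDs
    ceph osd pool set arm-pool crush_rule arm_rule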
Best regards,
Aaron
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx