Re: Proxmox+Ceph Benchmark 2020


 



Thanks for the link, Alwin!


On Intel platforms, disabling C/P state transitions can have a really big impact on IOPS (on RHEL, for instance, by using the network-latency or latency-performance tuned profiles). It would be very interesting to know whether AMD EPYC platforms see similar benefits. I don't have any in house, but if you happen to have a chance, it would be an interesting addendum to your report.
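
For what it's worth, here is a minimal sketch (Python, assuming a Linux host that exposes the standard cpufreq/cpuidle sysfs interfaces; profile names are the RHEL-style tuned ones) that reports the scaling governor and C-state configuration on cpu0, which is handy for verifying what a tuned profile actually changed:

#!/usr/bin/env python3
# Minimal sketch: report the cpufreq governor and the C-states of cpu0,
# so the effect of e.g. "tuned-adm profile latency-performance" can be checked.
# Assumes a Linux host with the standard cpufreq/cpuidle sysfs interfaces.
from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0")

def read(p: Path) -> str:
    try:
        return p.read_text().strip()
    except OSError:
        return "n/a"

# Current frequency scaling governor (the latency-oriented tuned profiles
# switch this to "performance").
print("governor:", read(CPU0 / "cpufreq" / "scaling_governor"))

# List each cpuidle state with its exit latency and whether it is disabled.
for state in sorted((CPU0 / "cpuidle").glob("state*")):
    name = read(state / "name")
    latency_us = read(state / "latency")
    disabled = read(state / "disable")
    print(f"{state.name}: {name:10s} latency={latency_us}us disabled={disabled}")

As far as I know, the latency profiles also hold /dev/cpu_dma_latency open to cap the allowed C-state exit latency, so re-running the same fio workload before and after switching profiles with tuned-adm should show the IOPS delta quite clearly.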


Mark


On 10/13/20 5:17 AM, Alwin Antreich wrote:
Hello fellow Ceph users,

we have released our new Ceph benchmark paper [0]. The platform used is
Proxmox VE 6.2 with Ceph Octopus on new AMD EPYC Zen 2 CPUs with U.2 SSDs
(details in the paper).

The paper illustrates the performance that is possible with a three-node
cluster without significant tuning.

I welcome everyone to share their experience and add to the discussion,
preferably in our forum thread [1] with our fellow Proxmox VE users.

--
Cheers,
Alwin

[0] https://proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark-2020-09
[1] https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




