Re: Experience with 100G Ceph in Proxmox


 



600 MB/s is rather slow. With 10 Gbit/s I regularly measure 1.28 GB/s bandwidth even with a single connection.

The issue is latency, not bandwidth!

The latency is bound by the CPU serving the OSDs when decent NVMe storage is used.

In an ideal world, network latency would be the limiting factor, though.

The good news is that a CPU-bound OSD implementation is a solvable problem.
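
As a rough back-of-the-envelope sketch (the latency and block size below are illustrative assumptions, not measurements from this thread), per-op round-trip latency at queue depth 1 caps what a single client sees long before either the NVMe or the link bandwidth does:

# Illustrative only: how per-op round-trip latency limits single-client
# throughput at queue depth 1. All numbers here are assumptions.
block_size_kib = 4        # typical small random read
round_trip_ms = 0.5       # assumed client -> OSD -> client round trip

iops = 1000.0 / round_trip_ms                   # one op in flight at a time
throughput_mib_s = iops * block_size_kib / 1024

print(f"QD1, {block_size_kib} KiB ops, {round_trip_ms} ms RTT: "
      f"{iops:.0f} IOPS ~ {throughput_mib_s:.1f} MiB/s")
# ~2000 IOPS / ~8 MiB/s -- shaving OSD-side CPU time off the round trip
# helps far more here than adding link bandwidth.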

Regards
--martin

On 12.03.2025 15:14, Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx> wrote:

How about testing the actual network throughput with iperf?  Even today
there are speed/duplex mismatches on switch ports.  And what everyone else
said about saturation etc.

We get, at absolute worst, 600 MB/s on a 10G connection.
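
If it helps, here is a minimal sketch (the peer hostname is a placeholder, and an iperf3 server is assumed to be running on it) that runs the test and converts the result to MB/s for direct comparison with the Ceph numbers above:

#!/usr/bin/env python3
# Run iperf3 against a peer node and report throughput in MB/s.
# "ceph-node2" is a placeholder; start "iperf3 -s" there first.
import json
import subprocess

result = subprocess.run(
    ["iperf3", "-c", "ceph-node2", "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

bps = report["end"]["sum_received"]["bits_per_second"]
print(f"{bps / 8 / 1e6:.0f} MB/s ({bps / 1e9:.2f} Gbit/s)")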
--
Alex Gorbachev
https://alextelescope.blogspot.com



On Tue, Mar 11, 2025 at 6:57 AM Giovanna Ratini <
giovanna.ratini@xxxxxxxxxxxxxxx> wrote:

> Hello everyone,
>
> We are running Ceph in Proxmox with a 10G network.
>
> Unfortunately, we are experiencing very low read rates. I will try to
> implement the solution recommended in the Proxmox forum. However, even
> 80 MB per second with an NVMe drive is quite disappointing.
> Forum link: <https://forum.proxmox.com/threads/slow-performance-on-ceph-per-vm.151223/#post-685070>
>
> For this reason, we are considering purchasing a 100G switch for our
> servers.
>
> This raises some questions:
> Should I still use separate networks for VMs and Ceph with 100G?
> I have read that running Ceph on bridged connections is not recommended.
>
> Does anyone have experience with 100G Ceph in Proxmox?
>
> Is upgrading to 100G a good idea, or will I have 60G sitting idle?
>
> Thanks in advance!
>
> Gio
>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
