Re: Experience with 100G Ceph in Proxmox

Agree that the net is likely not your problem, though you should use iftop et al to look for saturation.  
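A quick way to eyeball that during a benchmark (assuming the Ceph traffic rides bond0; adjust the interface name to yours):

    iftop -i bond0 -n    # live per-flow view; watch the peak/cumulative bars
    sar -n DEV 1         # per-NIC rx/tx throughput every second (sysstat package)

If neither bond member gets anywhere near line rate while the benchmark runs, the bottleneck is elsewhere.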

Check that you have the proper xmit hash policy; otherwise you may not be using both bond links.
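A sketch of how to verify and set it (bond0 and the slave NIC names are placeholders):

    # What the bond is currently using:
    grep -i 'hash policy' /proc/net/bonding/bond0

    # /etc/network/interfaces on Proxmox (ifupdown2) -- layer3+4 hashes on
    # IP+port, so multiple TCP sessions can spread across both links:
    auto bond0
    iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

With the default layer2 policy, all traffic between the same two hosts hashes onto a single link.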

The linked thread mentions the SSDPE2KE032T8, which is an NVMe drive (Intel DC P4610), not a SATA one.

The replication (cluster) network is always optional; it's just a matter of saturation.
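If you do split it off, it's only two lines in ceph.conf (the subnets here are examples, not a recommendation):

    # /etc/pve/ceph.conf
    [global]
        public_network  = 10.10.10.0/24   # clients, MONs, VM traffic
        cluster_network = 10.10.20.0/24   # OSD replication/recovery only

Without cluster_network set, replication simply shares the public network, which is fine until that link saturates.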

The linked article discusses MSFT clients; is that what you have?
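Re Eneko's question below about how the read numbers were obtained: it helps to compare raw RADOS speed against in-VM speed. A sketch, with the pool name and file path as placeholders:

    # Raw cluster speed, bypassing the VM stack entirely:
    rados bench -p testpool 30 write --no-cleanup
    rados bench -p testpool 30 seq        # reads back what the write test left
    rados -p testpool cleanup

    # Inside a VM, large sequential reads with the page cache bypassed:
    fio --name=seqread --filename=/root/fio.dat --size=4G --rw=read \
        --bs=4M --direct=1 --runtime=30 --time_based

If rados bench is fast but fio inside the VM is slow, the problem is in the VM/virtio layer rather than the network or the OSDs.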


> On Mar 11, 2025, at 7:13 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
> 
> Hi Giovanna,
> 
>> On 11/3/25 at 11:55, Giovanna Ratini wrote:
>> 
>> We are running Ceph in Proxmox with a 10G network.
>> 
>> Unfortunately, we are experiencing very low read rates. I will try to implement the solution recommended in the Proxmox forum. However, even 80 MB per second with an NVMe drive is quite disappointing.
>> Forum link <https://forum.proxmox.com/threads/slow-performance-on-ceph-per-vm.151223/#post-685070>
>> 
>> For this reason, we are considering purchasing a 100G switch for our servers.
>> 
>> This raises some questions:
>> Should I still use separate networks for VMs and Ceph with 100G?
>> I have read that running Ceph on bridged connections is not recommended.
>> 
>> Does anyone have experience with 100G Ceph in Proxmox?
>> 
>> Is upgrading to 100G a good idea, or will I have 60G sitting idle?
>> 
> 
> I think you should give more info on your setup and those low read rates (what numbers? how do you get them?), so that the community can suggest improvements.
> 
> If you're getting 80 MB/s, the network is not your bottleneck (10G is roughly 1.2 GB/s after overhead), and upgrading to a 100G network won't help much.
> 
> Cheers
> 
> Eneko Lacunza
> Technical Director
> Binovo IT Human Project
> 
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
> 
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



