Re: TCP qdisc + congestion control / BBR

Hi,

We use fq on both the hypervisors and the Ceph servers, and it seems to be working quite well.

We use fq_codel on the router that sends traffic from the hypervisors to slower segments of the network.

I was afraid to use fq_codel on the hypervisors as it could affect latency (we run an all-SSD and NVMe Ceph cluster and need to keep latency under 1 ms).

My recommendation would be to be careful when using fq_codel on hypervisors, or at least to tweak the default values. But we don't have serious test data to back up that configuration.
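Not from either post, just for reference: a minimal sketch (Python, assuming a Linux host) of how one might check and, as root, switch the default qdisc and TCP congestion control being discussed here. The /proc/sys paths and the fq / bbr values are standard kernel interfaces; the function names and error handling are only illustrative.

# Minimal sketch (not from the original posts): inspect, and optionally
# switch, the default qdisc and TCP congestion control on a Linux host.
# Reading is unprivileged; writing needs root and a kernel that has the
# requested congestion-control module (e.g. tcp_bbr) available.
from pathlib import Path

DEFAULT_QDISC = Path("/proc/sys/net/core/default_qdisc")
CONG_CTRL = Path("/proc/sys/net/ipv4/tcp_congestion_control")
AVAILABLE = Path("/proc/sys/net/ipv4/tcp_available_congestion_control")

def current_settings() -> dict:
    """Read the current default qdisc and TCP congestion control."""
    return {
        "default_qdisc": DEFAULT_QDISC.read_text().strip(),
        "congestion_control": CONG_CTRL.read_text().strip(),
        "available": AVAILABLE.read_text().strip().split(),
    }

def switch_to_fq_bbr() -> None:
    """Switch to fq + BBR (the combination asked about in this thread).

    Note: default_qdisc only applies to qdiscs created after the change;
    existing interfaces keep their qdisc until reconfigured (e.g. with tc).
    The persistent equivalent in sysctl.conf would be:
        net.core.default_qdisc = fq
        net.ipv4.tcp_congestion_control = bbr
    """
    if "bbr" not in current_settings()["available"]:
        raise RuntimeError("tcp_bbr is not available on this kernel")
    DEFAULT_QDISC.write_text("fq\n")
    CONG_CTRL.write_text("bbr\n")

if __name__ == "__main__":
    print(current_settings())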



Kind regards,
Xavier Trilla P.
Clouding.io

A Cloud Server with SSDs, redundant
and available in less than 30 seconds?

Try it now at
Clouding.io!

On 2 Jan 2019, at 22:47, Kevin Olbrich <ko@xxxxxxx> wrote:

Hi!

I wonder if changing the qdisc and congestion control (for example fq
with Google BBR) on Ceph servers / clients has positive effects during
high load.
Google BBR: https://cloud.google.com/blog/products/gcp/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster

I am running a lot of VMs with BBR but the hypervisors run fq_codel +
cubic (OSDs run Ubuntu defaults).

Did someone test qdisc and congestion control settings?

Kevin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
