Problem saturating the link when simulating latency


 



Hi,

I have two machines with a 10 Gbit/s link with very low latency between them. I am using iperf to saturate the link, and without introducing any latency I can saturate it with only 2 connections (-P parameter in iperf) to 9.90 Gbits/sec.

I'm using netem to introduce latency by configuring the interface on both machines like this: tc qdisc add dev eth0_0 root netem delay 1ms
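One detail worth checking (my assumption, not something stated in the thread): netem queues delayed packets internally, and its default `limit` is only 1000 packets, which is far below what a 10 Gbit/s link needs at tens of milliseconds of delay. A sketch of raising it, reusing the eth0_0 interface name from the command above (the limit value is illustrative):

```shell
# netem holds delayed packets in its own queue; the default limit is
# 1000 packets. Raise it so the qdisc can hold a full bandwidth-delay
# product worth of in-flight packets at 10 Gbit/s and 20 ms.
tc qdisc change dev eth0_0 root netem delay 20ms limit 100000
```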

With small latencies everything looks reasonable. For example, with 5 ms in each direction I can saturate the link with 20 connections to 9.80 Gbits/sec. With higher values, however, it goes downhill quite badly. With 10 ms each side I can barely get to 5 Gbits/sec, and with 20 ms I can't get to 2 Gbits/sec, no matter how many parallel connections I try.
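For scale, a quick back-of-the-envelope bandwidth-delay calculation (my numbers, not from the thread) shows how many 1500-byte packets are in flight at these delays:

```shell
# Packets in flight = rate (bit/s) * one-way delay (s) / 8 / MTU.
# Integer shell arithmetic; delay is given in milliseconds.
packets_in_flight() {
    local rate_bps=$1 delay_ms=$2 mtu=1500
    echo $(( rate_bps * delay_ms / 1000 / 8 / mtu ))
}

packets_in_flight 10000000000 5    # ->  4166 packets
packets_in_flight 10000000000 10   # ->  8333 packets
packets_in_flight 10000000000 20   # -> 16666 packets
```

All three exceed netem's default queue limit of 1000 packets, which would explain drops getting worse as the configured delay grows.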

I'm thinking that netem is introducing something different than just latency here. Could it be, for example, dropping frames because it can't buffer enough at these speeds? Any suggestions on how to diagnose this?
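To test the dropping hypothesis directly, the qdisc statistics include a per-netem drop counter (sketch; eth0_0 taken from the command above):

```shell
# The "dropped" field in the statistics counts packets netem discarded
# because its internal queue (the "limit" parameter) was full.
tc -s qdisc show dev eth0_0
```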

Thanks,
Damian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxfoundation.org/pipermail/netem/attachments/20140305/ee317f43/attachment-0001.html>

From Damian.Lezama at riverbed.com  Wed Mar  5 22:20:26 2014
From: Damian.Lezama at riverbed.com (Damian Lezama)
Date: Wed, 05 Mar 2014 22:20:26 -0000
Subject: Problem saturating the link when simulating latency
Message-ID: <CD78AB7CA98A4D429F16233047F72EA89AB73C@xxxxxxxxxxxxxxxxxxxxxxxxxx>


