I'm trying to use netem to emulate WAN latency in a lab environment. The setup is fairly simple at this point: three Linux boxes, one connected to three switches and the other two each connected to a separate switch. The first box routes between the other two.

Using basic netem delay:

    tc qdisc add dev eth1 root netem delay 50ms
    tc qdisc add dev eth2 root netem delay 50ms

this successfully makes a single round trip take 100ms. However, it seems to be doing something unexpected to bandwidth. I use iperf to measure it. Comparing to a real-world path (EU <-> US, near 100ms), I can easily get up to 60mbit using a 512KB window size. With netem, however, I'm stuck down at 10mbit no matter what window size I use. I'm assuming I'm missing something related to buffers or queues, but I can't seem to figure out what.

I've also tried combining it with tbf:

    tc qdisc add dev eth1 root handle 1:0 tbf rate 1gbit latency 15ms burst 1gbit
    tc qdisc add dev eth2 root handle 1:0 tbf rate 1gbit latency 15ms burst 1gbit
    tc qdisc add dev eth1 parent 1:1 handle 10 netem delay 15ms
    tc qdisc add dev eth2 parent 1:1 handle 10 netem delay 15ms

and with htb:

    tc qdisc add dev eth1 root handle 1:0 htb
    tc class add dev eth1 parent 1:0 classid 1:1 htb rate 1gbit
    tc qdisc add dev eth1 parent 1:1 handle 10 netem delay 15ms
    tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:1
    tc qdisc add dev eth2 root handle 1:0 htb
    tc class add dev eth2 parent 1:0 classid 1:1 htb rate 1gbit
    tc qdisc add dev eth2 parent 1:1 handle 10 netem delay 15ms
    tc filter add dev eth2 parent 1:0 protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:1

all with the same result. With the delay removed, the boxes communicate at around 7Gbit with iperf.

Justin Rush
Performance Test Engineer
jarush at epic.com
Epic
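
P.S. One thing I plan to try next is raising netem's internal queue limit, which defaults to 1000 packets; at high rates with 50ms of added delay, the in-flight packet count can exceed that, so the limit itself would cap throughput. This is a back-of-envelope sketch with my lab numbers (the rate, delay, and MTU values are assumptions to adjust for your own setup), printing the suggested command rather than running it, since the real tc change needs root on the router:

    #!/bin/sh
    # Estimate how many packets netem must be able to queue to sustain
    # a target rate across the added one-way delay. netem's default
    # "limit" is 1000 packets, which may be the real bottleneck here.
    RATE_BPS=1000000000   # 1 Gbit/s target (assumed)
    DELAY_S=0.05          # 50 ms one-way netem delay
    MTU=1500              # bytes per packet (assumed)

    # packets in flight = rate * delay / (MTU * 8), rounded up
    PKTS=$(awk -v r="$RATE_BPS" -v d="$DELAY_S" -v m="$MTU" \
        'BEGIN { p = r * d / (m * 8); printf "%d", (p > int(p)) ? int(p) + 1 : p }')

    echo "need limit >= $PKTS packets"
    # so something like (run as root on the router):
    echo "tc qdisc change dev eth1 root netem delay 50ms limit $PKTS"

With the values above this suggests a limit of a few thousand packets, well over netem's default of 1000.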