Greetings,

I've got a server with a number of OpenVPN tunnels terminating on it. I'm trying to limit the total bandwidth the tunnels can use to 3mbit, so I applied rules like this:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
tc qdisc add dev eth0 root handle 1: htb default 90
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit burst 15k
# Default class for eth0
tc class add dev eth0 parent 1:1 classid 1:90 htb rate 3mbit ceil 3mbit burst 15k
tc qdisc add dev eth0 parent 1:90 handle 90: sfq perturb 10
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All traffic, including the tunnel traffic, enters and leaves via eth0, so I assumed this would work even though there are many tun virtual interfaces. But as soon as I apply these rules I get the following errors:

~~~
Aug 19 13:51:44 gw openvpn[4996]: write UDPv4 []: No buffer space available (code=105)
Aug 19 13:51:44 gw openvpn[5012]: write UDPv4 []: No buffer space available (code=105)
Aug 19 13:51:44 gw openvpn[5012]: write UDPv4 []: No buffer space available (code=105)
Aug 19 13:51:44 gw openvpn[3017]: write UDPv4 []: No buffer space available (code=105)
~~~

Searching the archives, I see this error usually means the network card is of poor quality. I use an onboard Intel NIC. Is that the cause of the problem? Is it a bad NIC to use? Or is it perhaps a problem between the buffering on the tunnel interfaces and the real interface, or something else? Should I ignore the error, or is it dropping packets? What can I do to resolve this problem?

tx
e.

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/
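[Editor's note: one way to answer the "is it dropping packets?" part directly is to read the qdisc and class statistics, which expose drop and overlimit counters. A minimal check, assuming the HTB setup from the post above is installed on eth0:]

~~~
# Per-qdisc statistics for eth0; the "dropped" and "overlimits"
# counters show whether the shaper is discarding packets.
tc -s qdisc show dev eth0

# Per-class statistics for the HTB hierarchy, including class 1:90,
# which carries the shaped tunnel traffic.
tc -s class show dev eth0
~~~

If the dropped counter on the 1:90 class climbs while the errors appear, the shaper is queueing to its limit and then discarding, rather than the NIC itself misbehaving.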