At 02:44 p.m. 11/07/03 +0300, you wrote:
The only problem I'm experiencing is with heavy UDP traffic, e.g. Counter-Strike
and bnetd (Diablo, Diablo II, {Star|War}Craft) game play. Then the machine reaches
some really high load averages, like 8 to 10. I have no idea how this
could be avoided.
I'd appreciate any suggestions.
UDP traffic is very difficult to control because the protocol is unresponsive, so the only way to put it under control is by dropping its packets. This doesn't mean the sender is going to lower its rate; it just means you drop every packet that exceeds an upper limit.
You could do that at ingress using something like this:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip protocol 17 0xff police rate 1200kbit \
   burst 75kb drop flowid :1
Here UDP traffic is policed to 1200kbit.
Then, using the tcindex classifier, you can match these "bad citizen" flows again on egress and steer them into a dedicated output class.
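A minimal egress sketch of that idea (eth1, the class numbers and all rates are illustrative assumptions, not tuned values; for simplicity it re-matches UDP with a plain u32 filter, since tcindex needs DSMARK/tc_index plumbing that depends on your setup):

```shell
# Egress: HTB tree with a small, constrained class for the UDP flows.
# eth1 and every rate/ceil below are assumptions; adapt to your link.
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:1 htb rate 10mbit
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 8mbit ceil 10mbit
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 1200kbit ceil 1500kbit
# Send UDP (IP protocol 17) to the constrained class:
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
   match ip protocol 17 0xff flowid 1:20
```

This way the UDP flows share the link but can never grow past the ceiling you give class 1:20.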
UDP also transports RTP, which is always a problem: the application flows travelling on it are very sensitive to latency and jitter. Some multimedia encodings, like MPEG, are also very sensitive to packet drops, because when you lose an I-frame you also lose the whole GOP (group of pictures) that depends on it; really a problem.
Using a policer you at least guarantee that they are not going to starve the other (TCP, good citizen) flows and your servers. But the packet massacre then creates quality problems for the multimedia applications. Perhaps the best solution is overprovisioning: check how many flows of this type you have at peak hours, check the bandwidth requirement of each of them, and supercharge your servers to support the storm.
RED, or better yet GRED, can help too. In this case the control is less aggressive and, perhaps, things go better. It's just a matter of having the time and patience to run some tests.
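For example, RED could replace the hard policer on the output interface (again, eth1, the bandwidth and the thresholds are assumed values you would have to tune; burst is roughly (2*min + max) / (3*avpkt)):

```shell
# Probabilistic early dropping with RED instead of a hard drop policer.
# eth1, 10mbit and the min/max thresholds are assumptions for a 10mbit link.
tc qdisc add dev eth1 root handle 1: red \
   limit 400000 min 30000 max 90000 avpkt 1000 \
   burst 50 probability 0.02 bandwidth 10mbit
```

Because RED starts dropping a small fraction of packets before the queue is full, the pressure on the multimedia flows builds up gradually instead of in bursts of consecutive losses.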
You can search for more information on my site, below.
Best regards,
Leonardo Balliache
Practical QoS http://opalsoft.net/qos