Re: Latency low enough for gaming

Linux Advanced Routing and Traffic Control

On Thu, 19 Feb 2004 09:45:16 +0000, Andy Furniss <andy.furniss@xxxxxxxxxxxxx> wrote:

> The delete rule for iptables needs -D, not -A.

Yes, that one was bad. I noticed it when I discovered how to list the rules
in a chain... I think all my rules were in there about 10 times each, since
I never removed anything :)
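
For the archives, the working pair looks something like this (the rule
itself is just an example from my setup):

  # add a rule
  iptables -t mangle -A POSTROUTING -o eth0 -p tcp -j MARK --set-mark 1
  # remove it again: same rule spec, but -D instead of -A
  iptables -t mangle -D POSTROUTING -o eth0 -p tcp -j MARK --set-mark 1
  # list what is actually in the chain, with rule numbers
  iptables -t mangle -L POSTROUTING -n --line-numbers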

> esfq won't work hashing src on egress if you are doing NAT - see the 
> KPTD on www.docum.org - egress QoS comes after NAT. Ingress with dst 
> should be OK if you are NATing, as long as you used the IMQ NAT patch.

I thought that with the NAT patch, IMQ would see incoming packets with
the real IP on the internal net?
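
For anyone else following along, this is how I have understood the
ingress side (imq0 and eth0 are from my setup, the esfq options are as
in the patch I have - untested sketch):

  # with the IMQ NAT patch, hook incoming packets so dst is the
  # real internal address
  iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
  ip link set imq0 up
  # hash per internal destination so each user gets a fair share
  tc qdisc add dev imq0 root esfq perturb 10 hash dst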

> The trouble with esfq hashing per user (unless you use limit) is that 
> each user gets a 128-packet queue, which, if they have many connections, 
> gets full, and drops take a long time to be noticed. I have a modified 
> esfq which overcomes this somewhat, but I also use classic hash and 
> restrict the size of the queue.

I didn't think a 128-packet queue would make any real difference, but I'm
testing other qdiscs at the moment, since it seems that bandwidth is being
divided, but there are still latency problems.
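
In case it matters for the tests: as far as I can tell, the per-user
queue can be shortened with esfq's limit/depth options, roughly like
this (the numbers are pure guesses on my part):

  # cap the whole qdisc at 64 packets and each hash bucket at 16
  tc qdisc add dev imq0 root esfq limit 64 depth 16 hash dst perturb 10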

> I can see why commercial bandwidth controllers use advertised window 
> manipulation - often dropping is needed to get the sender to back off a 
> bit and set its congestion window, but if you queue this may result in 
> a resync burst later. Being able to reduce the adv window on dups/sacks 
> and increase it slowly/randomly would be handy.

Ah yes, the holy grail, it seems. It's a mystery that no one has started
an open source project for this.

> One thing that helps me is to give new bulk connections their own class 
> with a short queue for the first 80000 bytes, using connbytes (netfilter 
> extras patch). This is limited to rate downrate/5, ceil downrate/3, and 
> stops TCP slow start from overshooting. I have also tried connbytes just 
> to drop early packets, but with browsers making many simultaneous 
> connections, the resyncs cause a latency burst.

If I'm getting this right, you are using iptables to manage bandwidth
directly? I'm still really bad with iptables; I don't think I've gotten
to know half of it yet.
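
Something like this, perhaps - iptables marking the early packets and
tc doing the actual limiting? (Untested guesswork on my side: the
connbytes syntax is from the netfilter extras patch, and the rates
assume a 512kbit downlink, i.e. roughly downrate/5 and downrate/3:)

  # mark the first 80000 bytes of every tcp connection
  iptables -t mangle -A PREROUTING -i eth0 -p tcp \
    -m connbytes --connbytes 0:80000 \
    --connbytes-dir both --connbytes-mode bytes \
    -j MARK --set-mark 5
  # small class with a short queue for the marked packets
  tc class add dev imq0 parent 1:1 classid 1:50 htb rate 100kbit ceil 170kbit
  tc qdisc add dev imq0 parent 1:50 handle 50: bfifo limit 6000
  tc filter add dev imq0 parent 1: protocol ip handle 5 fw flowid 1:50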

> I see you are trying RED - in theory this should be nice, but remember 
> that the settings/docs you read don't take into account that you are 
> trying to shape behind a fifo (at the ISP/telco) that dequeues only 20% 
> faster than what you are aiming for.

I'm still kind of blank on RED. What I'm trying out now is the RED part
of Jim diGriz's (I think) script. It seems that a few packets actually
get dropped when the link is getting full, but only about 5-10 over a
couple of minutes... Seems a bit low?
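
For reference, the RED part I lifted looks roughly like this (values
rescaled for my ~512kbit downlink, so treat them as guesses):

  # min/max work on the average queue length, limit is the hard cap
  tc qdisc add dev imq0 parent 1:10 handle 10: red \
    limit 60000 min 5000 max 15000 avpkt 1000 \
    burst 10 bandwidth 512kbit probability 0.05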

> I am not convinced that just dropping ingress is best - a short queue 
> yes, then at least you don't ever drop game packets.

This is what I'm trying to do now, using IMQ for incoming traffic.
However, it seems that my two root qdiscs are delaying packets a lot.
According to tc -s qdisc, about 100-500 packets are overlimits, even
when the data flow is no more than around 5-10 KB/s. Setting a ceil on
the root classes seems to help a little, but not completely. This I
don't understand.
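
For completeness, the root setup in question, in case someone spots
something obvious (rates are for my line; the ceil is the bit that
helped a little):

  # root htb slightly under the real downlink so the queue builds here
  tc qdisc add dev imq0 root handle 1: htb default 20
  tc class add dev imq0 parent 1: classid 1:1 htb rate 400kbit ceil 400kbit
  # watch the overlimits/drops counters per qdisc
  tc -s qdisc show dev imq0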
-- 
Patrick Petersen <lartc@xxxxxxxxxx>

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
