Re: Latency low enough for gaming

Linux Advanced Routing and Traffic Control

Patrick Petersen wrote:

esfq won't work hashing on src on egress if you are doing NAT - see the KPTD on www.docum.org; egress QoS comes after NAT. Ingress with dst should be OK if you are NATing, as long as you use the IMQ NAT patch.


I thought that with the NAT patch, imq would see incoming packets with
the real IP on the internal net?

Yes - it is OK for incoming, but not for outbound AFAIK, whether you use imq or the interface directly. The NAT patch for IMQ only changes the ingress hook, not egress.
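
Roughly, for the ingress side (a sketch only - I'm assuming eth0 is your WAN interface and that imq0 is compiled in and up):

  # with the IMQ NAT patch the ingress hook runs after de-NAT,
  # so imq0 sees the internal destination addresses
  iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
  ip link set imq0 up
  # hashing on dst here therefore splits per internal host
  tc qdisc add dev imq0 root handle 1: esfq hash dst perturb 10
  # hash src on your egress interface won't split per host,
  # because egress qdiscs run after SNAT has rewritten the source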




The trouble with esfq hashing per user (unless you use limit) is that each user gets a 128-packet queue which, if they have many connections, fills up, so drops take a long time to be noticed. I have a modified esfq which overcomes this somewhat, but I also use the classic hash and restrict the size of the queue.
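
The limit/depth knobs look roughly like this (a sketch only - the parameter meanings vary a bit between esfq versions, so check the one you patched in; the numbers here are just examples):

  # cap the whole qdisc at 64 packets and each flow at 16,
  # so one bulk user can't sit on a 128-packet backlog
  tc qdisc add dev imq0 root handle 1: esfq \
      limit 64 depth 16 divisor 10 hash classic perturb 10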


I didn't think a 128-packet queue would make any real difference, but I'm
testing with other qdiscs at the moment, since it seems that bandwidth
is being divided, but there are still latency problems.

Are the problems brief, or does it totally lose it? I just tested the ingress policer at 80%, and with 8 TCPs going it loses control.
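
For reference, by "ingress policer" I mean the stock one, roughly like this (a sketch - it assumes a 512kbit downlink on eth0, so 80% is about 410kbit):

  tc qdisc add dev eth0 handle ffff: ingress
  # police all incoming IP to ~80% of the link, dropping the excess
  tc filter add dev eth0 parent ffff: protocol ip u32 \
      match u32 0 0 at 0 \
      police rate 410kbit burst 10k drop flowid :1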




I can see why commercial bandwidth controllers use advertised window manipulation - often dropping is needed to get the sender to back off a bit and reduce its congestion window, but if you queue, this may result in a resync burst later. Being able to reduce the advertised window on dups/SACKs and increase it slowly/randomly would be handy.


Ah yes, the holy grail, it seems. It's a mystery that no one has started an
open source project for this.


One thing that helps me is to give new bulk connections their own class with a short queue for the first 80000 bytes, using connbytes (netfilter extra patch). This class is limited to rate downrate/5, ceil downrate/3, and stops TCP slow start from overshooting. I have also tried using connbytes just to drop early packets, but with browsers making many simultaneous connections, the resyncs cause a latency burst.


If I'm getting this right, you are using iptables to manage bandwidth
directly? I'm still pretty bad with iptables; I don't think I've gotten to
know half of it yet.

I did, just experimenting, try dropping an early packet from new connections. It was better in some cases, but not as good as putting new connections in their own limited-bandwidth short queue. This still causes drops, but delays the packets as well.


I use the netfilter connbytes patch to mark connections < 80000 bytes, then put them in their own htb class.
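
Roughly like this (a sketch only - I'm assuming your downstream htb tree sits on imq0 with root 1:, a 500kbit downlink so rate 100kbit / ceil 166kbit, and that mark 5 is free; the exact connbytes options depend on which version of the patch you have):

  # mark packets of connections still under ~80000 bytes
  # (newer connbytes also wants --connbytes-dir and --connbytes-mode)
  iptables -t mangle -A PREROUTING -m connbytes --connbytes 0:80000 \
      -j MARK --set-mark 5
  # this rule has to come before the -j IMQ rule so imq0 sees the mark

  # small class for new connections: rate downrate/5, ceil downrate/3,
  # with a short byte fifo so slow start can't build a big backlog
  tc class add dev imq0 parent 1:1 classid 1:50 htb rate 100kbit ceil 166kbit
  tc qdisc add dev imq0 parent 1:50 handle 50: bfifo limit 6000
  tc filter add dev imq0 parent 1: protocol ip handle 5 fw flowid 1:50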




I see you are trying RED - in theory this should be nice, but remember the settings/docs you read about don't take into account that you are trying to shape behind a FIFO (at the ISP/telco) that dequeues only 20% faster than what you are aiming for.


I'm still kind of blank on RED. What I'm trying out now is to use the RED
part of Jim diGriz's (I think) script. It seems that a few packets are
actually dropped when the link is getting full, but only about 5-10 in a
couple of minutes. Seems a bit low?

I guess it depends on how many connections - just queuing will throttle a low number without many drops. I tested your RED settings with 8 and it handled it OK and dropped enough. It temporarily loses it when a new connection is started - but it's hard to stop this. It also causes a blip when my link is otherwise empty - putting new connections into a restricted, separate class should stop this.


I think it will be hard to get RED perfect as it really ought to live at your ISP, before the bottleneck, but it's still worth seeing how well it can be tweaked.
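
For comparison, the sort of starting values I would try on a ~400kbit shaped downlink (just a sketch, not the numbers from Jim's script): min around half a second of data, max about 3x min, burst at least (2*min + max)/(3*avpkt), limit well above max:

  tc qdisc add dev imq0 parent 1:10 handle 10: red \
      limit 200000 min 25000 max 75000 avpkt 1000 \
      burst 42 bandwidth 400kbit probability 0.02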




I am not convinced that just dropping on ingress is best - a short queue, yes; then at least you never drop game packets.


This is what I'm trying to do now, using IMQ for incoming traffic.
However, it seems that my 2 root qdiscs are delaying packets a lot.
According to tc -s qdisc etc., about 100-500 packets are overlimits,
even when the data flow is no more than around 5-10 kb/s. Setting a ceil on
the root classes seems to help it out a little, but not completely. This
I don't understand.

This is OK; you want to see overlimits - TCP will send packets in bursts, and these will come in at full link speed and so be seen by HTB as overlimits.
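
In case it helps, the shape of htb tree I'd use on imq0, with ceil set on every class so nothing can take the whole link (sketch only - the rates assume a 512kbit line shaped to about 450kbit, and game/interactive traffic would be filtered into 1:10):

  tc qdisc add dev imq0 root handle 1: htb default 20
  tc class add dev imq0 parent 1: classid 1:1 htb rate 450kbit ceil 450kbit
  tc class add dev imq0 parent 1:1 classid 1:10 htb rate 150kbit ceil 450kbit prio 0
  tc class add dev imq0 parent 1:1 classid 1:20 htb rate 300kbit ceil 450kbit prio 1
  tc qdisc add dev imq0 parent 1:20 handle 20: sfq perturb 10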


Andy.

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
