Re: Latency low enough for gaming

Linux Advanced Routing and Traffic Control


 



Patrick Petersen wrote:
> Welcome to me, and my very first lartc post.
> As with most first-timers, I made a mistake. Admin, please disregard the
> earlier message, as I was still waiting for the subscription
> confirmation. Should it get through anyway, I apologize.
>
> For the last few weeks I have been trying to make it so our 2048/512
> ADSL line can be used for gaming and for leeching at the same time. The
> current result is what can be found at http://www.schmakk.dk/~schmakk
> which is what is running on the NAT gateway. This has done a lot for the
> latency, but there are still huge problems with e.g. massive HTTP
> downloads (5+ threads make ping go up to at least 200).
>
> I have learned a lot from the lartc list archive, but this specific
> problem leaves me with no clue. I have been able to get very close to normal
> latency by capping incoming traffic at around 1200 kbit, but it's no fun
> throwing away almost half your bandwidth.
>
> Can I get any recommendations?
>
> Also, if you have the time, a look through my script would be much appreciated.
> (I'm concerned about the calculations for dividing the bandwidth, the
> general setup of everything, and the ipp2p+connmark tagging.)

I see you have a newer version now anyway, but I tried your script last night (without the connmark/ipp2p part, as it clashed with connbytes). I have 256/512, so things should in theory be nicer for you.


I am still testing myself, so I can't post a solution, but I can make some observations:

The delete rule for iptables needs -D, not -A.
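To illustrate the point (the chain and match here are invented, not taken from the script): a rule appended with -A is removed by repeating the exact same rule with -D, or by flushing the chain in the cleanup section.

```shell
iptables -t mangle -A POSTROUTING -o ppp0 -j MARK --set-mark 1   # add the rule
iptables -t mangle -D POSTROUTING -o ppp0 -j MARK --set-mark 1   # delete that same rule
# or, in a stop/cleanup section, flush the whole mangle table:
iptables -t mangle -F
```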

esfq won't work hashing on src on egress if you are doing NAT; see the KPTD on www.docum.org: egress QoS happens after NAT. Ingress hashing on dst should be OK if you are NATing, as long as you use the IMQ NAT patch.
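A sketch of the direction issue, assuming an IMQ device imq0 hooked so that (with the NAT patch) ingress packets are seen after DNAT, i.e. with internal destination addresses restored; all device names and rates below are examples, not from the script:

```shell
# Ingress via IMQ: post-DNAT, so hashing on dst spreads traffic
# across the internal client IPs.
iptables -t mangle -A PREROUTING -i ppp0 -j IMQ --todev 0
ip link set imq0 up
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 1800kbit
tc qdisc add dev imq0 parent 1:10 esfq perturb 10 hash dst
# On egress (ppp0) packets are post-SNAT: they all carry the single
# public src IP, so "hash src" would collapse everyone into one bucket.
```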

The trouble with esfq hashing per user (unless you use limit) is that each user gets a 128-packet queue, which, if they have many connections, fills up, and drops take a long time to be noticed. I have a modified esfq which overcomes this somewhat, but I also use the classic hash and restrict the size of the queue.
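esfq accepts limit and depth parameters for exactly this; a hedged one-liner (the values and device are assumed, not recommendations) capping each hash bucket at 32 packets instead of the 128 default, so a backlogged user's drops are noticed much sooner:

```shell
# Cap total queue and per-bucket depth at 32 packets (down from 128).
tc qdisc add dev imq0 parent 1:10 esfq limit 32 depth 32 perturb 10 hash dst
```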

I can see why commercial bandwidth controllers use advertised-window manipulation: dropping is often needed to get the sender to back off a bit and shrink its congestion window, but if you queue, this may result in a resync burst later. Being able to reduce the advertised window on dups/SACKs and increase it slowly/randomly would be handy.

One thing that helps me is to give new bulk connections their own class with a short queue for their first 80000 bytes, using connbytes (from the netfilter extra patches). This class is limited to rate downrate/5, ceil downrate/3, and stops TCP slow start from overshooting. I have also tried using connbytes just to drop early packets, but with browsers making many simultaneous connections, the resyncs cause a latency burst.
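A hedged sketch of such a "young connection" class; the classids, mark value, and rates are assumptions (downrate/5 and downrate/3 for a 2048 kbit line), and connbytes option syntax varies between the patch-o-matic era and later mainline iptables:

```shell
# Mark TCP connections that have transferred under 80000 bytes so far.
iptables -t mangle -A PREROUTING -i ppp0 -p tcp \
    -m connbytes --connbytes 0:80000 -j MARK --set-mark 5
# 2048/5 = ~400 kbit rate, 2048/3 = ~680 kbit ceil, with a short byte queue
tc class add dev imq0 parent 1:1 classid 1:5 htb rate 400kbit ceil 680kbit
tc qdisc add dev imq0 parent 1:5 bfifo limit 6000
tc filter add dev imq0 parent 1: protocol ip handle 5 fw flowid 1:5
```

Once a connection passes 80000 bytes it stops matching, loses the mark, and falls through to the normal bulk class.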

I see you are trying RED. In theory this should be nice, but remember that the settings/docs you read about don't take into account that you are shaping behind a FIFO (at the ISP/telco) that dequeues only 20% faster than the rate you are aiming for.
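For what it's worth, a back-of-envelope RED sizing sketch following the usual tc-red guidance (min from a target queueing delay, max a multiple of min, burst from (2*min + max) / (3*avpkt)); the 1800 kbit rate and 50 ms target are illustrative assumptions, and the point stands that these textbook numbers ignore the upstream telco FIFO:

```python
def red_params(rate_kbit, target_delay_ms, avpkt=1000):
    """Rough RED thresholds in bytes for a shaped downlink."""
    rate_bps = rate_kbit * 1000 / 8                   # bytes per second
    min_th = int(rate_bps * target_delay_ms / 1000)   # bytes queued at target delay
    max_th = 3 * min_th                               # room for the drop probability ramp
    burst = (2 * min_th + max_th) // (3 * avpkt) + 1  # packets, per tc-red guidance
    limit = 8 * max_th                                # hard cap on the queue
    return min_th, max_th, burst, limit

print(red_params(1800, 50))   # -> (11250, 33750, 19, 270000)
```

These would map onto something like `tc qdisc add ... red limit 270000 min 11250 max 33750 avpkt 1000 burst 19`.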

I am not convinced that just dropping on ingress is best; a short queue, yes, then at least you never drop game packets.
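One way to get a short queue rather than a pure policer is to shape inbound traffic on an IMQ device with a small bfifo leaf (device name and rates assumed; 12000 bytes is roughly 50 ms at 1800 kbit):

```shell
tc qdisc add dev imq0 root handle 1: htb default 1
tc class add dev imq0 parent 1: classid 1:1 htb rate 1800kbit
tc qdisc add dev imq0 parent 1:1 bfifo limit 12000   # ~50 ms of buffering
```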

If I had 2048 down I reckon I could keep it below 100 ms now, apart from the odd one-second blip.

Andy.

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
