Re: Latency low enough for gaming

Linux Advanced Routing and Traffic Control

Patrick Petersen wrote:
> I have learned a lot from the lartc list archive, but this specific
> problem leaves me with no clue. I have been able to get real close
> to normal latency by capping incoming traffic at around 1200kbit,
> but it's no fun throwing away almost half your bandwidth.
>
> Can i get any recommendations?

Let's get the problem statement clear first.

  First of all, it is obvious that the high latency is a result of
  queueing at the ISP, before the packets are sent over the slow link
  to your router. ISPs normally have very long queues.

  Secondly, one needs to understand that there isn't really a damn
  thing you can do about it. If someone ping-floods you, it will
  saturate your downlink and latency will go through the roof. This
  cannot be prevented without access to the ISP's end of the link.

  And thirdly, the only thing you can do is to either discard or
  delay perfectly good packets which have already travelled over your
  slow link and spent bandwidth on it. If you drop a packet, it will
  most likely have to be resent and again use up the same
  bandwidth. The only good this does is to make the connections
  throttle themselves when they notice that packets aren't getting
  through. TCP does this, and a few application-level UDP protocols
  do it, but not much else does.

So, to your *goal* in a single sentence:

  Force TCP to send packets slower than your downlink speed.

If you can manage this, then no packets are queued at your ISP and you
can prioritise traffic perfectly on your router.

So, how does TCP work, then?

  On a connection, TCP has a window size in both directions, which is
  the amount of new data that can be in transit without an
  acknowledgement for the data already sent. Every packet sent is
  put on a retransmission queue, and removed from there when an
  acknowledgement for it is received. If an acknowledgement doesn't
  arrive for a while, the packet is re-sent.

  So what happens when a packet is dropped is that the connection
  stalls for a moment, because a packet is unacknowledged and the send
  window limits the amount of data that can be in transit. TCP stacks
  also throttle themselves when they notice that packets are being
  dropped.

  Traditionally, the maximum window size was 64 kilobytes - that is, a
  maximum of 64 KB of data can be unacknowledged on the link. Then the
  internet became full of links which have a lot of bandwidth, but
  also lots of latency. TCP window scaling was invented, and now
  window sizes can be much larger than that.
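
  To put rough numbers on why that matters (the figures here are just
  an illustration, not anything measured on your link): with the
  classic 64 KB window and a round-trip time of 200 ms, a single
  connection can never move more than

      65535 bytes / 0.2 s  ~=  320 KB/s  ~=  2.6 Mbit/s

  no matter how fat the pipe is, which is exactly why window scaling
  was needed once fast, high-latency links became common.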

  Also, traditionally TCP only acknowledged up to the last contiguous
  packet - that is, it wouldn't send acknowledgements for the packets
  that arrived after the missing packet. A loss of a single packet
  usually caused a short stall in the connection. This was augmented
  by clever retransmission logic (fast retransmit), which allowed TCP
  to recover from the drop of a single packet without a stall. Later
  still, selective acknowledgements were invented, which allow TCP to
  tell the other end exactly which packets it is missing, and now TCP
  survives quite high packet loss reasonably well.
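
  As an aside, if you want to see which of these tricks your own Linux
  boxes have enabled, the standard sysctl knobs are there to poke at:

      sysctl net.ipv4.tcp_window_scaling   # RFC 1323 window scaling
      sysctl net.ipv4.tcp_sack             # selective acknowledgements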

So, what's the solution? How to make TCP throttle properly?

  The *real* solution would be to implement a packet mangler which
  would mutilate outgoing TCP ACK packets so that they only advertise
  receive windows matching the speed the link is configured for.
  However, to my knowledge, no free software implements this. I might
  work up a patch later, if I can come up with a good design.

Short of implementing the *real* solution, there are several things
you can do to improve the situation. But first, let's see what is
happening now.

  Right now, your scripts shove all incoming traffic into an HTB,
  inside which the selection of packets happens through ESFQ. The HTB
  has to be limited to a rate *smaller* than the actual downlink for
  it to have any effect whatsoever. And even so, what you do is queue
  (i.e. delay) packets (a maximum of 128 packets, as per ESFQ), and
  then fairly drop whatever comes in faster.
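
  (For reference, a setup of that general shape looks roughly like the
  sketch below. This is my own minimal reconstruction, not your actual
  script: plain SFQ stands in for ESFQ, which needs a separate kernel
  patch, and the device name and rate are assumptions.)

      # shape downstream traffic on the LAN-facing interface
      DEV=eth1            # assumption: eth1 faces the LAN
      RATE=1200kbit       # the cap mentioned in your mail

      tc qdisc add dev $DEV root handle 1: htb default 10
      tc class add dev $DEV parent 1: classid 1:10 htb rate $RATE ceil $RATE
      tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10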

  So what does TCP do about it? Latency is higher because of queueing
  at your router, or queueing at the ISP, so the large window sizes
  allow a lot of packets to be in transit, waiting to be transferred.
  A bunch of packets are dropped, so those are retransmitted as soon
  as possible (on the arrival of the next selective acknowledgement),
  filling up the queue again. TCP will always try to transfer a bit
  faster than the speed it can actually get packets through, so that
  it can take immediate advantage of any improvement in conditions.

  With a single TCP stream, the queue size at your router or ISP is
  negligible, so it doesn't hurt latency much. But when there is a
  large number of connections transferring as fast as they can,
  there's a lot of overshooting and what you described happens - the
  only way to prevent queueing at the ISP is to limit the bandwidth
  to half of the actual link speed.

What can be done to improve the situation then?

  First of all, don't delay traffic. Either accept it or drop it, but
  don't queue it. This results in dropping more packets in total if
  the transmissions are bursty, but only packet drops will tell TCP to
  transmit slower.

  Set a hard limit somewhat below the physical downlink speed, over
  which you drop all packets, no questions asked. Taking 10% or 20%
  off the theoretical maximum is probably good enough.
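
  With stock tc, the straightforward way to get "drop, don't queue"
  with a hard ceiling is an ingress policer. A sketch only - the
  interface name and rate are assumptions, pick numbers for your own
  link:

      DEV=eth0           # assumption: eth0 faces the ISP
      LIMIT=1800kbit     # assumption: ~85% of a 2 Mbit downlink

      tc qdisc add dev $DEV handle ffff: ingress
      tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 \
          match u32 0 0 \
          police rate $LIMIT burst 10k drop flowid :1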

  Implement some probabilistic dropping to tell TCP early that it is
  approaching the maximum bandwidth, to keep it from trying to go over
  the allowed rate. Some Random Early Drop (RED) mechanism could work
  well; a Time Sliding Window Three Colour Marker (TSWTCM, RFC 2859)
  would work even better, but again - nobody has implemented the
  latter to my knowledge.
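
  RED at least is already in the kernel, so as a rough sketch you
  could replace the SFQ leaf from the HTB example above with it. The
  byte figures follow the usual tc-red rules of thumb for a 1200 kbit
  rate and are assumptions, not tuned values:

      # ~100 ms of queue before random drops start, hard tail at 90 KB
      tc qdisc add dev $DEV parent 1:10 handle 10: red \
          limit 90000 min 15000 max 45000 avpkt 1000 \
          burst 25 probability 0.02 bandwidth 1200kbit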

I haven't perused your ingress marking rules at all - in general,
there's no reason to delay the processing of a packet locally. And
most often dropping anything but TCP data packets is not going to do
anyone much good.
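
Tying that to the policer sketch above: if you go that route, one way
to avoid dropping anything but TCP is to narrow the filter match, so
that DNS, ICMP and game UDP pass untouched (again an assumption-laden
sketch, not a tested rule - and it still treats pure ACKs the same as
data, which would need a fancier u32 match to separate):

    # police only TCP (IP protocol 6); everything else passes untouched
    tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 \
        match ip protocol 6 0xff \
        police rate $LIMIT burst 10k drop flowid :1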

Note that beyond the rough sketches above I haven't given any real
instructions on how to do this, because I have yet to experiment with
all the available strategies myself. I just wished to clarify why
things happen the way you observe them, and perhaps help you find
your own solutions.

Long rant, phew,
-- Naked


