internal drops with tcp, kernel 2.2.16

mukesh@cs.cmu.edu wrote:
>From looking over the code, running some experiments, and from the mailing
>list archive, it looks like packets simply get dropped inside the kernel
>when the queues overflow.

Isn't this one of the major points of having a queue?  Resource
limiting.


>This seems to interact badly with TCP. If the initial retransmit time out
>is 3 seconds, and one of the first few packets gets dropped (before the
>rtt estimate is updated), then the connection is stalled for 3 seconds.
>(TCP packet gets dropped silently, then kernel waits an RTT for a response
>before retransmitting.)
>
>Two questions:
>
>1. Can this really happen, or have I overlooked something? (Our
>experiments suggest that it can happen.)

Yes.  If you (for example) write an application that creates 1000
sockets and then performs a TCP connect() on them all as fast as it
possibly can, each connect() causes a SYN packet to go out through a
PPP interface attached to a 9600 baud line, and you're going to get a
bit of a bottleneck.  These numbers can obviously be scaled up for
faster interfaces, but at some point you will see the same problem.
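A rough way to reproduce that kind of burst from the shell (a sketch
only: 192.0.2.1 is a placeholder for some host reachable over the slow
ppp0 link, and it assumes nc is installed):

$ # Fire off 1000 TCP connection attempts more or less at once.
$ for i in $(seq 1 1000); do nc -w 5 192.0.2.1 80 </dev/null >/dev/null 2>&1 & done
$ # The SYNs all land on ppp0's short transmit queue together; anything
$ # beyond txqueuelen is silently dropped and left to TCP to retransmit.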


>2. If this can happen, is it worth changing? In particular, we might not
>want to wait the entire RTT before retransmitting the packet if it was
>dropped inside the kernel.

Since you've not disclosed the nature of the application or the
specifics which create the problem you are seeing, I can only base my
comments below on presumptions about your problem.


I believe there are per-interface queues in at least 2.4.0.  The
current length can be viewed with a recent ifconfig (version 1.40,
2000-05-21), and I also believe these queue lengths can be configured
to any value you desire.

$ ip link
1: lo: <LOOPBACK,UP> mtu 3792 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:50:da:8a:4c:80 brd ff:ff:ff:ff:ff:ff
3: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP> mtu 1500 qdisc pfifo_fast qlen 3
    link/ppp

$ /sbin/ifconfig ppp0
ppp0      Link encap:Point-to-Point Protocol
          inet addr:10.0.0.123  P-t-P:10.0.0.124  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:22680 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25584 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3

$ /sbin/ifconfig ppp0 txqueuelen 10

$ /sbin/ifconfig ppp0
ppp0      Link encap:Point-to-Point Protocol
          inet addr:194.207.243.209  P-t-P:194.207.243.225  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:22713 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25620 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:10


You may wish to check out /sbin/ip and /sbin/tc for the various types
of queueing options available with Linux: '/sbin/ip' is more to do with
the global interface queue, while '/sbin/tc' is for setting up bespoke
traffic management queues.
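For example (a sketch only; exact syntax and output depend on your
iproute2 version), you could replace the default queue on ppp0 with a
plain 10-packet FIFO and watch its drop counter:

$ # Replace the default qdisc on ppp0 with a 10-packet FIFO.
$ /sbin/tc qdisc add dev ppp0 root pfifo limit 10
$ # Show it with statistics; the "dropped" counter counts packets
$ # discarded because the queue was already full.
$ /sbin/tc -s qdisc show dev ppp0
$ # Put things back to the default.
$ /sbin/tc qdisc del dev ppp0 root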

I believe 'pfifo_fast' is the default queueing policy for most
interfaces, and that the queue length is a packet-counting limit (as
opposed to a byte-counting limit).
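If you prefer iproute2 to ifconfig, I believe the same packet-count
limit can also be changed with 'ip':

$ # Equivalent to the 'ifconfig ppp0 txqueuelen 10' shown above.
$ /sbin/ip link set dev ppp0 txqueuelen 10
$ /sbin/ip link show dev ppp0    # 'qlen' should now read 10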


-- 
Darryl Miles

