Per-chunk overhead calculation too aggressive on send side

Folks,

The following bit of code in net/sctp/output.c


sctp_packet_append_data

        /* Update our view of the receiver's rwnd. Include sk_buff overhead
         * while updating peer.rwnd so that it reduces the chances of a
         * receiver running out of receive buffer space even when receive
         * window is still open. This can happen when a sender is
         * sending small messages.
         */
        datasize += sizeof(struct sk_buff);
        if (datasize < rwnd)
                rwnd -= datasize;
        else
                rwnd = 0;

adds a full sk_buff of overhead to the rwnd charge for every data chunk
in a packet. On x86_64, sizeof(struct sk_buff) is 232 bytes, so a sender
on a 9K-MTU interface streaming 12-byte messages to a receiver with the
"default" 64K window runs out of "space" before it can fill even a
single MTU's worth of data. The overhead calculation seems (to me) too
aggressive. Does it really assume that each data chunk will get its own
sk_buff on the receive side?
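
To make the arithmetic concrete, here is a rough user-space sketch of
that accounting. The 232-byte sk_buff size and 12-byte messages come
from the numbers above; the 16-byte SCTP DATA chunk header and the
hardcoded constants are assumptions for illustration, not values read
from a running kernel.

/*
 * Rough sketch of the rwnd accounting in sctp_packet_append_data().
 * All constants are assumptions taken from the discussion above.
 */
#include <stdio.h>

int main(void)
{
        const unsigned int skb_overhead = 232;  /* sizeof(struct sk_buff), x86_64 */
        const unsigned int msg_size = 12;       /* user data per chunk */
        const unsigned int chunk_hdr = 16;      /* SCTP DATA chunk header */
        const unsigned int mtu = 9000;
        unsigned int rwnd = 65536;              /* receiver's "default" window */
        unsigned int chunks = 0, wire_bytes = 0;

        /* Mirror the kernel logic: charge data + a whole sk_buff per chunk. */
        while (msg_size + skb_overhead < rwnd) {
                rwnd -= msg_size + skb_overhead;
                chunks++;
                wire_bytes += msg_size + chunk_hdr;
        }

        printf("rwnd exhausted after %u chunks, ~%u wire bytes (MTU %u)\n",
               chunks, wire_bytes, mtu);
        /* Prints ~268 chunks / ~7.5K bytes: the sender's view of the window
         * hits zero long before a single 9K MTU packet is filled. */
        return 0;
}

So the sender's view of rwnd reaches zero after roughly 7.5K of data on
the wire, even though only a small fraction of the real 64K receive
window has been consumed.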

max
--