hello, can someone please explain simply what the "net.core.wmem_default" parameter in /etc/sysctl.conf actually affects? An example:

    # increase Linux TCP buffer limits
    net.core.rmem_max = 8388608
    net.core.wmem_max = 8388608
    net.core.rmem_default = 65536
    net.core.wmem_default = 65536

Questions:

1. rmem and wmem defaults - someone described these as the TCP send and receive socket buffer sizes: "It is essentially the maximum size of the linked list that is used to hold the socket buffers for a particular connection." What exactly does this mean? Is this the congestion window, and does it apply to TCP packets only? If my application uses UDP, will changing/tuning the above parameters matter?

2. Is it a true statement to say: "Conventionally, the value for buffers used for reading network data is twice the value for writes, since reads are interrupt driven and therefore flow control cannot be imposed. As writes can be controlled by the host O/S, the kernel can make the application block." Again, if my application is running UDP rather than TCP, does that statement still hold?

Thanks in advance,

- a. a s p a s i a
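For anyone wanting to poke at this themselves: below is a minimal Python sketch (assuming a Linux host) that opens a UDP socket and reads back its send and receive buffer sizes with getsockopt(). On a freshly created socket these come from net.core.rmem_default / net.core.wmem_default, which is one way to see that these sysctls are not TCP-specific - they set the per-socket defaults for UDP (and other socket types) as well. The 262144-byte request at the end is just an arbitrary illustration value, not a recommendation.

```python
import socket

# A UDP socket's initial buffer sizes come from net.core.rmem_default
# and net.core.wmem_default (Linux; values are in bytes).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default receive buffer:", rcvbuf)
print("default send buffer:", sndbuf)

# An application may request a larger buffer, capped by
# net.core.rmem_max / net.core.wmem_max. Note that Linux doubles
# the requested value internally for bookkeeping overhead, so
# getsockopt() reports roughly twice what was asked for.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)
new_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer after setsockopt:", new_rcvbuf)

s.close()
```

If the reported defaults match your wmem_default/rmem_default settings, that confirms which sysctl a given socket type is actually drawing from.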