UDP problem

Hi everyone,
I'm running some tests to measure UDP performance. I have a server
application running under Windows that reads UDP datagrams and a client
application running under Linux that sends them. It seems that the more
datagrams arrive at the server's network interface, the fewer are read by
the application layer. I suspect datagrams are being dropped because a
receive queue overflows, but I don't know how to prove this, much less how
to overcome it. Below are the figures I measured.

Datagrams at the network layer      10000   6000   4500
Datagrams at the application layer   1100   4000   3800
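One way to test the overflow idea (just a sketch, not a definitive diagnosis): the per-socket receive buffer (SO_RCVBUF) is where datagrams queue between the network stack and the application, and if the reader falls behind, the kernel silently discards anything that doesn't fit. The snippet below inspects the default buffer size and requests a larger one; the 4 MB figure is an arbitrary example, and the OS may clamp the request to a system limit (net.core.rmem_max on Linux). On the receiving side, `netstat -s` (both Windows and Linux) reports UDP "receive errors", which should climb if drops are really happening.

```python
import socket

# Create a UDP socket like the receiving application would.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Default receive buffer size in bytes, as reported by the OS.
default_rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Ask for a larger buffer (4 MB here, purely illustrative).
# The kernel may round or clamp this value, so read it back to see
# what was actually granted.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
new_rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print("default SO_RCVBUF:", default_rcvbuf)
print("granted SO_RCVBUF:", new_rcvbuf)
sock.close()
```

If the application-layer counts improve after enlarging the buffer (or after raising the system-wide limit), that would be fairly strong evidence for the queue-overflow explanation.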

Any hint as to why this happens would be more than welcome.
Many thanks,
Raul
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
