resets received for embryonic SYN_RECV sockets

Dear community,

I'm new to network stack optimization, and I'm seeing a rather large
value for the counter in the subject line on my back-end servers:

2479229 resets received for embryonic SYN_RECV sockets

I've googled for it but didn't find an exact explanation of this
counter. How can I catch these resets with tcpdump, and how can I
avoid them?
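
My best guess at a capture is something like the following, though I'm
not sure the filter is right (eth0 and port 80 are just placeholders
for my setup):

  # grab SYN, SYN-ACK and RST segments so each reset can be matched
  # back to the half-open connection it killed
  tcpdump -ni eth0 -w embryonic-rst.pcap \
      'port 80 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'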

Custom sysctl values:
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_sack = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1

I have attached the output of netstat -s. The server is a CentOS 6.2
KVM virtual machine.
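
For what it's worth, I'm sampling the counter like this
(TcpExtEmbryonicRsts seems to be the name nstat and /proc/net/netstat
use for the same counter, if I read them correctly):

  # absolute value of the counter, including zero counters
  nstat -az TcpExtEmbryonicRsts
  # the same number as the netstat -s line
  netstat -s | grep -i embryonic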

P.S. I also use IPVS + keepalived.

Attachment: netstat
Description: Binary data

