Hi,

We experienced similar issues. Our cluster-internal network (completely separated) now has NOTRACK (no connection state tracking) iptables rules. In full:

> # iptables-save
> # Generated by xtables-save v1.8.2 on Wed Jul 17 14:57:38 2019
> *filter
> :FORWARD DROP [0:0]
> :OUTPUT ACCEPT [0:0]
> :INPUT ACCEPT [0:0]
> COMMIT
> # Completed on Wed Jul 17 14:57:38 2019
> # Generated by xtables-save v1.8.2 on Wed Jul 17 14:57:38 2019
> *raw
> :OUTPUT ACCEPT [0:0]
> :PREROUTING ACCEPT [0:0]
> -A OUTPUT -j NOTRACK
> -A PREROUTING -j NOTRACK
> COMMIT
> # Completed on Wed Jul 17 14:57:38 2019

Ceph uses IPv4 in our case, but to be complete:

> # ip6tables-save
> # Generated by xtables-save v1.8.2 on Wed Jul 17 14:58:20 2019
> *filter
> :OUTPUT ACCEPT [0:0]
> :INPUT ACCEPT [0:0]
> :FORWARD DROP [0:0]
> COMMIT
> # Completed on Wed Jul 17 14:58:20 2019
> # Generated by xtables-save v1.8.2 on Wed Jul 17 14:58:20 2019
> *raw
> :OUTPUT ACCEPT [0:0]
> :PREROUTING ACCEPT [0:0]
> -A OUTPUT -j NOTRACK
> -A PREROUTING -j NOTRACK
> COMMIT
> # Completed on Wed Jul 17 14:58:20 2019

With this configuration, the conntrack state tables can never fill up and cause dropped connections as a side effect.

Cheers, Kees

On 17-07-2019 11:27, Maximilien Cuony wrote:
> Just a quick update about this in case somebody else runs into the same issue:
>
> The problem was with the firewall. Port ranges and established connections are allowed, but for some reason the tracking of connections was lost, leading to a strange state where one machine refuses data (RSTs are replied) and the sender never gets the RST packet (even with 'related' packets allowed).
>
> There was a similar post on this list in February ("Ceph and TCP States") where loss of connections in conntrack created issues, but that fix, net.netfilter.nf_conntrack_tcp_be_liberal=1, did not help in this particular case.
> As a workaround, we installed lighter rules for the firewall (allowing all packets from machines inside the cluster by default) and that "fixed" the issue :)

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
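For anyone wanting to try the same approach: a sketch of how the raw-table rules from the iptables-save dump above could be added at runtime (persistence across reboots is distribution-specific, and the `conntrack` check at the end assumes conntrack-tools is installed; only do this on a fully isolated cluster network, since it disables stateful filtering entirely):

```shell
# The raw table is traversed before connection tracking, so a
# -j NOTRACK target there makes the kernel skip conntrack for
# matching packets. Blanket rules, as in the dump above:
iptables -t raw -A PREROUTING -j NOTRACK
iptables -t raw -A OUTPUT -j NOTRACK

# Same for IPv6, for completeness:
ip6tables -t raw -A PREROUTING -j NOTRACK
ip6tables -t raw -A OUTPUT -j NOTRACK

# With all traffic untracked, the conntrack table should stay
# (near) empty and can never fill up:
conntrack -L 2>/dev/null | wc -l
```

Note that any stateful rules elsewhere (e.g. `-m state --state ESTABLISHED,RELATED`) stop matching once traffic is untracked, which is why the dump above pairs NOTRACK with a simple ACCEPT policy on INPUT and OUTPUT.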