Re: Most optimal method to dump UDP conntrack entries

On Mon, Nov 11, 2024 at 01:54:56PM +0100, Florian Westphal wrote:
> Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx> wrote:
> > On Mon, Nov 11, 2024 at 01:09:46PM +0100, Florian Westphal wrote:
> > > The time and effort needed to make something as basic as NAT
> > > work properly is just silly.
> > > 
> > > Let's fix conntrack so this "just works".
> > 
> > Ok, then...
> > 
> > +static bool udp_ts_reply(struct nf_conn *ct, enum ip_conntrack_dir dir)
> > +{
> > +       bool is_reply = READ_ONCE(ct->proto.udp.last_dir) != dir;
> > +
> > +       if (is_reply)
> > +               WRITE_ONCE(ct->proto.udp.last_dir, dir);
> > +
> > +       return is_reply;
> > +}
> > 
> > ... if packet in the other direction is seen, then...
> > 
> > +       if (udp_ts_reply(ct, dir))
> > +               nf_ct_refresh_acct(ct, ctinfo, skb, extra);
> > 
> > ... conntrack entry is refreshed?
> 
> Yes.
> 
> > Will this work for, let's say, RTP traffic which goes over UDP and
> > is unidirectional? Well, maybe you could occasionally see an RTCP
> > packet as a reply to get statistics, but those might simply not be
> > available.
> 
> We could add a || ct->master to the is_reply test.

Assuming the SIP helper is in place, then yes.
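
Folding that in could look roughly like this (sketch only, based on the
snippet quoted above; ct->master is non-NULL for connections created
from an expectation, e.g. an RTP flow set up by the SIP helper):

static bool udp_ts_reply(struct nf_conn *ct, enum ip_conntrack_dir dir)
{
	bool is_reply = READ_ONCE(ct->proto.udp.last_dir) != dir;

	if (is_reply)
		WRITE_ONCE(ct->proto.udp.last_dir, dir);

	/* Expectation-based children (e.g. SIP-helper RTP flows) keep
	 * refreshing even when traffic stays unidirectional. */
	return is_reply || ct->master;
}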

> > I am not sure we can make assumptions about the direction like
> > this; any application protocol could run over UDP.
> 
> What about adding a CT template option to control the behaviour?

Maybe a custom ct timeout policy can help instead? Or we could even
extend the timeout policy to support the behaviour you want to put in
place (refresh the timer only when packets are seen in both
directions).

If the user knows which application protocol runs over UDP on a given
port, then they can define finer-grained timeout policies accordingly.
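
For instance, a dedicated policy for RTP media could look roughly like
this (untested sketch; the port number and timeout values are made up
for illustration):

table ip raw {
	ct timeout rtp_media {
		protocol udp
		l3proto ip
		policy = { unreplied : 120, replied : 180 }
	}

	chain pre {
		type filter hook prerouting priority raw; policy accept;
		udp dport 5004 ct timeout set "rtp_media"
	}
}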

> More work, but would avoid any compat concerns.

Agreed.

The UDP conntracker is already making assumptions by handling UDP
traffic as "stateful", based on default timeouts that were defined
back in 1999 and that have been adjusted several times since then.

Shrinking the timeouts too much could also release the NAT mapping too
early; it is hard to know the implications of this wrt. the
application protocol.

Maybe Antonio can extend the requirements stub he provided; he
mentions the following scenarios:

- Conntrack entry removal for backends that are gone. Probably
  speeding up conntrack -D with the new CTA_FILTER support is
  sufficient to improve the situation. IIRC, a user reported going
  from 3 seconds down to 0.5 seconds to delete a million entries via
  CTA_FILTER.

- Service restart, i.e. the "reconcile" scenario. I think this is
  harder because, IIUC, it means userspace needs to compare the
  current configuration with the conntrack entries in the table and
  purge those that are stale (see the sketch after this list).
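
For the second point, a minimal userspace sketch of that reconcile
loop with libnetfilter_conntrack could look like this
(backend_is_alive() is a hypothetical predicate supplied by the
service; error handling omitted):

#include <stdbool.h>
#include <stdint.h>
#include <netinet/in.h>
#include <libnetfilter_conntrack/libnetfilter_conntrack.h>

extern bool backend_is_alive(uint32_t ipv4);	/* hypothetical */

static int purge_cb(enum nf_conntrack_msg_type type,
		    struct nf_conntrack *ct, void *data)
{
	struct nfct_handle *del = data;	/* second handle for deletions */
	uint32_t backend = nfct_get_attr_u32(ct, ATTR_REPL_IPV4_SRC);

	/* After DNAT, the reply source is the real backend address. */
	if (nfct_get_attr_u8(ct, ATTR_L4PROTO) == IPPROTO_UDP &&
	    !backend_is_alive(backend))
		nfct_query(del, NFCT_Q_DESTROY, ct);

	return NFCT_CB_CONTINUE;
}

int main(void)
{
	struct nfct_handle *dump = nfct_open(CONNTRACK, 0);
	struct nfct_handle *del = nfct_open(CONNTRACK, 0);
	uint32_t family = AF_INET;

	nfct_callback_register(dump, NFCT_T_ALL, purge_cb, del);
	nfct_query(dump, NFCT_Q_DUMP, &family);

	nfct_close(del);
	nfct_close(dump);
	return 0;
}

The CTA_FILTER-based deletion from the first point would instead let
the kernel do the tuple matching, avoiding one netlink round-trip per
stale entry.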

I guess the concern is that assured flows cannot be expelled from the
conntrack table via early_drop, and that is why an expedited cleanup
is important?

Thanks.



