Re: [PATCH RESEND v3] netfilter: xtables: add quota support to nfacct

Mathieu Poirier <mathieu.poirier@xxxxxxxxxx> wrote:
[ removed netfilter@ from cc ]

> On 1 February 2014 15:57, Florian Westphal <fw@xxxxxxxxx> wrote:
> > mathieu.poirier@xxxxxxxxxx <mathieu.poirier@xxxxxxxxxx> wrote:
> >> +struct xt_nfacct_match_info_v1 {
> >> +     char            name[NFACCT_NAME_MAX];
> >> +     struct nf_acct  *nfacct;
> >> +
> >> +     __u32 flags;
> >> +     __aligned_u64 quota;
> >> +     /* used internally by kernel */
> >> +     struct nf_acct_quota    *priv __attribute__((aligned(8)));
> >> +};
> >
> > I think *nfacct pointer is also kernel internal, so it should also get
> > aligned (might also be possible to stash it in *private struct).
> 
> I suspect Pablo left it unchanged for a good reason, and as such I'm
> reluctant to make the modification myself.

Looks like an oversight to me.  I haven't checked but my guess would
be that -m nfacct won't work with 32bit userspace on 64 bit machines.

Would be better to avoid such problems with match revision 2.

> >> +             if (val <= info->quota) {
> >> +                     ret = !ret;
> >> +                     info->priv->quota_reached = false;
> >
> > Why quota_reached = false
> > assignment?
> 
> An object that has reached its quota can always be reset from
> userspace with:
> 
> $ nfacct get object_name reset

I see.  Makes sense,  thanks.
 
> As such we need to reset the flag so that a broadcast isn't sent.
> > [ How is this toggled (other than transition to 'true' below)? ]
> >
> >> +             if (val >= info->quota && !info->priv->quota_reached) {
> >> +                     info->priv->quota_reached = true;
> >> +                     nfnl_quota_event(info->nfacct);
> >> +             }
> >> +
> >> +             spin_unlock_bh(&info->priv->lock);
> >> +     }
> >
> > Hm.  Not sure why the lock has to be held during all tests.  What about:
> 
> Here "info" is an object seen and shared by all processors.  If a
> process loses the CPU between the two atomic64_read calls, other CPUs
> in the system can grab "info" and go through the same code, leading to
> an erroneous byte and packet count.
> To me the real problem is with the incrementing of "pkts" and
> "bytes" in function "nfnl_acct_update" - there is simply no way to
> prevent a process from losing the CPU between the two increments.

Not following.  Write side is done via atomic ops, so we don't lose any
information.  Also, note that netfilter hooks run with rcu_read_lock().

It's possible that a concurrent reader sees pkts already incremented but
the "old" byte count.  Is that what you're referring to?  Is it a concern?

When you read either the packet or byte count you only know that this
_was_ the statistic count at some point in the (very recent) past anyway.

To me it only looks like you need to make sure that you try to send out
a broadcast notification message once when one of the counters has
recently exceeded the given threshold.

Or is there more to this functionality?
--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



