Re: [RFT 3/4] netfilter: use sequence number synchronization for counters


 



Stephen Hemminger wrote:
Change how synchronization is done on the iptables counters. Use seqcount
wrapper instead of depending on reader/writer lock.

Signed-off-by: Stephen Hemminger <shemminger@xxxxxxxxxx>


--- a/net/ipv4/netfilter/ip_tables.c 2009-01-27 14:48:41.567879095 -0800
+++ b/net/ipv4/netfilter/ip_tables.c	2009-01-27 15:45:05.766673246 -0800
@@ -366,7 +366,9 @@ ipt_do_table(struct sk_buff *skb,
 			if (IPT_MATCH_ITERATE(e, do_match, skb, &mtpar) != 0)
 				goto no_match;
+			write_seqcount_begin(&e->seq);
 			ADD_COUNTER(e->counters, ntohs(ip->tot_len), 1);
+			write_seqcount_end(&e->seq);
It's not very good to do it like this (one seqcount_t per rule, per cpu).

 			t = ipt_get_target(e);
 			IP_NF_ASSERT(t->u.kernel.target);
@@ -758,6 +760,7 @@ check_entry_size_and_hooks(struct ipt_en
 	   < 0 (not IPT_RETURN). --RR */
 	/* Clear counters and comefrom */
+	seqcount_init(&e->seq);
 	e->counters = ((struct xt_counters) { 0, 0 });
 	e->comefrom = 0;
@@ -915,14 +918,17 @@ get_counters(const struct xt_table_info
 			  &i);
 	for_each_possible_cpu(cpu) {
+		struct ipt_entry *e = t->entries[cpu];
+		unsigned int start;
+
 		if (cpu == curcpu)
 			continue;
 		i = 0;
-		IPT_ENTRY_ITERATE(t->entries[cpu],
-				  t->size,
-				  add_entry_to_counter,
-				  counters,
-				  &i);
+		do {
+			start = read_seqcount_begin(&e->seq);
+			IPT_ENTRY_ITERATE(e, t->size,
+					  add_entry_to_counter, counters, &i);
+		} while (read_seqcount_retry(&e->seq, start));
This will never complete on a loaded machine with a big set of rules.
When we reach the end of IPT_ENTRY_ITERATE, we notice that many packets arrived during the iteration and restart,
with wrong accumulated values (there is no rollback of what was already added to the accumulator).

You want to do the seqcount begin/end in the leaf function (add_entry_to_counter()), and accumulate a value pair (bytes/packets)
only once you are sure it is consistent.

Using one seqcount_t per rule (struct ipt_entry) is very expensive: this is 4 bytes per rule × num_possible_cpus().

You need one seqcount_t per cpu.


--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
