Re: [PATCH] netfilter: xtables: add cluster match

Pablo Neira Ayuso wrote:
Patrick McHardy wrote:
Why use conntrack at all? Shouldn't the cluster match simply
filter out all packets not for this cluster and that's it?
You stated it needs conntrack to get a constant tuple, but I
don't see why the conntrack tuple would differ from the data
that you can gather from the packet headers.

No, source NAT connections would have different headers. A -> B for original, and B -> FW for reply direction. Thus, I cannot apply the same hashing for packets going in the original and the reply direction.

Ah, I'm beginning to understand the topology, I think :) Actually
it seems it's only combined SNAT+DNAT on one connection that's a
problem; with only one of the two you could tell the cluster match
to look at either the source or the destination address (the unchanged
one) in the opposite direction. Only if the opposite direction is
completely unrelated from a non-conntrack view can we not deal with
it. Anyways, your way of dealing with this seems fine to me.

echo +2 > /proc/sys/net/netfilter/cluster/$PROC_NAME

Does this provide anything you can't do by replacing the rule
itself?

Yes, the nodes in the cluster are identified by an ID, and the rule allows you to specify one ID. Say you have two cluster nodes, one with ID 1 and the other with ID 2. If the node with ID 1 goes down, you can echo +1 to the node with ID 2 so that it will handle packets going to both node ID 1 and node ID 2. Of course, you need conntrackd so that node ID 2 can recover the filtering state.

I see. That kind of makes sense, but if you're running a
synchronization daemon anyways, you might as well renumber
all nodes so you still have proper balancing, right?

Indeed, the daemon may also add a new rule for the node that has gone down, but that results in another extra hash operation to decide whether to mark the packet or not (one extra hash per rule) :(.

That's not what I meant. By having a single node handle all connections
from the one which went down, you have an imbalance in load
distribution. The nodes are synchronized, so they could just all
replace their cluster match with an updated number of nodes.

--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
