Re: per source bandwidth limit with hashlimit

On 03/07/17 12:47, Fatih USTA wrote:
Hi

I tried to use hashlimit to limit the bandwidth for each IP address on the
192.168.59.0/24 network, but it did not work for the specified network
address or protocol.
iptables -t mangle -I PREROUTING -m tcp -p tcp -m hashlimit \
    --hashlimit-above 50kb/sec --hashlimit-burst 50kb --hashlimit-mode srcip \
    --hashlimit-name persource -j DROP
iptables -t mangle -A PREROUTING -j RETURN

You are using hashlimit backwards. The jump target is executed while the limit is still good; the match fails once the limit has been exceeded. So your rule is dropping the packets/connections until the 50kb/sec threshold is exceeded, and then it's letting them through.

What you want is

iptables (blah blah) hashlimit (blah blah) -j RETURN
iptables (blah blah) -j DROP

Now when you are within the limits the data gets through, and when you exceed the limits the rule evaluation falls through to the drop.
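To be concrete, here is a sketch of those two rules filled in with the parameters from your original attempt. I'm assuming the --hashlimit-upto form (the "match while still under the limit" variant) and carrying your rate syntax over as-is:

iptables -t mangle -A PREROUTING -p tcp -m hashlimit \
    --hashlimit-upto 50kb/sec --hashlimit-burst 50kb \
    --hashlimit-mode srcip --hashlimit-name persource -j RETURN
iptables -t mangle -A PREROUTING -p tcp -j DROP

While a source is under its limit the RETURN hands the packet back (and the default ACCEPT policy of mangle PREROUTING lets it through); once it's over, the match fails and the packet falls through to the DROP.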

Note that I usually put this in a separate chain that I invoke from the real chain so that I can still apply other logic before/after the throttle.

so...

iptables -N throttle
iptables -A throttle -m hashlimit (blah blah) -j RETURN   # within limit: back to the caller
iptables -A throttle -j DROP                              # over limit: drop
iptables -A (wherever) (conditional blah) -j throttle

As far as (wherever) goes, I don't like to interfere with existing connections, since that leads to more load, not less. So I'd put the throttle in FORWARD or INPUT, with the (conditional blah) matching ctstate NEW, placed after the established,related rule.
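For illustration, something like this (the FORWARD placement and your /24 are just my assumptions here):

# stateful accept first, so existing flows never hit the throttle
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# only NEW connections from the LAN get throttled
iptables -A FORWARD -s 192.168.59.0/24 -m conntrack --ctstate NEW -j throttle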

This secondary chain means that you can use the same throttle for several different protocols or conditions without the invocations getting really complex.
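For example (ports picked arbitrarily), several rules can all feed the one chain:

iptables -A FORWARD -p tcp --dport 80 -m conntrack --ctstate NEW -j throttle
iptables -A FORWARD -p tcp --dport 443 -m conntrack --ctstate NEW -j throttle
iptables -A FORWARD -p udp --dport 53 -j throttle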

Of course your where-and-when will vary depending on your task and goals. But in truth, if you are the service, then your service application is going to be the source of most of the data, so you probably ought to throttle in the server rather than in the firewall rules. Otherwise you end up dropping your own data, or throwing away packets that you've already paid bandwidth/time to receive and which the sending end is probably just going to send again because of the "loss".

That is, bulk limits on existing connections tend to drive received volume _up_ as retransmits occur.

So be careful not to shoot yourself in the foot.

So anyway... the --match hashlimit should be followed by the success --jump and the later rules are then the failure path.



Hope this helps,
--Rob.
