ebtables: ebtables-restore segfaults when 'among' list has many items

Hi,

I’m trying to build an ebtables rule-set with entries resembling:

ebtables -t filter -A FORWARD -i in+ -o out+ -p IPV4 --among-src-file ! /var/lib/ebtables/ethers.lst -j logdrop
ebtables -t filter -A FORWARD -i out+ -o in+ -p IPV4 --among-dst-file ! /var/lib/ebtables/ethers.lst -j logdrop

(where ‘logdrop’ is a single-rule chain with policy DROP, whose one entry is ‘--log-arp --log-ip --log-prefix "Mismatching MAC:" --log-level 4 -j DROP’, if that’s of any significance)
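For reference, the chain is set up along these lines (a rough sketch - the policy is applied with a separate -P command here, rather than at creation time):

```
ebtables -t filter -N logdrop
ebtables -t filter -P logdrop DROP
ebtables -t filter -A logdrop --log-arp --log-ip --log-prefix "Mismatching MAC:" --log-level 4 -j DROP
```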

I notice two issues here - the first is that ‘ebtables-save’ never produces the correct output for such ‘among’ entries, with the generated output ending:

...,xx:xx:xx:xx:xx:xx=yyy.yyy.yyy.yyy, -j logdrop-c 0 0
                                    ^          ^^

… with a trailing comma (which may be no more than an aesthetic issue), and no space between the target and the ‘-c’ counter token (which is definitely a problem).  This occurs even when the loaded file contains only two items.
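In the meantime I’m post-processing the saved output before feeding it back - a sketch only, assuming the breakage is limited to the stray comma before ‘-j’ and the fused ‘-c’ token:

```shell
# Rough post-processing sketch for the broken ebtables-save output:
# first drop the stray comma left after the last among-list entry,
# then re-insert the missing space between the target name and '-c'
# (matching only when '-c N N' ends the line with no space before it,
# so already-correct lines pass through unchanged).
fix_save_line() {
    printf '%s\n' "$1" |
        sed -e 's/, -j / -j /' \
            -e 's/\([^ ]\)-c \([0-9][0-9]* [0-9][0-9]*\)$/\1 -c \2/'
}
```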

However, the greater issue is that if a list of more than 69 entries is defined (or, more likely, more than 2k of data - 69 entries average out to roughly 2000 bytes, which seems suspiciously close…), then this list can (apparently) be made live in the kernel and can be saved, but ‘ebtables-restore’ will always segfault when attempting to reload it.

dmesg says:

[7315212.511342] ebtables-restor[24542]: segfault at 0 ip 0000000000400f7b sp 00007fff4a315820 error 6 in ebtables-restore[400000+2000]
[7315240.848830] ebtables-restor[24576]: segfault at 0 ip 0000000000400f7b sp 00007fffa37d77b0 error 6 in ebtables-restore[400000+2000]
[7315369.074719] ebtables-restor[24603]: segfault at 0 ip 0000000000400f7b sp 00007ffffb8951a0 error 6 in ebtables-restore[400000+2000]
[7315432.481345] ebtables-restor[25156]: segfault at 0 ip 0000000000400f7b sp 00007fffa6d24420 error 6 in ebtables-restore[400000+2000]
[7315704.848695] ebtables-restor[26750]: segfault at 0 ip 0000000000400e4d sp 00007fffd01d9b00 error 6 in ebtables-restore[400000+2000]
[7316379.348187] ebtables-restor[27008]: segfault at 0 ip 0000000000400e4d sp 00007fff7bd81e60 error 6 in ebtables-restore[400000+2000]
[7316508.452788] ebtables-restor[27066]: segfault at 0 ip 0000000000400e4d sp 00007fff7e43f8e0 error 6 in ebtables-restore[400000+2000]

… but I don’t currently have gdb on this router, unfortunately.

This figure of 2k/69 entries may well vary depending on the length of the entries, or on the overall size or number of rules present, or it may be an absolute limit.  Given that the issue seems to occur when more than 2k of data is loaded and I’m loading the same data twice (once for source, once for destination), this could be an issue when the overall dataset to be loaded exceeds the 4k page size.  On the other hand, the loaded lists are hashed, so this may again be coincidence.  I guess I’m pretty safe in assuming, though, that this is not the intended behaviour.
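For what it’s worth, the arithmetic does line up with a 2k buffer - this is my own back-of-envelope check, not anything taken from the ebtables source:

```shell
# A maximal "mac=ip," entry is 17 + 1 + 15 + 1 = 34 bytes, a minimal one
# 17 + 1 + 7 + 1 = 26 bytes, and 2048 / 69 is about 29.7 bytes per entry -
# squarely inside that range, consistent with a ~2k limit.
max_entry='aa:bb:cc:dd:ee:ff=192.168.100.200,'
printf '%s' "$max_entry" | wc -c
```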

Both of these issues have work-arounds - although working around the ‘among’ size limit would likely mean swapping from a single “if not in list then drop” rule to less efficient groups of “if in list then accept” rules in a separate chain with a DROP policy.
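The workaround I have in mind looks roughly like this - a sketch only (the ‘macfilter’ chain name, the chunk size, and the ACCEPT-per-chunk layout are all assumptions on my part), which prints the generated rules for review rather than applying them:

```shell
# Sketch: split the address list into chunks well below the observed ~69-entry
# failure point, and print one inline --among-src rule per chunk.  The rules
# are intended for a chain created with a DROP policy, so anything not matched
# by any chunk falls through and is dropped.
gen_among_rules() {
    list_file=$1; chunk_size=$2; chain=$3
    # Accept entries separated by commas or newlines in the source list.
    tr ',' '\n' < "$list_file" | grep -v '^$' | {
        n=0; acc=''
        while IFS= read -r entry; do
            acc="${acc:+$acc,}$entry"
            n=$((n + 1))
            if [ "$n" -eq "$chunk_size" ]; then
                printf 'ebtables -t filter -A %s --among-src %s -j ACCEPT\n' \
                    "$chain" "$acc"
                n=0; acc=''
            fi
        done
        # Emit any remaining partial chunk.
        if [ -n "$acc" ]; then
            printf 'ebtables -t filter -A %s --among-src %s -j ACCEPT\n' \
                "$chain" "$acc"
        fi
    }
}
```

The generated rules would then go into a chain created beforehand (e.g. ‘ebtables -t filter -N macfilter’ followed by ‘ebtables -t filter -P macfilter DROP’), with the FORWARD rules jumping to that chain instead of carrying the among match themselves.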

Thanks in advance,

Stuart

--
To unsubscribe from this list: send the line "unsubscribe netfilter" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
