Hi Slavko,

On Wed, 31 Jan 2024 20:23:54 +0000 Slavko <linux@xxxxxxxxxx> wrote:

> On 31 January 2024 13:02:57 UTC, Kerin Millar <kfm@xxxxxxxxxxxxx> wrote:
>
> >Firstly, I mentioned both tables and the "inet" family. If you search
> >for inet in the man page, you should land in the ADDRESS FAMILY
> >section, which indicates that the purpose of an address family is to
> >define the type of packets that can be processed. It goes on to say
> >that "all nftables objects exist in address family specific
> >namespaces, therefore all identifiers include an address family".
>
> I checked the man page now, 1.0.6 (as in Debian bookworm), and its
> ADDRESS FAMILY section does not make clear (at least to me) the order
> in which inet and ip/ip6 tables are processed. It is not even clearly
> stated there that a packet will be processed by both the inet and the
> ip/ip6 tables.

I think that the effect of the "inet" family is to register hooks - if any - for both the ip and ip6 families. Yet, even knowing that doesn't help to address the question.

> I know that hook priority comes into play; it would be worth
> mentioning that, where there are hooks with the same priority in both
> (as inet and ip/ip6), the order is ... or that the order is not
> defined.

Yes, it should be made clear - even if only to explicitly say that the behaviour is undefined (it would be better than nothing).

> >>> table inet filter {
> >>>         set block4 {
> >>>                 type ipv4_addr
> >>>         }
> >>>         set block6 {
> >>>                 type ipv6_addr
> >>>         }
> >>>         chain INPUT {
> >>>                 type filter hook input priority filter; policy accept
> >>>                 ip saddr @block4 drop
> >>>                 ip6 saddr @block6 drop
> >>>         }
> >>> }
>
> This can be OK with two sets, as it really doesn't matter whether you
> have one or two rules/sets. But when you start to really use them and
> you need e.g. 10 sets, you have to define every one twice, and things
> become more complicated...
>
> When I started to play with these sets I was too constrained by the
> iptables approach, in which every iptables table became a separate
> nftables table, so that one cannot share common sets between e.g. the
> raw and filter tables. It took some time to realize that I needed to
> switch my thinking, and that I can define "raw" and "filter" hooks in
> one table and thus share the same set in both (e.g. fill its content
> in filter and drop in raw).

This has also annoyed me on several occasions. Though I have been using nftables for a fairly long time, I still find it more natural to organise rulesets based on the conventions of iptables. Old habits die hard, as the saying goes. Incidentally, there is an open bug concerning this.

https://bugzilla.netfilter.org/show_bug.cgi?id=1472
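For what it's worth, here is a minimal and untested sketch of that arrangement, in which a single inet table registers hooks at both raw and filter priority, with both chains referring to the same set. The table, chain and set names, and the SSH rate-limiting rule used to populate the set, are made up purely for illustration.

table inet shared {
        set badguys {
                # Dynamic so that rules may add elements, with per-element timeouts.
                type ipv4_addr
                flags dynamic, timeout
        }
        chain prerouting_raw {
                # Registered at raw priority; drops anything already in the set.
                type filter hook prerouting priority raw; policy accept
                ip saddr @badguys drop
        }
        chain input_filter {
                # Registered at filter priority; populates the very same set.
                type filter hook input priority filter; policy accept
                tcp dport 22 ct state new limit rate over 4/minute add @badguys { ip saddr timeout 10m }
        }
}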
> The second thing which took time to understand was why on earth I
> need so many sets. After some time I realized that it is the price of
> flexibility. In iptables, many extensions use some storage (recent,
> limit, hashlimit, connlimit) and these storages are managed by the
> extension itself; the user only needs to set its name, or nothing at
> all (for limit). In the nftables world one can use sets to achieve
> the same result, but has to create the sets/storage oneself. On the
> other hand, one can customise them in a simple way (without changing
> kernel module options), and that is great.
>
> But if one uses these features intensively, with all sets and rules
> having to be duplicated in the inet table (exactly as in separate
> ip/ip6 tables), the number of sets increases significantly. Yes,
> these "sets" were duplicated in ip(6)tables too, but there the
> duplication was "hidden" in the extension code, whereas the inet
> table exposes it. And as the number of sets increases, listing them
> becomes less and less user friendly, as one can list either all of a
> table's sets or one particular set, with nothing in between.

All of this is true.

> BTW, I am curious how big the difference (in performance/time) is
> between looking up an IPv4 address and an IPv6 address in a set. Is
> it worth considering?

I do not know but I would expect a lookup to be very efficient in both cases.

> I am starting to think, more and more, that use of the inet family
> (in its current state) is good only for simple/basic firewalls,
> without any advanced features, where L3 addresses are used only for
> static distinguishing of access, or where rulesets are maintained by
> some external tool (in separate tables).
>
> From my point of view, the integration of sets created impressive
> features in nftables, but as a result they are too constrained (in
> the L3 sense), in the same way as they were in iptables, or even
> worse, because ipset now allows for combined sets, which can be added
> to/updated under the same name (and thus by the same rule) from both
> iptables and ip6tables (yes, as mentioned already, that is not
> possible from the command line -- yet?).
>
> Another thing which is not clear to me is the memory usage of sets. I
> have read in multiple places about huge memory usage with big sets,
> but nowhere did I find a real comparison, neither of memory usage nor
> of the performance of nftables sets versus ipset sets. So I don't
> know whether these memory problems existed only in the early stages
> of set development (and are solved now), or were caused by suboptimal
> usage of the tools, or are by design. Recently I noticed in the docs
> that there is a memory/performance switch for sets, but what does
> that mean in real usage? Again, no benchmark is documented by which
> one could properly decide which to use. Or, at least, commands/a way
> to measure the difference for oneself...

I haven't paid much attention to memory usage. Benchmarking performance is fairly straightforward with the use of the shell, especially bash, because it offers some useful features such as:

- a time builtin that can measure arbitrarily complex commands (not only "SIMPLE COMMANDS" as defined by the manual)
- C-style for loops: for ((i = 0; i < 1000; i++)); do ...; done
- a printf builtin that can print timestamps without the need for an external utility

As concerns timestamps, bash >=5.0 can even produce timestamps with microsecond resolution.

$ printf '%(%F %T)T.%s\n' -1 "${EPOCHREALTIME: -6}"
2024-01-31 22:01:42.286354

You might find it interesting to look at existing issues concerning the use of sets by visiting the following link.

https://bugzilla.netfilter.org/showdependencytree.cgi?id=1461&hide_resolved=0

Of those, #1584 concerns "high memory requirements", where Pablo appears to be using https://valgrind.org/docs/manual/ms-manual.html to profile memory usage.

--
Kerin Millar
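P.S. In case it is of use, below is a rough and untested sketch of how one might take such measurements with the tools mentioned above. The table, set and file names, and the element count, are arbitrary placeholders; the commands need to be run as root; and note that running nft under valgrind measures only the memory usage of the userspace tool, not that of the kernel.

# Create a table and an empty set to experiment with.
nft add table inet bench
nft add set inet bench s4 '{ type ipv4_addr; }'

# Generate a batch file that adds 65536 elements, then time loading it
# with bash's time keyword.
for ((i = 0; i < 65536; i++)); do
        printf 'add element inet bench s4 { 10.0.%d.%d }\n' $((i / 256)) $((i % 256))
done > elements.nft
time nft -f elements.nft

# Time 1000 lookups. Note that this mostly measures the cost of running
# nft itself; the in-kernel lookup is a tiny fraction of it.
time for ((i = 0; i < 1000; i++)); do
        nft get element inet bench s4 '{ 10.0.123.45 }' >/dev/null
done

# Empty the set, then profile the same load with Massif.
nft flush set inet bench s4
valgrind --tool=massif --massif-out-file=nft.massif nft -f elements.nft
ms_print nft.massif | less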