On 2021-10-27, at 11:21:38 +0200, Pablo Neira Ayuso wrote:
> On Sun, Oct 03, 2021 at 03:58:32PM +0100, Jeremy Sowden wrote:
> > On 2021-10-02, at 16:22:30 +0100, Jeremy Sowden wrote:
> > > The documentation describes the syntax of limit statements thus:
> > >
> > >   limit rate [over] packet_number / TIME_UNIT [burst packet_number packets]
> > >   limit rate [over] byte_number BYTE_UNIT / TIME_UNIT [burst byte_number BYTE_UNIT]
> > >
> > >   TIME_UNIT := second | minute | hour | day
> > >   BYTE_UNIT := bytes | kbytes | mbytes
> > >
> > > This implies that one may specify a limit as either of the following:
> > >
> > >   limit rate 1048576 / second
> > >   limit rate 1048576 mbytes / second
> > >
> > > However, the latter currently does not parse:
> > >
> > >   $ sudo /usr/sbin/nft add filter input limit rate 1048576 mbytes / second
> > >   Error: wrong rate format
> > >   add filter input limit rate 1048576 mbytes / second
> > >                    ^^^^^^^^^^^^^^^^^^^^^^^^^
> > >
> > > Extend the parser to support it.
> > >
> > > Signed-off-by: Jeremy Sowden <jeremy@xxxxxxxxxx>
> > > ---
> > >
> > > I can't help thinking that it ought to be possible to fold the two
> > >
> > >   limit rate [over] byte_number BYTE_UNIT / TIME_UNIT [burst byte_number BYTE_UNIT]
> > >
> > > rules into one.  However, my attempts to get the scanner to
> > > tokenize "bytes/second" as "bytes" "/" "second" (for example)
> > > failed.
> >
> > Having reread the Flex manual, I've changed my mind.  While it would
> > be possible, it would be rather fiddly and require more effort than
> > it would be worth.
>
> I can apply this workaround in the meantime, until we have a better
> solution, if this is an issue on your side.
>
> Did you get any bug report regarding this?

Can't quite remember how I found it.  I may have been doing some
testing in relation to this:

  https://lore.kernel.org/netfilter-devel/20211007201222.2613750-1-jeremy@xxxxxxxxxx/#r

I happened to notice that nft would accept all of these:

  limit rate 1048576/second
  limit rate 1048576 / second
  limit rate 1048576 mbytes/second

but not this:

  limit rate 1048576 mbytes / second

The problem is that the scanner defines a string as:

  ({letter}|[_.])({letter}|{digit}|[/\-_\.])*

This means that:

  1048576/second

cannot be tokenized as a string, but:

  mbytes/second

can.  Thus the scanner will tokenize both 1048576/second and:

  1048576 / second

as:

  number slash string

allowing this parser rule:

  LIMIT RATE limit_mode NUM SLASH time_unit limit_burst_pkts close_scope_limit

to match both.  On the other hand, the scanner will tokenize:

  mbytes/second

as:

  string

which is matched by the existing parser rule:

  LIMIT RATE limit_mode NUM STRING limit_burst_bytes close_scope_limit

but it will tokenize:

  mbytes / second

as:

  string slash string

and so I added a new parser rule to match the latter.

I did consider removing '/' from the scanner's definition of "string",
and it didn't seem to cause any unit test failures, but I dare say
somebody has used it somewhere weird that would break. :)

The other possibility I tried was to have a separate definition of
"string" active in the "SCANSTATE_LIMIT" start-condition, but that
turned out to require a lot more than just the one extra rule and to
be a much more fiddly and invasive change than just adding the parser
rule.

J.
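
P.S. The tokenization difference is easy to reproduce outside nft.
Below is a minimal stand-alone flex scanner, purely a sketch for
illustration: it reuses the "string" definition quoted above but
nothing else from the real scanner.l, and the lowercase token names
echo the informal ones used in this mail, not the bison tokens.  (It
is indented here for readability; flex definitions and patterns must
start in column 0.)

  %{
  #include <stdio.h>
  %}
  digit   [0-9]
  letter  [a-zA-Z]
  %%
  {digit}+                                      printf("number ");
  ({letter}|[_.])({letter}|{digit}|[/\-_\.])*   printf("string(%s) ", yytext);
  "/"                                           printf("slash ");
  [ \t\n]+                                      { /* skip whitespace between tokens */ }
  .                                             { /* ignore anything else */ }
  %%
  int yywrap(void) { return 1; }
  int main(void) { yylex(); printf("\n"); return 0; }

Built with "flex demo.l && cc lex.yy.c -o demo", feeding it
"1048576/second" or "1048576 / second" prints

  number slash string(second)

because a string may not begin with a digit, whereas "mbytes/second"
prints

  string(mbytes/second)

since the '/' in the string pattern lets maximal munch swallow the
whole thing as one token; only "mbytes / second" comes out as

  string(mbytes) slash string(second)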