Re: [PATCH nft 1/1] tests/shell: sanitize "handle" in JSON output

Hi,


On Sat, 2023-11-18 at 03:36 +0100, Phil Sutter wrote:
> On Fri, Nov 17, 2023 at 06:18:45PM +0100, Thomas Haller wrote:
> > The "handle" in JSON output is not stable. Sanitize/normalizeit to
> > 1216.
> > 
> > The number is chosen arbitrarily, but it's somewhat unique in the
> > code base. So when you see it, you may guess it originates from
> > sanitization.
> 
> Valid handles are monotonic starting at 1. Using 0 as a replacement
> is too simple?

Changed.

> 
> > Signed-off-by: Thomas Haller <thaller@xxxxxxxxxx>
> > ---
> > Note that only a few .json-nft files are adjusted, because
> > otherwise the patch is too large. Before applying, you need to
> > adjust them all, by running `./tests/shell/run-tests.sh -g`.
> 
> Just put the bulk change into a second patch?

It would require three patches to stay below the limit.

Also, it blows up the inbox of everybody on the list by 850K (57K
gzipped). The rest of the patch is generated; just generate it.

Alternatively,

  git fetch https://gitlab.freedesktop.org/thaller/nftables df984038a33c6da5b159e6f6458351c4fa673bf1
  git merge FETCH_HEAD
  


> 
> [...]
> > diff --git a/tests/shell/helpers/json-sanitize-ruleset.sh
> > b/tests/shell/helpers/json-sanitize-ruleset.sh
> > index 270a6107e0aa..3b66adabf055 100755
> > --- a/tests/shell/helpers/json-sanitize-ruleset.sh
> > +++ b/tests/shell/helpers/json-sanitize-ruleset.sh
> > @@ -6,7 +6,14 @@ die() {
> >  }
> >  
> >  do_sed() {
> > -	sed '1s/\({"nftables": \[{"metainfo": {"version": "\)[0-9.]\+\(", "release_name": "\)[^"]\+\(", "\)/\1VERSION\2RELEASE_NAME\3/' "$@"
> > +	# Normalize the "version"/"release_name", otherwise we have to regenerate the
> > +	# JSON output upon new release.
> > +	#
> > +	# Also, "handle" are not stable. Normalize them to 1216 (arbitrarily chosen).
> > +	sed \
> > +		-e '1s/\({"nftables": \[{"metainfo": {"version": "\)[0-9.]\+\(", "release_name": "\)[^"]\+\(", "\)/\1VERSION\2RELEASE_NAME\3/' \
> > +		-e '1s/"handle": [0-9]\+\>/"handle": 1216/g' \
> > +		"$@"
> >  }
> 
> Why not just drop the whole metainfo object? A dedicated test could
> still ensure its existence.

Normalization should only perform the absolute minimum of tampering.


> Also, scoping these replacements to line 1 is funny with single line
> input. Worse is identifying the change in the resulting diff. Maybe
> write a helper in python which lets you more comfortably sanitize
> input, sort attributes by key and output pretty-printed?

You mean, to parse and re-encode the JSON? That introduces additional
changes, which seems undesirable. That's why the regex is limited to
the first line (even if we only expect to ever see one line there).

Also, normalization via two regexes seems simpler than writing some Python.
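For illustration, here are the two expressions applied to a made-up
one-line ruleset (the sample input is hypothetical, not real `nft -j`
output; the replacement handle is 0 per your suggestion):

```shell
# Hypothetical one-line JSON ruleset, similar in shape to what
# `nft -j list ruleset` prints (sample data made up for illustration).
input='{"nftables": [{"metainfo": {"version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1}}, {"table": {"family": "ip", "name": "t", "handle": 7}}]}'

# Same two expressions as the patch, both scoped to line 1: the first
# masks version/release_name, the second rewrites every handle to 0.
sanitized=$(printf '%s\n' "$input" | sed \
    -e '1s/\({"nftables": \[{"metainfo": {"version": "\)[0-9.]\+\(", "release_name": "\)[^"]\+\(", "\)/\1VERSION\2RELEASE_NAME\3/' \
    -e '1s/"handle": [0-9]\+\>/"handle": 0/g')
echo "$sanitized"
```

Note the `\>` word boundary is a GNU sed extension, so this assumes GNU
sed (which the test suite already requires on Linux).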

Well, pretty-printing the output with `jq` would have the advantage
that future diffs might be smaller (changing individual lines vs.
replacing one large line). Still, I think it's better to keep the
amount of post-processing minimal.
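For comparison, a minimal sketch of that idea (sample input made up;
`jq -S` sorts object keys recursively and pretty-prints, which is
roughly the sanitize/sort/pretty-print helper you describe):

```shell
# Made-up one-line JSON; jq -S re-encodes it with sorted keys and
# 2-space indentation, so later diffs would touch individual lines.
printf '%s\n' '{"b": 2, "a": {"handle": 5}}' | jq -S .
# Expected:
# {
#   "a": {
#     "handle": 5
#   },
#   "b": 2
# }
```

The downside remains that re-encoding is itself a (small) semantic
transformation of the stored output.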


> 
> In general, the long lines in your scripts make them quite hard to
> read.
> Any particular reason why you don't stick to the 80 columns maxim?

I wrapped two lines in the patch.



Thomas




