Re: [PATCH v6 1/2] fetch-pack: redact packfile urls in traces

On Wed, Nov 10 2021, Ivan Frade wrote:

> On Mon, Nov 8, 2021 at 5:53 PM Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> wrote:
>>
> ...
>>... Let's just:
>>
>>  1. Start reading the section
>>  2. Turn off tracing
>>  3. Parse the URIs as we go
>>  4. When done (or on the fly), scrub URIs, log any backlogged suppressed trace, and turn on tracing again
>
> This is a more generic redacting mechanism, but I understood that
> there is no need for it. Previous comments went in the direction of
> removing generality (e.g. not looking for a URI anywhere in the
> packet, but specifically for the packfile line format) and now this
> patch is very specific to redact packfile-uri lines in the protocol.

It's less generic, because it would live in the loop that consumes the
lines.

>> Instead of:
>>
>>  1. Set a flag to scrub stuff
>>  2. Because of the disconnect between fetch-pack.c and pkt-line.c,
>>     effectively implement a new parser for data we're already going to be
>>     parsing some microseconds later during the course of the request.
>
> pkt-line is only looking for the "<n-hex-chars>SP" shape. True that it
> encodes some protocol knowledge, but it is hardly a new parser.

Yeah, but why have find_packfile_uri_path() at all instead of just
moving the parsing code around?

We've already got the code that parses these lines, it's just a few
lines removed from the code you're adding...

>> That "turn off the trace" could be passing down a string_list/strbuf, or
>> even doing the same via a new member in "struct packet_reader", both
>> would be simpler than needing to re-do the parse.
>
> Saving the lines and delaying the tracing could also produce weird
> outputs, no? e.g. 3 lines received, the second doesn't validate, the
> program aborts and the trace doesn't show any of the lines that caused
> the problem. Or we would need to iterate in parallel through lines and
> saved-log-lines assuming they match 1:1. Nothing unsolvable, but I am
> not sure it is worthy the effort now.

It would only be weird if you do:

    download_later = []
    while (consume lines)
        download_later += buffered_lines;
    log lines;

I'm suggesting:

    download_later = []
    while (consume lines)
        raw, to_log = parse line
        log_line(to_log)
        download_later += raw

Sure, you'll need to do something in the case where the line doesn't
validate: should you redact it anyway, or log it as-is? Anyway, that's
also a caveat you've got now.

That's not iterating in parallel; it's having one for-loop instead of two.

I see now that that approach would also solve at least one
bug/misfeature in the packfile-uri handling, i.e.:

        for (i = 0; i < packfile_uris.nr; i++) {
            [...]
            start_command(...) [... to download the URI ...]
            [...]
            die("fetch-pack: pack downloaded from %s does not match expected hash %.*s",
        }

I.e. we've already received all the URIs, but then do validation on them
one at a time, so we might only notice that the server has sent us bad
data for the Nth URI after downloading the first N-1 packs.



