Re: RFD: fast-import is picky with author names (and maybe it should - but how much so?)

On Sun, Nov 11, 2012 at 12:00:44PM -0500, A Large Angry SCM wrote:

> >>>a) Leave the name conversion to the export tools, and when they miss
> >>>some weird corner case, like 'Author<email', let the user face the
> >>>consequences, perhaps an hour into the process.
> [...]
> >>>b) Do the name conversion in fast-import itself, perhaps optionally,
> >>>so if a tool missed some weird corner case, the user does not have to
> >>>face the consequences.
> [...]
> >>c) Do the name conversion, and whatever other cleanup and manipulations
> >>you're interested in, in a filter between the exporter and git-fast-import.
> >
> >Such a filter would probably be quite complicated, and would decrease
> >performance.
> >
> 
> Really?
> 
> The fast import stream protocol is pretty simple. All the filter
> really needs to do is pass through everything that isn't a 'commit'
> command. And for the 'commit' command, it only needs to do something
> with the 'author' and 'committer' lines, passing through everything
> else.
> 
> I agree that an additional filter _may_ decrease performance somewhat
> if you are already CPU constrained. But I suspect that the effect
> would be negligible compared to all of the SHA-1 calculations.

It might be measurable, as you are passing every byte of every version
of every file in the repo through an extra pipe. But more importantly, I
don't think it helps.

If there is not a standard filter for fixing up names, we do not need to
care. The user can use "sed" or whatever and pay the performance penalty
(and deal with the possibility of errors from being lazy about parsing
the fast-import stream).
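
To make the "lazy parsing" point concrete: a filter that is not lazy
already has to know enough of the stream grammar to skip over data
blocks, or it will happily rewrite file contents that happen to contain
"author ..." lines. An untested Python sketch (fix_ident is just a
placeholder for whatever cleanup you want, and 'tagger' lines in tag
commands are ignored for brevity):

  #!/usr/bin/env python3
  # Untested sketch: copy a fast-import stream through, rewriting only
  # 'author'/'committer' lines, and pass 'data' payloads along verbatim
  # so raw file contents are never mistaken for commands.
  import re
  import sys

  def fix_ident(line):
      # Placeholder: real name/email cleanup or mapping would go here.
      return line

  inp, out = sys.stdin.buffer, sys.stdout.buffer
  while True:
      line = inp.readline()
      if not line:
          break
      if line.startswith((b'author ', b'committer ')):
          line = fix_ident(line)
      out.write(line)
      m = re.match(rb'data (\d+)\n', line)
      if m:
          # Counted data block: copy exactly that many bytes untouched
          # (chunking omitted for brevity).
          out.write(inp.read(int(m.group(1))))
      elif line.startswith(b'data <<'):
          # Delimited data block: copy lines until the terminator.
          delim = line[len(b'data <<'):].rstrip(b'\n')
          while True:
              d = inp.readline()
              out.write(d)
              if not d or d.rstrip(b'\n') == delim:
                  break

Not rocket science, but already more than a one-line "sed"; that gap is
exactly the laziness being paid for above.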

If there is a standard filter, then what is the advantage in doing it as
a pipe? Why not just teach fast-import the same trick (and possibly make
it optional)? That would be simpler, more efficient, and it would make
it easier for remote helpers to turn it on (by passing a command-line
switch rather than setting up an extra process).

But what I don't understand is: what would such a standard filter look
like? Fast-import (or a filter) would already receive the exporter's
best attempt at a git-like ident string. We can clean up and normalize
things like whitespace (and we probably should if we do not do so
already). But beyond that, we have no context about the name; only the
exporter has that.

So if we receive:

  Foo Bar<foo.bar@xxxxxxxxxxx> <none@none>

or:

  Foo Bar<foo.bar@xxxxxxxxxxx <none@none>

or:

  Foo Bar<foo.bar@xxxxxxxxxxx

what do we do with it? Is the first part a malformed name/email pair,
and the second part crap added by a lazy exporter? Or does the
exporter want to keep the angle brackets as part of the name field? Is
there a malformed email in the last one, or no email at all?

The exporter is the only program that actually knows where the data came
from, how it should be broken down, and what is appropriate for pulling
data out of its particular source system. For that reason, the exporter
has to be the place where we come up with a syntactically correct and
unambiguous ident.
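
For comparison, what fast-import wants is something unambiguous like

  author Foo Bar <foo.bar@example.com> 1352653244 -0500

(made-up address and raw-format timestamp, obviously); anything short
of that leaves us guessing.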

I am not opposed to adding a mailmap-like feature to fast-import to map
identities, but it has to start with sane, unambiguous output from the
exporter.
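
For reference, git's existing .mailmap maps whatever appears in the
commits to a canonical name/email, one entry per line, along the lines
of (addresses made up):

  Jane Doe <jane@example.org> <jdoe@localhost>
  Jane Doe <jane@example.org> Jane D <jane@old.example.org>

A fast-import analogue would presumably look much the same. But note
that both sides of such a mapping are still well-formed idents; it is a
remapping step, not a rescue for a sloppy exporter.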

-Peff