Re: [PATCH] builtin/clean.c: Handle disappearing files

David Turner <dturner@xxxxxxxxxxxxxxxx> writes:

>> > Of course, in interactive use, very little harm is done if clean dies
>> > here: the user simply must notice that the clean has failed and retry
>> > it.  But in non-interactive use, scripts could fail.
>> >
>> > At least, I think that's what could be causing us to hit this error; I
>> > haven't actually done any research to see if this is true.
>> 
>> I find that the above argues that this patch is a bad idea.
>> 
>> The change sweeps the problem under the rug, killing the canary in
>> the mine, instead of motivating you to figure out why it is
>> happening to you.
>
> But it's totally legit for processes to create and delete files in the
> working tree at any time.  Maybe I'm editing during the clean, and my
> editor creates a backup file (or creates a lock file which it's going to
> move on top of my old file).

Isn't that exactly the kind of potential problem users need to be
aware of, waiting to bite them?  Maybe you are editing during the
clean, which notices a funny file your editor created and removes
it, and when the editor wants to do something useful with that
funny file, it is no longer there.  Sure, if your editor is quick
enough and races with "clean" with certain timing, it may cause
"clean" to notice and die.  But if "clean" does not die, that means
it won the race and removed what your editor needed to use, no?

Isn't it even worse for scripts?  If your "build statistics" thing
created a temporary file before "clean" started to run (causing
"clean" to notice it as cruft), and "clean" gets to remove it before
the "build statistics" thing finishes accumulating whatever data in
that file, then when the "build statistics" thing finally tries to
use the file, it is no longer there.

And isn't it worse still for scripts that drive "clean"?  By letting
"clean" remove what it thinks is cruft while other processes are
"creating and deleting files in the working tree at any time"
without coordination, such a script is actively making the other
processes unreliable.  If "clean" in such a script stops as soon as
it notices such a race, before doing any more damage, that would be
a good thing.  Retrying "clean" (to finish cleaning) does not sound
like a remedy; not running "clean" while other people are still
using the working tree is.  It's like somebody randomly running
"make clean" in the background every once in a while in a working
tree where I am trying to do real work.  Why is that "totally
legit"?  And hiding the problem by making "clean" ignore such a race
would not help the user fix such a script, would it?

Perhaps there is some assumption you have about the way the working
tree is being used that I haven't considered, and it is entirely
possible that this change makes sense under that special case, but
without knowing what that untold assumption is, the more I hear
about this topic from you, the less convinced I am that this is a
good change.