Re: Git force push fails after a rejected push (unpack failed)?


 



On Wed, Jul 08, 2015 at 07:41:48PM +0200, Johannes Sixt wrote:

> >Yes, but remember that git stores all of the objects for all of the
> >commits. So for some reason your push is perhaps trying to send an
> >object that the other side already has. Usually this does not happen
> >(the receiver says "I already have these commits, do not bother sending
> >their objects"), but it's possible that you have an object that is not
> >referenced by any commit, or a similar situation. It's hard to say
> >without looking at the repository.
> 
> After a non-fast-forward push fails, a subsequent forced push sends the same
> set of objects, which are already present at the server side, but are
> dangling objects.
> 
> Apparently, Git for Windows fails to replace the read-only files that live
> on the network file system.

I left one bit out from my original explanation, which is that
we generally prefer existing objects to new ones. So we would generally
want to throw out the new object rather than try to write it out. I'm
not sure why unpack-objects would try to write an object we already
have.

We also don't write objects directly, of course; we write to a temporary
file and try to link them into place. It really sounds more like the
"objects/d9" directory is where the permission problems are. But, hmm...

The code path should be unpack-objects.c:write_object, which calls
sha1_file.c:write_sha1_file, which then checks has_sha1_file(). These
days it uses the freshen_* functions instead of the latter, which do a
similar check. But they report failure if we cannot call utime() on
the file, preferring to write it out instead (this is the safer choice
from a preventing-prune-corruption perspective).
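
For reference, a minimal sketch of what that freshen check looks like
(paraphrased from the sha1_file.c helpers of that era; exact names and
error handling may differ):

  #include <time.h>
  #include <unistd.h>
  #include <utime.h>

  /* Bump the mtime of an existing loose object file; returns 0 on failure. */
  static int freshen_file(const char *fn)
  {
  	struct utimbuf t;
  	t.actime = t.modtime = time(NULL);
  	return !utime(fn, &t);
  }

  /*
   * Return 1 only if the file exists *and* we managed to freshen its
   * timestamp; a utime() failure (e.g. EPERM on a read-only share)
   * makes us report "not there", so the caller writes the object out
   * again.
   */
  static int check_and_freshen_file(const char *fn, int freshen)
  {
  	if (access(fn, F_OK))
  		return 0;
  	if (freshen && !freshen_file(fn))
  		return 0;
  	return 1;
  }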

So it's possible that the sequence is:

  - unpack-objects tries to write object 1234abcd...

  - write_sha1_file calls freshen_loose_object

  - we call access("12/34abcd...", F_OK) and see that it does indeed
    exist

  - we call utime("12/34abcd...") which fails (presumably due to EPERM);
    we return failure and assume we must write out the object

  - write_sha1_file then writes to a temporary file and tries to link
    it into place. Now what? If we get EEXIST, we say "OK, somebody else
    beat us here" and consider that a success. But presumably we get
    some other error here (which may even be a Windows-ism), we fall
    back to rename(), and that fails with EPERM, which we then report
    (sketched below).
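
In code, that last step is roughly the following. This is a simplified
sketch of the move-temp-into-place logic, not the exact sha1_file.c
source; the function name and error message are placeholders:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /*
   * Simplified sketch of moving a freshly written temp file into its
   * final objects/xx/... location. Returns 0 on success, -1 on error.
   */
  static int move_temp_to_file_sketch(const char *tmpfile, const char *filename)
  {
  	if (!link(tmpfile, filename)) {
  		unlink(tmpfile);
  		return 0;               /* linked into place, done */
  	}
  	if (errno == EEXIST) {
  		unlink(tmpfile);
  		return 0;               /* somebody else beat us here */
  	}
  	/*
  	 * Any other link() failure (cross-device, unsupported on the
  	 * filesystem, or whatever Windows maps it to) falls back to
  	 * rename(). On a read-only network share this rename() is the
  	 * call that fails with EPERM, and that is the error reported
  	 * back to the pusher.
  	 */
  	if (!rename(tmpfile, filename))
  		return 0;
  	fprintf(stderr, "unable to write object file %s: %s\n",
  		filename, strerror(errno));
  	unlink(tmpfile);
  	return -1;
  }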

If that's the case, then one solution is to have the
timestamp-freshening code silently report success, and skip writing out
the object. I'm not entirely comfortable with that, just because it is
loosening a safety mechanism. But perhaps we could loosen it _only_ in
the case of checking the loose object, and when we get EPERM. We know
that the next step is going to be writing out that same loose object,
which is almost certainly going to fail for the same reason.
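
A rough sketch of that loosening, building on the check_and_freshen_file()
shape above (hypothetical and untested; a real patch would confine it to
the loose-object case):

  #include <errno.h>
  #include <time.h>
  #include <unistd.h>
  #include <utime.h>

  /*
   * Hypothetical variant: if the object file is there but utime() fails
   * with EPERM, pretend the freshen succeeded rather than forcing a
   * rewrite that is going to hit the same permission problem anyway.
   */
  static int check_and_freshen_loose_sketch(const char *fn, int freshen)
  {
  	struct utimbuf t;

  	if (access(fn, F_OK))
  		return 0;
  	if (!freshen)
  		return 1;
  	t.actime = t.modtime = time(NULL);
  	if (!utime(fn, &t))
  		return 1;
  	/* treat "no permission to touch it" as "we already have it" */
  	return errno == EPERM;
  }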

I dunno. The whole concept of trying to write to an object database for
which you do not have permissions seems a little bit weird. This would
definitely be a workaround. But I suspect it did work prior to v2.2.0.

-Peff


