Re: "git fsck" not detecting garbage at the end of blob object files...

On Sat, Jan 07, 2017 at 10:47:03PM +0100, Dennis Kaarsemaker wrote:

> On Sat, 2017-01-07 at 07:50 -0500, John Szakmeister wrote:
> > I was perusing StackOverflow this morning and ran across this
> > question: http://stackoverflow.com/questions/41521143/git-fsck-full-only-checking-directories/
> > 
> > It was a simple question about why "checking objects" was not
> > appearing, but in it was another issue.  The user purposefully
> > corrupted a blob object file to see if `git fsck` would catch it by
> > tacking extra data on at the end.  `git fsck` happily said everything
> > was okay, but when I played with things locally I found out that `git
> > gc` does not like that extra garbage.  I'm not sure what the trade-off
> > needs to be here, but my expectation is that if `git fsck` says
> > everything is okay, then all operations using that object (file)
> > should work too.
> > 
> > Is that unreasonable?  What would be the impact of fixing this issue?
> 
> If you do this with a commit object or tree object, fsck does complain.
> I think it's sensible to do so for blob objects as well.

The existing extra-garbage check is in unpack_sha1_rest(), which is
called as part of read_sha1_file(). And that's what we hit for commits
and trees. However, we check the sha1 of blobs using the streaming
interface (in case they're large). I think you'd want to put a similar
check into read_istream_loose(). Note that if you're grepping for it,
that function is hidden behind a macro; look for read_method_decl(loose).
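To illustrate the kind of check unpack_sha1_rest() performs (a minimal
sketch in Python rather than git's C, driving zlib directly): a loose
object file is a single zlib stream, so any input bytes left over after
the stream ends are trailing garbage.

```python
import zlib

# Model a loose blob: a zlib-deflated "<type> <size>\0<payload>" buffer.
payload = b"blob 5\x00hello"
clean = zlib.compress(payload)
corrupt = clean + b"GARBAGE"  # extra bytes tacked onto the object file

def inflate_and_check_tail(data):
    """Inflate a zlib stream and report any bytes left after it ends."""
    d = zlib.decompressobj()
    out = d.decompress(data)
    # Once the stream terminates, any remaining input lands in
    # unused_data; a clean object file leaves this empty.
    return out, d.unused_data

out, tail = inflate_and_check_tail(corrupt)
print(out == payload)  # the payload itself inflates fine
print(tail)            # the trailing junk a stricter fsck could flag
```

The payload decompresses correctly either way, which is exactly why the
corruption goes unnoticed unless someone checks for leftover input.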

I'm actually not sure if this should be downgraded to a warning. It's
true that it's a form of corruption, but it doesn't actually prevent us
from getting the data we need to complete the operation. Arguably fsck
should be more picky, but it is just relying on the same parse_object()
code path that the rest of git uses.

I doubt anybody cares too much either way, though. It's not like this is
a common thing.

I did notice another interesting case when looking at this. Fsck ends up
in fsck_loose(), which has the sha1 and path of the loose object. It
passes the sha1 to fsck_sha1(), and ignores the path entirely!

So if you have a duplicate copy of the object in a pack, we'd actually
find and check the duplicate. This can happen, e.g., if you had a loose
object and then fetched a thin pack which made a copy of the loose
object to complete the pack.

Probably fsck_loose() should be more picky about making sure we are
reading the data from the loose version we found.
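For context (a sketch of the layout, not git's actual C code): the path
that fsck_loose() is handed already pins down one specific on-disk file,
since git stores a loose object at objects/<first 2 hex digits of the
sha1>/<remaining 38 digits>, with the sha1 taken over a
"<type> <size>\0<content>" header plus payload.

```python
import hashlib
import os

def loose_object_path(objects_dir, content, obj_type=b"blob"):
    """Compute the loose-object path git would use for this content.

    Git hashes "<type> <size>\\0<content>" and stores the result at
    objects/<first 2 hex chars>/<remaining 38 hex chars>.
    """
    store = obj_type + b" " + str(len(content)).encode() + b"\x00" + content
    sha1 = hashlib.sha1(store).hexdigest()
    return os.path.join(objects_dir, sha1[:2], sha1[2:]), sha1

path, sha1 = loose_object_path(".git/objects", b"hello\n")
print(sha1)  # ce013625030ba8dba906f756967f9e9ca394464a
print(path)
```

Since the sha1 alone resolves through the normal object-lookup machinery
(which may prefer a packed copy), using the path directly is what would
guarantee fsck actually reads the loose file it found.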

-Peff


