Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag

On Sun, Mar 26, 2017 at 11:07:02PM -0700, Junio C Hamano wrote:

> > No, I don't think so. We don't trust the trailer hash for anything to do
> > with corruption; we actually inflate the objects and see which ones we
> > got. So the victim will notice immediately that what the attacker sent
> > it is insufficient to complete the fetch (or push), and will refuse to
> > update the refs. The fetch wastes a transfer, but nobody gets corrupted.
> 
> In the scenario I was presenting, both the original fetch that gives
> one packdata and the later fetch that gives another packdata (which
> happens to share the csum-file trailing checksum) satisfy the "does
> the new pack give us enough objects to really complete the tips of
> refs?" check.

Right, my point was that we do that check _after_ throwing away the
duplicate-named pack. So you cannot fool that check, update the ref, and
then throw away the pack to get a corrupt receiver. The receiver throws
away the pack first, then says "hey, I don't have all the objects" and
aborts.
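
To make the ordering concrete, here is a toy sketch (the helper names
are hypothetical stand-ins, not our actual functions):

  #include <stdio.h>

  /* stand-in: installing the pack drops it if a pack with the same
   * name already exists */
  static int install_pack(const char *name)
  {
          fprintf(stderr, "note: %s exists; new pack discarded\n", name);
          return 0;
  }

  /* stand-in: walk from the proposed refs, inflating every object */
  static int refs_fully_connected(void)
  {
          return 0; /* objects from the discarded pack are missing */
  }

  int main(void)
  {
          install_pack("pack-1234abcd");
          if (!refs_fully_connected()) {
                  fprintf(stderr, "fatal: missing objects; "
                          "refusing to update refs\n");
                  return 1; /* abort before any ref moves */
          }
          return 0; /* only now would we update the refs */
  }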

That said...

> The second fetch transfers, we validate the packdata using index-pack
> (we may pass --check-self-contained-and-connected and we would pass
> --strict if transfer-fsck is set), we perhaps even store it in
> quarantine area while adding it to the list of in-core packs, make
> sure everything is now connected from the refs using pre-existing
> packs and this new pack.  The index-pack may see everything is good
> and then would report the resulting pack name back to
> index_pack_lockfile() called by fetch-pack.c::get_pack().

These are interesting corner cases. We only use
--check-self-contained-and-connected with clones, but you may still have
packs from an alternate during a clone (although I think the two packs
would be allowed to co-exist indefinitely, then).

The quarantine case is more interesting. The two packs _do_ co-exist
while we do the connectivity check there, and then afterwards we can
have only one. So that reversal of operations introduces a problem, and
you could end up with a lasting corruption as a result.
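
The quarantine flow runs those steps in the opposite order; again a
toy sketch with hypothetical helpers:

  #include <stdio.h>

  /* both packs are still visible inside the quarantine, so the
   * connectivity walk finds every object and passes */
  static int refs_fully_connected(void)
  {
          return 1;
  }

  /* migrating out of quarantine renames over a same-named pack,
   * silently dropping one of the two */
  static void migrate_from_quarantine(const char *name)
  {
          fprintf(stderr, "note: %s collided; one copy dropped\n", name);
  }

  int main(void)
  {
          if (!refs_fully_connected())
                  return 1;
          migrate_from_quarantine("pack-1234abcd");
          /* the refs get updated on the strength of a pack that is
           * no longer on disk */
          return 0;
  }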

> But even though both of these packs _are_ otherwise valid (in the
> sense that they satisfactorily transfer objects necessary to make
> the refs that were fetched complete), because we name the packs
> after the trailer hash and we cannot have two files with the same
> name, we end up throwing away the later one.

I kind of wonder if we should simply allow potential duplicates to
co-exist. The pack names really aren't used for duplicate suppression in
any meaningful sense. We effectively use them as UUIDs so that each new
pack gets a unique name without having to do any locking or other
coordination. It would not be unreasonable to say "oops, 1234abcd
already exists; I'll just increment and call this new one 1234abce". The
two presumably-the-same packs would then co-exist until the next "repack
-a" removes duplicates (not just at the pack level, but at the object
level).
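
A rough sketch of that increment idea (hypothetical; our finalize step
today just renames over any same-named file):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* bump a lowercase-hex id in place: "1234abcd" -> "1234abce",
   * carrying past 'f' as needed */
  static void hex_increment(char *hex, size_t len)
  {
          while (len-- > 0) {
                  char *c = &hex[len];
                  if (*c == 'f') {
                          *c = '0'; /* carry into the next digit */
                          continue;
                  }
                  *c = (*c == '9') ? 'a' : *c + 1;
                  return;
          }
  }

  int main(void)
  {
          char id[] = "1234abcd";
          char path[64];

          /* keep bumping until no pack with that name exists */
          for (;;) {
                  snprintf(path, sizeof(path), "pack-%s.pack", id);
                  if (access(path, F_OK) != 0)
                          break;
                  hex_increment(id, strlen(id));
          }
          printf("would store as %s\n", path);
          return 0;
  }

(Note the obvious race between the access() check and actually
creating the file; see below.)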

The biggest problem there is that "claiming" a pack name is not
currently atomic. We just do it blindly. So switching to some other
presumed-unique UUID might actually be easier (whether SHA-256 of the
pack contents or some other method).
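
For example, opening with O_CREAT|O_EXCL would make the claim atomic;
a sketch (again hypothetical, not what we do today):

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          const char *path = "pack-1234abcd.pack";
          int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0444);

          if (fd < 0) {
                  if (errno == EEXIST)
                          fprintf(stderr, "%s already claimed; "
                                  "pick another name\n", path);
                  return 1;
          }
          /* the name is ours; write the pack data and close */
          close(fd);
          return 0;
  }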

> As I said, it is a totally different matter if this attack scenario
> is a practical threat.  For one thing, it is probably harder than
> just applying the straight "shattered" attack to create a single
> object collision--you have to make two packs share the same trailing
> hash _and_ make sure that both of them record data for valid
> objects.  But I am not convinced that it would be much harder
> (e.g. I understand that zlib deflate can be told not to attempt
> compression at all, and the crafted garbage used in the middle part
> of the "shattered" attack can be such a blob object expressed as a
> base object--once the attacker has two such packfiles that hash the
> same, two object names for these garbage blobs can be used to
> present two versions of the values for a ref to be fetched by these
> two fetch requests).

Yeah, I think we can assume it will be possible with SHAttered levels of
effort. An attacker can use it to create a persistent corruption by
having somebody fetch from them twice. So not really that interesting an
attack, but it is something. I still think that ditching SHA-1 for the
naming is probably a better fix than worrying about SHA-1 collisions.
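
To illustrate your zlib aside above: asking for Z_NO_COMPRESSION makes
deflate emit only "stored" blocks, so crafted bytes survive
byte-for-byte inside a valid zlib stream. A minimal demo (link with
-lz):

  #include <stdio.h>
  #include <zlib.h>

  int main(void)
  {
          unsigned char in[] = "crafted collision bytes go here";
          unsigned char out[128];
          z_stream s = {0};

          if (deflateInit(&s, Z_NO_COMPRESSION) != Z_OK)
                  return 1;
          s.next_in = in;
          s.avail_in = sizeof(in);
          s.next_out = out;
          s.avail_out = sizeof(out);
          deflate(&s, Z_FINISH); /* level 0: stored blocks only */

          /* the input appears verbatim between a 2-byte zlib header
           * plus 5-byte stored-block header and a 4-byte Adler-32
           * trailer */
          printf("deflated %u -> %lu bytes\n", (unsigned)sizeof(in),
                 (unsigned long)s.total_out);
          deflateEnd(&s);
          return 0;
  }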

-Peff


