Re: What's cooking in git.git (Sep 2024, #05; Fri, 13)

On Tue, Sep 17, 2024 at 11:27:26AM +0200, Patrick Steinhardt wrote:
> On Tue, Sep 17, 2024 at 04:17:26AM -0400, Taylor Blau wrote:
> > On Tue, Sep 17, 2024 at 08:30:45AM +0200, Patrick Steinhardt wrote:
> > > There was also the open question of whether we want to rename the new
> > > `_fast` hash functions to `_unsafe` to make it stand out more that they
> > > are indeed not safe for cryptographic uses.
> >
> > I'm fine with renaming it to unsafe_, etc. But the more I think about
> > this collision in loose object files, the less I think it matters in
> > practice.
> >
> > We would only hit it when trying to write a loose object and racing with
> > another writer which is trying to write that same object as loose using
> > different compression settings, which seems awfully rare.
> >
> > Perhaps there is some use-case or scenario that I am missing, but this
> > seems like a very rare case to disable a check that is otherwise useful.
>
> What I don't understand: why don't we just decompress the objects to
> compare their contents? It shouldn't be a huge hit on performance as we
> basically don't expect the code to ever trigger. And it shouldn't be a
> lot of code either, so I don't see much of a reason to not do this the
> correct way.

Alternatively, we could avoid using the _unsafe SHA-1 implementation in
the hashfile API only when writing loose objects, and skip the collision
check entirely.

Thanks,
Taylor
