Re: [PATCH 0/3] Strengthen fsck checks for submodule URLs

On Thu, Nov 14, 2024 at 07:40:02AM +0900, Junio C Hamano wrote:

> > Is there any possibility of "loosening the fsck.gitmodulesUrl
> > severity", as Jeff suggested?
> 
> Isn't the suggestion not about butchering the rest of the world but
> about locally configuring fsck.gitmodulesUrl down from error to
> warning?  I personally think excluding a single known-offending blob
> without doing such loosening is a much better idea in that it
> prevents *new* offending instances from getting into the repository,
> while allowing an existing benign and honest mistake to stay in your
> history.  Loosening the severity of a class of check means you will
> accept *new* offending instances, which may very well be malicious,
> unlike the existing benign one you know about.
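A minimal sketch of the exclusion Junio describes, using fsck.skipList. The
blob id below is a placeholder (fsck prints the real id when it flags the
object), and the throwaway repo is just for illustration; in practice you
would run this inside the affected repository:

```shell
# Throwaway repo for illustration; in practice this happens in the
# affected repository.
repo=$(mktemp -d)
git init -q "$repo"

# Placeholder id standing in for the actual offending .gitmodules blob;
# the skip-list file takes one full object name per line.
echo 0123456789abcdef0123456789abcdef01234567 >"$repo/.git/fsck-skip"

# fsck now skips the listed object while still reporting any *new*
# gitmodulesUrl offenders at the default "error" severity.
git -C "$repo" config fsck.skipList "$repo/.git/fsck-skip"
```

That way the check itself stays at full strength; only the one known,
benign blob is exempted.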

The trouble with configuring fsck.gitmodulesUrl yourself (or using
skipList, which I agree is better if you can do it) is that it only
helps your local repo, and not:

 1. Hosting sites which may need special work-arounds to let you push up
    to them, since they are using receive.fsckObjects.

    In theory this is a good thing, because it prevents dumb mistakes
    from getting distributed in the first place. But it also is a pain
    for projects with established history.

 2. All of the people who are going to clone your repo, who might need
    to follow special instructions.

    The only reason this hasn't been a huge pain in practice is that
    almost nobody turns on transfer.fsckObjects in the first place. In
    theory the people who do turn it on know enough to examine the
    objects themselves and decide if it's OK. I don't know how true that
    is in practice, though (and certainly it would be nice to turn this
    feature on by default, but I do worry about people getting caught up
    in exactly these kinds of historical messes).
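For reference, both sides do have per-direction knobs (the fsck.<msg-id>
family documented in git-config and git-fsck), so a hosting site or a
cloner could loosen just this one check rather than disabling object
checking wholesale. A sketch against a throwaway repo:

```shell
repo=$(mktemp -d)
git init -q "$repo"

# A host running receive.fsckObjects can downgrade only this msg-id for
# incoming pushes instead of turning the checks off entirely.
git -C "$repo" config receive.fsck.gitmodulesUrl warn

# Likewise, a cloner with transfer.fsckObjects enabled can loosen the
# same check on the fetch side.
git -C "$repo" config fetch.fsck.gitmodulesUrl warn
```

But of course that still requires every host and every cloner to know
about, and apply, the workaround; it does not fix anything globally.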

We did add the gitmodulesUrl check to help with malicious URLs. But it
was always an extra layer of defense over the real fix, which was in the
credential code. It's _possible_ that a newly discovered vulnerability
will be protected by the existing fsck check, but I'm a little skeptical
about its security value at this point (especially because hardly
anybody runs it locally, and protection on the hosting sites isn't that
hard to work around).

So if it's causing people real pain in practice, I think there could be
an argument for downgrading the check to a warning. I don't have a
strong feeling that we _should_ do that, only that I don't personally
reject it immediately as an option.

-Peff



