On Tue, May 28, 2024 at 02:16:48PM +0000, Chris Burr wrote:

> According to the documentation I expected to be able to set the
> safe.directory option with either "-c" or "GIT_CONFIG_" environment
> variables. From the "safe.directory" documentation:
>
> > This config setting is only respected in protected configuration
>
> Protected configuration refers to the system, global, and command scopes.
>
> Command scope includes both the GIT_CONFIG_ variables and the "-c" flag.
>
> This works for operations like "git status" but not "git clone".

Yeah, the problem is that Git clears many repo-related environment
variables when it "switches" from one repo to another. And in the case
of clone, there are two "sides": the clone process in the local repo,
and the upload-pack process which runs in the remote repo. And so when
we spawn upload-pack, we clear the environment, including the config
variables.

And the recent changes in v2.45.1, etc., are for upload-pack to be more
careful about the repository in which it runs (unless safe.directory
says it's OK). So that's where the config is checked, but your values
don't make it there.

And that's true of other variables, too. Running "git -c
receive.denyDeletes=false push" won't do what you want.

Now at first glance I think you could argue that the config variables
should be retained in this case. After all, they are attached to the
command you ran, not a specific repository. But the main reason we
clear them is that for most transports, they wouldn't be retained
anyway! If your remote is over http, git://, or even most
configurations of ssh (which will not pass arbitrary config variables),
the other side likewise would not see them. So this is keeping the
local-repo case consistent with other transports.

The workaround is generally to attach the config directly to the
upload-pack process, like:

  git clone -u 'git -c safe.directory="*" upload-pack' ...

But:

  1. Obviously that is horrible and not something users should have to
     know about.

  2. It doesn't always work! If you specify --no-local, then it does,
     because we kick off a single upload-pack process and read
     everything from it. But without that, the local clone process
     tries to optimize things by hard-linking the original files. And
     that code does its own ownership check. So you need:

       git -c safe.directory="*" \
           clone -u 'git -c safe.directory="*" upload-pack' \
           ...

     to cover both processes.

I'm not sure how to make it more friendly, though. Passing config
across the local-remote barrier is a can of worms, since many servers
would not want to trust client-provided config.

The simplest solution I can think of is that the ownership check should
not go into upload-pack at all. It should be totally in git-clone,
which should check the directory itself (obviously only when it knows
the transport is the local filesystem). That would largely protect the
original case we cared about (accidentally interacting with an
untrusted repo on the local filesystem) and leave other more complex
ones untouched (like git-daemon serving user-owned repositories as the
"nobody" user).

But I did just think of this approach, so there might be corner cases
that aren't well covered. And certainly the git-daemon thing _could_ be
a security risk, but I think for the most part if you configured
git-daemon to serve other people's repos, you know what you're doing.

-Peff
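
P.S. To make the failing case concrete, here's roughly the sequence I'd
expect somebody to try (the /path/to/repo bits are just placeholders
for a repository owned by some other user):

  # command-scope config is respected for ordinary commands...
  git -C /path/to/repo -c safe.directory=/path/to/repo status

  # ...and likewise via the GIT_CONFIG_* environment variables
  GIT_CONFIG_COUNT=1 \
  GIT_CONFIG_KEY_0=safe.directory \
  GIT_CONFIG_VALUE_0=/path/to/repo \
  git -C /path/to/repo status

  # but for a local clone, the spawned upload-pack gets a cleaned
  # environment and never sees the setting, so its ownership check
  # still rejects the source repo
  git -c safe.directory=/path/to/repo clone /path/to/repo copy

And as discussed above, pointing "-u" at a config-carrying upload-pack
only covers everything on its own when the hard-link optimization is
out of the picture, e.g.:

  git clone --no-local -u 'git -c safe.directory="*" upload-pack' \
      /path/to/repo copy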