Re: Locking binary files

On Tue, 23 Sep 2008, Junio C Hamano wrote:

> Daniel Barkalow <barkalow@xxxxxxxxxxxx> writes:
> 
> > I think the right tool on the git side is actually a "smudge/clean" 
> > script. When you check something out, git converts it from the 
> > repository-stored form to a working tree form using a script (if there is 
> > one configured); this could check whether you've got the appropriate lock, 
> > and make the file unwritable if you don't.
> 
> An obvious question is "how would such a script check the lock when you
> are 30,000 ft above ground"; in other words, this "locking mechanism"
> contradicts the very nature of the distributed development theme.  The
> best mechanism should always be on the human side.  An SCM augments
> inter-developer communication, but it is not a _substitute_ for
> communication.
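
(For reference, the wiring for the smudge/clean scripts I mean looks 
roughly like this; the filter name "lock" and the script name 
"lock-smudge" are made up for illustration:

	# .gitattributes
	*.psd	filter=lock

	# repository config
	[filter "lock"]
		smudge = lock-smudge
		clean = cat

The smudge script reads the blob on stdin and must write it to stdout; 
since git itself writes the working tree file after the filter runs, the 
lock check and any chmod would have to happen as a side effect, or in a 
hook.)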

If you're offline, you can't get new locks, nor release them. But the 
script can still make reasonable decisions if it remembers which locks 
you acquired before going offline.

On the other hand, you can just make the file writable yourself while 
disconnected, and nothing bad happens to anybody else; if someone else 
locks the file and starts working, they'll block your eventual push until 
they push and you merge. And nothing too bad happens to you; you get stuck 
redoing the change later (as a merge), but (a) you would have had to do 
the work then anyway; (b) you knew you weren't protecting yourself; and 
(c) at least you got to practice on the plane.
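
(Concretely, "making the file writable yourself" is nothing more than, 
with an illustrative path:

	chmod u+w images/logo.psd

and from there you're on your own until you reconnect.)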

The point of the locking is just that, if you get the lock for a 
particular file in a particular branch on a particular shared repository, 
you can be sure you won't have to merge that file in order to push there, 
and you can sort this out in advance, before your push is even ready. A 
secondary concern is that you might want to stop yourself from working on 
certain things without this kind of reservation, but that's a local 
decision.

> But if you limit the use case to an always tightly connected environment
> (aka "not distributed at all"), I agree the above would be a very
> reasonable approach.
> 
> Such a setup would need a separate locking infrastructure and an end user
> command that grabs the lock and when successful makes the file in the work
> tree read/write.  The user butchers the contents after taking the lock,
> saves, and then when running "git commit", probably the post-commit hook
> would release any relevant locks.

The lock needs to last until you push to the repository the lock is for; 
otherwise you have the exclusive ability to make changes, but someone who 
grabs the lock right after you release it will still be working on the 
version without your change, which is what the lock is supposed to 
prevent.
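
So the lifecycle would look something like this, where "git lock" and 
"git unlock" are hypothetical client commands (nothing like them exists 
today):

	git lock origin master images/logo.psd     # grab it; fails if taken
	# ...edit images/logo.psd...
	git commit -a
	git push origin master                     # no merge needed on that file
	git unlock origin master images/logo.psd   # release only after the push

Releasing in a post-commit hook instead would reopen the window between 
commit and push.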

> All these can be left outside the scope of git, as they can be hooked into
> git with the existing infrastructure. Once a BCP materializes it could be
> added to contrib/ just like the "paranoid" update hook.

It would be handy to link against some of git, since the client side will 
want to use git config files, remotes, and refspecs to figure out which 
lock to ask for and how to communicate with the target repository; the 
process of getting a lock also requires checking that you're up-to-date; 
and git's got a bunch of useful code for atomic file updates and 
repository-scoped filename management. But adding this doesn't have to 
modify any existing behavior.
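
(A rough sketch of the client side, assuming the current branch is 
"master" and a hypothetical lock service lives alongside the remote:

	remote=$(git config branch.master.remote)   # usually "origin"
	url=$(git config remote.$remote.url)        # where the locks would live
	git fetch $remote
	# up-to-date check: the remote branch must already be in our history
	test -z "$(git rev-list HEAD..$remote/master)" ||
		{ echo "not up-to-date"; exit 1; }
	# ...then ask the (hypothetical) lock service at $url for the lock...

Everything except the lock request itself is plumbing git already has.)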

	-Daniel
*This .sig left intentionally blank*
