Re: git-add,& "file vanishing" -> need git-add again

Firstly, apologies for the mailing list appearing twice in the original To: line; dunno how that happened.

|On 12/22/06, David Tweed <tweed314@xxxxxxxxxxx> wrote:
|> Sidenote: I'm moving the database from the old format to the new one by repeatedly unpacking
|> the old database for snapshot X, git-add'ing any file names which have _never_ been in any snapshot
|> before, git-commit -a, git-tag, then remove all the files unpacked by the
|> old database and move onto snapshot X+1. This takes less than a second per snapshot.
|
|Not sure how large your snapshots are -- a second sounds like a long
|time for git operations. While it is a bit more complex, you _can_
|operate directly on the index, and the "snapshot" never needs to hit
|the disk as such during your migration.


By trying to be brief I was rather cryptic. What I was trying to say was:



Running the git commands described earlier in the message in a script,
I see that certain files are missing from the git tree generated by a
commit made at a time when I know a file I'd previously git-added had
"reappeared" in the working directory. I'm hypothesising that this is
because, when the file disappears, the machinery in git discards the
`track this file name' information. However, I haven't dug into the
git code to check that's the correct explanation (and would prefer not
to). If this is why the files aren't being tracked, I can try to
script around the issue by git-adding all the files I want tracked in
the snapshot before the git-commit -a. To help anyone thinking about
whether this explanation is right: the working directory is repeatedly
being wiped and refilled from my old backup system within a second, so
often all the files have both creation and modification times set to
the current second regardless of whether their content has changed.
This is a really weird thing to do and might in some way be
responsible for the untracked files (cf. racy-git).
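
In case it's useful to anyone, the work-around I have in mind is
roughly the following (untested sketch; `unpack-old-db' is a made-up
name for my unpacking step):

    #!/bin/sh
    # replay each old snapshot into git, re-adding everything first
    # so names that vanished and then reappeared get tracked again
    for X in $(seq 1 2000); do
        unpack-old-db $X          # hypothetical: refills the working dir
        git-add .                 # (re-)register every file name present
        git-commit -a -m "snapshot $X"
        git-tag "snap-$X"
        rm -rf ./*                # wipe before the next snapshot
    done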



Most of the maybe half-second overhead is coming from my script
unpacking the files with gzip from my old database; git seems more
than fast enough.


|Have a look at how the cvsimport script works for an example.


As it's my personal db which I'll only convert once, if I can just
make replaying work I don't need anything more complicated; I've only
got 2000-odd snapshots of 2500-odd files. However, the temporarily
disappearing file issue is one I think I'll face with any cron-based
committing strategy, so I do need to solve it.
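
For the archives, my understanding of the operate-directly-on-the-index
route you mention is something along these lines (untested sketch; I
haven't checked the exact plumbing options, so treat the flags as
approximate, and old-db/$X/somefile.gz is a made-up path):

    # stream each file's content straight into the object database,
    # so the snapshot never needs to exist as files on disk
    sha=$(gzip -dc old-db/$X/somefile.gz | git-hash-object -w --stdin)
    git-update-index --add --cacheinfo 100644 $sha somefile
    # ...repeat for every file in the snapshot, then:
    tree=$(git-write-tree)
    commit=$(echo "snapshot $X" | git-commit-tree $tree -p $(git-rev-parse HEAD))
    git-update-ref HEAD $commit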

cheers, dave tweed





		
