On Thu, 26 Oct 2006, Shawn Pearce wrote:

> Unfortunately it does not completely work.
>
> What happens when the incoming pack (steps #2 and #3) takes 15
> minutes to upload (slow ADSL modem, lots of objects) and the
> background repack process sees those temporary refs and starts
> trying to include those objects?  It can't walk the DAG that those
> refs point at because the objects aren't in the current repository.
>
> From what I know of that code, the pack-objects process will fail to
> find the object pointed at by the ref, rescan the packs directory,
> find no new packs, look for the object again, and abort over the
> "corruption".
>
> OK, so the repository won't get corrupted, but the repack would be
> forced to abort.

Maybe this is the best way out?  Abort git-repack with "a fetch is in
progress -- retry later".  No one will really suffer if the repack has
to wait for the next scheduled cron job, especially if the fetch
doesn't explode packs into loose objects anymore.

> Another issue I just thought about tonight is that we may need a
> count-packs utility that, like count-objects, lists the number
> of active packs and their total size.  If we start hanging onto
> every pack we receive over the wire, the pack directory is going to
> grow pretty fast and we'll need a way to tell us when it's time to
> `repack -a -d`.

Sure.  Although the pack count is going to grow much less rapidly:
think one pack per fetch instead of many, many objects per fetch.

Nicolas
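The "abort with 'a fetch is in progress'" idea above could be sketched as a
guard the cron-driven repack runs first.  This is only an illustration: the
lock file name FETCH_IN_PROGRESS is made up here (git has no such file), and
it assumes the fetch side creates it while the incoming pack and its
temporary refs are being installed.

```shell
#!/bin/sh
# Hypothetical guard for a cron-driven repack.  Assumes (invented for
# this sketch) that the fetch side drops a lock file while its
# temporary refs point at objects not yet in the repository.

check_fetch_lock () {
    # $1 = path to the repository's working directory.
    # Returns non-zero if a fetch appears to be in progress.
    if [ -e "$1/.git/FETCH_IN_PROGRESS" ]; then
        echo "a fetch is in progress -- retry later" >&2
        return 1
    fi
    return 0
}

# A cron job would then do something like:
#   check_fetch_lock /path/to/repo && git -C /path/to/repo repack -a -d
# so an aborted repack simply waits for the next scheduled run.
```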
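The count-packs utility Shawn describes could be approximated today with a
short script.  The function name, output format, and the decision to count
only *.pack files are assumptions of this sketch, not an existing git
command.

```shell
#!/bin/sh
# Hypothetical "count-packs" helper, in the spirit of git-count-objects:
# report how many *.pack files sit under the object store and their
# combined size in bytes, so a cron job can decide when it is time
# for `git repack -a -d`.

count_packs () {
    # $1 = path to the repository's .git directory (or a bare repo).
    pack_dir=$1/objects/pack
    count=0
    total=0
    for p in "$pack_dir"/*.pack
    do
        [ -e "$p" ] || continue    # glob did not match: no packs
        count=$((count + 1))
        total=$((total + $(wc -c < "$p")))
    done
    echo "packs: $count"
    echo "size-bytes: $total"
}

# Usage: count_packs /path/to/repo/.git
```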