Ok, so I was wondering why doing a 'git gc' on my kernel backup on one of the linux-foundation machines was taking so long, and I think I've found a performance problem.

The way I do kernel back-ups is that I just push to two different sites every once in a while (read: multiple times a day when I do lots of merging), and one of them is master.kernel.org, which then gets published to others.

The other one is a linux-foundation machine that I have a login on, and that's my "secondary" back-up, in case both kernel.org and my own home machines were to be corrupted somehow. And because it's my secondary, I seldom log in and gc anything, so it's a mess.

But it was _really_ slow when I finally did so today. The whole "Counting objects" phase was counting by hundreds, which it really shouldn't do on a fast machine.

The reason? Tons and tons of pack-files. But just the existence of the pack-files is not what killed it: things were _much_ faster if I just did a "git pack-objects" by hand.

The real reason _seems_ to be the "--unpacked=pack-....pack" arguments. I literally had 232 pack-files, and it looks like a lot of the time was spent in that silly loop over 'ignore_packed' in find_pack_entry(), when revision.c does that "has_sha1_pack()" thing.

You get an O(n**2) effect in the number of pack-files: for each commit we look over every pack-file, and for every pack-file we look at, we look over each ignore_packed entry.

I didn't really analyze this a lot, and now the thing is packed and much faster, but I thought I'd throw this out there..

		Linus
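
[Editor's note: for illustration, here is a minimal, self-contained C sketch of the lookup pattern described above. It is not git's actual source; the struct and function names (packed_git, find_sha1_in_pack, is_ignored, find_pack_entry) are simplified stand-ins. The point is the nested loop: every object lookup walks all packs, and every pack is compared against the whole ignore_packed list.]

/*
 * Sketch of the quadratic lookup pattern: with N pack-files all
 * passed via --unpacked=, each object lookup does on the order of
 * N * N string comparisons before it even touches a pack index.
 */
#include <stdio.h>
#include <string.h>

struct packed_git {
	struct packed_git *next;
	const char *pack_name;
};

/* stand-in for the real pack-index lookup; always misses here */
static int find_sha1_in_pack(struct packed_git *p, const char *sha1)
{
	(void)p; (void)sha1;
	return 0;
}

static int is_ignored(const char *name, const char **ignore_packed)
{
	const char **i;

	/* inner loop: one strcmp per --unpacked=<pack> argument */
	for (i = ignore_packed; i && *i; i++)
		if (!strcmp(name, *i))
			return 1;
	return 0;
}

static int find_pack_entry(struct packed_git *packs, const char *sha1,
			   const char **ignore_packed)
{
	struct packed_git *p;

	/* outer loop: one iteration per pack-file in the repository */
	for (p = packs; p; p = p->next) {
		if (is_ignored(p->pack_name, ignore_packed))
			continue;	/* skip packs named via --unpacked= */
		if (find_sha1_in_pack(p, sha1))
			return 1;
	}
	return 0;
}

int main(void)
{
	struct packed_git b = { NULL, "pack-bbbb.pack" };
	struct packed_git a = { &b,   "pack-aaaa.pack" };
	const char *ignore[] = { "pack-aaaa.pack", "pack-bbbb.pack", NULL };

	/* with 232 packs all listed in ignore_packed, every commit checked
	 * by has_sha1_pack() pays the packs * ignored comparison cost */
	printf("%d\n", find_pack_entry(&a, "deadbeef", ignore));
	return 0;
}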