On Fri, 14 Nov 2008, Junio C Hamano wrote:
>
> If you have 1000 files in a single directory, do you still want 2 threads
> following the "1/500" rule, or they would compete reading the same
> directory and using a single thread is better off?

Well, first off, the "single directory" thing is really a Linux kernel
deficiency, and it's entirely possible that it doesn't even exist on other
systems. Linux has a very special directory cache (dcache) model that is
pretty unique - it's part of why cached 'lstat()' calls are so cheap on
Linux - but it is also part of the reason why we serialize lookups when we
miss in the cache (*).

Secondly, anybody who has a thousand tracked files in a single directory
can damn well blame themselves for being stupid. So I don't think it's a
case that is worth worrying too much about. Git will slow down in that
kind of situation for other reasons anyway (ie a lot of the tree pruning
optimizations won't work for projects that have large flat directories).

So I wouldn't worry about it.

That said, with the second patch, we default to having people enable this
explicitly, so it's something that people can decide on their own.

		Linus

(*) That said - the Linux dcache consistency is just _one_ reason why we
serialize lookups. I would not be in the least surprised if other OS's
have the exact same issue. I'd love to fix it in Linux, but quite
honestly, it has never actually come up before now, and we've literally
worked on multi-threading the _cached_ case, not the uncached one.