On Sun, 20 Apr 2008, Pieter de Bie wrote:
>
> Yes, I just tested this.
>
>	for (int i = 0; i < 50000; i++) {
>		sprintf(s, "/Users/pieter/test/perf/%i", i);
>		int ret = lstat(s, a);
>	}
>
> This loop needs about 3 seconds to run. Replacing the i with 10 in the
> sprintf reduces it to 0.24 seconds.

Ok. On my machine, that's

	real	0m0.090s

with the 50,000 different files, and with the same filename it's

	real	0m0.081s

so yes, we're looking at another case of Linux performance just being in
a class of its own.

Taking three seconds for the warm-cache case for just 50,000 files is
ludicrous. That's about an order-and-a-half slower than what I see. Maybe
my CPU is faster too (2.66GHz Core 2), but the thing is, Linux really
does tend to outperform others at a lot of these kinds of loads. System
calls are fast to begin with, and the Linux directory cache kicks ass,
if I do say so myself.

OS X doth suck.

		Linus
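
[For reference, a self-contained version of the benchmark being discussed
might look like the sketch below. Only the path prefix, the 50,000-file
count, and the lstat() loop come from the quoted fragment; the timing
harness, buffer sizes, and the assumption that the test files already
exist under that directory are additions for illustration.]

	/*
	 * Rough, self-contained sketch of the lstat() timing loop under
	 * discussion. Assumes the files 0..49999 already exist under
	 * TEST_DIR (e.g. created beforehand with mkdir and touch).
	 */
	#include <stdio.h>
	#include <sys/stat.h>
	#include <sys/time.h>

	#define TEST_DIR "/Users/pieter/test/perf"
	#define NFILES   50000

	int main(void)
	{
		char s[256];
		struct stat st;
		struct timeval start, end;

		gettimeofday(&start, NULL);
		for (int i = 0; i < NFILES; i++) {
			snprintf(s, sizeof(s), TEST_DIR "/%i", i);
			/*
			 * Return value ignored: a missing file just makes
			 * lstat() fail, which is fine for a timing test.
			 */
			lstat(s, &st);
		}
		gettimeofday(&end, NULL);

		double elapsed = (end.tv_sec - start.tv_sec)
			       + (end.tv_usec - start.tv_usec) / 1e6;
		printf("%d lstat() calls in %.3f seconds\n", NFILES, elapsed);
		return 0;
	}

[The point of the benchmark is that every lstat() hits the warm directory
cache, so the numbers measure system-call and path-lookup overhead rather
than disk I/O.]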