On Sun, Nov 01, 2015 at 06:14:43PM -0800, Eric Dumazet wrote:
> On Mon, 2015-11-02 at 00:24 +0000, Al Viro wrote:
>
> > This ought to be a bit cleaner.  Eric, could you test the variant below
> > on your setup?
>
> Sure !
>
> 5 runs of :
> lpaa24:~# taskset ff0ff ./opensock -t 16 -n 10000000 -l 10
>
> total = 4386311
> total = 4560402
> total = 4437309
> total = 4516227
> total = 4478778

Umm...  With Linus' variant it was what, around 4000000?  +10% or so, then...

> With 48 threads :
>
> ./opensock -t 48 -n 10000000 -l 10
> total = 4940245
> total = 4848513
> total = 4813153
> total = 4813946
> total = 5127804

And that - +40%?  Interesting...  And it looks like at 48 threads you are
still seeing arseloads of contention, but apparently less than with Linus'
variant...  What if you throw the __clear_close_on_exec() patch on top of
that?  Looks like it's spending less time under ->files_lock...

Could you get information on fs/file.o hotspots?
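
For context: opensock is Eric's microbenchmark and isn't in the tree.
Judging by the flags above, -t sets the thread count, -l the duration in
seconds, and "total" is the number of sockets opened across all threads;
the real tool presumably keeps many descriptors open (hence -n 10000000)
to push fd numbers up and stress the fd-table bitmap search, which this
sketch does not model.  A minimal sketch of the measurement structure
only - the flag semantics and everything below are guesses, not Eric's
actual code:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	static atomic_long total;	/* sockets opened across all threads */
	static volatile int stop;

	static void *worker(void *unused)
	{
		long n = 0;

		while (!stop) {
			int fd = socket(AF_INET, SOCK_STREAM, 0);

			if (fd < 0)
				break;
			close(fd);
			n++;
		}
		atomic_fetch_add(&total, n);
		return NULL;
	}

	int main(void)
	{
		int threads = 16, seconds = 10;	/* cf. -t 16 -l 10 above */
		pthread_t tid[threads];

		for (int i = 0; i < threads; i++)
			pthread_create(&tid[i], NULL, worker, NULL);
		sleep(seconds);
		stop = 1;
		for (int i = 0; i < threads; i++)
			pthread_join(tid[i], NULL);
		printf("total = %ld\n", (long)atomic_load(&total));
		return 0;
	}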
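
The __clear_close_on_exec() patch in question is presumably the
test-before-clear change to fs/file.c: skip the write to the
close_on_exec bitmap when the bit is already clear, so the cacheline can
stay in shared state across CPUs instead of bouncing on every
allocation.  Something like:

	static inline void __clear_close_on_exec(unsigned int fd, struct fdtable *fdt)
	{
		if (test_bit(fd, fdt->close_on_exec))
			__clear_bit(fd, fdt->close_on_exec);
	}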
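
As for the fs/file.o hotspots, something along these lines would do
(illustrative perf invocations; __alloc_fd is only a guess at the hot
symbol):

	perf record -g -- taskset ff0ff ./opensock -t 48 -n 10000000 -l 10
	perf report --sort symbol
	perf annotate __alloc_fd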