On 02/01/2016 05:03 PM, Waiman Long wrote:
On 02/01/2016 12:45 PM, Andi Kleen wrote:
I'm wondering, why are inode_sb_list_add()/del() even called for a presumably reasonably well cached benchmark running on a system with enough RAM? Are these perhaps thousands of temporary files, already deleted, and released when all the file descriptors are closed as part of sys_exit()?

If that's the case then I suspect an even bigger win would be not just to batch the (sb-)global list fiddling, but to potentially turn the sb list into a percpu_alloc() managed set of per CPU lists? It's a bigger change, but it could ...
We had such a patch in the lock elision patchkit (it avoided a lot of cache line bouncing leading to aborts):

https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git/commit/?h=hle315/combined&id=f1cf9e715a40f44086662ae3b29f123cf059cbf4
-Andi
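[For readers following along, here is a minimal userspace sketch of the per-CPU-list idea being discussed. It is hypothetical and simplified, not the code from the patch linked above; all names (struct shard, list_add_percpu(), etc.) are invented for illustration. Each CPU gets its own list shard with its own lock, so additions are usually uncontended and cache-hot, while deletion must lock whichever shard the node was added to, which may belong to a remote CPU.]

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NR_SHARDS 64                    /* stand-in for nr_cpu_ids */

    struct node {
            struct node *prev, *next;
            int shard;                      /* shard this node was added to */
    };

    struct shard {
            pthread_mutex_t lock;
            struct node head;               /* circular doubly linked list */
    };

    static struct shard shards[NR_SHARDS];

    static void shards_init(void)
    {
            for (int i = 0; i < NR_SHARDS; i++) {
                    pthread_mutex_init(&shards[i].lock, NULL);
                    shards[i].head.prev = shards[i].head.next = &shards[i].head;
            }
    }

    /* Add to the current CPU's shard: the lock is usually uncontended
     * and its cache line stays local, unlike a single global lock. */
    static void list_add_percpu(struct node *n)
    {
            int cpu = sched_getcpu();
            struct shard *s = &shards[(cpu < 0 ? 0 : cpu) % NR_SHARDS];

            pthread_mutex_lock(&s->lock);
            n->shard = (int)(s - shards);
            n->next = s->head.next;
            n->prev = &s->head;
            s->head.next->prev = n;
            s->head.next = n;
            pthread_mutex_unlock(&s->lock);
    }

    /* Delete from whichever shard the node lives on.  If the node was
     * added on another CPU, this still pulls that shard's lock and list
     * cache lines across the interconnect -- one reason a per-CPU layout
     * alone may not eliminate all cross-CPU cache-line traffic. */
    static void list_del_percpu(struct node *n)
    {
            struct shard *s = &shards[n->shard];

            pthread_mutex_lock(&s->lock);
            n->prev->next = n->next;
            n->next->prev = n->prev;
            pthread_mutex_unlock(&s->lock);
    }

    int main(void)
    {
            struct node n;

            shards_init();
            list_add_percpu(&n);
            list_del_percpu(&n);
            printf("ok\n");
            return 0;
    }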
I like your patch, though it doesn't apply cleanly to the current upstream kernel. I will port it to the current kernel and run my microbenchmark to see what performance gain I can get.
Unfortunately, using a per-cpu list didn't deliver the performance benefit I expected. I saw maybe a 1-2% performance increase, nothing significant. I suspect the bulk of the improvement from my patch comes from eliminating most of the cacheline transfer latencies incurred when lock ownership passes from one CPU to another. Those latencies are still there even with a per-cpu list.
Cheers,
Longman
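[For contrast, here is a rough model of the batching idea mentioned at the top of the thread ("batch the (sb-)global list fiddling"): instead of each CPU taking the global lock in turn, waiters queue their list operations and whichever thread currently holds the lock executes the whole queue in one go, so the lock and list cache lines stop ping-ponging on every ownership handoff. This flat-combining-style sketch only illustrates the concept Longman describes, not the code from his patch; struct batch_list, submit() and the rest are invented names, and the busy-wait spin is a simplification.]

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <pthread.h>

    struct node {
            struct node *prev, *next;
    };

    struct op {
            struct op *next_req;            /* link in the request stack */
            struct node *n;
            bool add;                       /* true: add, false: delete */
            atomic_bool done;
    };

    struct batch_list {
            pthread_mutex_t lock;
            struct node head;               /* the one global list */
            _Atomic(struct op *) pending;   /* stack of queued requests */
    };

    static void do_op(struct batch_list *bl, struct op *o)
    {
            if (o->add) {
                    o->n->next = bl->head.next;
                    o->n->prev = &bl->head;
                    bl->head.next->prev = o->n;
                    bl->head.next = o->n;
            } else {
                    o->n->prev->next = o->n->next;
                    o->n->next->prev = o->n->prev;
            }
    }

    /* Queue an operation; whoever holds the lock executes it for us. */
    static void submit(struct batch_list *bl, struct op *req)
    {
            req->done = false;
            req->next_req = atomic_load(&bl->pending);
            while (!atomic_compare_exchange_weak(&bl->pending,
                                                 &req->next_req, req))
                    ;

            for (;;) {
                    if (atomic_load(&req->done))
                            return;         /* a combiner did our op */
                    if (pthread_mutex_trylock(&bl->lock) == 0)
                            break;          /* we become the combiner */
            }

            /* Drain every queued request (possibly including our own) in
             * a single lock hold: the lock and list cache lines stay on
             * this CPU instead of bouncing once per operation. */
            struct op *batch = atomic_exchange(&bl->pending, NULL);
            while (batch) {
                    struct op *next = batch->next_req;
                    do_op(bl, batch);
                    atomic_store(&batch->done, true);
                    batch = next;
            }
            pthread_mutex_unlock(&bl->lock);
    }

    int main(void)
    {
            struct batch_list bl = { .lock = PTHREAD_MUTEX_INITIALIZER };
            bl.head.prev = bl.head.next = &bl.head;
            atomic_store(&bl.pending, NULL);

            struct node n;
            struct op add_req = { .n = &n, .add = true };
            struct op del_req = { .n = &n, .add = false };

            submit(&bl, &add_req);
            submit(&bl, &del_req);
            return 0;
    }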