* Waiman Long <Waiman.Long@xxxxxxx> wrote:

> The inode_sb_list_add() and inode_sb_list_del() functions in the vfs
> layer just perform list addition and deletion under lock, so they can
> use the new list batching facility to speed up the list operations
> when many CPUs are trying to do them simultaneously.
>
> In particular, the inode_sb_list_del() function can be a performance
> bottleneck when large applications with many threads and associated
> inodes exit. An exit microbenchmark creates a large number of
> threads, attaches many inodes to them and then exits. The runtimes of
> that microbenchmark with 1000 threads before and after the patch on a
> 4-socket Intel E7-4820 v3 system (48 cores, 96 threads) were as
> follows:
>
>   Kernel        Elapsed Time    System Time
>   ------        ------------    -----------
>   Vanilla 4.4      65.29s         82m14s
>   Patched 4.4      45.69s         49m44s
>
> The elapsed time and the reported system time were reduced by 30%
> and 40% respectively.

That's pretty impressive!

I'm wondering, why are inode_sb_list_add()/del() even called for a
presumably reasonably well-cached benchmark running on a system with
enough RAM? Are these perhaps thousands of temporary files, already
deleted, and released when all the file descriptors are closed as part
of sys_exit()?

If that's the case then I suspect an even bigger win would be not just
to batch the (sb-)global list fiddling, but to potentially turn the sb
list into a percpu_alloc() managed set of per-CPU lists? It's a bigger
change, but it could speed up a lot of other temporary-file-intensive
use cases as well, not just batched delete.

Thanks,

	Ingo
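
For concreteness, here is a minimal, untested sketch of what such a set
of per-CPU sb inode lists (allocated with alloc_percpu()) might look
like. The ->s_inodes_percpu and ->i_sb_list_cpu fields are hypothetical
additions to struct super_block and struct inode; only the percpu, list
and spinlock primitives are existing kernel APIs:

/*
 * Sketch only: per-CPU sb inode lists as floated above.
 * ->s_inodes_percpu and ->i_sb_list_cpu are made-up fields.
 */
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct sb_inode_list {
	spinlock_t		lock;
	struct list_head	head;
};

/* One list (and lock) per CPU, set up at superblock creation time. */
static int sb_inode_lists_init(struct super_block *sb)
{
	int cpu;

	sb->s_inodes_percpu = alloc_percpu(struct sb_inode_list);
	if (!sb->s_inodes_percpu)
		return -ENOMEM;

	for_each_possible_cpu(cpu) {
		struct sb_inode_list *l = per_cpu_ptr(sb->s_inodes_percpu, cpu);

		spin_lock_init(&l->lock);
		INIT_LIST_HEAD(&l->head);
	}
	return 0;
}

/* Adders contend only on the local CPU's lock, not one global sb lock. */
static void inode_sb_list_add_percpu(struct inode *inode)
{
	struct sb_inode_list *l;
	int cpu = get_cpu();

	l = per_cpu_ptr(inode->i_sb->s_inodes_percpu, cpu);
	spin_lock(&l->lock);
	list_add(&inode->i_sb_list, &l->head);
	inode->i_sb_list_cpu = cpu;	/* remember which list we went on */
	spin_unlock(&l->lock);
	put_cpu();
}

/* Delete from the list the inode was added to, wherever we run now. */
static void inode_sb_list_del_percpu(struct inode *inode)
{
	struct sb_inode_list *l =
		per_cpu_ptr(inode->i_sb->s_inodes_percpu, inode->i_sb_list_cpu);

	spin_lock(&l->lock);
	list_del_init(&inode->i_sb_list);
	spin_unlock(&l->lock);
}

The trade-off is that whole-sb walkers such as evict_inodes() would
then have to iterate over every per-CPU list; that seems acceptable for
rare full-sb traversals, and it moves the contention away from the hot
add/delete paths.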