Hi Miklos and all,

I would like to submit the following fixes for review; they improve FUSE scalability on NUMA systems. The changes add a new mount option, 'numa', and involve both kernel and user library changes. I am forwarding the kernel fixes only for now and will send the library changes once the kernel side looks OK to you :)

In our internal tests we noticed that FUSE does not scale well when multiple users access the same mount point. The contention is on the fc->lock spinlock, but the real problem is not the spinlock itself; it is the latency involved in accessing a single spinlock from multiple NUMA nodes.

This fix groups various fields of fuse_conn and creates one such set per NUMA node to reduce contention. Each NUMA node gets its own spinlock, which synchronizes access to that node's set. Processes now take their node-local spinlock, which reduces latency. (A rough, illustrative sketch of this grouping appears at the end of this mail.)

To get this behavior, users (the fuse library) or file system implementers should pass the 'numa' mount option. If 'numa' is not specified at mount time, FUSE creates a single set of the grouped fields and the behavior is the same as today.

File systems that support the 'numa' option should listen on /dev/fuse from every NUMA node to serve incoming and outgoing requests. File systems that use the fuse library do not need to do anything; the library will do it for them.

Please let me know if you have suggestions for a better way to fix this. I am open to any ideas that improve the performance :)

TESTS:

We ran some simple dd tests reading from and writing to a FUSE-based file system, and also ran some database workloads. All tests show that these fixes improve performance.

1) Writes to a single mount

   # dd processes   throughput (without numa)   throughput (with numa)
               4            1254 M                     1200 M
               8             764 M                     1931 M
              16             430 M                     1931 M
              32             454 M                     2641 M
              64             448 M                     2891 M
             128             435 M                     7414 M

2) Reads from a single mount

   # dd processes   throughput (without numa)   throughput (with numa)
               4             550 M                      581 M
               8            1013 M                     1071 M
              16             430 M                     1767 M
              32             314 M                     2114 M
              64             300 M                     3872 M
             128             297 M                     4563 M

3) Other workloads

   a) A database table-load application that used to take 47 minutes now
      finishes in 8 minutes.
   b) A data generation application that used to take 2 hr 50 min now
      finishes in 1 hr 21 min.

Thanks,
--Srini
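
P.S. To make the per-node grouping more concrete for review, here is a
heavily simplified, illustrative sketch. It is not the actual patch;
names such as fuse_conn_sketch, fuse_numa_node, fuse_alloc_numa_nodes
and fuse_get_numa_node are made up for illustration, and only a couple
of the grouped fields are shown:

/*
 * Illustrative sketch only -- not the real patch.  With the 'numa'
 * mount option, the per-connection state that fc->lock used to protect
 * is split into one group per NUMA node, each with its own spinlock.
 */
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/nodemask.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/topology.h>
#include <linux/types.h>

/* Per-node group of fields that used to sit under the single fc->lock. */
struct fuse_numa_node {
        spinlock_t lock;                /* protects the lists below */
        struct list_head pending;       /* requests waiting for userspace */
        struct list_head processing;    /* requests being served */
};

struct fuse_conn_sketch {
        unsigned numa_on;               /* 'numa' mount option given */
        unsigned nr_nodes;              /* nr_node_ids, or 1 without 'numa' */
        struct fuse_numa_node *nodes;   /* array of per-node groups */
};

static int fuse_alloc_numa_nodes(struct fuse_conn_sketch *fc, bool numa_opt)
{
        unsigned i;

        fc->numa_on = numa_opt;
        fc->nr_nodes = numa_opt ? nr_node_ids : 1;
        fc->nodes = kcalloc(fc->nr_nodes, sizeof(*fc->nodes), GFP_KERNEL);
        if (!fc->nodes)
                return -ENOMEM;

        for (i = 0; i < fc->nr_nodes; i++) {
                spin_lock_init(&fc->nodes[i].lock);
                INIT_LIST_HEAD(&fc->nodes[i].pending);
                INIT_LIST_HEAD(&fc->nodes[i].processing);
        }
        return 0;
}

/* Pick the group for the calling task: its local node, or group 0. */
static struct fuse_numa_node *fuse_get_numa_node(struct fuse_conn_sketch *fc)
{
        if (!fc->numa_on)
                return &fc->nodes[0];
        return &fc->nodes[numa_node_id()];
}

/* Queueing a request then only takes the node-local spinlock. */
static void fuse_queue_request(struct fuse_conn_sketch *fc,
                               struct list_head *req_entry)
{
        struct fuse_numa_node *fn = fuse_get_numa_node(fc);

        spin_lock(&fn->lock);
        list_add_tail(req_entry, &fn->pending);
        spin_unlock(&fn->lock);
}

Without the 'numa' option the array has a single element, so the
existing single-lock behavior is preserved.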