Hi Xiangyou,

On 24/07/2019 10:04, Xiangyou Xie wrote:
> Because dist->lpi_list_lock is a per-VM lock, when a virtual machine
> is configured with multiple virtual NIC devices and receives
> network packets at the same time, dist->lpi_list_lock will become
> a performance bottleneck.

I'm sorry, but you'll have to show me some real numbers before I
consider any of this. There is a reason why the original series still
isn't in mainline, and that's because people don't post any numbers.
Adding more patches is not going to change that simple fact.

> This patch increases the number of lpi_translation_caches to eight,
> hashes the cpuid that executes irqfd_wakeup, and chooses which
> lpi_translation_cache to use.

So you've now switched to a per-cache lock, meaning that the rest of
the ITS code can manipulate the lpi_list without any synchronization
with the caches. Have you worked out all the possible races? Also, how
does this new lock class fit into the whole locking hierarchy?

If you want something that is actually scalable, do it the right way:
use a better data structure than a list, and switch to RCU rather than
the current locking strategy. But your current approach looks quite
fragile.

Thanks,

	M.

--
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
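
For reference, the per-CPU hashed cache selection described in the
quoted patch could be sketched roughly as below. Everything here
(structure layout, names, the use of hash_32()) is illustrative and
not taken from the actual patch.

	#include <linux/hash.h>
	#include <linux/list.h>
	#include <linux/log2.h>
	#include <linux/smp.h>
	#include <linux/spinlock.h>

	#define LPI_CACHE_NR	8

	/* Hypothetical bucket: each of the eight caches gets its own lock and list */
	struct lpi_cache_bucket {
		raw_spinlock_t		lock;
		struct list_head	head;
	};

	struct lpi_cache_sketch {
		struct lpi_cache_bucket	buckets[LPI_CACHE_NR];
	};

	/* Pick a bucket by hashing whichever CPU happens to run irqfd_wakeup() */
	static struct lpi_cache_bucket *
	lpi_cache_pick_bucket(struct lpi_cache_sketch *cache)
	{
		unsigned int idx = hash_32(raw_smp_processor_id(),
					   ilog2(LPI_CACHE_NR));

		return &cache->buckets[idx];
	}

Note that this only shards the readers; anything that walks or prunes
the LPI list itself would have to visit every bucket, which is where
the races asked about above would come from.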
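
A minimal sketch of the RCU-based lookup suggested above, keeping a
plain list only for brevity (a better data structure would replace it).
The entry layout and names are hypothetical, and the writer side
(list_add_rcu()/list_del_rcu() under a single lock, kfree_rcu(), irq
reference counting) is deliberately left out:

	#include <linux/list.h>
	#include <linux/rculist.h>
	#include <linux/rcupdate.h>
	#include <linux/types.h>

	struct vgic_irq;

	/* Hypothetical cache entry; fields do not match the vgic code */
	struct lpi_xlate_entry {
		struct list_head	entry;
		phys_addr_t		db;
		u32			devid;
		u32			eventid;
		struct vgic_irq		*irq;
		struct rcu_head		rcu;
	};

	/* Lock-free lookup on the injection fast path */
	static struct vgic_irq *lpi_cache_lookup(struct list_head *cache,
						 phys_addr_t db, u32 devid,
						 u32 eventid)
	{
		struct lpi_xlate_entry *e;
		struct vgic_irq *irq = NULL;

		rcu_read_lock();
		list_for_each_entry_rcu(e, cache, entry) {
			if (e->db == db && e->devid == devid &&
			    e->eventid == eventid) {
				irq = e->irq;
				break;
			}
		}
		rcu_read_unlock();

		return irq;
	}

The point is that the fast path takes no lock at all, so there is
nothing to shard and no new lock class to slot into the locking
hierarchy.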