On 01/07/2013 12:24 PM, Seth Jennings wrote:
> +struct zswap_tree {
> +	struct rb_root rbroot;
> +	struct list_head lru;
> +	spinlock_t lock;
> +	struct zs_pool *pool;
> +};

BTW, I spent some time trying to get this lock contended.  You thought
the anon_vma locks would dominate and this spinlock would not end up
very contended.

I figured that if I hit zswap from a bunch of CPUs that _didn't_ use
anonymous memory (and thus the anon_vma locks), some more contention
would pop up.  I did that with a bunch of CPUs writing to tmpfs, and
this lock was still well down below anon_vma.  The anon_vma contention
was obviously coming from _other_ anonymous memory around.

IOW, I feel a bit better about this lock.  I only tested on 16 cores on
a system with relatively light NUMA characteristics.  It might become
the bottleneck if all the anonymous memory on the system is mlock()'d
and you're pounding on tmpfs, but that's a pretty contrived setup.
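
For anyone following along, here's roughly the store-side critical
section I'm picturing from the struct above.  This is just my own
sketch -- struct zswap_entry, its rbnode/lru/offset fields, and
zswap_tree_add() are names I made up for illustration, not taken from
the patch:

	/* hypothetical entry layout -- the real patch surely differs */
	struct zswap_entry {
		struct rb_node rbnode;
		struct list_head lru;
		pgoff_t offset;		/* swap offset, used as the tree key */
		unsigned long handle;	/* zs_malloc() handle for the compressed page */
	};

	static void zswap_tree_add(struct zswap_tree *tree,
				   struct zswap_entry *entry)
	{
		struct rb_node **link, *parent = NULL;

		spin_lock(&tree->lock);
		/* walk down to the insertion point, keyed by swap offset */
		link = &tree->rbroot.rb_node;
		while (*link) {
			struct zswap_entry *this;

			parent = *link;
			this = rb_entry(parent, struct zswap_entry, rbnode);
			if (entry->offset < this->offset)
				link = &parent->rb_left;
			else
				link = &parent->rb_right;
		}
		rb_link_node(&entry->rbnode, parent, link);
		rb_insert_color(&entry->rbnode, &tree->rbroot);
		/* most-recently-stored entries go to the head of the LRU */
		list_add(&entry->lru, &tree->lru);
		spin_unlock(&tree->lock);
	}

If the real store path holds tree->lock for little more than that --
an O(log n) rbtree walk plus a list splice -- then the short hold
times would square with the lock staying well below anon_vma in my
runs.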