On Thu, Sep 04, 2014 at 01:27:26PM -0700, Dave Hansen wrote:
> On 09/04/2014 07:27 AM, Michal Hocko wrote:
> > Ouch. free_pages_and_swap_cache completely kills the uncharge batching
> > because it reduces it to PAGEVEC_SIZE batches.
> >
> > I think we really do not need PAGEVEC_SIZE batching anymore. We are
> > already batching on the tlb_gather layer. That one is limited, so I
> > think the below should be safe, but I have to think about this some
> > more. There is a risk of prolonged lru_lock wait times, but the number
> > of pages is limited to 10k and the heavy work is done outside of the
> > lock. If this is really a problem, then we can tear the LRU part and
> > the actual freeing/uncharging into separate functions in this path.
> >
> > Could you test with this half-baked patch, please? I didn't get to
> > test it myself, unfortunately.
>
> 3.16 settled out at about 11.5M faults/sec before the regression. This
> patch gets it back up to about 10.5M, which is good. The top spinlock
> contention in the kernel is still from the resource counter code via
> mem_cgroup_commit_charge(), though.

Thanks for testing, that looks a lot better.

But commit doesn't touch resource counters - did you mean try_charge()
or uncharge() by any chance?
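
For anyone following along, this is roughly the chunking Michal is
referring to - a from-memory sketch of the 3.16-era
free_pages_and_swap_cache() in mm/swap_state.c, paraphrased for
illustration rather than copied from the tree (kernel headers omitted):

	/*
	 * Sketch: the pagevec-sized loop means release_pages() only ever
	 * sees PAGEVEC_SIZE pages (14 at the time, IIRC) per call, so any
	 * uncharge batching done underneath it is capped at 14 pages
	 * instead of covering the full tlb_gather batch.
	 */
	void free_pages_and_swap_cache(struct page **pages, int nr)
	{
		struct page **pagep = pages;

		lru_add_drain();
		while (nr) {
			int todo = min(nr, PAGEVEC_SIZE);
			int i;

			for (i = 0; i < todo; i++)
				free_swap_cache(pagep[i]);
			release_pages(pagep, todo, false);
			pagep += todo;
			nr -= todo;
		}
	}

Dropping that inner chunking would let release_pages() see the whole
tlb_gather batch at once (up to the ~10k pages mentioned above), which
is where the concern about longer lru_lock hold times comes from.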