On Fri, Nov 26, 2021 at 2:04 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Thu, Nov 25, 2021 at 08:02:38AM +0000, Hao Lee wrote:
> > On Thu, Nov 25, 2021 at 03:30:44AM +0000, Matthew Wilcox wrote:
> > > On Thu, Nov 25, 2021 at 11:24:02AM +0800, Hao Lee wrote:
> > > > On Thu, Nov 25, 2021 at 12:31 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > > > We do batch currently so no single task should be
> > > > > able to monopolize the cpu for too long. Why this is not sufficient?
> > > >
> > > > uncharge and unref indeed take advantage of the batch process, but
> > > > del_from_lru needs more time to complete. Several tasks will contend
> > > > for the spinlock in the loop if nr is very large.
> > >
> > > Is SWAP_CLUSTER_MAX too large?  Or does your architecture's spinlock
> > > implementation need to be fixed?
> >
> > My testing server is x86_64 with 5.16-rc2. The spinlock should be normal.
> >
> > I think lock_batch is not the point. lock_batch only breaks the spinning
> > time into small parts, but it doesn't reduce the total spinning time.
> > Things may get worse if lock_batch is very small.
>
> OK.  So if I understand right, you've got a lot of processes all
> calling exit_mmap() at the same time, which eventually becomes calls to
> unmap_vmas(), unmap_single_vma(), unmap_page_range(), zap_pte_range(),
> tlb_flush_mmu(), tlb_batch_pages_flush(), free_pages_and_swap_cache(),
> release_pages(), and then you see high contention on the LRU lock.

Exactly.

> Your use-case doesn't seem to mind sleeping (after all, these processes
> are exiting).

Yes!

> So we could put a semaphore in exit_mmap() to limit the
> number of simultaneous callers to unmap_vmas().  Do you want to try
> that out and see if it solves your problem?  You might want to make it
> a counting semaphore (eg permit two tasks to exit at once) rather than
> a mutex.  But maybe a mutex is just fine.

This is really a good idea. My train of thought was trapped in reducing
the lock contention. I will try to implement this idea and see if
service stability improves much.

Thanks for your help!
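
A first rough cut of what I plan to test is below. This is an untested
sketch only; the name exit_mmap_sem and the initial count of 2 are
placeholders following your counting-semaphore suggestion, and the
elided parts of exit_mmap() stay as they are in 5.16-rc2:

/* mm/mmap.c -- untested sketch, not a real patch */
#include <linux/semaphore.h>

/*
 * Allow at most two tasks to tear down their address space at once,
 * so a herd of exiting processes doesn't pile up on the LRU lock
 * in release_pages().  The count of 2 is arbitrary for now.
 */
static struct semaphore exit_mmap_sem =
	__SEMAPHORE_INITIALIZER(exit_mmap_sem, 2);

void exit_mmap(struct mm_struct *mm)
{
	/* ... existing setup unchanged ... */

	/*
	 * These tasks are exiting, so sleeping here is acceptable.
	 * Hold the semaphore across the whole unmap/flush so the
	 * release_pages() calls underneath are throttled too.
	 */
	down(&exit_mmap_sem);
	unmap_vmas(&tlb, vma, 0, -1);
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);
	up(&exit_mmap_sem);

	/* ... rest of exit_mmap() unchanged ... */
}

If a plain mutex turns out to be fine, the semaphore could simply be
replaced with a DEFINE_MUTEX plus a mutex_lock()/mutex_unlock() pair
around the same region.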