On Mon, Jan 17, 2022 at 02:47:06PM +0100, Michal Hocko wrote:
> On Thu 30-12-21 11:36:27, Minchan Kim wrote:
> > lru_cache_disable involves IPIs to drain the pagevec of each core,
> > which sometimes takes quite a long time to complete depending on how
> > busy the CPUs are, and which can make allocation too slow, up to
> > several hundred milliseconds. Furthermore, the repeated draining in
> > alloc_contig_range makes things worse, considering that callers of
> > alloc_contig_range usually retry multiple times in a loop.
> >
> > This patch makes lru_cache_disable aware of the fact that the
> > pagevec was already disabled. With that, users of alloc_contig_range
> > can disable the LRU cache in advance in their own context during the
> > repeated trials, so they can avoid the multiple costly drainings in
> > CMA allocation.
>
> Do you have any numbers on any improvements?

The LRU draining accounted for more than 50% of the overhead of the 20M
CMA allocation.

> Now to the change. I do not like this much to be honest. LRU cache
> disabling is a complex synchronization scheme implemented in
> __lru_add_drain_all; now you are stacking another level on top of that.
>
> More fundamentally though, I am not sure I understand the problem TBH.

The problem is that this kind of IPI-based draining, which runs on a
normal-priority workqueue, can take a long time depending on how busy
the system's CPUs are.

> What prevents you from calling lru_cache_disable at the cma level in the
> first place?

Do you mean moving the call from alloc_contig_range up to the caller
layer? So, to virtio_mem_fake_online, too? That could work, and it makes
sense from a performance perspective, since the upper layer usually
calls alloc_contig_range multiple times in a retry loop.

Having said that, it is not great semantically, because the upper layer
should not need to know how alloc_contig_range works internally (LRU
disabling is too low-level a detail), but I chose performance here.

There is also an example of why the stacking is needed: cma_alloc can
be called from outside as well. A usecase would be:

    lru_cache_disable();
    for (order = 10; order >= 0; order--) {
            page = cma_alloc(1 << order);
            if (page)
                    break;
    }
    lru_cache_enable();

Here, putting the LRU disabling outside of cma_alloc is much better
than putting it inside. That's why I put it outside.
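To make that caller-side pattern a bit more concrete, below is a rough
sketch (illustrative only, not part of this patch; the function name
try_alloc_contig and the align/no_warn arguments are placeholders that
just follow the current cma_alloc() prototype). The point is that, with
the refcounted lru_cache_disable(), the nested disable/enable inside
cma_alloc() only adjusts lru_disable_count, so the drain IPIs are paid
once for the whole retry loop rather than once per attempt:

    #include <linux/cma.h>
    #include <linux/mm.h>
    #include <linux/swap.h>

    /*
     * Illustrative sketch: try progressively smaller orders while the
     * LRU pagevecs stay disabled across the whole retry loop.
     */
    static struct page *try_alloc_contig(struct cma *cma, unsigned int align)
    {
            struct page *page = NULL;
            int order;

            lru_cache_disable();            /* drains once: count 0 -> 1 */

            for (order = 10; order >= 0; order--) {
                    /*
                     * The nested lru_cache_disable()/lru_cache_enable()
                     * inside cma_alloc() now only increments/decrements
                     * the count; no extra round of drain IPIs.
                     */
                    page = cma_alloc(cma, 1UL << order, align, true);
                    if (page)
                            break;
            }

            lru_cache_enable();             /* count back to 0 */
            return page;
    }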
> > Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> > ---
> > * from v1 - https://lore.kernel.org/lkml/20211206221006.946661-1-minchan@xxxxxxxxxx/
> >   * fix lru_cache_disable race - akpm
> >
> >  include/linux/swap.h | 14 ++------------
> >  mm/cma.c             |  5 +++++
> >  mm/swap.c            | 30 ++++++++++++++++++++++++++++--
> >  3 files changed, 35 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index ba52f3a3478e..fe18e86a4f13 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -348,19 +348,9 @@ extern void lru_note_cost_page(struct page *);
> >  extern void lru_cache_add(struct page *);
> >  extern void mark_page_accessed(struct page *);
> >  
> > -extern atomic_t lru_disable_count;
> > -
> > -static inline bool lru_cache_disabled(void)
> > -{
> > -	return atomic_read(&lru_disable_count);
> > -}
> > -
> > -static inline void lru_cache_enable(void)
> > -{
> > -	atomic_dec(&lru_disable_count);
> > -}
> > -
> > +extern bool lru_cache_disabled(void);
> >  extern void lru_cache_disable(void);
> > +extern void lru_cache_enable(void);
> >  extern void lru_add_drain(void);
> >  extern void lru_add_drain_cpu(int cpu);
> >  extern void lru_add_drain_cpu_zone(struct zone *zone);
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 995e15480937..60be555c5b95 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -30,6 +30,7 @@
> >  #include <linux/cma.h>
> >  #include <linux/highmem.h>
> >  #include <linux/io.h>
> > +#include <linux/swap.h>
> >  #include <linux/kmemleak.h>
> >  #include <trace/events/cma.h>
> >  
> > @@ -453,6 +454,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >  	if (bitmap_count > bitmap_maxno)
> >  		goto out;
> >  
> > +	lru_cache_disable();
> > +
> >  	for (;;) {
> >  		spin_lock_irq(&cma->lock);
> >  		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
> > @@ -492,6 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >  		start = bitmap_no + mask + 1;
> >  	}
> >  
> > +	lru_cache_enable();
> > +
> >  	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
> >  
> >  	/*
> > diff --git a/mm/swap.c b/mm/swap.c
> > index af3cad4e5378..5f89d7c9a54e 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -847,7 +847,17 @@ void lru_add_drain_all(void)
> >  }
> >  #endif /* CONFIG_SMP */
> >  
> > -atomic_t lru_disable_count = ATOMIC_INIT(0);
> > +static atomic_t lru_disable_count = ATOMIC_INIT(0);
> > +
> > +bool lru_cache_disabled(void)
> > +{
> > +	return atomic_read(&lru_disable_count) != 0;
> > +}
> > +
> > +void lru_cache_enable(void)
> > +{
> > +	atomic_dec(&lru_disable_count);
> > +}
> >  
> >  /*
> >   * lru_cache_disable() needs to be called before we start compiling
> > @@ -859,7 +869,21 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
> >   */
> >  void lru_cache_disable(void)
> >  {
> > -	atomic_inc(&lru_disable_count);
> > +	static DEFINE_MUTEX(lock);
> > +
> > +	/*
> > +	 * The lock guarantees the lru_cache is drained when the function
> > +	 * returns.
> > +	 */
> > +	mutex_lock(&lock);
> > +	/*
> > +	 * If someone has already disabled the lru_cache, just return after
> > +	 * incrementing lru_disable_count.
> > +	 */
> > +	if (atomic_inc_not_zero(&lru_disable_count)) {
> > +		mutex_unlock(&lock);
> > +		return;
> > +	}
> >  #ifdef CONFIG_SMP
> >  	/*
> >  	 * lru_add_drain_all in the force mode will schedule draining on
> > @@ -873,6 +897,8 @@ void lru_cache_disable(void)
> >  #else
> >  	lru_add_and_bh_lrus_drain();
> >  #endif
> > +	atomic_inc(&lru_disable_count);
> > +	mutex_unlock(&lock);
> >  }
> >  
> >  /**
> > -- 
> > 2.34.1.448.ga2b2bfdf31-goog
> 
> -- 
> Michal Hocko
> SUSE Labs