Re: [PATCH 1/3] mm/vmalloc: Move draining areas out of caller context

> On Wed, Jan 19, 2022 at 03:35:38PM +0100, Uladzislau Rezki (Sony) wrote:
> > +static void drain_vmap_area(struct work_struct *work)
> 
> Nit, but I prefer to have a _work postfix for workers just to keep
> it easy to read.
> 
Will fix it!

> >  	/* After this point, we may free va at any time */
> >  	if (unlikely(nr_lazy > lazy_max_pages()))
> > -		try_purge_vmap_area_lazy();
> > +		if (!atomic_xchg(&drain_vmap_area_work_in_progress, 1))
> > +			schedule_work(&drain_vmap_area_work);
> 
> Work items are defined to be single threaded, so I don't think we need
> the drain_vmap_area_work_in_progress hack.
> 
The motivation for that hack was to prevent the drain work from being queued
several times at once, since schedule_work() only checks the pending bit.

If the work is already running, another caller of vfree() would queue it one
more time, because the pending bit is cleared as soon as the work enters its
running (TASK_RUNNING) state.
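
For clarity, here is a minimal sketch of the guard pattern I had in mind.
The helper name is illustrative, and I assume the worker clears the flag
once the drain has completed:

#include <linux/atomic.h>
#include <linux/workqueue.h>

static atomic_t drain_vmap_area_work_in_progress;
static void drain_vmap_area(struct work_struct *work);
static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);

static void drain_vmap_area(struct work_struct *work)
{
	/* ... purge the lazily freed vmap areas ... */

	/* Allow the next caller to queue the work again. */
	atomic_set(&drain_vmap_area_work_in_progress, 0);
}

/* Illustrative helper for the path called from vfree(). */
static void queue_drain_if_needed(void)
{
	/*
	 * schedule_work() only tests the PENDING bit, which is cleared
	 * once the worker starts running, so without the xchg a second
	 * caller could queue the work while the drain is still in flight.
	 */
	if (!atomic_xchg(&drain_vmap_area_work_in_progress, 1))
		schedule_work(&drain_vmap_area_work);
}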

Or am I missing something?

--
Vlad Rezki
