Re: [PATCH v3 07/11] mm: vmalloc: Offload free_vmap_area_lock lock

On Fri, Jan 12, 2024 at 07:37:36AM +1100, Dave Chinner wrote:
> On Thu, Jan 11, 2024 at 04:54:48PM +0100, Uladzislau Rezki wrote:
> > On Thu, Jan 11, 2024 at 08:02:16PM +1100, Dave Chinner wrote:
> > > On Tue, Jan 02, 2024 at 07:46:29PM +0100, Uladzislau Rezki (Sony) wrote:
> > > > Concurrent access to the global vmap space is a bottleneck.
> > > > High contention can be simulated by running a vmalloc test
> > > > suite.
> > > > 
> > > > To address it, introduce an effective vmap node logic. Each
> > > > node behaves as an independent entity. When a node is accessed
> > > > it serves a request directly (if possible) from its pool.
> > > > 
> > > > This model has size-based pools for requests, i.e. pools are
> > > > serialized and populated based on object size and real demand.
> > > > The maximum object size that a pool can handle is set to 256
> > > > pages.
> > > > 
> > > > This technique reduces the pressure on the global vmap lock.
> > > > 
> > > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> > > 
> > > Why not use a llist for this? That gets rid of the need for a
> > > new pool_lock altogether...
> > > 
> > Initially I used an llist. I changed it because I keep track of the
> > number of objects per pool so that I can decay them later. I did not
> > find these locks contended, so I did not think much about it.
> 
> Ok. I've used llist and an atomic counter to track the list length
> in the past.
> 
> But is the list length even necessary? It seems to me that it is
> only used by the shrinker to determine how many objects are on the
> lists for scanning, and I'm not sure that's entirely necessary given
> the way the current global shrinker works (i.e. completely unfair to
> low numbered nodes due to scan loop start bias).
> 
I use the length to decay pools by a certain percentage, currently
25%, so I need to know the number of objects. It is done in the purge
path. As for the shrinker, once it hits us we drain the pools entirely.
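
Roughly like this (just a sketch; the structure names and layout are
made up for illustration, not the actual patch code):

	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Illustrative pool layout, not the actual patch structures. */
	struct vmap_pool {
		spinlock_t lock;
		struct list_head head;
		unsigned long len;	/* number of cached objects */
	};

	static void decay_pool(struct vmap_pool *vp)
	{
		struct vmap_area *va, *tmp;
		unsigned long n;

		spin_lock(&vp->lock);

		/* Drop 25% of the objects this pool currently caches. */
		n = vp->len >> 2;

		list_for_each_entry_safe(va, tmp, &vp->head, list) {
			if (!n)
				break;

			list_del(&va->list);
			vp->len--;
			n--;
			/* ... return "va" to the global vmap space ... */
		}

		spin_unlock(&vp->lock);
	}

With an llist there is no length field to base the 25% on, which is
the reason I went with a regular list under a pool lock.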

> > Anyway, i will have a look at this to see if llist is easy to go with
> > or not. If so i will send out a separate patch.
> 
> Sounds good, it was just something that crossed my mind given the
> pattern of "producer adds single items, consumer detaches entire
> list, processes it and reattaches remainder" is a perfect match for
> the llist structure.
> 
The llist_del_first() call has to be serialized. For that purpose a
per-CPU pool would work, or some kind of "in_use" atomic that protects
against concurrent removal.
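
Something along these lines (a sketch only; the flag and structure
names are made up):

	#include <linux/llist.h>
	#include <linux/atomic.h>

	struct node_pool {
		struct llist_head head;
		atomic_t in_use;	/* 0 == no consumer active */
	};

	static struct llist_node *pool_get(struct node_pool *p)
	{
		struct llist_node *n = NULL;

		/*
		 * llist_del_first() requires that only one consumer
		 * runs at a time, so gate it with the atomic flag.
		 */
		if (atomic_cmpxchg(&p->in_use, 0, 1) == 0) {
			n = llist_del_first(&p->head);
			atomic_set_release(&p->in_use, 0);
		}

		return n;
	}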

If we detach the entire llist, then we need to keep track of the last
node so that we can re-add the remainder later as a "batch" to the
already existing/populated list.
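
I mean something like this (sketch; whether the extra walk to find the
tail is worth it is the question):

	#include <linux/llist.h>

	static void drain_some(struct llist_head *head, int nr)
	{
		struct llist_node *first, *last, *n;

		/* Detach the whole list in one atomic operation. */
		first = llist_del_all(head);

		/* Process the first "nr" nodes. */
		while (first && nr--) {
			n = first;
			first = first->next;
			/* ... free/handle "n" ... */
		}

		if (!first)
			return;

		/* Find the tail so the rest can go back as one batch. */
		for (last = first; last->next; last = last->next)
			;

		llist_add_batch(first, last, head);
	}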

Thanks for looking!

--
Uladzislau Rezki



