Re: [PATCH v2 6/9] mm: vmalloc: Offload free_vmap_area_lock lock

On Mon, Sep 11, 2023 at 11:25:01AM +0800, Baoquan He wrote:
> On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> > Concurrent access to the global vmap space is a bottleneck.
> > We can simulate high contention by running a vmalloc test
> > suite.
> > 
> > To address it, introduce an effective vmap node logic. Each
> > node behaves as an independent entity. When a node is accessed,
> > it serves a request directly (if possible); it can also fetch
> > a new block from the global heap into its internals if no space
> > or only low capacity is left.
> > 
> > This technique reduces pressure on the global vmap lock.
> > 
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> > ---
> >  mm/vmalloc.c | 316 +++++++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 279 insertions(+), 37 deletions(-)
> > 
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 5a8a9c1370b6..4fd4915c532d 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -779,6 +779,7 @@ struct rb_list {
> >  
> >  struct vmap_node {
> >  	/* Bookkeeping data of this node. */
> > +	struct rb_list free;
> >  	struct rb_list busy;
> >  	struct rb_list lazy;
> >  
> > @@ -786,6 +787,13 @@ struct vmap_node {
> >  	 * Ready-to-free areas.
> >  	 */
> >  	struct list_head purge_list;
> > +	struct work_struct purge_work;
> > +	unsigned long nr_purged;
> > +
> > +	/*
> > +	 * Control that only one user can pre-fetch this node.
> > +	 */
> > +	atomic_t fill_in_progress;
> >  };
> >  
> >  static struct vmap_node *nodes, snode;
> > @@ -804,6 +812,32 @@ addr_to_node(unsigned long addr)
> >  	return &nodes[addr_to_node_id(addr)];
> >  }
> >  
> > +static inline struct vmap_node *
> > +id_to_node(int id)
> > +{
> > +	return &nodes[id % nr_nodes];
> > +}
> > +
> > +static inline int
> > +this_node_id(void)
> > +{
> > +	return raw_smp_processor_id() % nr_nodes;
> > +}
> > +
> > +static inline unsigned long
> > +encode_vn_id(int node_id)
> > +{
> > +	/* Can store U8_MAX [0:254] nodes. */
> > +	return (node_id + 1) << BITS_PER_BYTE;
> > +}
> > +
> > +static inline int
> > +decode_vn_id(unsigned long val)
> > +{
> > +	/* Can store U8_MAX [0:254] nodes. */
> > +	return (val >> BITS_PER_BYTE) - 1;
> > +}
> 
> This patch looks good to me. However, should we split the encoding of
> vn_id into va->flags out into a separate patch? It looks like an
> independent optimization which could be described better with its own
> specific log. At least, in the pasted pdf file and the patch log, it's
> not obvious that:
> 1) a node's free tree can contain any address range;
> 2) a node's busy tree only contains address ranges belonging to this node;
>    - it could contain a range crossing nodes, a corner case.
> 3) a node's purge tree can contain any address range;
>    - decided by the encoded vn_id in va->flags;
>    - decided by the address via addr_to_node(va->va_start).
> 
> Just my personal opinion; I feel it will make reviewing easier.
> 
Sure, if it is easier to review, then I will split these two parts.
All three statements are correct and valid. The pdf file only covers
v1, so it is not up to date.

Anyway, I will update the cover letter in v3 with more details.

--
Uladzislau Rezki
