Re: [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj


 



On Mon, Jul 04, 2016 at 08:57:04AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> > the obj index value should be updated after return from
> > find_alloced_obj()
>  
>         to avoid CPU burning caused by unnecessary object scanning.
> 
> The description should state what the goal is.

Thanks for your reminder.

> 
> > 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@xxxxxxxxx>
> > ---
> >  mm/zsmalloc.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 405baa5..5c96ed1 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
> >   * return handle.
> >   */
> >  static unsigned long find_alloced_obj(struct size_class *class,
> > -					struct page *page, int index)
> > +					struct page *page, int *index)
> >  {
> >  	unsigned long head;
> >  	int offset = 0;
> > +	int objidx = *index;
> 
> Nit:
> 
> We have used obj_idx so I prefer it for consistency with others.

will do it.

> 
> Suggestion:
> Could you mind changing index in zs_compact_control and
> migrate_zspage with obj_idx in this chance?

I will add a clean up patch in this patchset.

> 
> Strictly speaking, such a clean up belongs in a separate patch, but I
> don't mind mixing them here (of course, sending it as another clean up
> patch would be better). If you mind, just leave it as is. Sometime,
> I will do it.
> 
> >  	unsigned long handle = 0;
> >  	void *addr = kmap_atomic(page);
> >  
> >  	offset = get_first_obj_offset(page);
> > -	offset += class->size * index;
> > +	offset += class->size * objidx;
> >  
> >  	while (offset < PAGE_SIZE) {
> >  		head = obj_to_head(page, addr + offset);
> > @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
> >  		}
> >  
> >  		offset += class->size;
> > -		index++;
> > +		objidx++;
> >  	}
> >  
> > +	*index = objidx;
> 
> We can do this out of the kmap section, right before returning handle.

That's right. I will send a V2 patch soon.

Thanks.

> 
> Thanks!
> 
> > +
> >  	kunmap_atomic(addr);
> >  	return handle;
> >  }
> > @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  	unsigned long handle;
> >  	struct page *s_page = cc->s_page;
> >  	struct page *d_page = cc->d_page;
> > -	unsigned long index = cc->index;
> > +	unsigned int index = cc->index;
> >  	int ret = 0;
> >  
> >  	while (1) {
> > -		handle = find_alloced_obj(class, s_page, index);
> > +		handle = find_alloced_obj(class, s_page, &index);
> >  		if (!handle) {
> >  			s_page = get_next_page(s_page);
> >  			if (!s_page)
> > -- 
> > 1.9.1
> > 



