Re: weird allocation pattern in alloc_ila_locks

On Fri 06-01-17 14:14:49, Eric Dumazet wrote:
> On Fri, 2017-01-06 at 13:16 +0100, Michal Hocko wrote:
> > I was thinking about the rhashtable code which was the source of the
> > copy&paste and it can be simplified as well.
> > ---
> > From 555543604f5f020284ea85d928d52f6a55fde7ca Mon Sep 17 00:00:00 2001
> > From: Michal Hocko <mhocko@xxxxxxxx>
> > Date: Fri, 6 Jan 2017 13:12:31 +0100
> > Subject: [PATCH] rhashtable: simplify a strange allocation pattern
> > 
> > alloc_bucket_locks allocation pattern is quite unusual. We are
> > preferring vmalloc when CONFIG_NUMA is enabled which doesn't make much
> > sense because there is no special NUMA locality handled in that code
> > path. Let's just simplify the code and use the kvmalloc helper, which
> > is a transparent way to use kmalloc with a vmalloc fallback when the
> > caller is allowed to block, and stick to kmalloc_array with the given
> > gfp flags otherwise.
> > 
> > Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
> > ---
> >  lib/rhashtable.c | 13 +++----------
> >  1 file changed, 3 insertions(+), 10 deletions(-)
> > 
> > diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> > index 32d0ad058380..4d3886b6ab7d 100644
> > --- a/lib/rhashtable.c
> > +++ b/lib/rhashtable.c
> > @@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
> >  	size = min_t(unsigned int, size, tbl->size >> 1);
> >  
> >  	if (sizeof(spinlock_t) != 0) {
> > -		tbl->locks = NULL;
> > -#ifdef CONFIG_NUMA
> > -		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> > -		    gfp == GFP_KERNEL)
> > -			tbl->locks = vmalloc(size * sizeof(spinlock_t));
> > -#endif
> > -		if (gfp != GFP_KERNEL)
> > -			gfp |= __GFP_NOWARN | __GFP_NORETRY;
> > -
> > -		if (!tbl->locks)
> > +		if (gfpflags_allow_blocking(gfp))
> > +			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
> > +		else
> >  			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
> >  						   gfp);
> 
> 
> I believe the intent was to get NUMA spreading, a bit like what we have
> in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT
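
(For reference, the hashdist pattern in alloc_large_system_hash() looks
roughly like the sketch below. This is heavily simplified, it leaves out
the early-boot/memblock path and the shrink-and-retry loop, and the
wrapper name is made up for this mail, so treat it as an illustration
rather than the actual mm/page_alloc.c code.)

	/* Illustration only, not the real alloc_large_system_hash(). */
	static void *alloc_hash_table_sketch(unsigned long size)
	{
		if (hashdist)
			/* virtually contiguous: backing pages are
			 * allocated one by one, with the intent that
			 * they get spread over multiple NUMA nodes */
			return __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);

		/* physically contiguous allocation */
		return alloc_pages_exact(size, GFP_ATOMIC);
	}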

Hmm, I am not sure this works as expected then, because it is more
likely that all the pages backing the vmalloc'ed area will come from
the local node than be spread over multiple nodes. Or did I miss your
point?
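
For completeness, the kvmalloc helper relied on above conceptually
boils down to something like the sketch below. This is a simplified
illustration only, not the proposed mm implementation, and the name
kvmalloc_sketch is made up for this mail.

	/*
	 * Try the physically contiguous kmalloc first, without retrying
	 * hard or warning, and fall back to vmalloc on failure.  The
	 * fallback only makes sense for sleepable (GFP_KERNEL
	 * compatible) contexts, hence the gfpflags_allow_blocking()
	 * check in the patch above.
	 */
	static void *kvmalloc_sketch(size_t size, gfp_t flags)
	{
		void *p;

		p = kmalloc(size, flags | __GFP_NOWARN | __GFP_NORETRY);
		if (p)
			return p;

		return vmalloc(size);
	}
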
-- 
Michal Hocko
SUSE Labs
