RE: [PATCH v6] mm/zswap: move to use crypto_acomp API for hardware acceleration

> -----Original Message-----
> From: Sebastian Andrzej Siewior [mailto:bigeasy@xxxxxxxxxxxxx]
> Sent: Tuesday, September 29, 2020 10:31 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@xxxxxxxxxxxxx>
> Cc: akpm@xxxxxxxxxxxxxxxxxxxx; herbert@xxxxxxxxxxxxxxxxxxx;
> davem@xxxxxxxxxxxxx; linux-crypto@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; Luis Claudio R . Goncalves
> <lgoncalv@xxxxxxxxxx>; Mahipal Challa <mahipalreddy2006@xxxxxxxxx>;
> Seth Jennings <sjenning@xxxxxxxxxx>; Dan Streetman <ddstreet@xxxxxxxx>;
> Vitaly Wool <vitaly.wool@xxxxxxxxxxxx>; Wangzhou (B)
> <wangzhou1@xxxxxxxxxxxxx>; fanghao (A) <fanghao11@xxxxxxxxxx>; Colin
> Ian King <colin.king@xxxxxxxxxxxxx>
> Subject: Re: [PATCH v6] mm/zswap: move to use crypto_acomp API for
> hardware acceleration
> 
> On 2020-09-29 05:14:31 [+0000], Song Bao Hua (Barry Song) wrote:
> > After second thought and trying to make this change, I would like to
> > change my mind and disagree with this idea. Two reasons:
> > 1. While using this_cpu_ptr() without disabling preemption, people
> > usually put everything bound to one CPU into one structure, so that
> > once we get the pointer to the whole structure, we get all of its
> > parts belonging to the same CPU. If we move the dstmem and mutex out
> > of the structure containing them, we will have to do:
> > 	a. get_cpu_ptr() for the acomp_ctx   //lock preemption
> > 	b. this_cpu_ptr() for the dstmem and mutex
> > 	c. put_cpu_ptr() for the acomp_ctx   //unlock preemption
> > 	d. mutex_lock()
> > 	  sg_init_one()
> > 	  compress/decompress etc.
> > 	  ...
> > 	  mutex_unlock()
> >
> > As the get() and put() pair disables and re-enables preemption, it
> > guarantees that the this_cpu_ptr() in step "b" returns the dstmem and
> > mutex belonging to the same CPU as step "a".
> >
> > The steps from "a" to "c" are quite silly and confusing. I believe the
> > existing code aligns better with the most similar code in the kernel:
> > 	a. this_cpu_ptr()   //get everything for one CPU
> > 	b. mutex_lock()
> > 	  sg_init_one()
> > 	  compress/decompress etc.
> > 	  ...
> > 	  mutex_unlock()
> 
> My point was that there will be a warning at run-time and you don't want
> that. There are raw_ accessors if you know what you are doing. But…

I have only ever seen the get_cpu_ptr()/get_cpu_var() helpers disable
preemption. I don't think we will get a warning, since this_cpu_ptr()
does not disable preemption.
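
For reference, the per-CPU accessors in include/linux/percpu-defs.h look
roughly like this (condensed from the actual macros there):

	/* get_cpu_ptr() disables preemption before resolving the pointer,
	 * pinning the task to the current CPU until put_cpu_ptr(). */
	#define get_cpu_ptr(var)		\
	({					\
		preempt_disable();		\
		this_cpu_ptr(var);		\
	})

	/* put_cpu_ptr() just re-enables preemption. */
	#define put_cpu_ptr(var)		\
	do {					\
		(void)(var);			\
		preempt_enable();		\
	} while (0)

So the a/b/c sequence quoted above is really a preempt_disable() /
preempt_enable() pair wrapped around the two per-CPU lookups.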

> 
> Earlier you had compression/decompression with disabled preemption and

No. With this patch, that is now done in a preemption-enabled context. The
code before this patch did the (de)compression in a preemption-disabled
context, using get_cpu_ptr() and get_cpu_var().
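
A condensed sketch of that difference, based on the zswap store path
before and after the patch (error handling and kmap details omitted):

	/* Before: the buffer and tfm lookups disable preemption, so the
	 * compression itself runs in a preemption-disabled context. */
	dst = get_cpu_var(zswap_dstmem);	/* preempt_disable() */
	tfm = *get_cpu_ptr(entry->pool->tfm);
	ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);
	put_cpu_ptr(entry->pool->tfm);
	put_cpu_var(zswap_dstmem);		/* preempt_enable() */

	/* After: a per-CPU mutex serializes users of the CPU-local
	 * buffers, and the acomp request may sleep, so preemption
	 * stays enabled throughout. */
	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
	mutex_lock(acomp_ctx->mutex);
	sg_init_table(&input, 1);
	sg_set_page(&input, page, PAGE_SIZE, 0);
	sg_init_one(&output, acomp_ctx->dstmem, PAGE_SIZE * 2);
	acomp_request_set_params(acomp_ctx->req, &input, &output,
				 PAGE_SIZE, dlen);
	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req),
			      &acomp_ctx->wait);
	dlen = acomp_ctx->req->dlen;
	mutex_unlock(acomp_ctx->mutex);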

> strict per-CPU memory allocation. Now if you keep this per-CPU memory
> allocation then you gain a possible bottleneck.
> In the previous email you said that there may be a bottleneck in the
> upper layer where you can't utilize all that memory you allocate. So you
> may want to rethink that strategy before that rework.

We are probably not talking about the same thing :-)
I was talking about a possible generic swap bottleneck. For example, the
LRU is global, so while swapping, multiple cores might contend on locks
for this LRU. If we have 8 inactive pages to swap out, I am not sure the
mm layer can actually use 8 cores to swap them out at the same time.

> 
> > 2. While allocating the mutex, we can put it into local memory by
> > using kmalloc_node(). If we move to "struct mutex lock" directly, most
> > CPUs in a NUMA server will have to access remote memory to read/write
> > the mutex; therefore, this will increase the latency dramatically.
> 
> If you need something per-CPU then DEFINE_PER_CPU() will give it to you.

Yes, that is true.
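
For comparison, the two styles look like this (a sketch; "zswap_mutex" is
an illustrative name, not the actual symbol):

	/* Statically defined per-CPU mutex: it lives in the per-CPU area,
	 * which the kernel already allocates from node-local memory on
	 * NUMA machines. */
	static DEFINE_PER_CPU(struct mutex, zswap_mutex);

	/* Dynamically allocated mutex, explicitly placed on the memory
	 * node of the CPU it belongs to: */
	struct mutex *lock = kmalloc_node(sizeof(*lock), GFP_KERNEL,
					  cpu_to_node(cpu));
	if (lock)
		mutex_init(lock);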

> It would be very bad for performance if this allocations were not from
> CPU-local memory, right? So what makes you think this is worse than
> kmalloc_node() based allocations?

Yes. If you read the zswap code, you will see it considers NUMA very
carefully by allocating various memory locally. In the crypto framework,
I also added an API to allocate the compressor on the local node:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7bc13b5b60e94
This zswap patch uses that new node-aware API.
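
The node-aware allocation in the patch looks roughly like this (condensed
from the per-CPU setup path):

	/* Allocate the compressor transform from the memory node of the
	 * CPU this per-CPU context belongs to. */
	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0,
					cpu_to_node(cpu));
	if (IS_ERR(acomp))
		return PTR_ERR(acomp);
	acomp_ctx->acomp = acomp;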

The latency of a memory access crossing a NUMA node, in practice crossing
packages, can increase dramatically, often by a factor of two, three or more.

Thanks
Barry
