Re: [PATCH v5 02/18] zram: permit preemption with active compression stream

On (25/02/12 16:01), Yosry Ahmed wrote:
> On Wed, Feb 12, 2025 at 03:27:00PM +0900, Sergey Senozhatsky wrote:
> > Currently, per-CPU stream access is done from a non-preemptible
> > (atomic) section, which imposes the same atomicity requirements on
> > compression backends as entry spin-lock, and makes it impossible
> > to use algorithms that can schedule/wait/sleep during compression
> > and decompression.
> > 
> > Switch to a preemptible per-CPU model, similar to the one used
> > in zswap.  Instead of a per-CPU local lock, each stream carries
> > a mutex which is locked throughout the entire time zram uses it
> > for compression or decompression, so that the cpu-dead event waits
> > for zram to stop using a particular per-CPU stream and release
> > it.
> > 
> > Suggested-by: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
> > Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
> > ---
> >  drivers/block/zram/zcomp.c    | 36 +++++++++++++++++++++++++----------
> >  drivers/block/zram/zcomp.h    |  6 +++---
> >  drivers/block/zram/zram_drv.c | 20 +++++++++----------
> >  3 files changed, 39 insertions(+), 23 deletions(-)
> > 
> > diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
> > index bb514403e305..e83dd9a80a81 100644
> > --- a/drivers/block/zram/zcomp.c
> > +++ b/drivers/block/zram/zcomp.c
> > @@ -7,6 +7,7 @@
> >  #include <linux/wait.h>
> >  #include <linux/sched.h>
> >  #include <linux/cpu.h>
> > +#include <linux/cpuhotplug.h>
> 
> What code changes prompt this?

Just a missing header include.  We use cpuhotplug.
I actually think I wanted to replace cpu.h with it.

> >  #include <linux/crypto.h>
> >  #include <linux/vmalloc.h>
> >  
> > @@ -54,6 +55,7 @@ static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm)
> >  {
> >  	int ret;
> >  
> > +	mutex_init(&zstrm->lock);
> 
> I don't think we can initialize the mutex in the hotplug callback. I
> think the following scenario is possible:
> 
> CPU #1					CPU #2
> zcomp_stream_get()
>   zstrm = raw_cpu_ptr()
> /* task migrated to CPU 2 */
> 
> CPU goes offline
>   zcomp_cpu_dead()
>     mutex_lock()
>     ..
>     mutex_unlock()
> 					/* migrated task continues */
> 					zcomp_stream_get()
> 					  mutex_lock()
> CPU goes online
>   mutex_init()
>   					  mutex_unlock() /* problem */
> 					
> In this case we'll end up initializing the mutex on CPU #1 while CPU #2
> has it locked. When we unlock it on CPU #2 we will corrupt it AFAICT.
> 
> This is why I moved the mutex initialization out of the hotplug callback
> in zswap. I suspect that to do something similar for zram we'd need to
> do it in zcomp_init()?

Yeah, I think you are right.  Let me take a look.

> >  	ret = comp->ops->create_ctx(comp->params, &zstrm->ctx);
> >  	if (ret)
> >  		return ret;
> > @@ -109,13 +111,29 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
> >  
> >  struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
> >  {
> > -	local_lock(&comp->stream->lock);
> > -	return this_cpu_ptr(comp->stream);
> > +	for (;;) {
> > +		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
> > +
> > +		/*
> > +		 * Inspired by zswap
> > +		 *
> > +		 * stream is returned with ->mutex locked which prevents
> > +		 * cpu_dead() from releasing this stream under us, however
> > +		 * there is still a race window between raw_cpu_ptr() and
> > +		 * mutex_lock(), during which we could have been migrated
> > +		 * to a CPU that has already destroyed its stream.  If so
> 
> "we could have been migrated from** a CPU that has already destroyed its
> stream"? Right?

"from", "to"... what's the difference :)



