On Fri, 31 Jan 2025 18:06:00 +0900 Sergey Senozhatsky
> Concurrent modifications of meta table entries are now handled
> by a per-entry spin-lock. This has a number of shortcomings.
>
> First, this imposes atomic requirements on compression backends.
> zram can call both zcomp_compress() and zcomp_decompress() under
> entry spin-lock, which implies that we can use only compression
> algorithms that don't schedule/sleep/wait during compression and
> decompression. This, for instance, makes it impossible to use
> some of the ASYNC compression algorithm (H/W compression, etc.)
> implementations.
>
> Second, this can potentially trigger watchdogs. For example,
> entry re-compression with secondary algorithms is performed
> under entry spin-lock. Given that we chain secondary
> compression algorithms and that some of them can be configured
> for best compression ratio (and worst compression speed), zram
> can stay under the spin-lock for quite some time.
>
> Do not use per-entry spin-locks and instead convert the lock to an
> atomic_t variable which open-codes a reader-writer type of lock.
> This permits preemption from the slot_lock section and also reduces
> the sizeof() of the zram entry when lockdep is enabled.
>
Nope, the price of the cut in size will be paid in extra hours of
debugging, given nothing is free.

> Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
> ---
>  drivers/block/zram/zram_drv.c | 126 ++++++++++++++++++++--------------
>  drivers/block/zram/zram_drv.h |   6 +-
>  2 files changed, 79 insertions(+), 53 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 9f5020b077c5..1c2df2341704 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -58,19 +58,50 @@ static void zram_free_page(struct zram *zram, size_t index);
>  static int zram_read_from_zspool(struct zram *zram, struct page *page,
>  				 u32 index);
>
> -static int zram_slot_trylock(struct zram *zram, u32 index)
> +static bool zram_slot_try_write_lock(struct zram *zram, u32 index)
>  {
> -	return spin_trylock(&zram->table[index].lock);
> +	atomic_t *lock = &zram->table[index].lock;
> +	int old = ZRAM_ENTRY_UNLOCKED;
> +
> +	return atomic_try_cmpxchg(lock, &old, ZRAM_ENTRY_WRLOCKED);
> +}
> +
> +static void zram_slot_write_lock(struct zram *zram, u32 index)
> +{
> +	atomic_t *lock = &zram->table[index].lock;
> +	int old = atomic_read(lock);
> +
> +	do {
> +		if (old != ZRAM_ENTRY_UNLOCKED) {
> +			cond_resched();
> +			old = atomic_read(lock);
> +			continue;
> +		}
> +	} while (!atomic_try_cmpxchg(lock, &old, ZRAM_ENTRY_WRLOCKED));
> +}
> +
> +static void zram_slot_write_unlock(struct zram *zram, u32 index)
> +{
> +	atomic_set(&zram->table[index].lock, ZRAM_ENTRY_UNLOCKED);
>  }
>
> -static void zram_slot_lock(struct zram *zram, u32 index)
> +static void zram_slot_read_lock(struct zram *zram, u32 index)
>  {
> -	spin_lock(&zram->table[index].lock);
> +	atomic_t *lock = &zram->table[index].lock;
> +	int old = atomic_read(lock);
> +
> +	do {
> +		if (old == ZRAM_ENTRY_WRLOCKED) {
> +			cond_resched();
> +			old = atomic_read(lock);
> +			continue;
> +		}
> +	} while (!atomic_try_cmpxchg(lock, &old, old + 1));
>  }
>
> -static void zram_slot_unlock(struct zram *zram, u32 index)
> +static void zram_slot_read_unlock(struct zram *zram, u32 index)
>  {
> -	spin_unlock(&zram->table[index].lock);
> +	atomic_dec(&zram->table[index].lock);
>  }
>
Given that no boundaries of the locking section are marked, on top of the
loss of lockdep coverage, this is another usual case of inventing a lock
in 2025.
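
If the open-coded lock has to stay, the least it should do is tell lockdep
where its acquire/release boundaries are. A minimal sketch of what that
could look like (not from the patch: the dep_map member, its init and the
annotation points are my own, and under lockdep they give back the
sizeof() saving the changelog advertises):

/*
 * Sketch only. Assumes struct zram_table_entry grows a
 * "struct lockdep_map dep_map;" member, set up once per slot when the
 * meta table is allocated, e.g.:
 *
 *	static struct lock_class_key zram_slot_key;
 *
 *	lockdep_init_map(&zram->table[index].dep_map,
 *			 "zram->table[index].lock", &zram_slot_key, 0);
 */
static void zram_slot_write_lock(struct zram *zram, u32 index)
{
	atomic_t *lock = &zram->table[index].lock;
	int old = atomic_read(lock);

	/* exclusive (writer) section begins here as far as lockdep knows */
	rwsem_acquire(&zram->table[index].dep_map, 0, 0, _RET_IP_);
	do {
		if (old != ZRAM_ENTRY_UNLOCKED) {
			cond_resched();
			old = atomic_read(lock);
			continue;
		}
	} while (!atomic_try_cmpxchg(lock, &old, ZRAM_ENTRY_WRLOCKED));
}

static void zram_slot_write_unlock(struct zram *zram, u32 index)
{
	/*
	 * ...and ends here; the read side would be analogous with
	 * rwsem_acquire_read()/rwsem_release().
	 */
	rwsem_release(&zram->table[index].dep_map, _RET_IP_);
	atomic_set(&zram->table[index].lock, ZRAM_ENTRY_UNLOCKED);
}
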
What sense does it make to pull a molar tooth in your kitchen, after
downing a pint of vodka against the pain, instead of driving straight
to your dentist?
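
To spell the analogy out: the dentist here is an existing, already-debugged
sleeping lock rather than a home-grown atomic. Purely as an illustration
(nobody in this thread proposed it, and it costs per-entry size), the same
helpers on top of a per-entry rw_semaphore would give preemptible sections
and lockdep coverage for free:

/*
 * Sketch only. Assumes table[index].lock is a struct rw_semaphore,
 * initialized with init_rwsem() when the meta table is allocated.
 */
static bool zram_slot_try_write_lock(struct zram *zram, u32 index)
{
	return down_write_trylock(&zram->table[index].lock);
}

static void zram_slot_write_lock(struct zram *zram, u32 index)
{
	/* may sleep, so preemption under the slot lock is not a problem */
	down_write(&zram->table[index].lock);
}

static void zram_slot_write_unlock(struct zram *zram, u32 index)
{
	up_write(&zram->table[index].lock);
}

static void zram_slot_read_lock(struct zram *zram, u32 index)
{
	down_read(&zram->table[index].lock);
}

static void zram_slot_read_unlock(struct zram *zram, u32 index)
{
	up_read(&zram->table[index].lock);
}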