On Tue, Oct 05, 2021 at 11:18:38AM -0400, Brian Geffon wrote:
> On Mon, Oct 4, 2021 at 4:55 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
> >
> > On Mon, Oct 04, 2021 at 02:40:52PM -0400, Brian Geffon wrote:
> > > On Mon, Oct 4, 2021 at 2:29 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Oct 01, 2021 at 11:16:27AM -0700, Brian Geffon wrote:
> > > > > There does not appear to be a technical reason not to
> > > > > allow the zram backing device to be assigned after the
> > > > > zram device is initialized.
> > > > >
> > > > > This change will allow the backing device to be assigned
> > > > > as long as no backing device is already assigned. In that
> > > > > event backing_dev would return -EEXIST.
> > > > >
> > > > > Signed-off-by: Brian Geffon <bgeffon@xxxxxxxxxx>
> > > > > ---
> > > > >  drivers/block/zram/zram_drv.c | 6 +++---
> > > > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > > > > index fcaf2750f68f..12b4555ee079 100644
> > > > > --- a/drivers/block/zram/zram_drv.c
> > > > > +++ b/drivers/block/zram/zram_drv.c
> > > > > @@ -462,9 +462,9 @@ static ssize_t backing_dev_store(struct device *dev,
> > > > >  		return -ENOMEM;
> > > > >
> > > > >  	down_write(&zram->init_lock);
> > > > > -	if (init_done(zram)) {
> > > > > -		pr_info("Can't setup backing device for initialized device\n");
> > > > > -		err = -EBUSY;
> > > > > +	if (zram->backing_dev) {
> > > > > +		pr_info("Backing device is already assigned\n");
> > > > > +		err = -EEXIST;
> > > > >  		goto out;
> > > >
> > > > Hi Brian,
> > > >
> > >
> > > Hi Minchan,
> > >
> > > > I am worried about the inconsistency with other interfaces of the
> > > > current zram setup. They were supposed to be set up before the zram
> > > > disksize setting, because that keeps the code simpler and more
> > > > maintainable: we don't need to check for some feature on the fly.
> > > >
> > > > Let's think about when zram extends writeback to incompressible
> > > > pages on demand. The write path will need the backing_dev under
> > > > down_read(&zram->init_lock), or some other condition variable, to
> > > > check on the fly whether the feature is enabled.
> > >
> > > I don't follow what you mean by that; writeback_store already holds
> > > down_read(&zram->init_lock).
> >
> > I should have explained a bit more. Sorry about that.
> > I am thinking about a feature to deal with incompressible pages.
> > Let's take an example of handling an incompressible page:
> >
> > zram_bvec_rw
> >   zram_bvec_write
> >     if (comp_len >= huge_class)
> >       zs_page_writeback
> >         down_read(&zram->init_lock) or some other way
> >
> > It's just an idea for incompressible pages, but we might introduce
> > the same approach for other compressible pages, too, under some
> > condition.
>
> (sorry for the top post before)
>
> Hi Minchan,
> I guess the point I was trying to make was that so long as we allow a
> reset operation we'll need to take the init lock in read mode before
> doing any writeback. Does that seem right?

It's true, and it introduced many lock dependency problems before. We
actually had the lock in the rw path, but we removed it, so without a
strong reason I'd like to avoid taking the lock in the rw path.
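
To make the concern concrete, the hypothetical path above would look
roughly like this if backing_dev can change after init. This is only a
sketch: zs_page_writeback and huge_class are the illustrative names
from the example, not code that exists today.

static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
			   u32 index, struct bio *bio)
{
	/* ... compress the page, producing comp_len ... */

	if (comp_len >= huge_class) {
		/*
		 * If backing_dev may be assigned or reset at any time,
		 * this hot path has to pin the device state before it
		 * can touch zram->backing_dev.
		 */
		down_read(&zram->init_lock);
		ret = zs_page_writeback(zram, bvec, index);
		up_read(&zram->init_lock);
		return ret;
	}

	/* ... normal compressed store path ... */
}

That down_read on every incompressible write is exactly the kind of
rw-path locking that caused trouble before: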
commit 08eee69fcf6b
Author: Minchan Kim <minchan@xxxxxxxxxx>
Date:   Thu Feb 12 15:00:45 2015 -0800

    zram: remove init_lock in zram_make_request

    Admin could reset zram while an I/O operation is going on, so we
    have used zram->init_lock as a read-side lock in the I/O path to
    prevent sudden zram meta freeing.

    However, the init_lock is really troublesome. We can't call
    zram_meta_alloc under init_lock due to a lockdep splat, because
    zram_rw_page is one of the functions on the reclaim path and holds
    it as a read_lock, while other places in process context hold it as
    a write_lock. So we have done the allocation outside the lock to
    avoid the lockdep warning, but it's not good for readability, and
    finally I met another lockdep splat between init_lock and
    cpu_hotplug from kmem_cache_destroy while working on zsmalloc
    compaction. :(

    Yes, the ideal is to remove the horrible init_lock of zram from the
    rw path. This patch removes it from the rw path and instead adds an
    atomic refcount for meta lifetime management and a completion to
    free meta in process context. It's important to free meta in
    process context because some of the resource destruction needs a
    mutex lock, which could already be held if we released the resource
    in reclaim context, so it would deadlock, again.

    As a bonus, we could remove the init_done check in the rw path
    because zram_meta_get will play that role instead.
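
In other words, that commit replaced the rw-path lock with a refcount
plus a completion. A simplified sketch of the pattern it describes,
not the exact code from the commit:

/*
 * Sketch only: assumes struct zram carries
 *   atomic_t refcount;            (initialized to 1)
 *   struct completion io_done;
 */
static bool zram_meta_get(struct zram *zram)
{
	/* Fails once reset has dropped the initial reference. */
	return atomic_inc_not_zero(&zram->refcount);
}

static void zram_meta_put(struct zram *zram)
{
	/* The last reference wakes up the waiting resetter. */
	if (atomic_dec_and_test(&zram->refcount))
		complete(&zram->io_done);
}

/* rw path: pin meta for the duration of the I/O, no init_lock. */
if (!zram_meta_get(zram))
	goto error;	/* device is being reset */
/* ... do the read/write ... */
zram_meta_put(zram);

/* reset path, in process context: drop the initial reference and
 * wait for in-flight I/O to drain before freeing meta.
 */
zram_meta_put(zram);
wait_for_completion(&zram->io_done);
/* safe to free meta here; taking mutexes is fine in process context */

The key design point is that the completion lets meta be freed in
process context, where resource destruction that needs a mutex is
safe, while the rw path pays only an atomic operation instead of a
rwsem.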