On Thu, 6 Apr 2023 17:23:51 -0700 Minchan Kim <minchan@xxxxxxxxxx> wrote:

> > Someone may develop such a use case in the future. And backporting
> > this fix will be difficult, unless people backport all the other
> > patches, which is also difficult.
>
> I think the simple fix is just bail out for partial IO case from
> rw_page path so that bio comes next to serve the rw_page failure.
> In the case, zram will always do chained bio so we are fine with
> asynchronous IO.
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index b8549c61ff2c..23fa0e03cdc1 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1264,6 +1264,8 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
>  		struct bio_vec bvec;
>
>  		zram_slot_unlock(zram, index);
> +		if (partial_io)
> +			return -EAGAIN;
>
>  		bvec.bv_page = page;
>  		bvec.bv_len = PAGE_SIZE;
>
> > What are the user-visible effects of this bug? It sounds like it will
> > give userspace access to uninitialized kernel memory, which isn't good.
>
> It's true.
>
> Without better suggestion or objections, I could cook the stable patch.

Sounds good to me. Please don't forget to describe the user-visible
effects and the situation under which they will be observed, etc.

Then I can redo Christoph's patches on top, so we end up with this
series as-is going forward.
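
[Editor's note: to illustrate the fallback Minchan describes -- zram bailing
out of the rw_page path with -EAGAIN so the request is retried through the
regular bio path -- here is a minimal, hypothetical C sketch of the
caller-side pattern. The helper read_one_page() and its exact flow are
illustrative only and are not the actual kernel code; only bdev_read_page(),
__bio_add_page(), and submit_bio() are real kernel interfaces of that era.]

/*
 * Hypothetical sketch (not actual kernel code) of the caller-side
 * fallback discussed above: try the synchronous ->rw_page path first,
 * and when the driver bails out (e.g. zram returning -EAGAIN for a
 * partial IO), serve the request through the regular bio path, which
 * may be chained and completes asynchronously.
 */
#include <linux/bio.h>
#include <linux/blkdev.h>

static void read_one_page(struct block_device *bdev, sector_t sector,
			  struct page *page, struct bio *bio)
{
	/* Fast path: let the driver's ->rw_page hook handle it. */
	if (bdev_read_page(bdev, sector, page) == 0)
		return;

	/*
	 * The driver refused the rw_page request; fall back to a normal
	 * bio so the read is actually served rather than leaving the
	 * page uninitialized. The bio is assumed to have room for the
	 * page.
	 */
	__bio_add_page(bio, page, PAGE_SIZE, 0);
	submit_bio(bio);
}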