Re: [PATCH 1/5] zram: remove the call to page_endio in the bio end_io handler

On 2023-03-31 00:51, Minchan Kim wrote:
> On Tue, Mar 28, 2023 at 01:27:12PM +0200, Pankaj Raghav wrote:
>> The zram_page_end_io() function is called when alloc_page() is used (for
>> partial IO) to trigger writeback from user space. The pages used for
> 
> No, it was used with zram_rw_page since rw_page didn't carry the bio.
> 
>> this operation are never locked and never have writeback set. So, it is safe
> 
> The VM held the page lock and waited for it to be unlocked.
> 
>> to remove the call to the page_endio() function, which unlocks the page or
>> marks the end of writeback on it.
>>
>> Rename the endio handler from zram_page_end_io to zram_read_end_io, as
>> the call to page_endio() is removed, and to associate the callback with
>> the operation it is used in.
> 
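
For context: page_endio() is the generic completion helper in mm/filemap.c.
For a read it sets the uptodate state and unlocks the page; for a write it
ends writeback. Roughly (paraphrased, see mm/filemap.c for the exact code):

void page_endio(struct page *page, bool is_write, int err)
{
	if (!is_write) {
		if (!err)
			SetPageUptodate(page);
		else
			ClearPageUptodate(page);
		unlock_page(page);
	} else {
		if (err) {
			SetPageError(page);
			if (page_mapping(page))
				mapping_set_error(page_mapping(page), err);
		}
		end_page_writeback(page);
	}
}

So in the old rw_page path, where the VM held the page lock, the unlock in
page_endio() was what completed the read.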

I went through the code again. Let me know if I got it right.

When we trigger writeback, we call zram_bvec_read() only if ZRAM_WB is not
set. That means __zram_bvec_read() will only call zram_read_from_zspool()
when the parent bio is NULL.

static ssize_t writeback_store(struct device *dev, ...
{
	...
	if (zram_test_flag(zram, index, ZRAM_WB) ||
	    zram_test_flag(zram, index, ZRAM_SAME) ||
	    zram_test_flag(zram, index, ZRAM_UNDER_WB))
		goto next;
	...
	/* bio == NULL: no parent bio for this read */
	if (zram_bvec_read(zram, &bvec, index, 0, NULL)) {
	...
}

static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
			    struct bio *bio, bool partial_io)
{
....
if (!zram_test_flag(zram, index, ZRAM_WB)) {
        /* Slot should be locked throughout the function call */
        ret = zram_read_from_zspool(zram, page, index);
        zram_slot_unlock(zram, index);
} else {
        /* Slot should be unlocked before the function call */
        zram_slot_unlock(zram, index);

        ret = zram_bvec_read_from_bdev(zram, page, index, bio,
                                       partial_io);
}
....
}
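
And on the block-layer side every read carries its parent bio down the call
chain, so the bdev read path always sees a non-NULL parent. Sketching the
two read paths (paraphrased from zram_drv.c, names as in the current tree):

/*
 * __zram_make_request()                    parent bio from the block layer
 *   -> zram_bvec_rw(..., bio)
 *     -> zram_bvec_read(..., bio)
 *       -> __zram_bvec_read(..., bio)
 *         -> zram_bvec_read_from_bdev(..., bio)     bio != NULL
 *
 * writeback_store()
 *   -> zram_bvec_read(..., NULL)           only for !ZRAM_WB slots, which
 *                                          never reach the bdev read path
 */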

> Since zram removed rw_page and IO now always comes with a bio, IIUC we are
> fine since every IO will use a chained bio. Right?
>

We will never call zram_bvec_read_from_bdev() with the parent bio set to NULL.
IOW, we will only ever hit the bio_chain() case in read_from_bdev_async().
So we could do the following:

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b7bb52f8dfbd..2341f4009b0f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -606,15 +606,6 @@ static void free_block_bdev(struct zram *zram, unsigned long blk_idx)
 	atomic64_dec(&zram->stats.bd_count);
 }

-static void zram_page_end_io(struct bio *bio)
-{
-	struct page *page = bio_first_page_all(bio);
-
-	page_endio(page, op_is_write(bio_op(bio)),
-			blk_status_to_errno(bio->bi_status));
-	bio_put(bio);
-}
-
 /*
  * Returns 1 if the submission is successful.
  */
@@ -634,9 +625,7 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
 		return -EIO;
 	}

-	if (!parent)
-		bio->bi_end_io = zram_page_end_io;
-	else
+	if (parent)
 		bio_chain(bio, parent);

 	submit_bio(bio);
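
For reference, the chained completion that replaces the private end_io:
bio_chain() redirects the child's completion so the parent completes only
after the child does, propagating any error status. Roughly, from
block/bio.c (paraphrased):

void bio_chain(struct bio *bio, struct bio *parent)
{
	BUG_ON(bio->bi_private || bio->bi_end_io);

	bio->bi_private = parent;
	bio->bi_end_io = bio_chain_endio;
	bio_inc_remaining(parent);
}

static void bio_chain_endio(struct bio *bio)
{
	struct bio *parent = bio->bi_private;

	/* hand the first error up to the parent, then drop the child */
	if (parent->bi_status == BLK_STS_OK)
		parent->bi_status = bio->bi_status;
	bio_put(bio);
	bio_endio(parent);
}

With every caller chaining to a parent, completion runs through the
parent's own end_io and zram_page_end_io() has nothing left to do.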


