Switch from the deprecated kmap() to kmap_local_folio().  For the
kunmap_local(), I subtract off 'chars' to prevent the possibility that
p has wrapped into the next page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 fs/reiserfs/inode.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index 41c0a785e9ab..0ca2d439510a 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -390,8 +390,7 @@ static int _get_block_create_0(struct inode *inode, sector_t block,
 	 * sure we need to.  But, this means the item might move if
 	 * kmap schedules
 	 */
-	p = (char *)kmap(bh_result->b_page);
-	p += offset;
+	p = kmap_local_folio(bh_result->b_folio, offset);
 	memset(p, 0, inode->i_sb->s_blocksize);
 	do {
 		if (!is_direct_le_ih(ih)) {
@@ -439,8 +438,8 @@ static int _get_block_create_0(struct inode *inode, sector_t block,
 		ih = tp_item_head(&path);
 	} while (1);
 
-	flush_dcache_page(bh_result->b_page);
-	kunmap(bh_result->b_page);
+	flush_dcache_folio(bh_result->b_folio);
+	kunmap_local(p - chars);
 
 finished:
 	pathrelse(&path);
-- 
2.35.1
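
[Not part of the patch] A minimal sketch of the mapping pattern the change
relies on, using a hypothetical helper (copy_into_folio() does not exist in
reiserfs or this patch): kmap_local_folio() maps only the single page of the
folio containing 'offset', so the address handed to kunmap_local() must still
lie within that page.  That is why the patch steps the pointer back by 'chars'
after the copy loop.

/*
 * Illustrative sketch only -- copy_into_folio() is a made-up helper,
 * not code from this patch.
 */
#include <linux/highmem.h>
#include <linux/string.h>

static void copy_into_folio(struct folio *folio, size_t offset,
			    const void *src, size_t len)
{
	char *p = kmap_local_folio(folio, offset);

	memcpy(p, src, len);
	p += len;		/* may now point just past the mapped page */
	flush_dcache_folio(folio);
	kunmap_local(p - len);	/* step back so the address is in the mapped page */
}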