On 2024/9/26 21:52, Matthew Wilcox wrote:
> On Thu, Sep 26, 2024 at 10:38:34AM +0200, Pankaj Raghav (Samsung) wrote:
>> So this is why I don't use mapping_set_folio_order_range() here, but
>> correct me if I am wrong.
>> Yeah, the inode is active here as the max folio size is decided based on
>> the write size, so probably mapping_set_folio_order_range() will not be
>> a safe option.
>
> You really are all making too much of this. Here's the patch I think we
> need:
>
> +++ b/mm/shmem.c
> @@ -2831,7 +2831,8 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
>  	cache_no_acl(inode);
>  	if (sbinfo->noswap)
>  		mapping_set_unevictable(inode->i_mapping);
> -	mapping_set_large_folios(inode->i_mapping);
> +	if (sbinfo->huge)
> +		mapping_set_large_folios(inode->i_mapping);
But it can't solve all issues, e.g.
  mount with huge = SHMEM_HUGE_WITHIN_SIZE, or
  mount with SHMEM_HUGE_ALWAYS + runtime SHMEM_HUGE_DENY,
and the above change will break
  mount with SHMEM_HUGE_NEVER + runtime SHMEM_HUGE_FORCE.