On 2024/9/30 10:02, Baolin Wang wrote:
> On 2024/9/26 21:52, Matthew Wilcox wrote:
>> On Thu, Sep 26, 2024 at 10:38:34AM +0200, Pankaj Raghav (Samsung) wrote:
>>>> So this is why I don't use mapping_set_folio_order_range() here, but
>>>> correct me if I am wrong.
>>> Yeah, the inode is already active at that point, since the max folio
>>> size is decided based on the write size, so
>>> mapping_set_folio_order_range() will probably not be a safe option.
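
For reference, the kerneldoc for mapping_set_folio_order_range() in
include/linux/pagemap.h spells this out: it should not be called while
the inode is active, because the flags update is non-atomic. An
abridged sketch of the helper (modulo kernel version; details may
differ):

static inline void mapping_set_folio_order_range(struct address_space *mapping,
						 unsigned int min,
						 unsigned int max)
{
	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
		return;

	/* Clamp both bounds to what the page cache can actually handle. */
	if (min > MAX_PAGECACHE_ORDER)
		min = MAX_PAGECACHE_ORDER;
	if (max > MAX_PAGECACHE_ORDER)
		max = MAX_PAGECACHE_ORDER;
	if (max < min)
		max = min;

	/*
	 * Plain read-modify-write of mapping->flags, not an atomic op:
	 * only safe in the inode constructor, before anyone else can
	 * touch the mapping.
	 */
	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
			 (min << AS_FOLIO_ORDER_MIN) |
			 (max << AS_FOLIO_ORDER_MAX);
}
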
>> You really are all making too much of this. Here's the patch I think we
>> need:
>> +++ b/mm/shmem.c
>> @@ -2831,7 +2831,8 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
>>  	cache_no_acl(inode);
>>  	if (sbinfo->noswap)
>>  		mapping_set_unevictable(inode->i_mapping);
>> -	mapping_set_large_folios(inode->i_mapping);
>> +	if (sbinfo->huge)
>> +		mapping_set_large_folios(inode->i_mapping);
>>  	switch (mode & S_IFMT) {
>>  	default:
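
For context, mapping_set_large_folios() is only a thin wrapper that
opens up the full folio order range (abridged from
include/linux/pagemap.h, modulo kernel version), so gating it on
sbinfo->huge simply leaves a huge=never mapping at its default of
order-0 folios:

static inline void mapping_set_large_folios(struct address_space *mapping)
{
	/* Allow any folio order the page cache supports, 0 upwards. */
	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
}
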
> IMHO, we no longer need the 'sbinfo->huge' validation after adding
> support for large folios in the tmpfs write and fallocate paths [1].
>
> Kefeng, can you check whether the following RFC patch [1] solves your
> problem? Thanks.
>
> (PS: I will revise the patch according to Matthew's suggestion.)
Sure, I will try it once I am back. But [1] won't solve the issue when
force/deny is set at runtime: e.g. if we mount with huge=always or
huge=within_size but then set deny at runtime, we still fault in large
chunks while being unable to allocate large folios, so write
performance degrades.
[1] https://lore.kernel.org/all/c03ec1cb1392332726ab265a3d826fe1c408c7e7.1727338549.git.baolin.wang@xxxxxxxxxxxxxxxxx/
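
To make the runtime deny case above concrete, a hypothetical sketch of
the kind of check the write path would need: shmem_huge,
SHMEM_HUGE_DENY and mapping_large_folio_support() are real symbols in
mm/shmem.c and include/linux/pagemap.h, but this helper and its name
are invented here purely for illustration:

/*
 * Hypothetical sketch, not from any posted patch: re-check the runtime
 * THP setting before sizing a write chunk, so that flipping
 * /sys/kernel/mm/transparent_hugepage/shmem_enabled to "deny" stops us
 * from faulting in large chunks we can no longer back with large
 * folios.
 */
static bool shmem_may_use_large_chunk(struct inode *inode)
{
	if (shmem_huge == SHMEM_HUGE_DENY)
		return false;

	return mapping_large_folio_support(inode->i_mapping);
}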