Re: [RFCv2 2/5] ext4: Remove PAGE_SIZE assumption of folio from mpage_submit_folio

Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:

> On Mon, Jun 12, 2023 at 10:55:37PM +0530, Ritesh Harjani wrote:
>> It is easily recreatable if we have one thread doing buffered-io +
>> sync and other thread trying to truncate down inode->i_size.
>> Kernel panic maybe is happening only with -O encrypt mkfs option +
>> -o test_dummy_encryption mount option, but the size - folio_pos(folio)
>> is definitely wrong because inode->i_size is not protected in writeback path.
>
> Did you not see the email I sent right before you sent your previous
> email?

Aah yes, Matthew. I had seen that email yesterday after I sent my email.
Sorry, I forgot to acknowledge it today, and thanks for pointing things
out.

I couldn't respond to your change because I still had some confusion
around this suggestion:

> So do we care if we write a random fragment of a page after a truncate?
> If so, we should add:
> 
>         if (folio_pos(folio) >= size)
>                 return 0; /* Do we need to account nr_to_write? */

I was not sure whether, if we went with the above check, it would still
work with collapse_range. I initially thought that collapse_range
truncates the pages between the start and end of the collapsed range and
can also reduce inode->i_size. That means writeback could find
inode->i_size smaller than the folio_pos(folio) of the folio it is
writing. But in that case we can't skip the write in the writeback path
like above, because that write is still required (a spurious write): even
though i_size is reduced, the corresponding FS blocks are not truncated.

But looking at the ext4_collapse_range() code just now, it doesn't look
like that is a problem, because it waits for any dirty data to be written
out before truncating the page cache. So no matter which folio_pos(folio)
the writeback is writing, there should not be an issue if we simply
return 0 as you suggested above.

    static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)

    <...>
        ioffset = round_down(offset, PAGE_SIZE);
        /*
         * Write tail of the last page before removed range since it will get
         * removed from the page cache below.
         */
        ret = filemap_write_and_wait_range(mapping, ioffset, offset);
        if (ret)
            goto out_mmap;
        /*
         * Write data that will be shifted to preserve them when discarding
         * page cache below. We are also protected from pages becoming dirty
         * by i_rwsem and invalidate_lock.
         */
        ret = filemap_write_and_wait_range(mapping, offset + len,
                        LLONG_MAX);
        truncate_pagecache(inode, ioffset);

        <... within i_data_sem>
        i_size_write(inode, new_size);

    <...>
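
Just to make sure I am reading your first suggestion correctly, here is
roughly how I would picture it slotting into mpage_submit_folio() (only a
sketch; the exact placement and the nr_to_write accounting are still open
questions):

    static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
    {
        size_t len;
        loff_t size;
        <...>
        size = i_size_read(mpd->inode);
        /*
         * Folio lies entirely beyond a (possibly concurrently truncated)
         * i_size, so skip the write. Whether nr_to_write should still be
         * accounted here is the open question above.
         */
        if (folio_pos(folio) >= size)
                return 0;
        len = folio_size(folio);
        if (folio_pos(folio) + len > size &&
            !ext4_verity_in_progress(mpd->inode))
                /* Safe now: size > folio_pos(folio) is guaranteed above. */
                len = size - folio_pos(folio);
        <...>
        err = ext4_bio_write_folio(&mpd->io_submit, folio, len);
        <...>
    }

Since i_size_read() is done once and both the check and the subtraction
use the same local 'size', the length can no longer go negative even if
i_size is truncated concurrently.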


However, to avoid problems like this, I felt I should do some more code
reading. And I was mostly considering your second suggestion, quoted
below, since it keeps the current behavior as is and doesn't change it.

> If we simply don't care that we're doing a spurious write, then we can
> do something like:
> 
> -               len = size & ~PAGE_MASK;
> +               len = size & (len - 1);
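
If I understand it right, that masks with the folio size instead of
PAGE_SIZE, so we keep submitting the same kind of spurious tail write that
the current '& ~PAGE_MASK' code does. Roughly (a sketch, assuming len has
just been set to folio_size(folio), which is always a power of two):

        size = i_size_read(mpd->inode);
        len = folio_size(folio);
        if (folio_pos(folio) + len > size &&
            !ext4_verity_in_progress(mpd->inode))
                /*
                 * Folio-size aware equivalent of 'size & ~PAGE_MASK'.
                 * If i_size was truncated down concurrently this can
                 * still submit a spurious write, same as the current
                 * code does.
                 */
                len = size & (len - 1);

which keeps the current behaviour without assuming PAGE_SIZE folios.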


-ritesh



