Hello everyone!

Recently, while running some stress tests on MySQL, I noticed that a
"corrupted data in log event" error was occasionally reported. After
analyzing the error, I found that an extending DIO write was racing
with a buffered read, so the read returned the zero-filled tail of a
page. Since the ext4 buffered read path does not hold the inode lock,
and there is no field in the page to indicate the size of the valid
data within it, it seems to me that this problem cannot be solved
completely without changing one of these two things.

In this series, the first patch reads the inode size twice and takes
the smaller of the two values as the copyout limit, to avoid copying
data that was not actually read (zero padding) into the user buffer
and causing data corruption. This greatly reduces the probability of
the problem occurring with 4K pages. However, the problem is still
easily triggered with 64K pages.

The second patch makes ext4 wait for any existing DIO write to
complete and invalidate the stale page cache before performing a new
buffered read, avoiding data corruption caused by copying stale page
cache contents to the user buffer. This makes the problem much less
likely to be triggered with 64K pages as well. For illustration,
rough sketches of both ideas are appended after the diffstat.

Is there a plan to add a lock to the ext4 buffered read path, or a
field in the page that indicates the size of the valid data in the
page? Or does anyone have a better idea?

Comments and questions are, as always, welcome.

Baokun Li (2):
  mm: avoid data corruption when extending DIO write race with buffered read
  ext4: avoid data corruption when extending DIO write race with buffered read

 fs/ext4/file.c | 3 +++
 mm/filemap.c   | 5 +++--
 2 files changed, 6 insertions(+), 2 deletions(-)

-- 
2.31.1
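
A minimal sketch of the idea behind patch 1 (the helper name and its
exact placement in mm/filemap.c are hypothetical; this is not the
actual diff):

#include <linux/fs.h>
#include <linux/minmax.h>

/*
 * Hypothetical helper sketching patch 1's idea: sample i_size a
 * second time after the pages are up to date, and clamp the copyout
 * to the smaller of the two samples. If an extending DIO write raced
 * with the buffered read, the smaller value excludes the tail of the
 * file that may still be zero-filled in the page cache.
 */
static loff_t copyout_limit(struct inode *inode, loff_t isize_first)
{
	loff_t isize_second = i_size_read(inode);	/* second sample */

	return min(isize_first, isize_second);
}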
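
Likewise, a sketch of patch 2's approach (the wrapper below is
hypothetical; the real change is a three-line addition to ext4's
buffered read path, and the sketch assumes the read ends up in
generic_file_read_iter()):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/uio.h>

/*
 * Hypothetical wrapper sketching patch 2's idea for ext4: wait for
 * any in-flight DIO write to complete, then drop page cache pages
 * that may carry a stale zero-filled tail, before doing the normal
 * buffered read.
 */
static ssize_t ext4_buffered_read_sketch(struct kiocb *iocb,
					 struct iov_iter *to)
{
	struct inode *inode = file_inode(iocb->ki_filp);

	/* Settle any racing extending DIO write first. */
	inode_dio_wait(inode);

	/* Drop pages that may have been populated mid-write. */
	invalidate_inode_pages2(inode->i_mapping);

	return generic_file_read_iter(iocb, to);
}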