On Fri, Jul 23, 2021 at 04:56:35PM +0100, Matthew Wilcox wrote:
> On Fri, Jul 23, 2021 at 11:23:38PM +0800, Gao Xiang wrote:
> > Hi Matthew,
> >
> > On Fri, Jul 23, 2021 at 04:05:29PM +0100, Matthew Wilcox wrote:
> > > On Thu, Jul 22, 2021 at 07:39:47AM +0200, Christoph Hellwig wrote:
> > > > @@ -675,7 +676,7 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
> > > >
> > > >  	flush_dcache_page(page);
> > > >  	addr = kmap_atomic(page);
> > > > -	memcpy(iomap->inline_data + pos, addr + pos, copied);
> > > > +	memcpy(iomap_inline_buf(iomap, pos), addr + pos, copied);
> > >
> > > This is wrong; pos can be > PAGE_SIZE, so this needs to be
> > > addr + offset_in_page(pos).
> >
> > Yeah, thanks for pointing that out. It seems so; since EROFS cannot
> > exercise such a write path, it was previously disabled explicitly.
> > I can update it as above in the next version.
>
> We're also missing a call to __set_page_dirty_nobuffers().  This
> matters to nobody right now -- erofs is read-only and gfs2 only
> supports inline data in the inode.  I presume what is happening
> for gfs2 is that at inode writeback time, it copies the ~60 bytes
> from the page cache into the inode and then schedules the inode
> for writeback.
>
> But logically, we should mark the page as dirty.  It'll be marked
> as dirty by ->mkwrite, should the page be mmaped, so gfs2 must
> already cope with a dirty page for inline data.

I'd suggest we still disable tail-packing inline for the buffered
write path until there is a real user to test it. I can see some
cases (page writeback, inode writeback, and inline conversion) that
are somewhat more complicated than just updating it like this.

I suggest this be implemented together with a real user; at the very
least that would provide a real write pattern and real paths for
testing.

I will send the next version, which, like my previous version,
disables it until some real fs user cares and works out a real
pattern.

Thanks,
Gao Xiang