On 1/4/24 08:36, Matthew Wilcox (Oracle) wrote:
> Turn the excellent documentation for this function into kernel-doc.
> Replace 'page' with 'folio' and make a few other minor updates.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  fs/buffer.c | 54 +++++++++++++++++++++++++++++------------------------
>  1 file changed, 30 insertions(+), 24 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 5c29850e4781..31e171382e00 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -687,30 +687,36 @@ void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
>  }
>  EXPORT_SYMBOL(mark_buffer_dirty_inode);
>
> -/*
> - * Add a page to the dirty page list.
> - *
> - * It is a sad fact of life that this function is called from several places
> - * deeply under spinlocking. It may not sleep.
> - *
> - * If the page has buffers, the uptodate buffers are set dirty, to preserve
> - * dirty-state coherency between the page and the buffers. It the page does
> - * not have buffers then when they are later attached they will all be set
> - * dirty.
> - *
> - * The buffers are dirtied before the page is dirtied. There's a small race
> - * window in which a writepage caller may see the page cleanness but not the
> - * buffer dirtiness. That's fine. If this code were to set the page dirty
> - * before the buffers, a concurrent writepage caller could clear the page dirty
> - * bit, see a bunch of clean buffers and we'd end up with dirty buffers/clean
> - * page on the dirty page list.
> - *
> - * We use private_lock to lock against try_to_free_buffers while using the
> - * page's buffer list. Also use this to protect against clean buffers being
> - * added to the page after it was set dirty.
> - *
> - * FIXME: may need to call ->reservepage here as well. That's rather up to the
> - * address_space though.
> +/**
> + * block_dirty_folio - Mark a folio as dirty.
> + * @mapping: The address space containing this folio.
> + * @folio: The folio to mark dirty.
> + *
> + * Filesystems which use buffer_heads can use this function as their
> + * ->dirty_folio implementation. Some filesystems need to do a little
> + * work before calling this function. Filesystems which do not use
> + * buffer_heads should call filemap_dirty_folio() instead.
> + *
> + * If the folio has buffers, the uptodate buffers are set dirty, to
> + * preserve dirty-state coherency between the folio and the buffers.
> + * It the folio does not have buffers then when they are later attached
> + * they will all be set dirty.
> + *
> + * The buffers are dirtied before the folio is dirtied. There's a small
> + * race window in which writeback may see the folio cleanness but not the
> + * buffer dirtiness. That's fine. If this code were to set the folio
> + * dirty before the buffers, writeback could clear the folio dirty flag,
> + * see a bunch of clean buffers and we'd end up with dirty buffers/clean
> + * folio on the dirty folio list.
> + *
> + * We use private_lock to lock against try_to_free_buffers() while
> + * using the folio's buffer list. This also prevents clean buffers
> + * being added to the folio after it was set dirty.
> + *
> + * Context: May only be called from process context. Does not sleep.
> + * Caller must ensure that @folio cannot be truncated during this call,
> + * typically by holding the folio lock or having a page in the folio
> + * mapped and holding the page table lock.

 * Return: tbd ?

> + */
>  bool block_dirty_folio(struct address_space *mapping, struct folio *folio)
>  {

--
#Randy
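
[Editor's illustration, not part of the thread above: a minimal sketch of the
usage the new kernel-doc describes, i.e. a buffer_head-based filesystem
pointing ->dirty_folio at block_dirty_folio(). The aops struct and filesystem
name are hypothetical; block_dirty_folio(), block_invalidate_folio() and
filemap_dirty_folio() are existing kernel functions.]

	#include <linux/buffer_head.h>
	#include <linux/fs.h>

	/*
	 * Hypothetical buffer_head-based filesystem: use the generic
	 * buffer_head helpers for dirtying and invalidating folios,
	 * as the kernel-doc for block_dirty_folio() recommends.
	 */
	static const struct address_space_operations examplefs_aops = {
		.dirty_folio		= block_dirty_folio,
		.invalidate_folio	= block_invalidate_folio,
	};

[A filesystem that does not use buffer_heads would set
.dirty_folio = filemap_dirty_folio instead, per the same kernel-doc.]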