Hi,

On 1/9/24 06:33, Matthew Wilcox (Oracle) wrote:
> The documentation for this function has become separated from it over
> time; move it to the right place and turn it into kernel-doc. Mild
> editing of the content to make it more about what the function does, and
> less about how it does it.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  fs/buffer.c | 44 ++++++++++++++++++++++++--------------------
>  1 file changed, 24 insertions(+), 20 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 071f01b28c90..25861241657f 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -2864,26 +2864,6 @@ int sync_dirty_buffer(struct buffer_head *bh)
>  }
>  EXPORT_SYMBOL(sync_dirty_buffer);
>
> -/*
> - * try_to_free_buffers() checks if all the buffers on this particular folio
> - * are unused, and releases them if so.
> - *
> - * Exclusion against try_to_free_buffers may be obtained by either
> - * locking the folio or by holding its mapping's i_private_lock.
> - *
> - * If the folio is dirty but all the buffers are clean then we need to
> - * be sure to mark the folio clean as well. This is because the folio
> - * may be against a block device, and a later reattachment of buffers
> - * to a dirty folio will set *all* buffers dirty. Which would corrupt
> - * filesystem data on the same device.
> - *
> - * The same applies to regular filesystem folios: if all the buffers are
> - * clean then we set the folio clean and proceed. To do that, we require
> - * total exclusion from block_dirty_folio(). That is obtained with
> - * i_private_lock.
> - *
> - * try_to_free_buffers() is non-blocking.
> - */
>  static inline int buffer_busy(struct buffer_head *bh)
>  {
>  	return atomic_read(&bh->b_count) |
> @@ -2917,6 +2897,30 @@ drop_buffers(struct folio *folio, struct buffer_head **buffers_to_free)
>  	return false;
>  }
>
> +/**
> + * try_to_free_buffers: Release buffers attached to this folio.

preferably s/_buffers: /_buffers - /

> + * @folio: The folio.
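
That is, with the usual kernel-doc convention of a dash between the function name and its short description, the header line suggested by the substitution above would look like:

```c
/**
 * try_to_free_buffers - Release buffers attached to this folio.
 * @folio: The folio.
 */
```

(Just illustrating the s/// above; the rest of the comment is unchanged.)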
> + *
> + * If any buffers are in use (dirty, under writeback, elevated refcount),
> + * no buffers will be freed.
> + *
> + * If the folio is dirty but all the buffers are clean then we need to
> + * be sure to mark the folio clean as well. This is because the folio
> + * may be against a block device, and a later reattachment of buffers
> + * to a dirty folio will set *all* buffers dirty. Which would corrupt
> + * filesystem data on the same device.
> + *
> + * The same applies to regular filesystem folios: if all the buffers are
> + * clean then we set the folio clean and proceed. To do that, we require
> + * total exclusion from block_dirty_folio(). That is obtained with
> + * i_private_lock.
> + *
> + * Exclusion against try_to_free_buffers may be obtained by either
> + * locking the folio or by holding its mapping's i_private_lock.
> + *
> + * Context: Process context. @folio must be locked. Will not sleep.
> + * Return: true if all buffers attached to this folio were freed.
> + */
>  bool try_to_free_buffers(struct folio *folio)
>  {
>  	struct address_space * const mapping = folio->mapping;

--
#Randy