On Tue, Oct 26, 2021 at 5:34 PM Pasha Tatashin <pasha.tatashin@xxxxxxxxxx> wrote:
>
> On Tue, Oct 26, 2021 at 3:50 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > On Tue, Oct 26, 2021 at 05:38:15PM +0000, Pasha Tatashin wrote:
> > >  static inline void page_ref_add(struct page *page, int nr)
> > >  {
> > > -	atomic_add(nr, &page->_refcount);
> > > +	int ret;
> > > +
> > > +	VM_BUG_ON(nr <= 0);
> > > +	ret = atomic_add_return(nr, &page->_refcount);
> > > +	VM_BUG_ON_PAGE(ret <= 0, page);
> >
> > This isn't right.  _refcount is allowed to overflow into the negatives.
> > See page_ref_zero_or_close_to_overflow() and the conversations that led
> > to it being added.
>
> #define page_ref_zero_or_close_to_overflow(page) \
> 	((unsigned int) page_ref_count(page) + 127u <= 127u)
>
> Uh, right, I saw the macro but did not realize there was an (unsigned int) cast.
>
> OK, I think we can move this macro inside:
> include/linux/page_ref.h
>
> and modify it to something like this:
>
> #define page_ref_zero_or_close_to_overflow(page, v) \
> 	((unsigned int) page_ref_count(page) + v + 127u <= v + 127u)
>
> The sub/dec can also be fixed to ensure that we do not underflow while
> still working with the fact that we use all 32 bits of _refcount.

I think we can do that by using atomic_fetch_*() and checking for
overflow/underflow after the operation. I will send the updated series
soon.

Pasha
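
For illustration only, a rough sketch of what such fetch-based checks might
look like (this is not the posted series; the exact invariants, the helper
placement, and the sub-side check below are assumptions). It reuses the
unsigned-comparison trick from page_ref_zero_or_close_to_overflow():

static inline void page_ref_add(struct page *page, int nr)
{
	int old;

	VM_BUG_ON(nr <= 0);
	/* atomic_fetch_add() returns the value _before_ the addition. */
	old = atomic_fetch_add(nr, &page->_refcount);
	/*
	 * Bug if the old count was zero (adding a reference to a freed
	 * page) or within nr + 127 of wrapping the counter.  The unsigned
	 * comparison mirrors page_ref_zero_or_close_to_overflow(), so the
	 * counter can still use all 32 bits of _refcount.
	 */
	VM_BUG_ON_PAGE((unsigned int)old + nr + 127u <= (unsigned int)nr + 127u, page);
}

static inline void page_ref_sub(struct page *page, int nr)
{
	int old;

	VM_BUG_ON(nr <= 0);
	/* atomic_fetch_sub() likewise returns the pre-subtraction value. */
	old = atomic_fetch_sub(nr, &page->_refcount);
	/*
	 * Illustrative underflow check: a small positive count must not be
	 * driven below zero.  Counts that have legitimately overflowed into
	 * the signed negatives map to large unsigned values and are not
	 * flagged here.
	 */
	VM_BUG_ON_PAGE((unsigned int)(old - nr) + 127u < 127u, page);
}

Compared to atomic_add_return(), the fetch variants make the pre-operation
value available, so the use-after-free case (old == 0) and the near-wrap
case can both be checked after the operation without a second read of the
counter.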