On 07.12.24 09:29, Mateusz Guzik wrote:
> Explicitly pre-checking the count adds nothing as atomic_add_unless
> starts with doing the same thing. iow no functional changes.
I recall that we added that check because with the hugetlb vmemmap
optimization, some of the tail pages we don't ever expect to be modified
(because they are fake-duplicated) might be mapped R/O.
If the arch implementation of atomic_add_unless() triggered an
unconditional write fault, we'd be in trouble. That would likely only be
the case if the arch provides a dedicated instruction.
atomic_add_unless()->raw_atomic_add_unless()
Nobody currently defines arch_atomic_add_unless().
raw_atomic_fetch_add_unless()->arch_atomic_fetch_add_unless() is defined
on some architectures.
I scanned some of the inline-asm, and I think most of them perform a
check first.
So this currently looks good to me, but we'll be relying on
atomic_add_unless() never triggering a write fault if the values
match, which makes me wonder if we should document that behavior of
atomic_add_unless().
--
Cheers,
David / dhildenb