On Thu, Feb 27, 2025 at 05:10:13PM -0500, Lyude Paul wrote:
> From: Boqun Feng <boqun.feng@xxxxxxxxx>
>
> Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> Signed-off-by: Lyude Paul <lyude@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/preempt.h | 18 ++++++++++++++++++
>  arch/s390/include/asm/preempt.h  | 19 +++++++++++++++++++
>  arch/x86/include/asm/preempt.h   | 10 ++++++++++
>  include/asm-generic/preempt.h    | 14 ++++++++++++++
>  4 files changed, 61 insertions(+)

...

> diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
> index 6ccd033acfe52..67a6e265e9fff 100644
> --- a/arch/s390/include/asm/preempt.h
> +++ b/arch/s390/include/asm/preempt.h
> @@ -98,6 +98,25 @@ static __always_inline bool should_resched(int preempt_offset)
>  	return unlikely(READ_ONCE(get_lowcore()->preempt_count) == preempt_offset);
>  }
>
> +static __always_inline int __preempt_count_add_return(int val)
> +{
> +	/*
> +	 * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES
> +	 * enabled, gcc 12 fails to handle __builtin_constant_p().
> +	 */
> +	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
> +		if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) {
> +			return val + __atomic_add_const(val, &get_lowcore()->preempt_count);
> +		}
> +	}
> +	return val + __atomic_add(val, &get_lowcore()->preempt_count);
> +}

This should just be

static __always_inline int __preempt_count_add_return(int val)
{
	return val + __atomic_add(val, &get_lowcore()->preempt_count);
}

since __atomic_add_const() won't return the original value. Well.. at
least it should not, but the way it is currently implemented it indeed
does sometimes, depending on config options - there is room for
improvement. That's my fault - going to address that.

I couldn't find any cover letter for the whole patch series which
describes what this is about and why it is needed. It looks like some
Rust enablement?
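
For what it's worth, here is a minimal user-space sketch of the
add-and-return pattern discussed above, using the GCC/Clang
__atomic_fetch_add() builtin as a stand-in for the kernel's s390
__atomic_add() (the demo_count/demo_add_return names are made up for
illustration): the underlying primitive must return the original value,
so that adding val once more yields the new count - which is exactly
what __atomic_add_const() does not guarantee.

#include <stdio.h>

static int demo_count;	/* stand-in for get_lowcore()->preempt_count */

static inline int demo_add_return(int val)
{
	/* fetch-and-add returns the *old* value; old + val is the new count */
	return val + __atomic_fetch_add(&demo_count, val, __ATOMIC_RELAXED);
}

int main(void)
{
	printf("%d\n", demo_add_return(1));	/* 1 */
	printf("%d\n", demo_add_return(1));	/* 2 */
	printf("%d\n", demo_add_return(-2));	/* 0 */
	return 0;
}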