On Fri, Sep 30, 2022 at 04:58:41PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-09-23 13:09:41 [-0700], Mark Gross wrote:
> > As this was a tricky one, I request people to give it a good look over.
>
> You did good. I, not so much. If you could please add the following patch
> on top, then it will also compile on !RT.
>
> Thank you for your work.
>
> ------->8----------
>
> From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> Date: Fri, 30 Sep 2022 16:55:34 +0200
> Subject: [PATCH] local_lock: Provide INIT_LOCAL_LOCK().
>
> The original code was using INIT_LOCAL_LOCK() and I tried to sneak
> around it and forgot that this code also needs to compile on !RT
> platforms.
>
> Provide INIT_LOCAL_LOCK() to initialize properly on RT and do nothing on
> !RT. Let random.c use it, which is the only user so far and does not
> compile on !RT otherwise.
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
>  drivers/char/random.c     | 4 ++--
>  include/linux/locallock.h | 5 +++++
>  2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index daea466812fed..86c475f70343d 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -236,7 +236,7 @@ struct crng {
>
>  static DEFINE_PER_CPU(struct crng, crngs) = {
>  	.generation = ULONG_MAX,
> -	.lock.lock = __SPIN_LOCK_UNLOCKED(crngs.lock.lock),
> +	.lock = INIT_LOCAL_LOCK(crngs.lock),
>  };
>
>  /* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
> @@ -515,7 +515,7 @@ struct batch_ ##type {					\
>  };								\
>								\
>  static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
> -	.lock.lock = __SPIN_LOCK_UNLOCKED(batched_entropy_ ##type.lock.lock), \
> +	.lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock),	\
>  	.position = UINT_MAX					\
>  };								\
>								\
> diff --git a/include/linux/locallock.h b/include/linux/locallock.h
> index 0c3ff5b23f6a1..70af9a177197e 100644
> --- a/include/linux/locallock.h
> +++ b/include/linux/locallock.h
> @@ -22,6 +22,8 @@ struct local_irq_lock {
>  	unsigned long flags;
>  };
>
> +#define INIT_LOCAL_LOCK(lvar)	{ .lock = __SPIN_LOCK_UNLOCKED((lvar).lock.lock) }
> +
>  #define DEFINE_LOCAL_IRQ_LOCK(lvar)				\
>  	DEFINE_PER_CPU(struct local_irq_lock, lvar) = {		\
>  		.lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
> @@ -256,6 +258,9 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
>
>  #else /* PREEMPT_RT_BASE */
>
> +struct local_irq_lock { };
> +#define INIT_LOCAL_LOCK(lvar)	{ }
> +
>  #define DEFINE_LOCAL_IRQ_LOCK(lvar)	__typeof__(const int) lvar
>  #define DECLARE_LOCAL_IRQ_LOCK(lvar)	extern __typeof__(const int) lvar
>
> --
> 2.37.2
>
> Sebastian

Thanks! I've applied this and started some testing. I've also pushed the
update to v4.9-rt-next if anyone feels like giving it a spin before I make
the release. I think I'll finally get the release done in the next few
days.

--mark