On Mon, Oct 21, 2019 at 02:19:01PM +0200, Rasmus Villemoes wrote:
> On 21/10/2019 13.33, Christian Brauner wrote:
> > The first approach used smp_load_acquire() and smp_store_release().
> > However, after having discussed this it seems that the data dependency
> > for kmem_cache_alloc() would be fixed by WRITE_ONCE().
> > Furthermore, the smp_load_acquire() would only manage to order the stats
> > check before the thread_group_empty() check. So it seems just using
> > READ_ONCE() and WRITE_ONCE() will do the job and I wanted to bring this
> > up for discussion at least.
> >
> > /* v6 */
> > - Christian Brauner <christian.brauner@xxxxxxxxxx>:
> >   - bring up READ_ONCE()/WRITE_ONCE() approach for discussion
> > ---
> >  kernel/taskstats.c | 26 +++++++++++++++-----------
> >  1 file changed, 15 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/taskstats.c b/kernel/taskstats.c
> > index 13a0f2e6ebc2..111bb4139aa2 100644
> > --- a/kernel/taskstats.c
> > +++ b/kernel/taskstats.c
> > @@ -554,25 +554,29 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
> >  static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
> >  {
> >  	struct signal_struct *sig = tsk->signal;
> > -	struct taskstats *stats;
> > +	struct taskstats *stats_new, *stats;
> >
> > -	if (sig->stats || thread_group_empty(tsk))
> > -		goto ret;
> > +	/* Pairs with WRITE_ONCE() below. */
> > +	stats = READ_ONCE(sig->stats);
> > +	if (stats || thread_group_empty(tsk))
> > +		return stats;
> >
> >  	/* No problem if kmem_cache_zalloc() fails */
> > -	stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> > +	stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> >
> >  	spin_lock_irq(&tsk->sighand->siglock);
> > -	if (!sig->stats) {
> > -		sig->stats = stats;
> > -		stats = NULL;
> > +	if (!stats) {
> > +		stats = stats_new;
> > +		/* Pairs with READ_ONCE() above. */
> > +		WRITE_ONCE(sig->stats, stats_new);
> > +		stats_new = NULL;
>
> No idea about the memory ordering issues, but don't you need to
> load/check sig->stats again? Otherwise it seems that two threads might
> both see !sig->stats, both allocate a stats_new, and both
> unconditionally in turn assign their stats_new to sig->stats. Then the
> first assignment ends up becoming a memory leak (and any writes through
> that pointer done by the caller end up in /dev/null...)

Trigger hand too fast.
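To spell the race out (my paraphrase of your point, using the calls from
the v6 hunk above; purely illustrative, not part of any patch): without
re-reading sig->stats under siglock, two callers can interleave like this:

	T1: stats = READ_ONCE(sig->stats);       /* sees NULL */
	T2: stats = READ_ONCE(sig->stats);       /* also sees NULL */
	T1: stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
	T2: stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
	T1: spin_lock_irq(&tsk->sighand->siglock);
	T1: WRITE_ONCE(sig->stats, stats_new);   /* installs T1's allocation */
	T1: spin_unlock_irq(&tsk->sighand->siglock);
	T2: spin_lock_irq(&tsk->sighand->siglock);
	T2: WRITE_ONCE(sig->stats, stats_new);   /* overwrites it with T2's */
	T2: spin_unlock_irq(&tsk->sighand->siglock);

T1's taskstats is leaked and anything already accounted through T1's
pointer is lost, which is exactly the problem you describe.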
I guess you're thinking of something like:

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..c4e1ed11e785 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,25 +554,27 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
 static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
 {
 	struct signal_struct *sig = tsk->signal;
-	struct taskstats *stats;
+	struct taskstats *stats_new, *stats;

-	if (sig->stats || thread_group_empty(tsk))
-		goto ret;
+	stats = READ_ONCE(sig->stats);
+	if (stats || thread_group_empty(tsk))
+		return stats;

-	/* No problem if kmem_cache_zalloc() fails */
-	stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+	stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

 	spin_lock_irq(&tsk->sighand->siglock);
-	if (!sig->stats) {
-		sig->stats = stats;
-		stats = NULL;
+	stats = READ_ONCE(sig->stats);
+	if (!stats) {
+		stats = stats_new;
+		WRITE_ONCE(sig->stats, stats_new);
+		stats_new = NULL;
 	}
 	spin_unlock_irq(&tsk->sighand->siglock);

-	if (stats)
-		kmem_cache_free(taskstats_cache, stats);
-ret:
-	return sig->stats;
+	if (stats_new)
+		kmem_cache_free(taskstats_cache, stats_new);
+
+	return stats;
 }
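For readability, this is how taskstats_tgid_alloc() would end up looking
with that diff applied (assembled from the hunks above; the inline comments
are my annotations, not part of the patch):

static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
	struct signal_struct *sig = tsk->signal;
	struct taskstats *stats_new, *stats;

	/* Opportunistic lockless check; re-checked under siglock below. */
	stats = READ_ONCE(sig->stats);
	if (stats || thread_group_empty(tsk))
		return stats;

	/* Allocation may go unused if we lose the race under siglock. */
	stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

	spin_lock_irq(&tsk->sighand->siglock);
	/* Re-read under the lock so concurrent allocators pick one winner. */
	stats = READ_ONCE(sig->stats);
	if (!stats) {
		stats = stats_new;
		WRITE_ONCE(sig->stats, stats_new);
		stats_new = NULL;
	}
	spin_unlock_irq(&tsk->sighand->siglock);

	/* We allocated but lost the race: drop the unused allocation. */
	if (stats_new)
		kmem_cache_free(taskstats_cache, stats_new);

	return stats;
}

The second READ_ONCE() under siglock is what closes the window you pointed
out: only the thread that still sees NULL there installs its allocation,
everyone else frees theirs and returns the winner's pointer.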