On Tue, Apr 25, 2023 at 09:32:46PM +0800, Alan Huang wrote:
> > On Apr 25, 2023, at 00:02, Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> > 
> > On Thu, Apr 20, 2023 at 07:40:30AM +0000, Alan Huang wrote:
> >> Signed-off-by: Alan Huang <mmpgouride@xxxxxxxxx>
> >> ---
> >>  CodeSamples/defer/hazptrtorture.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >> 
> >> diff --git a/CodeSamples/defer/hazptrtorture.h b/CodeSamples/defer/hazptrtorture.h
> >> index 29761e3d..acdd532b 100644
> >> --- a/CodeSamples/defer/hazptrtorture.h
> >> +++ b/CodeSamples/defer/hazptrtorture.h
> >> @@ -99,7 +99,7 @@ void *hazptr_read_perf_test(void *arg)
> >>  {
> >>  	int i;
> >>  	int me = (long)arg;
> >> -	int base = me * K;
> >> +	int base = smp_thread_id() * K;
> > 
> > Suppose you specify a number of threads greater than the number of CPUs.
> > For example, on my 12-hardware-thread laptop:
> > 
> > 	./route_hazptr --stresstest --nreaders 24
> > 
> > In that case, don't we want "me" rather than "smp_thread_id()"?

Never mind, I was confused.  Maybe I should have waited another day
after returning before looking at this.  :-/

$ ./hazptr 24 perf
sched_setaffinity: Invalid argument
Aborted (core dumped)

But that is a pre-existing problem.  If I keep the number of threads
within the number of hardware threads, it works fine:

$ ./hazptr 12 perf
n_reads: 576608000  n_updates: 314343  nreaders: 12  nupdaters: 1  duration: 1
ns/read: 20.8114  ns/update: 3181.24

I am not all that worried about this.  Running multiple threads per
hardware thread in a performance test isn't all that useful, after all.
And there is a robust diagnostic.  Perhaps not as helpful as one might
like, but definitely robust.  ;-)

> As Akira said, route_hazptr.c includes routetorture.h and route_hazptr.c doesn't call hp_record().
> The fix won't have any effect on route_hazptr.

Agreed, again, my post-vacation confusion, apologies!!!

							Thanx, Paul
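
For reference, below is a minimal standalone sketch of the failure mode seen
in the "./hazptr 24 perf" run above.  It is not code from the perfbook tree;
it assumes (as the "Aborted (core dumped)" output suggests) that each test
thread tries to pin itself to a CPU whose number matches its thread index
and treats a failed pin as fatal.  Asking sched_setaffinity() for a CPU
number that does not exist on the system returns EINVAL, which perror()
reports as the "Invalid argument" message quoted above:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t mask;
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* e.g., 12 on the laptop above. */

	CPU_ZERO(&mask);
	CPU_SET((int)ncpus, &mask);  /* One past the largest valid CPU number. */
	if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
		perror("sched_setaffinity");  /* Prints "sched_setaffinity: Invalid argument". */
		abort();  /* Stand-in (assumed) for the harness's fatal error path. */
	}
	return 0;
}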