On Wed, Sep 12, 2018 at 11:49:22AM -0400, Waiman Long wrote:
> > unless our macrology has got too clever for the compiler to see through
> > it. In which case, the right answer is to simplify the percpu code,
> > not to force the compiler to optimise the code in the way that makes
> > sense for your current microarchitecture.
> >
> I had actually looked at the x86 object file generated to verify that it
> did use cmove with the patch and use branch without. It is possible that
> there are other twists to make that happen with the above expression. I
> will need to run some experiments to figure it out. In the meantime, I
> am fine with dropping this patch as it is a micro-optimization that
> doesn't change the behavior at all.

I don't understand why you included it, to be honest. But it did get
me looking at the percpu code to see if it was too clever. And that
led to the resubmission of rth's patch from two years ago that I cc'd
you on earlier.

With that patch applied, gcc should be able to choose to use the cmov
if it feels that would be a better optimisation. It already makes one
different decision in dcache.o, namely that it uses
addq $0x1,%gs:0x0(%rip) instead of incq %gs:0x0(%rip). Apparently this
performs better on some CPUs.

So I wouldn't spend any more time on this patch.
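
For anyone following along, the named-address-space approach boils down
to something like the sketch below (assuming gcc's x86 __seg_gs
extension, gcc 6 or later; the function name is made up and this is not
the actual patch or the kernel's percpu macros, just an illustration):

#ifdef __SEG_GS
/*
 * Sketch: increment a per-cpu counter through a %gs-relative pointer.
 * The increment is an ordinary C memory operation as far as gcc is
 * concerned, so it is free to emit incq %gs:... or addq $0x1,%gs:...
 * (or use a cmov elsewhere) according to its tuning model.
 */
static inline void pcpu_counter_inc(unsigned long *ptr)
{
	(*(volatile __seg_gs unsigned long *)ptr)++;
}
#endif

The point being that once the access is visible to gcc as a plain
memory operand, the branch-vs-cmov and inc-vs-add choices are its cost
model's call, not ours.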