On Wed, Feb 27, 2019 at 04:03:25PM +1000, Nicholas Piggin wrote:
> Matthew Wilcox's on February 27, 2019 3:27 am:
> > 2. The setup overhead of the XA_STATE might be a problem.
> > If so, we can do some batching in order to improve things.
> > I suspect your test is calling __clear_shadow_entry through the
> > truncate_exceptional_pvec_entries() path, which is already a batch.
> > Maybe something like patch [1] at the end of this mail.
>
> One nasty thing about the XA_STATE stack object, as opposed to just
> passing the parameters (in the same order) down to children, is that
> you get the same memory accessed nearby, but in different ways
> (different base register, offset, addressing mode, etc).  Which can
> reduce the effectiveness of memory disambiguation prediction, at
> least in the cold predictor case.

That is nasty.  At the C level, it's a really attractive pattern.
Shame it doesn't work out so well on hardware.  I wouldn't mind
turning shift/sibs/offset into a manually-extracted unsigned long
if that'll help with the addressing mode mispredictions?

> I've seen (on some POWER CPUs at least) flushes due to aliasing
> access in some of these xarray call chains, although no idea if
> that actually makes a noticeable difference in a microbenchmark
> like this.
>
> But it's not the greatest pattern to use for passing to low level
> performance critical functions :( Ideally the compiler could just
> do a big LTO pass right at the end and unwind it all back into
> registers and fix everything, but that will never happen.

I wonder if we could get the compiler people to introduce a structure
attribute telling the compiler to pass this whole thing back-and-forth
in registers ... 6 registers is a lot to ask the compiler to reserve
though.