* Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Mon, Sep 2, 2013 at 12:05 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
> >
> > The Haswell perf code isn't very widely tested yet as it took quite
> > some time to get it ready for upstream and thus got merged late, but
> > on its face this looks like a pretty good profile.
>
> Yes. And everything else looks fine too. Profiles without locked
> instructions all look very reasonable, and have the expected patterns.
>
> > It still looks anomalous to me, on fresh Intel hardware. One
> > suggestion: could you, just for pure testing purposes, turn HT off
> > and do a quick profile that way?
> >
> > The XADD, even if it's all in the fast path, could be a pretty
> > natural point to 'yield' an SMT context on a given core, giving it
> > artificially high overhead.
> >
> > Note that to test HT off an intrusive reboot is probably not needed,
> > if the HT siblings are right after each other in the CPU enumeration
> > sequence then you can turn HT "off" effectively by running the
> > workload only on 4 cores:
> >
> >   taskset 0x55 ./my-test
> >
> > and reducing the # of your workload threads to 4 or so.
>
> Remember: I see the exact same profile for single-thread behavior.

Oh, indeed.

> Other things change (iow, lockref_get_or_lock() is either ~3% or ~30% -
> the latter case is for when there are bouncing cachelines), but
> lg_local_lock() stays pretty constant.
>
> So it's not a HT artifact or anything like that.
>
> I've timed "lock xadd" separately, and it's not a slow instruction. I
> also tried (in user space, using thread-local storage) to see if it's
> the combination of creating the address through a segment load and
> that somehow causing a micro-exception or something (the P4 used to
> have things like that), and that doesn't seem to account for it
> either.
>
> It is entirely possible that it is just a "cycles:pp" oddity - because
> the "lock xadd" is serializing, it can't retire until everything
> around it has been sorted out, and maybe it just shows up in profiles
> more than is really "fair" to the instruction itself, because it ends
> up being that stable point for potentially hundreds of instructions
> around it.

One more thing to try would be a regular '-e cycles' non-PEBS run and
see whether there's still largish overhead visible around that
instruction.

That reintroduces skid, but it eliminates any PEBS and LBR funnies, as
our cycles:pp event is a really tricky/complex beast internally.

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
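
For concreteness, a rough sketch of the two experiments discussed above:
pinning the workload to one HT sibling per core, and comparing a plain
'cycles' run against the 'cycles:pp' PEBS run. The 0x55 mask and the
./my-test name are just the placeholders from the mail; check the sibling
enumeration first, since the mask only approximates "HT off" when siblings
are enumerated adjacently (cpu0/cpu1, cpu2/cpu3, ...):

  # See how HT siblings are enumerated on the target box (standard sysfs
  # topology files). With adjacent siblings, 0x55 = CPUs 0,2,4,6 uses one
  # sibling per physical core and leaves the other idle.
  grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

  # Run the workload on 4 cores with no SMT sharing ('./my-test' is the
  # placeholder workload from above):
  taskset 0x55 ./my-test

  # Precise (PEBS) profile, as in the reported numbers:
  perf record -e cycles:pp ./my-test
  perf report

  # Plain non-PEBS cycles run for comparison - reintroduces skid, but
  # avoids any PEBS/LBR artifacts:
  perf record -e cycles ./my-test
  perf report

The perf runs can of course be combined with the taskset mask
(taskset 0x55 perf record ...) to profile the HT-off case directly.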