On Thu, Oct 26, 2023, Mingwei Zhang wrote:
> On Thu, Oct 26, 2023, Sean Christopherson wrote:
> > Heh, already did this too.  Though I'm not entirely sure it's more
> > readable.  It's definitely more precise and featured :-)
>
> Oh dear, this is challenging to my rusty inline assembly skills :)
>
> > #define GUEST_MEASURE_EVENT(_msr, _value, clflush, FEP)			\
> > do {									\
> > 	__asm__ __volatile__("wrmsr\n\t"					\
> > 			     clflush "\n\t"					\
> > 			     "mfence\n\t"					\
> > 			     "1: mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t"	\
> > 			     FEP "loop .\n\t"					\
> > 			     FEP "mov %%edi, %%ecx\n\t"				\
> > 			     FEP "xor %%eax, %%eax\n\t"				\
> > 			     FEP "xor %%edx, %%edx\n\t"				\
> > 			     "wrmsr\n\t"					\
> > 			     : "+c"((int){_msr})				\
>
> isn't it NUM_BRANCHES?

Nope.  It's hard to see because this doesn't provide the usage, but @_msr is an
MSR index that is consumed by the first "wrmsr", i.e. this blob relies on the
compiler to preload ECX, EAX, and EDX for WRMSR.  NUM_BRANCHES is manually
loaded into ECX after WRMSR (WRMSR and LOOP both hardcode consuming ECX).

Ha!  I can actually drop the "+c" clobbering trick since ECX is restored to its
input value before the asm blob finishes.

EDI is also loaded with @_msr so that it can be quickly reloaded into ECX for
the WRMSR to disable the event.

The purpose of doing both WRMSRs in assembly is to ensure the compiler doesn't
insert _any_ code into the measured sequence, e.g. so that a random Jcc doesn't
throw off instructions retired.

> > 			     : "a"((uint32_t)_value), "d"(_value >> 32),	\
> > 			       "D"(_msr)					\
> > 			     );							\
> > } while (0)
>
> do we need this label '1:' in the above code?  It does not seem to be
> used anywhere within the code.

It's used by the caller as the target for CLFLUSH{,OPT}.

	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))				\
		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt 1f", FEP);	\
	else if (this_cpu_has(X86_FEATURE_CLFLUSH))				\
		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflush 1f", FEP);	\
	else									\
		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "nop", FEP);		\

> why is clflush needed here?
As suggested by Jim, it allows verifying LLC references and misses by forcing the CPU to evict the cache line that holds the MOV at 1: (and likely holds most of the asm blob).