On Tue, May 17, 2022, Aaron Lewis wrote:
> When returning from the compare function the u64 is truncated to an
> int. This results in a loss of the high nybble[1] in the event select
> and its sign if that nybble is in use. Switch from using a result that
> can end up being truncated to a result that can only be: 1, 0, -1.
>
> [1] bits 35:32 in the event select register and bits 11:8 in the event
> select.
>
> Fixes: 7ff775aca48ad ("KVM: x86/pmu: Use binary search to check filtered events")
> Signed-off-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
> ---
>  arch/x86/kvm/pmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index eca39f56c231..1666e9d3e545 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -173,7 +173,7 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
>
>  static int cmp_u64(const void *a, const void *b)
>  {
> -	return *(__u64 *)a - *(__u64 *)b;
> +	return (*(u64 *)a > *(u64 *)b) - (*(u64 *)a < *(u64 *)b);

On one hand, this is downright evil. On the other, it does generate branch-free
code, whereas gcc does not for explicit returns...

It's a little easier to read if the values are captured in local variables?

	u64 l = *(u64 *)a;
	u64 r = *(u64 *)b;

	return (l > r) - (l < r);

Either way,

Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
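
For readers following along outside the kernel tree, below is a minimal
standalone userspace sketch (not part of the patch; it uses <stdint.h> and
<stdio.h> rather than kernel types) showing how the subtraction-based compare
loses information once the u64 result is truncated to int, and how the
branch-free (l > r) - (l < r) form avoids it:

	#include <stdio.h>
	#include <stdint.h>

	/* Buggy form: the u64 difference is implicitly truncated to int. */
	static int cmp_u64_sub(const void *a, const void *b)
	{
		return *(uint64_t *)a - *(uint64_t *)b;
	}

	/* Fixed form: result can only be -1, 0, or 1, so nothing is lost. */
	static int cmp_u64(const void *a, const void *b)
	{
		uint64_t l = *(uint64_t *)a;
		uint64_t r = *(uint64_t *)b;

		return (l > r) - (l < r);
	}

	int main(void)
	{
		/* Values that differ only in bit 32, i.e. in the high nybble. */
		uint64_t lo = 0x0000000000000001ull;
		uint64_t hi = 0x0000000100000001ull;

		/* The u64 difference is a multiple of 1ull << 32, so the low
		 * 32 bits are all zero and the truncated result is 0, i.e.
		 * the buggy compare reports the two values as equal.
		 */
		printf("subtraction:  %d\n", cmp_u64_sub(&lo, &hi)); /* 0  */
		printf("branch-free:  %d\n", cmp_u64(&lo, &hi));     /* -1 */
		return 0;
	}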