On 19/06/2018 19:23, Joe Perches wrote:
> On Tue, 2018-06-19 at 10:08 -0700, Nick Desaulniers wrote:
>> On Tue, Jun 19, 2018 at 8:19 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>>>
>>> On 15/06/2018 20:45, Nick Desaulniers wrote:
>>>>>
>>>>>> In any case I think it is preferable to fix the code over disabling
>>>>>> the warning, unless the warning is bogus or there are just too many
>>>>>> occurrences.
>>>>>
>>>>> Maybe.
>>>>
>>>> Spurious warning today, actual bug tomorrow? I prefer not to
>>>> disable warnings wholesale. They don't need to find actual bugs to be
>>>> useful. Flagging code that can be further specified does not hurt.
>>>> Part of the effort to compile the kernel with different compilers is
>>>> to add warning coverage, not remove it. That said, there may be
>>>> warnings that are never useful (or at least due to some invariant that
>>>> only affects the kernel). I can't think of any off the top of my head,
>>>> but I'm also not sure this is one.
>>>
>>> This one really makes the code uglier though, so I'm not really inclined
>>> to apply the patch.
>>
>> Note that of the three variables (w, u, x), only u is used later on.
>> What about declaring them as negated with the cast, that way there's
>> no cast in a ternary?
>
> It'd be simpler to cast in the BYTE_MASK macro itself

I don't think that would work, as the ~ would be done on a zero-extended
signed int.

Paolo

> Ex:
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index d594690d8b95..53673ad4b295 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4261,8 +4261,9 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
>>  {
>>  	unsigned byte;
>>
>> -	const u8 x = BYTE_MASK(ACC_EXEC_MASK);
>> -	const u8 w = BYTE_MASK(ACC_WRITE_MASK);
>> +	const u8 x_not = (u8)~BYTE_MASK(ACC_EXEC_MASK);
>> +	const u8 w_not = (u8)~BYTE_MASK(ACC_WRITE_MASK);
>> +	const u8 u_not = (u8)~BYTE_MASK(ACC_USER_MASK);
>>  	const u8 u = BYTE_MASK(ACC_USER_MASK);
>>
>>  	bool cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
>> @@ -4278,11 +4279,11 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
>>  		 */
>>
>>  		/* Faults from writes to non-writable pages */
>> -		u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0;
>> +		u8 wf = (pfec & PFERR_WRITE_MASK) ? w_not : 0;
>>  		/* Faults from user mode accesses to supervisor pages */
>> -		u8 uf = (pfec & PFERR_USER_MASK) ? ~u : 0;
>> +		u8 uf = (pfec & PFERR_USER_MASK) ? u_not : 0;
>>  		/* Faults from fetches of non-executable pages*/
>> -		u8 ff = (pfec & PFERR_FETCH_MASK) ? ~x : 0;
>> +		u8 ff = (pfec & PFERR_FETCH_MASK) ? x_not : 0;
>>  		/* Faults from kernel mode fetches of user pages */
>>  		u8 smepf = 0;
>>  		/* Faults from kernel mode accesses of user pages */
>>
>> Maybe you have a better naming scheme than *_not? What do you think?
>
> It'd be nicer to cast in the BYTE_MASK macro
> and using "unsigned byte;" is misleading at best.
>
> ---
>  arch/x86/kvm/mmu.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index d594690d8b95..201711aa99b9 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4246,15 +4246,14 @@ reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
>  				    boot_cpu_data.x86_phys_bits, execonly);
>  }
>
> -#define BYTE_MASK(access) \
> -	((1 & (access) ? 2 : 0) | \
> -	 (2 & (access) ? 4 : 0) | \
> -	 (3 & (access) ? 8 : 0) | \
> -	 (4 & (access) ? 16 : 0) | \
> -	 (5 & (access) ? 32 : 0) | \
> -	 (6 & (access) ? 64 : 0) | \
> -	 (7 & (access) ? 128 : 0))
> -
> +#define BYTE_MASK(access) \
> +	((u8)(((access) & 1 ? 2 : 0) | \
> +	      ((access) & 2 ? 4 : 0) | \
> +	      ((access) & 3 ? 8 : 0) | \
> +	      ((access) & 4 ? 16 : 0) | \
> +	      ((access) & 5 ? 32 : 0) | \
> +	      ((access) & 6 ? 64 : 0) | \
> +	      ((access) & 7 ? 128 : 0)))
>
>  static void update_permission_bitmask(struct kvm_vcpu *vcpu,
>  				      struct kvm_mmu *mmu, bool ept)
>
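
For reference, here is a minimal, self-contained user-space sketch of the promotion issue described above. It is an illustration rather than kernel code: uint8_t stands in for the kernel's u8, and 0xcc is just an example mask value of the kind BYTE_MASK() produces. The point is that ~ only ever operates on its operand after it has been promoted (zero-extended) to int, so a cast inside BYTE_MASK() cannot keep the high bits out; the cast has to come after the ~, as in the x_not/w_not/u_not variant.

#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;	/* stand-in for the kernel's u8 */

int main(void)
{
	const u8 w = 0xcc;	/* example mask value */

	/*
	 * Integer promotion: w is zero-extended to int before the ~ is
	 * applied, so ~w is 0xffffff33 (-205), not 0x33.  Implicitly
	 * truncating that back into a u8, as in "u8 wf = cond ? ~w : 0;",
	 * is what the compiler complains about.
	 */
	int promoted = ~w;

	/* Casting after the ~ keeps only the low byte, explicitly. */
	u8 w_not = (u8)~w;

	printf("~w     = %d (0x%x)\n", promoted, (unsigned int)promoted);
	printf("(u8)~w = 0x%x\n", w_not);
	return 0;
}

Assigning a value like ~w straight into a u8, as the original wf/uf/ff lines do, is what draws the warning under discussion; doing the cast after the ~, whether inline or via precomputed *_not constants, avoids it.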