RE: [PATCH] x86/cpu: correct values for GDT_ENTRY_INIT

From: Andrew Cooper
> Sent: 26 November 2020 23:52
> 
> On 26/11/2020 19:15, Andy Lutomirski wrote:
> > On Thu, Nov 26, 2020 at 11:07 AM Lukas Bulwahn <lukas.bulwahn@xxxxxxxxx> wrote:
> >> On Thu, Nov 26, 2020 at 6:16 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> >>> On 26/11/2020 11:54, Lukas Bulwahn wrote:
> >>>> Commit 1e5de18278e6 ("x86: Introduce GDT_ENTRY_INIT()") unintentionally
> >>>> transformed a few 0xffff values to 0xfffff (note: five times "f" instead of
> >>>> four) as part of the refactoring.
> >>> The transformation in that change is correct.
> >>>
> >>> Segment bases are 20 bits wide in x86,
> 
> I of course meant segment limits here, rather than bases.
> 
> >>> Does:
> >>>
> >>> diff --git a/arch/x86/include/asm/desc_defs.h b/arch/x86/include/asm/desc_defs.h
> >>> index f7e7099af595..9561f3c66e9e 100644
> >>> --- a/arch/x86/include/asm/desc_defs.h
> >>> +++ b/arch/x86/include/asm/desc_defs.h
> >>> @@ -22,7 +22,7 @@ struct desc_struct {
> >>>
> >>>  #define GDT_ENTRY_INIT(flags, base, limit)                     \
> >>>         {                                                       \
> >>> -               .limit0         = (u16) (limit),                \
> >>> +               .limit0         = (u16) (limit) & 0xFFFF,       \
> >>>                 .limit1         = ((limit) >> 16) & 0x0F,       \
> >>>                 .base0          = (u16) (base),                 \
> >>>                 .base1          = ((base) >> 16) & 0xFF,        \
> >>>
> >>> fix the warning?
> >>>
> >> Thanks, I will try that out, and try compiling a 32-bit kernel as well.
> > You should also try comparing the objdump output before and after your
> > patch.  objdump -D will produce bizarre output but should work.
> 
> Expanding on this a little, if that does indeed fix the sparse warning,
> then I'd make an argument for this being a bug in sparse.  Explicitly
> casting to u16 is semantically and intentionally identical to & 0xffff.
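
For the record, here is a minimal stand-alone sketch (plain user-space
C, not kernel code; the variable names only mirror the limit0/limit1
fields of GDT_ENTRY_INIT and are otherwise made up for illustration)
showing that the cast and the mask pull out the same low 16 bits of a
20-bit limit:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int limit = 0xfffff;	/* 20-bit limit, all ones */

		/* What the macro does today vs. what the diff proposes. */
		uint16_t limit0_cast = (uint16_t)limit;
		uint16_t limit0_mask = limit & 0xffff;
		/* The remaining four bits, as .limit1 stores them. */
		unsigned int limit1 = (limit >> 16) & 0x0f;

		/* Prints "0xffff 0xffff 0xf": cast and mask agree. */
		printf("%#x %#x %#x\n", (unsigned)limit0_cast,
		       (unsigned)limit0_mask, limit1);
		return 0;
	}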

Even the (u16) cast is pointless.
I don't think current versions of gcc are as stupid as the old ones,
but I have seen:
	*cp = (char)(x & 0xff);
generate code that masks with 0xff, masks with 0xff again,
and then does a byte store.

I have a strong dislike of the use of integer casts to silence
compiler warnings.
Casts should be rare because they can hide very nasty bugs,
although the 'pointer to integer of different size' warning
does pick up most of the bad ones.
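
As a stand-alone illustration (store_byte() is a made-up example, not
anything from the kernel): all three assignments below store the same
byte, because the byte store itself truncates to 8 bits. The cast and
the mask add nothing except, with a poor compiler, redundant masking
instructions:

	void store_byte(unsigned char *cp, unsigned int x)
	{
		cp[0] = (char)(x & 0xff);	/* cast + mask, as above */
		cp[1] = x & 0xff;		/* mask only */
		cp[2] = x;			/* plain store; may trip -Wconversion */
	}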

	David




