Re: [PATCH v6 1/7] KVM: x86: Deflect unknown MSR accesses to user space

On Wed, Sep 16, 2020 at 11:31:30AM +0200, Alexander Graf wrote:
> On 03.09.20 21:27, Aaron Lewis wrote:
> > > @@ -412,6 +414,15 @@ struct kvm_run {
> > >                          __u64 esr_iss;
> > >                          __u64 fault_ipa;
> > >                  } arm_nisv;
> > > +               /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
> > > +               struct {
> > > +                       __u8 error; /* user -> kernel */
> > > +                       __u8 pad[3];
> > 
> > __u8 pad[7] to maintain 8 byte alignment?  Unless we can get away
> > with fewer bits for 'reason' and get them from 'pad'.
> 
> Why would we need 8 byte alignment here? I always thought natural u64
> alignment on x86_64 was 4 bytes?

A u64 will usually (always?) be 8-byte aligned by the compiler.  "Natural"
alignment means an object is aligned to its own size, e.g. an 8-byte object
can split a cache line if it's only aligned on a 4-byte boundary.
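
A minimal sketch of the effect on x86_64 (hypothetical struct and field
names, not the actual kvm_run layout): the compiler places the u64 on its
natural 8-byte boundary either way, so spelling out pad[7] just makes the
padding explicit in the ABI instead of leaving it as hidden compiler
padding.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* u64 follows a lone u8: the compiler inserts 7 bytes of hidden padding. */
struct implicit_pad {
	uint8_t  error;		/* offset 0 */
	/* 7 bytes of compiler-inserted padding */
	uint64_t data;		/* offset 8: naturally aligned to its size */
};

/* Same layout, but the padding is spelled out in the structure. */
struct explicit_pad {
	uint8_t  error;		/* offset 0 */
	uint8_t  pad[7];	/* offsets 1-7, visible in the ABI */
	uint64_t data;		/* offset 8 */
};

int main(void)
{
	printf("implicit: data at %zu, size %zu\n",
	       offsetof(struct implicit_pad, data),
	       sizeof(struct implicit_pad));
	printf("explicit: data at %zu, size %zu\n",
	       offsetof(struct explicit_pad, data),
	       sizeof(struct explicit_pad));
	return 0;
}

On x86_64 both variants put 'data' at offset 8 and occupy 16 bytes; the
explicit pad only documents the bytes the compiler reserves anyway and
keeps them defined for future use.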
