Re: Magic Page in e500v2

Hi Alex,

> > Hi Aashish,
> >
> > On 05/21/2012 01:51 PM, Aashish Mittal wrote:
> >> Hi,
> >> I'm working on KVM optimizations for the PowerPC e500v2 embedded
> >> architecture. For my project I'm trying to increase the size of the
> >> shared region mapped by the Magic Page between host and guest for
> >> paravirtual support. I expected this to be possible since we use a
> >> TLB1 entry to map the magic page inside the host. I'm trying to
> >> increase its size to 1 MB. I've declared a shared structure tcache
> >> of 1 MB, similar to vcpu->arch.shared, and I'm trying to map it into
> >> guest virtual space. What was previously the shared page is now the
> >> last page of this tcache structure.
> >>
> >> This is the modified code:
> >>
> >> Initialization in e500.c, in kvmppc_core_vcpu_create:
> >>
> >>         shared = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 10);
> >>         vcpu->arch.tcache = (void *)shared;
> >>         vcpu->arch.shared = (void *)(shared +
> >>                                      (((1 << 10) - 1) << PAGE_SHIFT));
> >
> > Did you also change the shared page elements to still be within the
> > first page? Otherwise the offset wouldn't fit into the immediate fields
> > of the asm instructions. We can't reach as low as -1MB with all
> > operations.
> >

 Since I'm keeping the shared (magic) page as the last page of this 1 MB
 section, and I'm mapping the guest virtual addresses 0xfff00000 through
 0xffffffff when setting magic.mas2, won't the magic page remain at its
 original location, i.e. 0xfffff000?

 I haven't changed anything on the shared page itself yet, so I believe
 all its elements remain on the first page; right now I'm just trying to
 grow the shared region to 1 MB with this modification.

Aashish
> >>
> >>
> >> void kvmppc_map_magic(struct kvm_vcpu *vcpu)
> >> {
> >>     struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
> >>     struct kvm_book3e_206_tlb_entry magic;
> >>     ulong shared_page = ((ulong)vcpu->arch.tcache) & PAGE_MASK;
> >>     ulong page = shared_page;
> >>     unsigned int stid;
> >>     pfn_t pfn, pfn1;
> >>     int i = 0;
> >>
> >>     for (i = 0; i < 1024; i++) {
> >>         pfn1 = (pfn_t)virt_to_phys((void *)page) >> PAGE_SHIFT;
> >>         get_page(pfn_to_page(pfn1));
> >>         page += 0x1000;
> >>     }
> >>
> >>     pfn = (pfn_t)virt_to_phys((void *)shared_page) >> PAGE_SHIFT;
> >>
> >>     preempt_disable();
> >>     stid = e500_get_sid(vcpu_e500, 0, 0, 0, 0);
> >>
> >>     magic.mas1 = MAS1_VALID | MAS1_TS | MAS1_TID(stid) |
> >>                  MAS1_TSIZE(BOOK3E_PAGESZ_1M);
> >>     magic.mas2 = (vcpu->arch.magic_page_ea & 0xfff00000) | MAS2_M;
> >>
> >>     magic.mas7_3 = ((u64)pfn << PAGE_SHIFT) |
> >>                    MAS3_SW | MAS3_SR | MAS3_UW | MAS3_UR;
> >>
> >>     __write_host_tlbe(&magic, MAS0_TLBSEL(1) | MAS0_ESEL(tlbcam_index));
> >>     preempt_enable();
> >> }
> >>
> >> But I'm seeing the following errors printed in the guest:
> >>
> >> KVM: Live patching for a fast VM worked
> >> initcall kvm_guest_init+0x0/0x1f8 returned with disabled interrupts
> >> initcall migration_init+0x0/0x8c returned with disabled interrupts
> >>
> >> and then the guest just hangs.
> >>
> >> Does anybody have an idea how to map it correctly?
> >>
> >> Thanks
> >>
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> >> the body of a message to majordomo@xxxxxxxxxxxxxxx
> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html

