Re: xendump image with full-virtualized domain

Kazuo Moriwaka wrote:
From: Dave Anderson <anderson@xxxxxxxxxx>
Subject: Re: xendump image with full-virtualized domain
Date: Mon, 13 Nov 2006 09:36:38 -0500

> Kazuo Moriwaka wrote:
>
> > Hi,
> >
> > I tried to analyze a full-virtualized domain's dump image with crash.
> > It aborts with the following message.
> >
> > $ crash System.map-2.6.8-2-386 vmlinux-2.6.8-2-386 2006-1110-1141.38-guest2.4.core
> > (snip)
> > crash: cannot determine vcpu_guest_context.ctrlreg offset
> >
> > A full-virtualized domain's kernel doesn't have any information about
> > the xen hypervisor; it also doesn't have struct vcpu_guest_context.
> > I'll put kernels and xendump core files at following for reference.
> >
> > http://people.valinux.co.jp/~moriwaka/domUcore/
> >   host.tar.gz  - xen hypervisor and dom0 kernel(for amd64)
> >   full-virtualized-guest.tar.gz - domU kernel(for i386) and dump image
> >                                   taken by 'xm dump-core' command.
> >
> > any ideas?
>
> No surprise here -- there's absolutely no crash utility support for
> xendumps of fully-virtualized kernels.
>
> Much of the information that crash uses to find its way
> around a xendump currently depends upon information
> *inside* the para-virtualized kernel.  In your attempt above,
> it needs data structure information for the vcpu_guest_context
> structure, in order to get a cr3 value -- which it uses to find the
> phys_to_machine_mapping[] array built into the kernel.

This header's vcpu_guest_context.ctrlreg points to just a dummy
pagetable (in that file, mfn 12122).

> But obviously there is no phys_to_machine_mapping[]
> array in fully-virtualized kernels, so no pseudo-to-physical
> address translations can be made.

Yes.  I read some of the code, and now I think this xendump image header
doesn't have enough information to find the shadow page table.  The shadow
page table is pointed to by vcpu.arch.shadow.* in the hypervisor, but the
xendump doesn't contain it.  If there were a whole-machine dump, converting
it could be one solution.

> I'm not sure what the best solution is for fully-virtualized
> kernels.
>
> Perhaps what is needed is yet another tool that takes
> a xendump of a fully-virtualized kernel, and turns it into
> a recognizable vmcore?
>
> Whatever it is, it would need an alternative manner of translating
> the "physical" addresses in the fully-virtualized kernel (which
> become pseudo-physical addresses in the xen environment) and of
> finding them in the xendump.

Xen's roadmap says that it will support save/restore of full-virtualized
domains in a few months; when that support is added, the xendump format
will be changed to contain enough information to re-build the domain's
pseudo-physical memory area.  Just waiting for that is one way.
 

Yes, I agree -- there will have to be a design change to the
xendump-creation code.

Upon revisiting the current xendump-creation code, I see now
that it would be impossible to write a post-dump tool to create
a usable dumpfile, because that tool would need the pfn-to-mfn
mapping information that is only available while the guest
kernel is active.

It's unfortunate that the current xendump format doesn't
maintain a true pfn-to-mfn mapping.  When I first started
working with xendumps, I thought that it did, only to find
out that it does not.

The xendump header looks like this:

typedef struct xc_core_header {
    unsigned int xch_magic;          /* magic number identifying the dump */
    unsigned int xch_nr_vcpus;       /* number of vcpus in the guest domain */
    unsigned int xch_nr_pages;       /* number of pages in the dump */
    unsigned int xch_ctxt_offset;    /* file offset of the vcpu_guest_context area */
    unsigned int xch_index_offset;   /* file offset of the mfn index array */
    unsigned int xch_pages_offset;   /* file offset of the page data */
} xc_core_header_t;
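
As a minimal illustration, reading that header from a dump file could
look something like this (just a sketch -- it assumes, as I believe is
the case, that the header sits at the very start of the xendump file,
and the helper name is made up):

#include <stdio.h>

/* uses the xc_core_header_t definition shown above */
static int read_xendump_header(FILE *fp, xc_core_header_t *hdr)
{
    /* the xc_core header sits at the beginning of the xendump file */
    if (fseek(fp, 0, SEEK_SET) != 0)
        return 0;
    return fread(hdr, sizeof(*hdr), 1, fp) == 1;
}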

The number of pages in the xendump is "xch_nr_pages", which I
believe is all of the guest domain pfns that happened to be
instantiated with a xen mfn at the time of the dump.  Therefore,
in all xendumps that I've seen, "xch_nr_pages" is always less
than the number of pages that the guest "thinks" that it has.

So anyway, starting at "xch_index_offset", there is an array
of "xch_nr_pages" xen_pfn_t values, which are actually mfn
values.  And starting at "xch_pages_offset", there is an array
of "xch_nr_pages" page data contents.  Essentially there is this:

  xen_pfn_t xch_index_offset[xch_nr_pages];
  char xch_pages_offset[xch_nr_pages][PAGE_SIZE];

So for any mfn found at a given index into the array at
xch_index_offset, the page data for that mfn is found at the
same index into the page data array at xch_pages_offset.
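
For illustration, a lookup by mfn could be sketched like this (again
just a sketch -- the helper name is made up, real code would want
better error handling, and it reuses the xc_core_header_t definition
and read_xendump_header() sketch above):

#define PAGE_SIZE 4096
typedef unsigned long xen_pfn_t;    /* illustrative; matches the index entries */

/* scan the mfn index array for "mfn"; if found, read its page data */
static int read_xendump_page(FILE *fp, xc_core_header_t *hdr,
                             xen_pfn_t mfn, char *pagebuf)
{
    unsigned int i;
    xen_pfn_t entry;

    /* the index array of mfn values starts at xch_index_offset */
    if (fseek(fp, (long)hdr->xch_index_offset, SEEK_SET) != 0)
        return 0;

    for (i = 0; i < hdr->xch_nr_pages; i++) {
        if (fread(&entry, sizeof(entry), 1, fp) != 1)
            return 0;
        if (entry != mfn)
            continue;
        /* the page data lives at the same index, starting at xch_pages_offset */
        if (fseek(fp, (long)hdr->xch_pages_offset + (long)i * PAGE_SIZE, SEEK_SET) != 0)
            return 0;
        return fread(pagebuf, PAGE_SIZE, 1, fp) == 1;
    }
    return 0;   /* this mfn is not present in the xendump */
}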

When I first started working with this, I presumed that the
index into both arrays was based upon the pfn of the guest kernel,
and therefore that it would be really simple to translate pfn
values from the vmlinux file into mfn values -- and therefore to
get their data.  And that is true at the beginning of the 2 xendump
arrays -- but only so far into the arrays -- because eventually,
when a guest pfn is not instantiated with an mfn, it gets "skipped"
in both arrays.  Therefore, with each skipped pfn, the pfn-to-mfn
relationship gets farther and farther out of sync.  Unfortunately
there is no way of telling when the first pfn gets skipped.
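
To make the skipping concrete, here is a hypothetical layout (the
pfn and mfn numbers are made up):

  guest pfn:     0       1       2       3        4       5
  backing mfn:   0x1a00  0x1a01  0x1a02  (none)   0x1a03  0x1a04

  index array:   [0]=0x1a00  [1]=0x1a01  [2]=0x1a02  [3]=0x1a03  [4]=0x1a04

Once pfn 3 is skipped, index 3 of both arrays actually belongs to
pfn 4, every later pfn is off by one more for each additional skip,
and nothing in the dump tells you where the first skip happened.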

It would have been ideal if they had put "non-existent" mfn markers
in the first array and sparse file space in the second array when
a pfn was not instantiated with an mfn...

Anyway, that being the case, I needed to use the pfn-to-mfn
translation mechanism used by the para-virtualized vmlinux kernel
itself, that being the phys_to_machine_mapping[] array built into
the guest kernel.  That array is a one-to-one mapping of pfns to
mfns, where non-existent mfns are actually marked as such.  And
once I get an mfn from the table in the guest kernel, the xendump
can be searched for that mfn and then its page data.
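
A sketch of that two-step translation might look like this (assuming
the read_xendump_page() sketch above, and that the guest kernel's
phys_to_machine_mapping[] array has already been read out of the dump
into p2m[]; the INVALID_P2M_ENTRY name and the max_pfn bound are
illustrative, and of course this only works for para-virtualized
guests):

#define INVALID_P2M_ENTRY  (~0UL)      /* marker for "no mfn" (illustrative) */

/* translate a guest pfn to its mfn, then pull that page out of the dump */
static int read_guest_page(FILE *fp, xc_core_header_t *hdr,
                           xen_pfn_t *p2m, unsigned long max_pfn,
                           unsigned long pfn, char *pagebuf)
{
    if (pfn >= max_pfn || p2m[pfn] == INVALID_P2M_ENTRY)
        return 0;                      /* this pfn never had an mfn */

    return read_xendump_page(fp, hdr, p2m[pfn], pagebuf);
}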

So, you are correct -- I see no other choice except a new xendump
format that also contains the pfn-to-mfn mapping information.

Dave

--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility
