Re: virsh dump blocking problem

On Tue, 06 Apr 2010 09:35:09 +0800
Gui Jianfeng <guijianfeng@xxxxxxxxxxxxxx> wrote:

> Hi all,
> 
> I'm not sure whether it's appropriate to post the problem here.
> I played with "virsh" under Fedora 12, and started a KVM fedora12 guest
> by "virsh start" command. The fedora12 guest is successfully started.
> Then I ran the following command to dump the guest core:
> #virsh dump 1 mycoredump             (domain id is 1)
> 
> This command seems to block and never returns. According to the strace
> output, virsh dump is blocking in a poll() call. I think the
> following is the call trace inside virsh:
> 
> cmdDump()    
>   -> virDomainCoreDump()
>     -> remoteDomainCoreDump()
>          -> call()
>              -> remoteIO()
>                  -> remoteIOEventLoop()
>                       -> poll(fds, ARRAY_CARDINALITY(fds), -1)
> 
> 
> Has anyone else encountered this problem? Any thoughts?
> 

I hit this too; it seems qemu-kvm keeps counting the number of dirty pages
and never answers libvirt. The guest never recovers and I have to kill it.

I saw this with 2.6.32 + qemu-0.12.3 + libvirt 0.7.7.1.
When I updated the host kernel to 2.6.33, qemu-kvm stopped working entirely,
so I moved back to Fedora 12's latest qemu-kvm.

Now, with 2.6.34-rc3 + qemu-0.11.0-13.fc12.x86_64 + libvirt 0.7.7.1,
# virsh dump xxxx xxxx
hangs.
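For what it's worth, the hang on the virsh side is visible right in the call
trace Gui posted: remoteIOEventLoop() passes -1 as the poll() timeout, so
virsh waits forever for a monitor reply that never arrives. A minimal sketch
of that behaviour (a plain pipe standing in for the libvirtd socket, not
libvirt's actual code):

```python
import os
import select

# A pipe stands in for the connection to libvirtd; nothing will ever
# write to it, just as qemu-kvm never answers while it is stuck.
r, w = os.pipe()

poller = select.poll()
poller.register(r, select.POLLIN)

# With a finite timeout, the caller gets control back: no events yet.
events = poller.poll(100)   # 100 ms
assert events == []

# remoteIOEventLoop() instead calls poll(fds, ARRAY_CARDINALITY(fds), -1):
# an infinite timeout, so virsh blocks on that poll() until a reply (or a
# disconnect) arrives. That is exactly the poll() the strace output shows.
os.close(r)
os.close(w)
```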

In most cases, I see the following two backtraces (with gdb).

(gdb) bt
#0  ram_save_remaining () at /usr/src/debug/qemu-kvm-0.11.0/vl.c:3104
#1  ram_bytes_remaining () at /usr/src/debug/qemu-kvm-0.11.0/vl.c:3112
#2  0x00000000004ab2cf in do_info_migrate (mon=0x16b7970) at migration.c:150
#3  0x0000000000414b1a in monitor_handle_command (mon=<value optimized out>,
    cmdline=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/monitor.c:2870
#4  0x0000000000414c6a in monitor_command_cb (mon=0x16b7970,
    cmdline=<value optimized out>, opaque=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/monitor.c:3160
#5  0x000000000048b71b in readline_handle_byte (rs=0x208d6a0,
    ch=<value optimized out>) at readline.c:369
#6  0x0000000000414cdc in monitor_read (opaque=<value optimized out>,
    buf=0x7fff1b1104b0 "info migrate\r", size=13)
    at /usr/src/debug/qemu-kvm-0.11.0/monitor.c:3146
#7  0x00000000004b2a53 in tcp_chr_read (opaque=0x1614c30) at qemu-char.c:2006
#8  0x000000000040a6c7 in main_loop_wait (timeout=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/vl.c:4188
#9  0x000000000040eed5 in main_loop (argc=<value optimized out>,
    argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/vl.c:4414
#10 main (argc=<value optimized out>, argv=<value optimized out>,
    envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.11.0/vl.c:6263


(gdb) bt
#0  0x0000003c2680e0bd in write () at ../sysdeps/unix/syscall-template.S:82
#1  0x00000000004b304a in unix_write (fd=11, buf=<value optimized out>, len1=40)
    at qemu-char.c:512
#2  send_all (fd=11, buf=<value optimized out>, len1=40) at qemu-char.c:528
#3  0x0000000000411201 in monitor_flush (mon=0x16b7970)
    at /usr/src/debug/qemu-kvm-0.11.0/monitor.c:131
#4  0x0000000000414cdc in monitor_read (opaque=<value optimized out>,
    buf=0x7fff1b1104b0 "info migrate\r", size=13)
    at /usr/src/debug/qemu-kvm-0.11.0/monitor.c:3146
#5  0x00000000004b2a53 in tcp_chr_read (opaque=0x1614c30) at qemu-char.c:2006
#6  0x000000000040a6c7 in main_loop_wait (timeout=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/vl.c:4188
#7  0x000000000040eed5 in main_loop (argc=<value optimized out>,
    argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.11.0/vl.c:4414
#8  main (argc=<value optimized out>, argv=<value optimized out>,
    envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.11.0/vl.c:6263

And I see no dump progress.

I'm sorry if this is not a hang but just very slow progress. I don't see any
progress for at least 15 minutes, and qemu-kvm keeps using 75% of the CPU.
I'm not sure why the "dump" command triggers the migration code...
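As far as I know, libvirt of this vintage implements "virsh dump" on top of
QEMU's "migrate" monitor command, streaming the VM state through an exec:
pipe into the dump file; that would explain why ram_save_remaining() and
do_info_migrate() show up in the trace. Here is a toy model (my own
simplification, not qemu's actual algorithm) of why a pre-copy loop can fail
to converge while the guest keeps dirtying pages:

```python
def remaining_after(total_pages, send_per_round, dirty_per_round, rounds):
    """Dirty pages still outstanding after `rounds` pre-copy iterations.

    Each round sends up to `send_per_round` pages, but a running guest
    dirties `dirty_per_round` fresh pages in the meantime.
    """
    dirty = total_pages
    for _ in range(rounds):
        dirty = max(dirty - send_per_round, 0) + dirty_per_round
    return dirty

# A mostly idle guest dirties few pages, so the count converges quickly:
assert remaining_after(1000, 100, 10, 20) == 10

# A busy guest (or a slow writer behind the exec: pipe) dirties pages as
# fast as they are sent: the remaining count never drops, which is what
# ram_bytes_remaining() keeps reporting through "info migrate".
assert remaining_after(1000, 100, 100, 1000) == 1000
```

If that is what is happening here, pausing the guest before the dump should
let the count converge.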

How long does it take to run "virsh dump xxx xxxx" on an idle VM with 2G of
memory? I'm sorry if I'm asking on the wrong mailing list.

Thanks,
-Kame

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
