Re: Fwd: kvm-autotest: False PASS results

On Mon, Jun 1, 2009 at 8:33 PM, Uri Lublin <uril@xxxxxxxxxx> wrote:
> On 05/10/2009 08:15 PM, sudhir kumar wrote:
>>
>> Hi Uri,
>> Any comments?
>>
>>
>> ---------- Forwarded message ----------
>> From: sudhir kumar<smalikphy@xxxxxxxxx>
>>
>> kvm-autotest shows the following PASS results for migration, even
>> though the VM had crashed and the test should have failed.
>>
>> Here is the sequence of test commands and results grepped from
>> kvm-autotest output.
>>
>>
>> /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu \
>>   -name 'vm1' \
>>   -monitor unix:/tmp/monitor-20090508-055624-QSuS,server,nowait \
>>   -drive file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on \
>>   -net nic,vlan=0 -net user,vlan=0 \
>>   -m 8192 -smp 4 -redir tcp:5000::22 -vnc :1
>>
>>
>>
>> /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu \
>>   -name 'dst' \
>>   -monitor unix:/tmp/monitor-20090508-055625-iamW,server,nowait \
>>   -drive file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on \
>>   -net nic,vlan=0 -net user,vlan=0 \
>>   -m 8192 -smp 4 -redir tcp:5001::22 -vnc :2 \
>>   -incoming tcp:0:5200
>>
>>
>>
>> 2009-05-08 05:58:43,471 Configuring logger for client level
>>                GOOD
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1
>>        END GOOD
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1
>>
>>                GOOD
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
>> timestamp=1241762371
>> localtime=May 08 05:59:31       completed successfully
>> Persistent state variable __group_level now set to 1
>>        END GOOD
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
>> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
>> timestamp=1241762371
>> localtime=May 08 05:59:31
>>
>> From the test output it looks like the test successfully logged into
>> the guest after migration:
>>
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Migration
>> finished successfully
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> send_monitor_cmd: Sending monitor command: screendump
>>
>> /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_post.ppm
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> send_monitor_cmd: Sending monitor command: screendump
>>
>> /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_pre.ppm
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> send_monitor_cmd: Sending monitor command: quit
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> is_sshd_running: Timeout
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logging into
>> guest after migration...
>> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> remote_login: Trying to login...
>> 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> remote_login: Got 'Are you sure...'; sending 'yes'
>> 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> remote_login: Got password prompt; sending '123456'
>> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> remote_login: Got shell prompt -- logged in
>> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logged in
>> after migration
>> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> get_command_status_output: Sending command: help
>> 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> postprocess_vm: Postprocessing VM 'vm1'...
>> 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
>> postprocess_vm: VM object found in environment
>>
>> When I did VNC to the final migrated VM, it had crashed with a call
>> trace as shown in the attachment.
>> It is quite unlikely that the call trace appeared only after the test
>> finished, since migration with more than 4GB of memory is already
>> broken [BUG 52527]. This looks like a false PASS to me. Any idea how
>> we can handle such false positive results? Shall we wait for some time
>> after migration, log into the VM, do some work or run a health test,
>> get its output, and report whether the VM is alive?
>>
>
>
> I don't think it's a False PASS.
> It seems the test was able to ssh into the guest, and run a command on the
> guest.
>
> Currently we only run migration once (one round-trip). I think we should
> run migration more than once (using iterations). If the guest crashes due
> to migration, the following rounds of migration would fail.
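Running it in iterations makes sense. As a rough sketch of what that could
look like (migrate_once() and guest_responds() are hypothetical placeholders
for whatever the test already does for a single round-trip and a single ssh
check, not the actual kvm_runtest_2 API):

# Sketch only: repeat the migration round-trip and fail as soon as the
# guest stops responding, instead of passing after a single round-trip.
# migrate_once()/guest_responds() are hypothetical placeholders.
from autotest_lib.client.common_lib import error

def run_migration_iterations(vm, migrate_once, guest_responds, iterations=4):
    for i in range(iterations):
        vm = migrate_once(vm)          # one source -> destination round-trip
        if not guest_responds(vm):     # e.g. ssh in and run a trivial command
            raise error.TestFail("guest unresponsive after migration "
                                 "round %d of %d" % (i + 1, iterations))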
Also I would like to have a script like basic_test.py that would be
executed inside the guest to check its health more thoroughly (a rough
sketch is below). This will again need different scripts for
Windows/Linux/Mac, etc. Do you agree?
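As a very rough illustration of what I mean by basic_test.py (Linux-only
here; a Windows guest would need an equivalent, and the individual checks
are just placeholders):

#!/usr/bin/env python
# basic_test.py -- sketch of a guest-side health check (Python 2, Linux).
# Exit status 0 means the guest looks healthy, nonzero means it does not.
import commands
import sys

def check(description, cmd):
    # Run a shell command; report PASS/FAIL based on its exit status.
    status, output = commands.getstatusoutput(cmd)
    if status == 0:
        print "%s: PASS" % description
        return True
    print "%s: FAIL\n%s" % (description, output)
    return False

ok = True
ok &= check("no call traces in kernel log", "! dmesg | grep -q 'Call Trace'")
ok &= check("root filesystem writable",
            "touch /tmp/.basic_test && rm -f /tmp/.basic_test")
ok &= check("memory info readable", "free")
ok &= check("load average readable", "uptime")
sys.exit(0 if ok else 1)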

>
> Sorry for the late reply,
It's OK. Thanks for the response.
>    Uri.
>



-- 
Sudhir Kumar
