On 4 February 2014 16:37, Claudio Fontana <hw.claudio@xxxxxxxxx> wrote:
> On 4 February 2014 16:39, Peter Maydell <peter.maydell@xxxxxxxxxx> wrote:
>> On 4 February 2014 15:36, Claudio Fontana <hw.claudio@xxxxxxxxx> wrote:
>> > I just wanted to ask what is the current state of kvm control for
>> > qemu-system-aarch64.
>> > I tried latest mainline but I think it's not all there yet (it complains
>> > about missing cpu when I use -M virt and -cpu host, so I suspect some of
>> > VOS patches are still missing).
>> >
>> > Is your aarch64-kvm still the one branch to look at?
>>
>> Nope, this should all work in mainline. If it doesn't it's
>> worth investigating what exactly is going wrong.
>>
>> (Sanity check, you did pass -enable-kvm, right? If you don't
>> then QEMU will complain about "-cpu host", because that
>> only exists if KVM is enabled.)
>
> I tried both; without -enable-kvm I get the complaint about "-cpu host"
> as you mention, but with -enable-kvm and the latest kernel I get:
>
> [ 8489.895747] BUG: Bad page state in process qemu-system-aar  pfn:0a5cd
> [ 8489.895816] page:fffffdfc002444d8 count:-1 mapcount:0 mapping: (null) index:0x0
> [ 8489.895870] page flags: 0x0()
> [ 8489.895916] page dumped because: nonzero _count
> [ 8489.895957] Modules linked in:
> [ 8489.896030] CPU: 0 PID: 3031 Comm: qemu-system-aar Tainted: G B 3.13.0cla-09218-g0e47c96-dirty #2
> [ 8489.896085] Call trace:
> [ 8489.896154] [<fffffe0000095744>] dump_backtrace+0x0/0x12c
> [ 8489.896231] [<fffffe0000095884>] show_stack+0x14/0x1c
> [ 8489.896307] [<fffffe00003db58c>] dump_stack+0x70/0x8c
> [ 8489.896378] [<fffffe00001210d8>] bad_page+0xe8/0x134
> [ 8489.896453] [<fffffe0000121740>] get_page_from_freelist+0x500/0x608
> [ 8489.896532] [<fffffe00001220d0>] __alloc_pages_nodemask+0x110/0x7ec
> [ 8489.896619] [<fffffe000013ce50>] handle_mm_fault+0x760/0x980
> [ 8489.896704] [<fffffe000009a0cc>] do_page_fault+0x228/0x378
> [ 8489.896773] [<fffffe0000090104>] do_mem_abort+0x3c/0x9c
> [ 8489.896833] Exception stack(0xfffffe0020247e30 to 0xfffffe0020247f50)
> [ 8489.896918] 7e20:                                     00000001 00000000 aa8505b0 000003ff
> [ 8489.897030] 7e40: ffffffff ffffffff aa785a84 000003ff 00000000 00000000 0015e5a8 fffffe00
> [ 8489.897142] 7e60: 20247e70 fffffe00 000c2e48 fffffe00 20247ea0 fffffe00 00095490 fffffe00
> [ 8489.897254] 7e80: 20244000 fffffe00 00000000 00000000 ffffffff ffffffff aa86f118 000003ff
> [ 8489.897366] 7ea0: fea46360 000003ff 0009288c fffffe00 fea46580 000003ff fea463e0 000003ff
> [ 8489.897476] 7ec0: fea46360 000003ff 000927ec fffffe00 00f3f710 00000000 00012e61 00000000
> [ 8489.897584] 7ee0: 00000000 00000000 00f4d1a0 00000000 0000da91 00000000 00000001 00000000
> [ 8489.897694] 7f00: 0000000d 00000000 0000036a 00000000 7f7f7f7f 7f7f7f7f 00680ca8 00000000
> [ 8489.897800] 7f20: 0000006d 00000000 00000020 00000000 00000078 00000000 00000080 00000000
> [ 8489.897884] 7f40: 006812b0 00000000 aa852598 000003ff

If we've managed to trigger a BUG in the host kernel, that's a kernel
bug, and the kvmarm list is probably the best place to ask about it.
[cc'd.]

thanks
-- PMM

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
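
For reference, a minimal invocation along the lines discussed above might
look like the sketch below. Only -M virt, -cpu host and -enable-kvm come
from the thread itself; the kernel image path, memory size and append
string are illustrative placeholders, not Claudio's actual command line.

    # Sketch only: the Image path, -m size and -append string are assumed;
    # -M virt, -cpu host and -enable-kvm are the options from the thread.
    # Without -enable-kvm, "-cpu host" is rejected, as noted above.
    qemu-system-aarch64 -M virt -cpu host -enable-kvm \
        -m 512 -nographic \
        -kernel Image -append "console=ttyAMA0"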