Kernel oops for virtio_blk: [<c048a9f8>] __bounce_end_io_read+0x88/0xf8

Hello,

  one of my servers has multiple oopses for virtio_blk devices. Is this a bug
in the virtio_blk driver, or something else? Can it be fixed?
These virtual machines hang approximately once per day, mostly around midnight.

  Here is one oops:

BUG: unable to handle kernel paging request at fff82000
IP: [<c048a9f8>] __bounce_end_io_read+0x88/0xf8
Oops: 0002 [#1] SMP
Modules linked in: ipv6 nf_conntrack_netbios_ns virtio_balloon floppy
virtio_net pcspkr joydev i2c_piix4 i2c_core virtio_pci virtio_ring
virtio_blk virtio [last unloaded: scsi_wait_scan]

Pid: 27956, comm: httpd Not tainted (2.6.27.25-170.2.72.fc10.i686.PAE #1)
EIP: 0060:[<c048a9f8>] EFLAGS: 00210086 CPU: 2
EIP is at __bounce_end_io_read+0x88/0xf8
EAX: fff82000 EBX: e936ae00 ECX: 00000400 EDX: 00001000
ESI: ea808000 EDI: fff82000 EBP: c08c5f00 ESP: c08c5edc
 DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process httpd (pid: 27956, ti=c08c5000 task=e9940000 task.ti=e9571000)
Stack: 00000000 f1114340 d134ef80 d134ea00 00000000 00200086 c048aa7a 00001000
       00000000 c08c5f08 c048aa8a c08c5f14 c04b8474 d134ef80 c08c5f28 c05149a8
       d134ef80 00001000 ea5d1450 c08c5f60 c0514b22 00000000 00000000 007310b8
Call Trace:
 [<c048aa7a>] ? bounce_end_io_read+0x0/0x12
 [<c048aa8a>] ? bounce_end_io_read+0x10/0x12
 [<c04b8474>] ? bio_endio+0x2b/0x2e
 [<c05149a8>] ? req_bio_endio+0x84/0xa1
 [<c0514b22>] ? __end_that_request_first+0x15d/0x257
 [<c05155ef>] ? __blk_end_request+0x19/0x3b
 [<c051566a>] ? end_dequeued_request+0x32/0x35
 [<f881d602>] ? blk_done+0x3d/0x7b [virtio_blk]
 [<f8821216>] ? vring_interrupt+0x24/0x2e [virtio_ring]
 [<f8824359>] ? vp_interrupt+0x65/0x98 [virtio_pci]
 [<c046a2cf>] ? handle_IRQ_event+0x2f/0x64
 [<c046b338>] ? handle_fasteoi_irq+0x85/0xc0
 [<c046b2b3>] ? handle_fasteoi_irq+0x0/0xc0
 [<c040af0a>] ? do_IRQ+0xc7/0xfe
 [<c0409668>] ? common_interrupt+0x28/0x30
 [<c042007b>] ? native_flush_tlb_global+0x3a/0x48
 [<c04200b1>] ? paravirt_leave_lazy+0x20/0x21
 [<c041f67b>] ? kvm_leave_lazy_mmu+0x68/0x7d
 [<c04820a0>] ? unmap_vmas+0x4d5/0x644
 [<c0485cef>] ? vma_link+0x71/0x7d
 [<c04856cf>] ? unmap_region+0x7d/0xe4
 [<c04864ba>] ? do_munmap+0x193/0x1e6
 [<c048653d>] ? sys_munmap+0x30/0x3f
 [<c0408c8a>] ? syscall_call+0x7/0xb
 =======================

I am trying to understand why there are two calls to bounce_end_io_read in the
trace, but I have no ideas yet. The source code for this function is:

static void bounce_end_io_read(struct bio *bio, int err)
{
        __bounce_end_io_read(bio, page_pool, err);
}

How is it possible that there is a recursive call to this function?
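
For context, here is a minimal sketch of what __bounce_end_io_read looks like
in kernels of that era (paraphrased from mm/bounce.c; the exact code in the
Fedora kernel may differ). When a read completes on a bounced bio, the data is
copied from the low-memory bounce pages back into the pages of the original
bio, which is a plausible place for a write to a bad destination address such
as the fff82000 above:

static void __bounce_end_io_read(struct bio *bio, mempool_t *pool, int err)
{
        struct bio *bio_orig = bio->bi_private;

        /* Copy the data back only if the read actually succeeded. */
        if (test_bit(BIO_UPTODATE, &bio->bi_flags))
                copy_to_high_bio_irq(bio_orig, bio);

        /* Release the bounce pages and complete the original bio. */
        __bounce_end_io(bio, pool, err);
}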

The host is running an up-to-date Fedora 11, the guest an up-to-date Fedora 10.
It fails with kernels 2.6.27 and 2.6.29 as well (from fedora-updates-testing).

More details in Red Hat Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=510304

Thank you.

			SAL
