Re: [RFC PATCH] vhost-blk: In-kernel accelerator for virtio block device

On Wed, 2011-08-10 at 10:19 +0800, Liu Yuan wrote:
> On 08/09/2011 01:16 AM, Badari Pulavarty wrote:
> > On 8/8/2011 12:31 AM, Liu Yuan wrote:
> >> On 08/08/2011 01:04 PM, Badari Pulavarty wrote:
> >>> On 8/7/2011 6:35 PM, Liu Yuan wrote:
> >>>> On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
> >>>>> On 8/5/2011 4:04 AM, Liu Yuan wrote:
> >>>>>> On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
> >>>>>>> Hi Liu Yuan,
> >>>>>>>
> >>>>>>> I started testing your patches. I applied your kernel patch to 3.0
> >>>>>>> and the QEMU patch to latest git.
> >>>>>>>
> >>>>>>> I passed 6 block devices from the host to the guest (4 vcpus, 4GB RAM).
> >>>>>>> I ran simple "dd" read tests from the guest on all block devices
> >>>>>>> (with various block sizes, iflag=direct).
> >>>>>>>
> >>>>>>> Unfortunately, the system doesn't stay up. I immediately get a
> >>>>>>> panic on the host. I didn't get time to debug the problem.
> >>>>>>> Wondering if you have seen this issue before and/or have a new
> >>>>>>> patchset to try?
> >>>>>>>
> >>>>>>> Let me know.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> Badari
> >>>>>>>
> >>>>>>
> >>>>>> Okay, it is actually a bug pointed out by MST on the other
> >>>>>> thread: a mutex is needed for the completion thread.
> >>>>>>
> >>>>>> Now would you please try this attachment? This patch applies only to
> >>>>>> the kernel part, on top of the v1 kernel patch.
> >>>>>>
> >>>>>> This patch mainly moves the completion thread into the vhost thread
> >>>>>> as a function. As a result, both request submission and completion
> >>>>>> signalling happen in the same thread.
> >>>>>>
> >>>>>> Yuan
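
For anyone skimming the thread, the arrangement described above (completion
signalling moved into the same vhost worker that submits requests) looks
roughly like the sketch below. The type and function names are made up for
illustration; this is not the actual patch.

#include <linux/bio.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include "vhost.h"      /* drivers/vhost/vhost.h */

/* Illustrative types only, not the real vhost-blk structures. */
struct vhost_blk {
        struct vhost_dev dev;
        struct vhost_virtqueue vq;
        struct vhost_work done_work;    /* handled by the vhost worker thread */
        struct list_head done_list;     /* completed requests awaiting signalling */
        spinlock_t done_lock;
};

struct vhost_blk_req {
        struct list_head node;
        struct vhost_blk *blk;
        unsigned int head;              /* descriptor head for the used ring */
        int len;
};

/* bio completion callback: park the request and kick the vhost worker. */
static void vhost_blk_bi_end_io(struct bio *bio, int error)
{
        struct vhost_blk_req *req = bio->bi_private;
        struct vhost_blk *blk = req->blk;
        unsigned long flags;

        spin_lock_irqsave(&blk->done_lock, flags);
        list_add_tail(&req->node, &blk->done_list);
        spin_unlock_irqrestore(&blk->done_lock, flags);

        vhost_work_queue(&blk->dev, &blk->done_work);
        bio_put(bio);
}

/* Runs in the vhost worker, i.e. the same thread that submits requests,
 * so submission and used-ring signalling no longer race with each other. */
static void vhost_blk_done_work(struct vhost_work *work)
{
        struct vhost_blk *blk = container_of(work, struct vhost_blk, done_work);
        struct vhost_blk_req *req, *tmp;
        LIST_HEAD(done);

        spin_lock_irq(&blk->done_lock);
        list_splice_init(&blk->done_list, &done);
        spin_unlock_irq(&blk->done_lock);

        list_for_each_entry_safe(req, tmp, &done, node) {
                vhost_add_used_and_signal(&blk->dev, &blk->vq, req->head, req->len);
                /* the real code would also free req here */
        }
}
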
> >>>>>
> >>>>> Unfortunately, the "dd" tests (4 out of 6) in the guest hung. I see the
> >>>>> following messages:
> >>>>>
> >>>>> virtio_blk virtio2: requests: id 0 is not a head !
> >>>>> virtio_blk virtio3: requests: id 1 is not a head !
> >>>>> virtio_blk virtio5: requests: id 1 is not a head !
> >>>>> virtio_blk virtio1: requests: id 1 is not a head !
> >>>>>
> >>>>> I still see host panics. I will collect the host panic and see if
> >>>>> it's still the same or not.
> >>>>>
> >>>>> Thanks,
> >>>>> Badari
> >>>>>
> >>>>>
> >>>> Would you please show me how to reproduce it step by step? I tried
> >>>> dd with two block devices attached, but got neither a hang nor a panic.
> >>>>
> >>>> Yuan
> >>>
> >>> I did 6 "dd"s on 6 block devices:
> >>>
> >>> dd if=/dev/vdb of=/dev/null bs=1M iflag=direct &
> >>> dd if=/dev/vdc of=/dev/null bs=1M iflag=direct &
> >>> dd if=/dev/vdd of=/dev/null bs=1M iflag=direct &
> >>> dd if=/dev/vde of=/dev/null bs=1M iflag=direct &
> >>> dd if=/dev/vdf of=/dev/null bs=1M iflag=direct &
> >>> dd if=/dev/vdg of=/dev/null bs=1M iflag=direct &
> >>>
> >>> I can reproduce the problem within 3 minutes :(
> >>>
> >>> Thanks,
> >>> Badari
> >>>
> >>>
> >> Ah... I made an embarrassing mistake: I tried to 'free()' a
> >> kmem_cache object.
> >>
> >> Would you please revert the vblk-for-kernel-2 patch and apply the new
> >> one attached to this letter?
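
For reference, the allocator pairing rule at play here: memory obtained from
kmem_cache_alloc() must be returned with kmem_cache_free() on the same cache
(and the cache itself is only ever torn down with kmem_cache_destroy()),
while kzalloc()/kmalloc() memory goes back through kfree(). A minimal
illustration, using an invented used_info stand-in:

#include <linux/slab.h>

/* Illustrative stand-in for the patch's used_info structure. */
struct used_info {
        unsigned int head;
        int len;
};

/* Assumed to be created elsewhere, e.g. with
 * KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN | SLAB_PANIC). */
static struct kmem_cache *used_info_cachep;

static void used_info_example(void)
{
        struct used_info *ui;

        ui = kmem_cache_alloc(used_info_cachep, GFP_KERNEL);
        if (!ui)
                return;

        /* ... use ui ... */

        /* Return the object to the cache it came from; kfree() is for
         * kmalloc()/kzalloc() allocations only. */
        kmem_cache_free(used_info_cachep, ui);
}
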
> >>
> > Hmm.. My version of the code seems to have kzalloc() for used_info; I
> > don't have a version that uses kmem_cache_alloc(). Would it be possible
> > for you to send out a complete patch (with all the fixes applied) for me
> > to try? That would avoid all the confusion.
> >
> > Thanks,
> > Badari
> >
>
> Okay, please apply the attached patch to the vanilla kernel. :)


It looks like the patch doesn't work when testing multiple devices.

vhost_blk_open() does:

+       used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN | SLAB_PANIC);

When opening the second device, we get a panic because used_info_cachep
has already been created. Just to make progress, I moved this call to
vhost_blk_init().
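
A sketch of that arrangement (illustrative only, not the actual patch): the
cache is created once at module init and destroyed at module exit, so
opening a second device never tries to create the same cache again.

#include <linux/module.h>
#include <linux/slab.h>

/* Illustrative stand-in for the patch's used_info structure. */
struct used_info {
        unsigned int head;
        int len;
};

static struct kmem_cache *used_info_cachep;

static int __init vhost_blk_init(void)
{
        /* Created exactly once; SLAB_PANIC panics on failure, so no
         * NULL check is needed here. */
        used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
        return 0;       /* the real init would also register the vhost-blk device node */
}
module_init(vhost_blk_init);

static void __exit vhost_blk_exit(void)
{
        kmem_cache_destroy(used_info_cachep);
}
module_exit(vhost_blk_exit);

MODULE_LICENSE("GPL");
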

I don't see any host panics now. With a single block device (dd),
it seems to work fine. But when I start testing multiple block
devices, I quickly run into hangs in the guest. I see the following
messages in the guest, from virtio_ring.c:

virtio_blk virtio2: requests: id 0 is not a head !
virtio_blk virtio1: requests: id 0 is not a head !
virtio_blk virtio4: requests: id 1 is not a head !
virtio_blk virtio3: requests: id 39 is not a head !
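
Those messages come from the guest driver's virtqueue_get_buf() in
drivers/virtio/virtio_ring.c: the guest reads an id off the used ring and
rejects it if it does not correspond to a descriptor head it actually
submitted, which suggests the host side is putting bogus or duplicate ids on
the used ring. Paraphrased (not a verbatim copy of the upstream code), the
check looks roughly like:

        /* Paraphrase of the relevant check in virtqueue_get_buf(). */
        i = vq->vring.used->ring[vq->last_used_idx % vq->vring.num].id;

        if (unlikely(i >= vq->vring.num)) {
                BAD_RING(vq, "id %u out of range\n", i);
                return NULL;
        }
        if (unlikely(!vq->data[i])) {
                /* No outstanding request was queued with this descriptor head. */
                BAD_RING(vq, "id %u is not a head!\n", i);
                return NULL;
        }
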

Thanks,
Badari


