Re: [RFC] vhost-blk implementation

On Tue, Mar 23, 2010 at 12:55:07PM -0700, Badari Pulavarty wrote:
> Michael S. Tsirkin wrote:
>> On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
>>   
>>> Michael S. Tsirkin wrote:
>>>     
>>>> On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
>>>>         
>>>>> Write Results:
>>>>> ==============
>>>>>
>>>>> I see degraded IO performance when doing sequential IO write
>>>>> tests with vhost-blk compared to virtio-blk.
>>>>>
>>>>> # time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
>>>>>
>>>>> I get ~110MB/sec with virtio-blk, but I get only ~60MB/sec with
>>>>> vhost-blk. Wondering why?
>>>>>             
>>>> Try looking at the number of interrupts and/or the number of exits.
>>>>         
>>> I checked interrupts and IO exits - there is no major noticeable
>>> difference between the vhost-blk and virtio-blk scenarios.
>>>     
>>>> It could also be that you are overrunning some queue.
>>>>
>>>> I don't see any exit mitigation strategy in your patch:
>>>> when there are already lots of requests in a queue, it's usually
>>>> a good idea to disable notifications and poll the
>>>> queue as requests complete. That could help performance.
>>>>         
>>> Do you mean poll the eventfd for new requests instead of waiting
>>> for new notifications?
>>> Where do you do that in the vhost-net code?
>>>     
>>
>> vhost_disable_notify does this.
>>
>>   
>>> Unlike a network socket, since we are dealing with a file, there is
>>> no ->poll support for it, so I can't poll for the data. Also, the
>>> issue I am having is on the write() side.
>>>     
>>
>> Not sure I understand.
>>
>>   
>>> I looked at it some more - I see 512K write requests on the
>>> virtio queue in both the vhost-blk and virtio-blk cases. Both qemu
>>> and vhost do synchronous writes to the page cache (there is no
>>> write batching in qemu that affects this case). I am still puzzled
>>> as to why virtio-blk outperforms vhost-blk.
>>>
>>> Thanks,
>>> Badari
>>>     
>>
>> If you say the number of requests is the same, we are left with:
>> - requests are smaller for some reason?
>> - something is causing retries?
>>   
> No. IO request sizes are exactly the same (512K) in both cases. There
> are no retries or errors in either case. One thing I am not clear on:
> for some reason, the guest kernel might be able to push more data into
> the virtio ring with virtio-blk than with vhost-blk. Is this possible?
> Does the guest get to run much sooner in the virtio-blk case than in
> the vhost-blk case? Sorry if it's a dumb question - I don't understand
> all the vhost details :(
>
> Thanks,
> Badari
>

You said above that you observed the same number of requests in
userspace versus the kernel, and the request size is the same as well.
But somehow more data is transferred? I'm confused.
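
By the way, here is a rough sketch of the exit mitigation pattern that
vhost_disable_notify() is part of, modeled on vhost-net's handle_tx().
Treat it as illustrative only: handle_blk_kick() is a made-up name, and
the actual I/O submission is elided, since that is whatever your
vhost-blk patch does with the descriptor.

#include <linux/kernel.h>
#include <linux/mutex.h>
#include "vhost.h"	/* drivers/vhost/vhost.h */

/* Illustrative sketch, not from the vhost-blk patch: drain the
 * virtqueue with notifications disabled so the guest does not kick
 * (and force an exit) for every request it queues. */
static void handle_blk_kick(struct vhost_dev *dev,
			    struct vhost_virtqueue *vq)
{
	unsigned head, out, in;

	mutex_lock(&vq->mutex);
	/* Stop guest->host notifications while we poll the ring. */
	vhost_disable_notify(vq);

	for (;;) {
		head = vhost_get_vq_desc(dev, vq, vq->iov,
					 ARRAY_SIZE(vq->iov),
					 &out, &in, NULL, NULL);
		if (head == vq->num) {
			/* Ring empty: re-enable notification, then
			 * recheck to close the race with the guest
			 * adding a buffer just before we re-enabled. */
			if (unlikely(vhost_enable_notify(vq))) {
				vhost_disable_notify(vq);
				continue;
			}
			break;
		}

		/* ... submit the I/O described by vq->iov[0..out+in) ... */

		vhost_add_used_and_signal(dev, vq, head, 0);
	}
	mutex_unlock(&vq->mutex);
}

While requests keep arriving, the guest sees the avail ring being
drained without having to notify us, which is what cuts down on exits
under load.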

-- 
MST