Re: [RFC PATCH 0/2] virtio nvme

On Fri, Sep 11, 2015 at 10:53 AM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> On Fri, Sep 11, 2015 at 6:21 PM, Ming Lin <mlin@xxxxxxxxxx> wrote:
>> On Fri, 2015-09-11 at 08:48 +0100, Stefan Hajnoczi wrote:
>>> On Thu, Sep 10, 2015 at 6:28 PM, Ming Lin <mlin@xxxxxxxxxx> wrote:
>>> > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
>>> >> On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin@xxxxxxxxxx> wrote:
>>> >> > These 2 patches add virtio-nvme to the kernel and QEMU,
>>> >> > largely adapted from the virtio-blk and nvme code.
>>> >> >
>>> >> > As the title says, this is a request for comments.
>>> >> >
>>> >> > Play it in Qemu with:
>>> >> > -drive file=disk.img,format=raw,if=none,id=D22 \
>>> >> > -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
>>> >> >
>>> >> > The goal is to have a full NVMe stack from the VM guest (virtio-nvme)
>>> >> > to the host (vhost_nvme) to a LIO NVMe-over-fabrics target.
>>> >>
>>> >> Why is a virtio-nvme guest device needed?  I guess there must either
>>> >> be NVMe-only features that you want to pass through, or you think the
>>> >> performance will be significantly better than virtio-blk/virtio-scsi?
>>> >
>>> > It simply passes through NVMe commands.
>>>
>>> I understand that.  My question is why the guest needs to send NVMe commands?
>>>
>>> If the virtio_nvme.ko guest driver only sends read/write/flush then
>>> there's no advantage over virtio-blk.
>>>
>>> There must be something you are trying to achieve which is not
>>> possible with virtio-blk or virtio-scsi.  What is that?
>>
>> I actually learned from your virtio-scsi work.
>> http://www.linux-kvm.org/images/f/f5/2011-forum-virtio-scsi.pdf
>>
>> Then I thought a full NVMe stack from guest to host to target seemed
>> reasonable.
>>
>> I'm trying to achieve similar goals as virtio-scsi, but with the NVMe
>> protocol end to end:
>>
>> - Effective NVMe passthrough
>> - Multiple target choices: QEMU, LIO-NVMe(vhost_nvme)
>> - Almost unlimited scalability. Thousands of namespaces per PCI device
>> - True NVMe device
>> - End-to-end Protection Information
>> - ....
>
> The advantages you mentioned are already available in virtio-scsi,
> except for the NVMe command set.
>
> I don't understand what unique problem virtio-nvme solves yet.  If
> someone asked me to explain why NVMe-over-virtio makes sense compared
> to the existing virtio-blk/virtio-scsi or NVMe SR-IOV options, I
> wouldn't know the answer.  I'd like to learn that from you or anyone
> else on CC.
>
> Do you have a use case in mind?

One use case is an all-NVMe storage array, where there are no SCSI
devices at all.

Samsung demoed one at Flash Memory Summit 2015:
http://www.tomsitpro.com/articles/best-of-flash-memory-summit,1-2806.html
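For context, the cover letter's two -drive/-device options only make sense as
part of a complete QEMU invocation. A sketch of one is below; everything
besides those two options (the boot disk, memory size, and image paths) is
illustrative, and the virtio-nvme-pci device only exists with the RFC patches
applied:

```shell
# Illustrative launch: qemu-system-x86_64 must be built with the RFC
# virtio-nvme patches, and guest.img / disk.img are placeholder paths.
# disk.img backs the NVMe namespace; D22 links the -drive to the -device.
qemu-system-x86_64 \
    -machine accel=kvm -m 2048 -smp 2 \
    -drive file=guest.img,format=qcow2,if=virtio \
    -drive file=disk.img,format=raw,if=none,id=D22 \
    -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
```

Inside the guest, the namespace would then be exposed through the virtio_nvme
driver from patch 1 (presumably under the standard NVMe block naming, e.g.
/dev/nvme0n1, though that depends on how the RFC driver registers the device).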

>
> Stefan
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


