Re: [RFC PATCH 0/2] virtio nvme

On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
> On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > > Hi Ming & Co,
> > > 
> > > On Thu, 2015-09-10 at 10:28 -0700, Ming Lin wrote:
> > > > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> > > > > On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin@xxxxxxxxxx> wrote:
> > > > > > These 2 patches add virtio-nvme to the kernel and QEMU,
> > > > > > basically adapted from the virtio-blk and nvme code.
> > > > > >
> > > > > > As the title says, this is a request for your comments.
> > > 
> > > <SNIP>
> > > 
> > > > > 
> > > > > At first glance it seems like the virtio_nvme guest driver is just
> > > > > another block driver like virtio_blk, so I'm not clear why a
> > > > > virtio-nvme device makes sense.
> > > > 
> > > > I think the future "LIO NVMe target" will only speak the NVMe protocol.
> > > > 
> > > > Nick(CCed), could you correct me if I'm wrong?
> > > > 
> > > > For the SCSI stack, we have:
> > > > virtio-scsi (guest)
> > > > tcm_vhost (or vhost_scsi, host)
> > > > LIO-scsi-target
> > > > 
> > > > For the NVMe stack, we'll have similar components:
> > > > virtio-nvme (guest)
> > > > vhost_nvme (host)
> > > > LIO-NVMe-target
> > > > 
> > > 
> > > I think it's more interesting to consider a 'vhost style' driver that
> > > can be used with unmodified nvme host OS drivers.
> > > 
> > > Dr. Hannes (CC'ed) had done something like this for megasas a few years
> > > back using specialized QEMU emulation + eventfd based LIO fabric driver,
> > > and got it working with Linux + MSFT guests.
> > > 
> > > Doing something similar for nvme would (potentially) be on par with
> > > current virtio-scsi+vhost-scsi small-block performance for scsi-mq
> > > guests, without the extra burden of a new command set specific virtio
> > > driver.
> > 
> > I'm trying to understand it.
> > Is it something like the diagram below?
> > 
> >   .------------------------.   MMIO   .---------------------------------------.
> >   | Guest                  |--------> | Qemu                                  |
> >   | Unmodified NVMe driver |<-------- | NVMe device simulation(eventfd based) |
> >   '------------------------'          '---------------------------------------'
> >                                                   |          ^
> >                                       write NVMe  |          |  notify command
> >                                       command     |          |  completion
> >                                       to eventfd  |          |  to eventfd
> >                                                   v          |
> >                                       .--------------------------------------.
> >                                       | Host:                                |
> >                                       | eventfd based LIO NVMe fabric driver |
> >                                       '--------------------------------------'
> >                                                         |
> >                                                         | nvme_queue_rq()
> >                                                         v
> >                                        .--------------------------------------.
> >                                        | NVMe driver                          |
> >                                        '--------------------------------------'
> >                                                         |
> >                                                         |
> >                                                         v
> >                                        .-------------------------------------.
> >                                        | NVMe device                         |
> >                                        '-------------------------------------'
> > 
> 
> Correct.  The LIO driver on the KVM host would handle some amount of
> NVMe host interface emulation in kernel code, and would be able to
> decode NVMe Read/Write/Flush operations and translate + submit them to
> existing backend drivers.
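
For the QEMU side of the diagram, I picture the "write NVMe command to
eventfd" step as an ioeventfd bound to the submission queue doorbell, so
a guest doorbell write kicks the host driver without bouncing through
QEMU userspace. A rough sketch using QEMU's memory API (the helper and
the field names around it are invented here):

#include "exec/memory.h"
#include "qemu/event_notifier.h"

/* Hypothetical helper: bind an eventfd to one SQ doorbell in BAR0 */
static void nvme_wire_sq_doorbell(MemoryRegion *bar0, hwaddr db_offset,
                                  EventNotifier *sq_kick)
{
    event_notifier_init(sq_kick, 0);
    /*
     * match_data=false: any 4-byte write to the doorbell signals the
     * eventfd; the host driver then fetches new SQ entries from guest
     * memory itself.
     */
    memory_region_add_eventfd(bar0, db_offset, 4, false, 0, sq_kick);
}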

Let me refer to the "eventfd based LIO NVMe fabric driver" as
"tcm_eventfd_nvme".

Currently, LIO frontend drivers (iscsi, fc, vhost-scsi, etc.) talk to
LIO backend drivers (fileio, iblock, etc.) with SCSI commands.

Did you mean that the "tcm_eventfd_nvme" driver would need to translate
NVMe commands to SCSI commands and then submit them to the backend
driver?

But I thought the future "LIO NVMe target" would let frontend drivers
talk to backend drivers directly with NVMe commands, without any
translation.

Am I wrong?
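
For concreteness, this is the kind of translation I'm asking about,
i.e. turning an NVMe Read into a SCSI READ(16) CDB (purely
illustrative; the helper is invented here, the CDB layout is from SBC):

#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/nvme.h>
#include <scsi/scsi.h>		/* READ_16 */
#include <asm/unaligned.h>

/* Hypothetical NVMe Read -> SCSI READ(16) translation */
static void nvme_read_to_read16(struct nvme_rw_command *rw, u8 *cdb)
{
	memset(cdb, 0, 16);
	cdb[0] = READ_16;	/* 0x88 */
	put_unaligned_be64(le64_to_cpu(rw->slba), &cdb[2]);
	/* NVMe "length" (NLB) is 0-based; SCSI transfer length is not */
	put_unaligned_be32(le16_to_cpu(rw->length) + 1, &cdb[10]);
}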

> 
> As with the nvme-over-fabrics case, it would be possible to do a
> mapping of backend driver queue resources for real NVMe hardware
> (e.g. target_core_nvme), but since it would still be doing close to
> the same amount of software emulation for both backend driver cases,
> I wouldn't expect much of a performance advantage over just using
> normal submit_bio().
> 
> --nab
> 
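
And to make sure I follow the submit_bio() point: I guess the
non-translating path would end up doing something like this on the
backend block device (sketch only, 2015-era bio API; error handling,
data mapping and the completion callback are omitted):

#include <linux/fs.h>
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/nvme.h>

/* Hypothetical: submit one decoded NVMe read as a bio on the backend bdev */
static int tcm_eventfd_nvme_submit_read(struct block_device *bdev,
					struct nvme_rw_command *rw,
					unsigned int lba_shift,
					struct page *page, unsigned int len)
{
	struct bio *bio = bio_alloc(GFP_KERNEL, 1);

	if (!bio)
		return -ENOMEM;
	bio->bi_bdev = bdev;
	/* convert the namespace LBA to a 512-byte sector number */
	bio->bi_iter.bi_sector = le64_to_cpu(rw->slba) << (lba_shift - 9);
	bio_add_page(bio, page, len, 0);
	submit_bio(READ, bio);	/* bi_end_io omitted in this sketch */
	return 0;
}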

