On Mon, 2019-05-06 at 11:31 +0300, Maxim Levitsky wrote:
> On Sat, 2019-05-04 at 08:49 +0200, Christoph Hellwig wrote:
> > On Fri, May 03, 2019 at 10:00:54PM +0300, Max Gurtovoy wrote:
> > > Don't see a big difference between taking an NVMe queue and
> > > namespace/partition to a guest OS or to P2P, since the IO is issued
> > > by an external entity and pooled outside the pci driver.
> >
> > We are not going to the queue aside either way.. That is where the
> > last patch in this series is already working to, and which would be
> > the sensible vhost model to start with.
>
> Why are you saying that? I actually prefer to use a separate queue per
> software nvme controller, because of the lower overhead (about half
> that of going through the block layer) and because it is better for
> QoS: a separate queue (or even a few queues if needed) will give the
> guest a mostly guaranteed slice of the device's bandwidth.

Sorry for the typos - I need more coffee :-)

Best regards,
	Maxim Levitsky