Re: [Qemu-devel] [PATCH 0/4] megaraid_sas HBA emulation

   Hi,

> From a really quick view fixing up the data xfer code paths doesn't
> look too bad. Think I'll give it a try.

Oh well.  The interface was pretty obviously designed for the ESP, which 
is the oldest SCSI adapter in qemu ...

ESP: There is no scatter-gather support in the hardware.  So for large 
reads/writes there are quite a few switches between OS and ESP:  The OS 
saying "dma next sectors to this location" via ioports, the ESP doing it 
and raising an IRQ when done, next round.  The existing callback 
mechanism models that pretty closely.
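To make the cost of that model concrete, here is a minimal sketch of the chunked round-trip (all names -- EspState, esp_dma_chunk, CHUNK -- are invented for illustration, not the actual qemu code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the ESP-style chunked transfer: the guest programs one DMA
 * chunk at a time, the device completes it and raises an IRQ, and the
 * loop repeats until the whole request is done. */

#define CHUNK 512          /* one sector per OS<->device round trip */

typedef struct {
    const unsigned char *disk;   /* backing data */
    int irq_count;               /* IRQs raised so far */
} EspState;

/* One guest-initiated round: copy the next chunk, raise an IRQ. */
static void esp_dma_chunk(EspState *s, unsigned char *dst,
                          size_t offset, size_t len)
{
    memcpy(dst + offset, s->disk + offset, len);
    s->irq_count++;              /* completion callback fires here */
}

/* Reading 4 KiB therefore costs eight full round trips. */
static int esp_read(EspState *s, unsigned char *dst, size_t total)
{
    for (size_t off = 0; off < total; off += CHUNK)
        esp_dma_chunk(s, dst, off, CHUNK);
    return s->irq_count;
}
```

The point is the loop structure: every chunk is a separate callback invocation, which is exactly what the current interface models.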

USB: streams the data in small packets (smaller than sector size, 64 
bytes IIRC).  The current interface works well enough here.

LSI: Hops through quite a few hoops to work with the existing interface. 
  The current emulation reads one lsi script command at a time and does 
reads/writes in small pieces like the ESP.  I think it could do a lot 
better: parse lsi scripts into scatter lists and submit larger requests. 
  Maybe even have multiple requests in flight at the same time.  That 
probably means turning the lsi script parsing code upside down though.
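The parse-ahead idea could look roughly like this: walk consecutive data moves and collect them into one scatter list before touching the block layer.  Everything here (LsiMove, lsi_collect_sg, SGEntry) is a made-up sketch; real SCRIPTS parsing is far more involved:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint32_t addr; uint32_t len; } LsiMove;   /* one data move */
typedef struct { uint32_t addr; uint32_t len; } SGEntry;

/* Collect up to max consecutive data moves into sg[]; return count.
 * Instead of executing each move as its own small I/O, the entries
 * become one large request for the block layer. */
static int lsi_collect_sg(const LsiMove *script, int n,
                          SGEntry *sg, int max)
{
    int count = 0;
    for (int i = 0; i < n && count < max; i++) {
        sg[count].addr = script[i].addr;
        sg[count].len  = script[i].len;
        count++;
    }
    return count;
}

/* Total bytes the combined request would cover. */
static uint64_t sg_total(const SGEntry *sg, int n)
{
    uint64_t total = 0;
    for (int i = 0; i < n; i++)
        total += sg[i].len;
    return total;
}
```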

MEGASAS: I guess you have scatter lists at hand and want to submit them 
directly to the block layer for zerocopy block I/O.

So, where to go from here?

I'm tempted to zap the complete read-in-pieces logic.  For read/write 
transfers storage must be passed where everything fits in.  The 
completion callback is called on command completion and nothing else.

I think we'll need two modes here: xfer from/to a host-allocated bounce 
buffer (linear buffer) and xfer from/to guest memory (scatter list).
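A request descriptor for the two modes might look something like the sketch below.  All names (ScsiXfer, XferMode, etc.) are invented for illustration, not a proposed qemu API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t guest_addr; uint32_t len; } SGEntry;

typedef enum { XFER_BOUNCE, XFER_SGLIST } XferMode;

typedef struct {
    XferMode mode;
    union {
        struct { uint8_t *buf; size_t len; } bounce;    /* linear buffer */
        struct { SGEntry *sg; int count; } sglist;      /* guest memory */
    } u;
    /* Called exactly once, on command completion -- no per-chunk
     * callbacks any more. */
    void (*complete)(void *opaque, int ret);
    void *opaque;
} ScsiXfer;

/* The total payload size is well defined in either mode. */
static size_t xfer_size(const ScsiXfer *x)
{
    if (x->mode == XFER_BOUNCE)
        return x->u.bounce.len;
    size_t total = 0;
    for (int i = 0; i < x->u.sglist.count; i++)
        total += x->u.sglist.sg[i].len;
    return total;
}
```

The key property is that either way the full transfer is described up front, so the block layer can be handed one request covering everything.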

That means (emulated) hardware without scatter-gather support must use 
the bounce buffer mode and can't do zerocopy I/O.  I don't think this is 
a big problem though.  Lots of small I/O requests don't perform very 
well, so one big request filling the bounce buffer then memcpy() from/to 
guest memory will most likely be faster anyway.
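The bounce-buffer path would then be: one big read into the linear buffer, then memcpy() the pieces out to the guest's (possibly non-contiguous) destinations.  Guest memory is modeled as a flat array in this sketch and the names are invented:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct { size_t offset; size_t len; } GuestRegion;

/* After one large read has filled `bounce`, scatter it out to the
 * guest's regions with plain memcpy() calls. */
static void bounce_scatter(uint8_t *guest_ram,
                           const GuestRegion *regions, int n,
                           const uint8_t *bounce)
{
    size_t src = 0;
    for (int i = 0; i < n; i++) {
        memcpy(guest_ram + regions[i].offset, bounce + src, regions[i].len);
        src += regions[i].len;
    }
}
```

The memcpy() cost is paid once per request instead of taking an I/O round trip per small piece, which is the trade-off argued for above.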

comments?
   Gerd
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization
