Re: Correct use of ap->lock versus ap->host->lock ?

Mark Lord wrote:
> There are definitely other fish to fry elsewhere,
> but don't discount the effect of "a couple register writes",
> which are frequently done with readbacks to flush them,
> at a cost equivalent to several thousand CPU cycles per readback.

Those numbers are for slower PIO, not MMIO as found on new SATA controllers...
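
For reference, the pattern Mark describes looks something like this -- a
minimal sketch, where PORT_IRQ_MASK and the surrounding names are
illustrative, not any particular driver's:

        /* Posted MMIO write: may linger in a bridge/chipset write buffer. */
        writel(mask, mmio + PORT_IRQ_MASK);

        /* Read back from the same device to flush the posted write.  The
         * round trip to the device, not the write itself, is what burns
         * the cycles quoted above -- and it is much cheaper over MMIO
         * than over legacy PIO. */
        (void) readl(mmio + PORT_IRQ_MASK);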


> This prevents new command issue from overlapping interrupt handling
> for any ports of the same host.  Again, not a biggie today,
> but tomorrow perhaps..
>
> And still probably not worth the fuss on any hardware that has
> registers shared across multiple ports (eg. Marvell controllers).


Remember, I come from the land of networking, where we already see over 500k packets per second. None of this is new stuff.

In networking you lock both TX submission (analogy: scsi queuecommand) and TX completion (analogy: completion via irq handler), and we don't see any such problems on multi-port controllers.

It is not worth the fuss on new SATA controllers, which look just like NIC hardware has looked for a decade -- DMA rings, with a single MMIO write (or write+read) to indicate software has new packets for hardware. Even at exponentially higher FIS rates, locking doesn't become an issue.
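
As a rough sketch of that pattern -- every name here (struct tx_ring,
TX_DOORBELL, hw_owns()) is made up for illustration, not any real
driver's API:

struct desc {                           /* illustrative descriptor layout */
        __le64  addr;
        __le32  len;
        __le32  flags;
};

struct tx_ring {
        spinlock_t      lock;
        struct desc     *desc;          /* DMA descriptor ring */
        unsigned int    head, tail, size;
        void __iomem    *mmio;
};

/* placeholder: does the hardware still own this descriptor? */
static bool hw_owns(const struct desc *d);

/* submission path (analogy: scsi queuecommand) */
static void ring_submit(struct tx_ring *r, const struct desc *d)
{
        unsigned long flags;

        spin_lock_irqsave(&r->lock, flags);
        r->desc[r->head] = *d;
        r->head = (r->head + 1) % r->size;
        writel(r->head, r->mmio + TX_DOORBELL); /* the one MMIO kick */
        spin_unlock_irqrestore(&r->lock, flags);
}

/* completion path (analogy: the irq handler); interrupts are already
 * off on this CPU, so a plain spin_lock suffices */
static void ring_complete(struct tx_ring *r)
{
        spin_lock(&r->lock);
        while (r->tail != r->head && !hw_owns(&r->desc[r->tail]))
                r->tail = (r->tail + 1) % r->size;
        spin_unlock(&r->lock);
}

Both paths serialize on one per-ring lock, and the hardware sees exactly
one doorbell write per submission, which is why lock hold times stay tiny
even at very high packet rates.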

And it's not worth the fuss for older controllers, because they're, well, old, and SMP locking performance is not a pressing issue there. :)


Off the top of my head, here are two issues much more pressing than host-versus-port libata locking:


* batch submission. The block and scsi layers pass commands to us one at a time, which is awful for any modern hardware: each command in the request queue costs a full lock+queuecommand+unlock cycle. It is better, for both locking and hardware (DMA ring) submission, to add a whole batch of commands to the queue and then kick the hardware once -- see the sketch below.
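
Something like this, reusing the made-up ring from the sketch above --
the point being one lock round trip and one doorbell write for N
commands, instead of N of each:

static void ring_submit_batch(struct tx_ring *r,
                              const struct desc *batch, int n)
{
        unsigned long flags;
        int i;

        spin_lock_irqsave(&r->lock, flags);
        for (i = 0; i < n; i++) {
                r->desc[r->head] = batch[i];
                r->head = (r->head + 1) % r->size;
        }
        writel(r->head, r->mmio + TX_DOORBELL); /* kick once, at the end */
        spin_unlock_irqrestore(&r->lock, flags);
}

Nothing above us can call such an interface today, of course; the block
and scsi layers would have to grow one first.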


* figuring out an interrupt mitigation scheme that is useful in the real world. Right now, even a 16-port SATA card will not give you a lot of interrupts. At some point, it does become useful to turn on interrupt mitigation.

That point is dynamic, and must be measured as such: it depends on overall system load (not just load on a single SATA chip). Networking's NAPI takes this into account, though over time we've seen the best results by combining hardware interrupt mitigation with software interrupt mitigation.
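
The software half of that, NAPI-style, has roughly the shape below.
struct my_dev, chip_irq_disable()/chip_irq_enable() and
process_one_completion() are placeholders, and the hardware half (a
coalescing timer/threshold register) is not shown:

struct my_dev {
        struct napi_struct      napi;
        /* ... device state ... */
};

/* hard irq: mask the chip and defer the real work to polling */
static irqreturn_t my_irq(int irq, void *data)
{
        struct my_dev *dev = data;

        chip_irq_disable(dev);          /* placeholder: mask at the device */
        napi_schedule(&dev->napi);
        return IRQ_HANDLED;
}

/* softirq poll: 'budget' caps the work done per pass, so one busy
 * device cannot starve the rest of the system -- this is where
 * overall system load, not just this chip's load, enters the picture */
static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_dev *dev = container_of(napi, struct my_dev, napi);
        int done = 0;

        while (done < budget && process_one_completion(dev))
                done++;

        if (done < budget) {            /* ran out of work: back to irqs */
                napi_complete(napi);
                chip_irq_enable(dev);   /* placeholder: unmask */
        }
        return done;
}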


Regards,

	Jeff


