On Wed, Dec 28, 2016 at 9:55 AM, Christoph Hellwig <hch@xxxxxx> wrote:
> On Tue, Dec 27, 2016 at 01:21:28PM +0100, Linus Walleij wrote:
>> On the contrary, we expect a performance regression as MQ has no
>> scheduling. MQ was created for the use case where you have multiple
>> hardware queues and they are so hungry for work that you have a
>> problem feeding them all. Needless to say, on eMMC/SD we don't have
>> that problem right now, at least.
>
> That's not entirely correct. blk-mq is designed to replace the legacy
> request code eventually. The focus is on not wasting CPU cycles, and
> to support multiple queues (but not require them).

OK! Performance is paramount, so this indeed confirms that we need to
re-engineer the MMC/SD stack to not rely on this kthread to "drive"
transactions; instead we need to complete them quickly from the driver
callbacks and let MQ drive. A problem here is that issuing the
requests happens in blocking context while completion is in IRQ
context (for most drivers), so we need to look into this.

> Sequential workloads should always be as fast as the legacy path and
> use fewer CPU cycles,

That seems more or less confirmed by my dd test in the commit message:
sys time is really small with the simple time+dd tests.

> for random workloads we might have to wait for I/O scheduler support,
> which is under way now:
>
> http://git.kernel.dk/cgit/linux-block/log/?h=blk-mq-sched

Awesome.

> All that assumes a properly converted driver, which as seen by your
> experiments isn't easy for MMC, as it's a very convoluted beast
> thanks to the hardware interface, which isn't up to the standards we
> expect from block storage protocols.

I think we can hash it out; we just need to rewrite the MMC/SD core
request handling a bit.

Yours,
Linus Walleij
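
P.S. To make the blocking-issue vs. IRQ-completion problem concrete,
here is a rough, untested sketch of the shape I have in mind for a
converted driver. Only the blk-mq calls are real; my_host,
in_flight_rq, my_issue_request() and my_xfer_error() are made-up
placeholders for whatever the host driver actually does:

#include <linux/blk-mq.h>
#include <linux/interrupt.h>

/* Hand the request to the controller without sleeping. */
static int my_queue_rq(struct blk_mq_hw_ctx *hctx,
		       const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;
	struct my_host *host = hctx->queue->queuedata;

	blk_mq_start_request(rq);

	/* Fire off the transfer; do NOT wait for it to finish. */
	if (my_issue_request(host, rq))
		return BLK_MQ_RQ_QUEUE_BUSY;	/* blk-mq will requeue */

	return BLK_MQ_RQ_QUEUE_OK;
}

/* Host controller interrupt: the data transfer is done. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	struct my_host *host = dev_id;

	/* Push the completion back into blk-mq from IRQ context. */
	blk_mq_complete_request(host->in_flight_rq,
				my_xfer_error(host));
	return IRQ_HANDLED;
}

static struct blk_mq_ops my_mq_ops = {
	.queue_rq = my_queue_rq,
};

The point is that .queue_rq never blocks waiting for the transfer and
the IRQ handler feeds the completion straight back to the block layer,
so no kthread is needed in between to drive the state machine.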