Jens Axboe <axboe@xxxxxxxxx> writes:

> On 2010-11-10 21:03, Vivek Goyal wrote:
>> On Wed, Nov 10, 2010 at 01:26:21PM -0500, David Zeuthen wrote:
>>> Hi,
>>>
>>> On Wed, Nov 10, 2010 at 11:47 AM, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
>>>> Hi,
>>>>
>>>> From within the block layer in the kernel, it is difficult to
>>>> automatically detect the performance characteristics of the
>>>> underlying storage. It was suggested by Jens Axboe at LSF2010 that
>>>> we write a udev rule to tune the I/O scheduler properly for most
>>>> cases. The basic approach is to leave CFQ's default tunings alone
>>>> for SATA disks. For everything else, turn off slice idling and bump
>>>> the quantum in order to drive higher queue depths. This patch is an
>>>> attempt to implement this.
>>>>
>>>> I've tested it in a variety of configurations:
>>>> - cciss devices
>>>> - sata disks
>>>> - sata ssds
>>>> - enterprise storage (single path)
>>>> - enterprise storage (multi-path)
>>>> - multiple paths to a sata disk (yes, you can actually do that!)
>>>>
>>>> The tuning works as expected in all of those scenarios. I look
>>>> forward to your comments.
>>>
>>> This looks useful, but I really think the kernel driver creating the
>>> block device should choose/change the defaults for the created block
>>> device - it seems really backwards to do this in user-space as an
>>> afterthought.
>>
>> I think it just becomes a little easier to implement in user space, so
>> that if things don't work as expected, somebody can easily disable the
>> rules or refine them further to better suit their needs, instead of the
>> driver hardcoding this decision.
>
> That's the primary reason why I suggested doing this in user space. Plus
> we don't always know in the kernel; at least this provides an easier way
> to auto-tune things.

Right, so given the above, is there still opposition to doing this in udev?

Thanks!
Jeff
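
For readers following along, the mechanism under discussion can be sketched
as a single udev rule that matches non-SATA whole disks running CFQ and
writes the two sysfs tunables named above. The sketch below is hypothetical
and is not the patch posted in this thread: the ENV{ID_BUS} test as the
SATA check, the ENV{DEVTYPE} match, and the quantum value of 32 are
assumptions, and the rule would have to run after whatever earlier rules
import ID_BUS for the device.

  # Hypothetical sketch, not the actual patch from this thread.
  # Skip partitions and SATA disks; for everything else using CFQ,
  # disable slice idling and raise the quantum to drive deeper queue
  # depths. The ID_BUS test and the value 32 are assumptions. The rule
  # is kept on one line, as older udev required.
  ACTION=="add|change", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_BUS}!="ata", ATTR{queue/scheduler}=="*cfq*", ATTR{queue/iosched/slice_idle}="0", ATTR{queue/iosched/quantum}="32"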