It would appear indeed that my primitive "out of my hat" algorithm is identical to LOOK.

On Mon, Mar 25, 2013 at 3:27 PM, Matthias Brugger <matthias.bgg@xxxxxxxxx> wrote:
> 2013/3/25 Raymond Jennings <shentino@xxxxxxxxx>:
>> On Sat, Mar 23, 2013 at 9:42 AM, Matthias Brugger
>> <matthias.bgg@xxxxxxxxx> wrote:
>>> On 03/23/2013 01:05 AM, Raymond Jennings wrote:
>>>
>>> On Fri, Mar 22, 2013 at 2:20 PM, <Valdis.Kletnieks@xxxxxx> wrote:
>>>
>>> On Fri, 22 Mar 2013 13:53:45 -0700, Raymond Jennings said:
>>>
>>> The first heap would be synchronous requests such as reads and syncs
>>> that someone in userspace is blocking on.
>>>
>>> The second is background I/O like writeback and readahead.
>>>
>>> The same distinction that CFQ makes.
>>>
>>> Again, this may or may not be a win, depending on the exact workload.
>>>
>>> If you are about to block on a userspace read, it may make sense to go
>>> ahead and tack a readahead onto the request "for free" - at 100MB/sec
>>> transfer and 10ms seeks, reading 1M costs the same as a seek. If you
>>> read 2M ahead and save 3 seeks later, you're winning. Of course, the
>>> *real* problem here is that how much readahead to actually do needs
>>> help from the VFS and filesystem levels - if there's only 600K more
>>> data before the end of the current file extent, doing more than 600K
>>> of read-ahead is a loss.
>>>
>>> Meanwhile, over on the write side of the fence, unless a program is
>>> specifically using O_DIRECT, userspace writes will get dropped into
>>> the cache and become writeback requests later on. So the vast majority
>>> of writes will usually be writebacks rather than synchronous writes.
>>>
>>> So in many cases, it's unclear how much performance CFQ gets from
>>> making the distinction (and I'm positive that given a sufficient
>>> supply of pizza and caffeine, I could cook up a realistic scenario
>>> where CFQ's behavior makes things worse)...
>>>
>>> Did I mention this stuff is tricky? :)
>>>
>>> Oh, I'm well aware that it's tricky, but as I said I'm more interested
>>> in learning the API than in tuning performance.
>>>
>>> Having a super-efficient toaster won't do much good if I can't plug
>>> the darn thing in.
>>>
>>>
>>> If you want to understand the interface, I would recommend starting
>>> with a look at the noop scheduler. It's by far the simplest
>>> implementation of a scheduler.
>>>
>>> For me, these slides were a good starting point:
>>> http://www.cs.ccu.edu.tw/~lhr89/linux-kernel/Linux%20IO%20Schedulers.pdf
>>>
>>> Hope that helps you bring the theory into practice :)
>>>
>>
>> Just what I was looking for.
>>
>> Now, how do I enable/disable my scheduler during kernel config?
>
> 1. Add your disk scheduler to the kernel sources (Kconfig, Makefile,
>    and the scheduler code itself, e.g. block/bfq-iosched.c)
> 2. Enable the bfq scheduler in the kernel config (building it as a
>    module might make sense)
> 3. Recompile and install your new kernel
> 4. You can load/unload the module dynamically. Via sysfs you can
>    associate the bfq scheduler with one disk.
>
> Happy hacking :)
>
> --
> ---
> motzblog.wordpress.com
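
For anyone following along, here is roughly what "start with noop" cashes out to. Below is a minimal sketch of a FIFO scheduler against the legacy (pre-blk-mq) elevator API of the 3.x kernels this thread is about; the hook prototypes (the init/exit ones in particular) have changed across versions, so treat it as an outline and copy the exact signatures from block/noop-iosched.c in your own tree. All of the fifo_* names and elevator_fifo are made up for the example.

/*
 * Minimal FIFO I/O scheduler sketch, modeled on block/noop-iosched.c
 * (legacy elevator API, roughly 3.x era -- check your tree's prototypes).
 */
#include <linux/blkdev.h>
#include <linux/elevator.h>
#include <linux/bio.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/init.h>

struct fifo_data {
	struct list_head queue;		/* single FIFO of pending requests */
};

/* Two queued requests were merged; drop the one that got absorbed. */
static void fifo_merged_requests(struct request_queue *q, struct request *rq,
				 struct request *next)
{
	list_del_init(&next->queuelist);
}

/* Hand the oldest queued request back to the block layer. */
static int fifo_dispatch(struct request_queue *q, int force)
{
	struct fifo_data *fd = q->elevator->elevator_data;

	if (!list_empty(&fd->queue)) {
		struct request *rq;

		rq = list_entry(fd->queue.next, struct request, queuelist);
		list_del_init(&rq->queuelist);
		elv_dispatch_sort(q, rq);
		return 1;
	}
	return 0;
}

/* New request from the block layer: just append it. */
static void fifo_add_request(struct request_queue *q, struct request *rq)
{
	struct fifo_data *fd = q->elevator->elevator_data;

	list_add_tail(&rq->queuelist, &fd->queue);
}

static int fifo_init_queue(struct request_queue *q)
{
	struct fifo_data *fd;

	fd = kmalloc_node(sizeof(*fd), GFP_KERNEL, q->node);
	if (!fd)
		return -ENOMEM;
	INIT_LIST_HEAD(&fd->queue);
	q->elevator->elevator_data = fd;
	return 0;
}

static void fifo_exit_queue(struct elevator_queue *e)
{
	struct fifo_data *fd = e->elevator_data;

	BUG_ON(!list_empty(&fd->queue));
	kfree(fd);
}

static struct elevator_type elevator_fifo = {
	.ops = {
		.elevator_merge_req_fn	= fifo_merged_requests,
		.elevator_dispatch_fn	= fifo_dispatch,
		.elevator_add_req_fn	= fifo_add_request,
		.elevator_init_fn	= fifo_init_queue,
		.elevator_exit_fn	= fifo_exit_queue,
	},
	.elevator_name = "fifo",
	.elevator_owner = THIS_MODULE,
};

static int __init fifo_init(void)
{
	return elv_register(&elevator_fifo);
}

static void __exit fifo_exit(void)
{
	elv_unregister(&elevator_fifo);
}

module_init(fifo_init);
module_exit(fifo_exit);

MODULE_AUTHOR("example");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal FIFO I/O scheduler sketch");

Steps 1 and 2 of the list above then amount to a config entry in block/Kconfig.iosched plus a matching obj-$(CONFIG_...) line in block/Makefile next to the existing schedulers, and step 4 is a sysfs write: once the module is loaded, the new name shows up in /sys/block/<disk>/queue/scheduler and echoing it there switches that disk over.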
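
And on the "two heaps" idea that opened the thread: the block layer already tags each request as synchronous or not, so the split Raymond describes can be expressed in the add_request/dispatch hooks. A hedged fragment, reusing the includes and registration boilerplate from the sketch above; rq_is_sync() is the real helper the block layer provides, everything named split_* is invented for illustration, and the always-sync-first policy is deliberately naive:

/* Hypothetical private data: one list per class instead of a single FIFO. */
struct split_data {
	struct list_head sync_queue;	/* reads/syncs someone is waiting on */
	struct list_head async_queue;	/* writeback, readahead, etc. */
};

static void split_add_request(struct request_queue *q, struct request *rq)
{
	struct split_data *sd = q->elevator->elevator_data;

	/* rq_is_sync() is how the block layer exposes the distinction. */
	if (rq_is_sync(rq))
		list_add_tail(&rq->queuelist, &sd->sync_queue);
	else
		list_add_tail(&rq->queuelist, &sd->async_queue);
}

static int split_dispatch(struct request_queue *q, int force)
{
	struct split_data *sd = q->elevator->elevator_data;
	struct list_head *src;
	struct request *rq;

	/* Naive policy: always drain synchronous work first. */
	if (!list_empty(&sd->sync_queue))
		src = &sd->sync_queue;
	else if (!list_empty(&sd->async_queue))
		src = &sd->async_queue;
	else
		return 0;

	rq = list_entry(src->next, struct request, queuelist);
	list_del_init(&rq->queuelist);
	elv_dispatch_sort(q, rq);
	return 1;
}

A policy this crude starves background writeback whenever synchronous traffic keeps arriving, which is exactly the kind of workload-dependent trade-off Valdis is pointing at; CFQ instead gives each queue a bounded time slice rather than absolute priority.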