On 2011-08-11 02:29, Shaohua Li wrote:
> 2011/8/10 Jens Axboe <axboe@xxxxxxxxx>:
>> On 2011-08-10 10:47, Shaohua Li wrote:
>>> 2011/8/10 Kyungmin Park <kmpark@xxxxxxxxxxxxx>:
>>>> On Wed, Aug 10, 2011 at 5:08 PM, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>> On 2011-08-10 01:43, Kyungmin Park wrote:
>>>>>> On Wed, Aug 10, 2011 at 3:52 AM, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>>>> On 2011-08-09 05:47, Kyungmin Park wrote:
>>>>>>>> Hi Jens,
>>>>>>>>
>>>>>>>> eMMC devices can now use information from the upper layers to
>>>>>>>> improve data performance and reliability.
>>>>>>>>
>>>>>>>> . Context ID
>>>>>>>> Using the context information, the device can sort data internally
>>>>>>>> and improve performance. The main problem is defining what a
>>>>>>>> "context" is. I expected the cfq queue to have its own unique ID,
>>>>>>>> but it doesn't, so I decided to use the pid instead.
>>>>>>>>
>>>>>>>> . Data Tag
>>>>>>>> Using the Data Tag (1 bit of information), the device writes hot
>>>>>>>> data to the SLC area, which makes the chip more reliable. At first
>>>>>>>> I expected to use REQ_META, but current ext4 doesn't pass
>>>>>>>> WRITE_META, only READ_META, so that needs more investigation.
>>>>>>>>
>>>>>>>> Teaching the device about these characteristics is helpful. After
>>>>>>>> some consideration, I think this information needs to be passed in
>>>>>>>> the request data structure.
>>>>>>>>
>>>>>>>> Can you give your opinion, and are these proper fields for struct
>>>>>>>> request?
>>>>>>>
>>>>>>> You need this to work on all IO schedulers, not just CFQ.
>>>>>>
>>>>>> Of course, if the concept is acceptable, I'll add it to the other
>>>>>> IO schedulers as well.
>>>>>>
>>>>>>> And since that's the case, there's no need to add this field, since
>>>>>>> you can just retrieve it if the driver asks for it. For CFQ, it
>>>>>>> could look like this:
>>>>>>>
>>>>>>> static int cfq_foo(struct request *rq)
>>>>>>> {
>>>>>>>         struct cfq_queue *cfqq = rq->elevator_private[1];
>>>>>>>
>>>>>>>         if (cfqq)
>>>>>>>                 return cfqq->pid;
>>>>>>>
>>>>>>>         return -1;
>>>>>>> }
>>>>>>
>>>>>> The actual user of this information is the device driver, e.g.
>>>>>> drivers/mmc/card/block.c, so it's not good to use a CFQ data
>>>>>> structure in the driver. Later on, this could also be used by SCSI
>>>>>> device drivers.
>>>>>
>>>>> No, what I'm suggesting above is the CFQ implementation. You would
>>>>> need to wire up an elv_ops->get_foo() and have the IO schedulers
>>>>> fill it in. If you notice, the above function does not take or
>>>>> output anything related to CFQ in particular, it just returns the
>>>>> unique key you need.
>>>>>
>>>>> It's either the above, or a field in the request that the schedulers
>>>>> fill out. However, it'd be somewhat annoying to grow struct request
>>>>> for something that has a narrow scope of use. Hence the suggestion
>>>>> to add a strategy helper for this.
>>>>
>>>> Okay, I'll add a new elevator function for getting the context or
>>>> other hints. BTW, is it okay to call an elevator function from the
>>>> device driver?
>>>>
>>>> The quick-n-dirty call looks like this in drivers/mmc/card/block.c:
>>>>
>>>> struct elevator_queue *e = md->queue.queue->elevator;
>>>> int context = -1;
>>>>
>>>> if (e->ops->elevator_get_req_hint_fn && req) {
>>>>         context = e->ops->elevator_get_req_hint_fn(req);
>>>> }
>>>
>>> I'm wondering how the driver deals with an elevator switch. A context
>>> id from one elevator might just be garbage for another elevator.
>>
>> Any request with sched private data is drained prior to switching over.
>> This problem isn't unique to this context id, we have other per-request
>> IO scheduler data structures associated with the request, too.
>
> What I'm afraid of is that the context id isn't consistent. Say in cfq,
> the context id for app1 is 1 and for app2 is 2. Then, after switching to
> deadline, the context id for app1 is 2 and for app2 is 1. Will the
> driver be confused by this?

It's a hint, so it should not be a worry at all. Things should function
perfectly fine with just returning 1 all the time; the idea is to allow
some more efficiency in scheduling on the hw side if we can.

Realistically, the device isn't going to be tracking a ton of pids
anyway. In the rare event of an IO scheduler switch, things will settle
down very quickly.

-- 
Jens Axboe
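For reference, a minimal sketch of how the strategy-helper approach
discussed in the thread could be wired up end to end. The names
elv_rq_get_hint() and cfq_get_req_hint(), and the
elevator_get_req_hint_fn hook itself, are hypothetical (following
Kyungmin's quick-n-dirty patch and Jens' CFQ example above), not an
existing kernel interface; the layout assumes the 2011-era elevator_ops:

/*
 * Sketch only: elevator_get_req_hint_fn, elv_rq_get_hint() and
 * cfq_get_req_hint() are hypothetical names based on the discussion
 * above, not an existing kernel API.
 */

/* block/cfq-iosched.c: per-scheduler implementation, per Jens' example */
static int cfq_get_req_hint(struct request *rq)
{
        struct cfq_queue *cfqq = rq->elevator_private[1];

        /* use the owning cfq_queue's pid as the opaque context key */
        if (cfqq)
                return cfqq->pid;

        return -1;
}

/* block/elevator.c: generic wrapper so drivers never touch scheduler data */
int elv_rq_get_hint(struct request_queue *q, struct request *rq)
{
        struct elevator_queue *e = q->elevator;

        if (e->ops->elevator_get_req_hint_fn)
                return e->ops->elevator_get_req_hint_fn(rq);

        /* no hint from this scheduler; the device treats -1 as "don't care" */
        return -1;
}

/* drivers/mmc/card/block.c: the driver side stays elevator-agnostic */
        context = req ? elv_rq_get_hint(md->queue.queue, req) : -1;

With a wrapper like this, an elevator switch is transparent to the MMC
driver: whichever scheduler is active either answers with its own key or
returns -1, and the device only ever sees an opaque hint, which matches
Jens' point that the value has no semantics beyond grouping.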