Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevator layer


 



On Thu, May 28, 2009 at 9:00 AM, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> On Wed, May 27, 2009 at 01:53:59PM -0700, Nauman Rafique wrote:
>
> [..]
>> > +/**
>> > + * __bfq_lookup_next_entity - return the first eligible entity in @st.
>> > + * @st: the service tree.
>> > + *
>> > + * Update the virtual time in @st and return the first eligible entity
>> > + * it contains.
>> > + */
>> > +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
>> > +{
>> > +       struct io_entity *entity;
>> > +
>> > +       if (RB_EMPTY_ROOT(&st->active))
>> > +               return NULL;
>> > +
>> > +       bfq_update_vtime(st);
>>
>> Vivek, Paolo, Fabio,
>> Here we call bfq_update_vtime(), and this function can be called
>> even when we are just doing a lookup (and not an extract). So vtime
>> is updated while we are not really selecting the next queue for
>> service (for example, see elv_preempt_queue()). This can result in a
>> call to update_vtime when only an entity with a small weight (say
>> weight=1) is backlogged and another entity with a bigger weight (say
>> 10) is being serviced, so it is not in the tree (we extract the
>> entity that is being serviced). This results in a big vtime jump to
>> the start time of the weight-1 entity (an entity of weight 1 would
>> have big start times, as it has a small weight). Now when another
>> entity with a bigger weight (say 90) gets backlogged, it is assigned
>> a new vtime from the service tree's vtime, causing it to get a big
>> value. Meanwhile, the iog of weight 10 keeps getting service for
>> many quanta, as it was continuously backlogged.
>>
>> The problem happens because we extract an entity (removing it from
>> the tree) while it is being serviced, and do vtime jumps based on
>> what is still in the tree. I think we need an extra check on the
>> vtime of the in-service entity before we take a vtime jump.
>>
>> I have actually seen this happening when trying to debug one of my
>> tests. Please let me know what you think.
>>
>
> Hi Nauman,
>
> This sounds like a problem, but apart from elv_preempt_queue(), in
> which path do you see it?
>
> Initially Fabio posted patches where preemption was allowed only if
> the preempting queue is the next one to be served. To keep the
> behavior the same as CFQ's, I have changed that logic: if we decide to
> preempt the current queue (based on many checks, like sync vs. async),
> I expire the current queue and add the new queue to the front of the
> service tree so that it is selected next. It does impact fairness a
> bit, but that's how CFQ does it today, and the concept of preemption
> within the same prio class is somewhat peculiar.
>
> So in the latest patches the preemption path is covered. There do not
> seem to be other paths where we do lookups while an active entity is
> being served. For example, update_next_active() does not continue with
> the lookup if there is an active entity. So with this version of the
> patch, you should not face the issue.
>
> In general, I think it is a good idea to fix this in update_vtime(),
> so that vtime is not updated if there is an active entity, rather
> than relying on the caller to make sure update_vtime() is not called
> when there is an active entity. Do you have a quick patch for that?

Adding this extra check in update_vtime() is tricky. If we don't
update vtime, then it is possible that the lookup returns no entity
even though there are backlogged queues, since all backlogged entities
might be ineligible (i.e., have start times bigger than vtime). And
while an entity is under service, we don't really know which entity
will be next, as it could be the in-service entity itself, depending
on the actual time slice that entity ends up using.

You are right that the lookup (without extract) was done only in the
preemption path, and with your latest changes even that is removed. In
that case, just having a BUG_ON() to ensure this does not happen
should suffice.

commit dc54ad458b907db5523e8a088786014da8c4b3b2
Author: Nauman Rafique <nauman@xxxxxxxxxx>
Date:   Thu May 28 12:27:00 2009 -0700

    Add a BUG_ON() that fires if we end up updating vtime while just
    doing a lookup for the next entity (and not an extraction).

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index f1179aa..d512c1b 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1036,8 +1036,11 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
        /*
         * One can check for which will be next selected entity without
         * expiring the current one.
+        * Also, we should not call lookup when an entity is active, as
+        * we extract the active entity from the tree, and doing a lookup
+        * can then result in an erroneous vtime jump.
         */
-       BUG_ON(extract && sd->active_entity != NULL);
+       BUG_ON(sd->active_entity != NULL);

        for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
                entity = __bfq_lookup_next_entity(st);


I have some concerns about the new preemption logic. We modify the
start and finish times to make sure the entity gets selected next, so
an entity which preempts others can eventually get more service than
the others (especially if it keeps preempting them). It seems we can
fix that by temporarily pushing the entity to the front without
modifying its start/finish times. One way to do that is to add an
extra flag to the entity. When the flag is set, we can ignore finish
times and prefer that entity (in bfq_first_active_entity()). This has
the advantage that, since we preserve the start/finish times, such an
entity will over time be de-prioritized relative to the others. To
ensure that actually happens, we can introduce a threshold (call it
preempt_threshold): the entity with the flag set would jump to the
front only if the difference between its finish time and the finish
times of the other entities is within this threshold. Let me know if
this makes sense.

>
> Thanks
> Vivek
>

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

