Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

On 10/04/2012 12:56 PM, Raghavendra K T wrote:
> On 10/03/2012 10:55 PM, Avi Kivity wrote:
>> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
>>> * Avi Kivity <avi@xxxxxxxxxx> [2012-09-27 14:03:59]:
>>>
>>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>>>>>
>>> [...]
>>>>> 2) looking at the result (comparing A & C), I do feel we have
>>>>> significant overhead in iterating over vcpus (when compared to even
>>>>> a vmexit), so we would still need the undercommit fix suggested by
>>>>> PeterZ (which improves things by 140%)?
>>>>
>>>> Looking only at the current runqueue?  My worry is that it misses a lot
>>>> of cases.  Maybe try the current runqueue first and then others.
>>>>
>>>
>>> Okay. Do you mean we can have something like
>>>
>>> +       if (rq->nr_running == 1 && p_rq->nr_running == 1) {
>>> +               yielded = -ESRCH;
>>> +               goto out_irq;
>>> +       }
>>>
>>> in Peter's patch?
>>>
>>> (I thought a lot about && vs. ||; both seem to have their own cons.)
>>> But that should apply only when we have a short-term imbalance, as
>>> PeterZ said.
>>
>> I'm missing the context.  What is p_rq?
> 
> p_rq is the run queue of the target vcpu.
> What I was trying with that check was to address Rik's concern: suppose
> the rq of the source vcpu has one task, but the target has two tasks,
> with an eligible vcpu waiting to be scheduled.
> 
>>
>> What I mean was:
>>
>>    if can_yield_to_process_in_current_rq
>>       do that
>>    else if can_yield_to_process_in_other_rq
>>       do that
>>    else
>>       return -ESRCH
> 
> I think you are saying we have to check the run queue of the
> source vcpu, and if we have a vcpu belonging to the same VM there, try
> to yield to that, ignoring whichever target vcpu we received for
> yield_to?
> 
> Or is it that kvm_vcpu_yield_to should first check for a vcpu of the
> same VM on the same run queue, and only if that doesn't succeed, go
> looking for a vcpu on a different runqueue?

Right.  Prioritize vcpus that are cheap to yield to.  But it may return
bad results if all vcpus on the current runqueue are spinners, so it's
probably not a good idea.
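
For concreteness, that two-pass idea, expressed as code, might look
roughly like the sketch below.  This is not real kvm_vcpu_on_spin()
code; vcpu_on_this_rq() is a made-up helper standing in for "the
candidate's task is queued on this CPU's runqueue":

/*
 * Hypothetical sketch only: prefer yield candidates that share the
 * current runqueue, then fall back to vcpus on other runqueues,
 * and finally give up with -ESRCH.
 */
static int try_yield_prioritized(struct kvm_vcpu *me)
{
	struct kvm_vcpu *v;
	int i;

	/* Pass 1: vcpus on the current runqueue are cheap to yield to. */
	kvm_for_each_vcpu(i, v, me->kvm)
		if (v != me && vcpu_on_this_rq(v) && kvm_vcpu_yield_to(v) > 0)
			return 0;

	/* Pass 2: fall back to vcpus on other runqueues. */
	kvm_for_each_vcpu(i, v, me->kvm)
		if (v != me && kvm_vcpu_yield_to(v) > 0)
			return 0;

	/* Nobody suitable to yield to. */
	return -ESRCH;
}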

> Does it add more overhead, especially in the <= 1x scenario?

The current runqueue should have just our vcpu in that case, so low
overhead.  But it's a bad idea due to the above scenario.
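
For reference, a sketch of how the check quoted earlier might sit in
yield_to() in kernel/sched/core.c (against 3.6-era code, with the
existing retry and yield_to_task() details elided; the return type
would need to widen from bool to int so -ESRCH can be propagated):

int __sched yield_to(struct task_struct *p, bool preempt)
{
	struct rq *rq, *p_rq;
	unsigned long flags;
	int yielded = 0;

	local_irq_save(flags);
	rq = this_rq();
	p_rq = task_rq(p);

	/*
	 * If both the source and the target runqueue have a single
	 * runnable task, there is nobody to yield to; bail out before
	 * taking both runqueue locks.
	 */
	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
		yielded = -ESRCH;
		goto out_irq;
	}

	double_rq_lock(rq, p_rq);
	/* ... existing "boost the target task" logic elided ... */
	double_rq_unlock(rq, p_rq);
out_irq:
	local_irq_restore(flags);

	if (yielded > 0)
		schedule();

	return yielded;
}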

-- 
error compiling committee.c: too many arguments to function