Re: [Android-virt] [PATCH v5 11/13] ARM: KVM: Support SMP hosts

On Mon, Dec 12, 2011 at 12:56 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 12/12/2011 07:37 PM, Christoffer Dall wrote:
>> >
>> > This looks overly complicated with two levels of locks.  x86 gets by
>> > with no locks, and a much more complicated interrupt architecture.
>> >
>> > My recommendation is:
>> >  wait_for_interrupts is managed solely by the vcpu thread
>> >  KVM_IRQ_LINE does a set_bit(, virt_irq) for the appropriate irq type,
>> > then IPI/wakeups the vcpu to make it examine both wait_for_interrupts
>> > and virt_irq.
>> >
>> >
>> this sounds pretty good to me.
>>
>> something like this:
>>
>> if (irq_level->level) {
>>       set_bit(bit_nr, &vcpu->arch.irq_lines);
>>       smp_mb();
>>       wake_up_interruptible(&vcpu->wq);
>
> or, smp_send_reschedule().  See kvm_vcpu_kick().
>
> An optimization: do a cmpxchg() and don't wake up if the operation raised
> IRQ while FIQ was already set (assuming that FIQ has a higher priority than IRQ).
>
>> } else
>>       clear_bit(bit_nr, &vcpu->arch.irq_lines);
>>
>>
>> and the vcpu thread would clear the wait_for_interrupts flag whenever
>> it sees the mask field go non-zero?
>
> Yes.  This is what x86 does, except it's a lot more complicated.
>
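The scheme agreed on above can be modelled in plain C roughly as follows. This is a user-space sketch of the logic being discussed, not the actual kernel patch: the bit positions, the `wakeups` counter (standing in for wake_up_interruptible()/smp_send_reschedule()) and the field layout are assumptions based on this thread.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Assumed bit layout: one word holds both interrupt lines. */
#define IRQ_BIT 0
#define FIQ_BIT 1

struct vcpu {
	atomic_ulong irq_lines;   /* written by KVM_IRQ_LINE, read by vcpu thread */
	bool wait_for_interrupts; /* managed solely by the vcpu thread */
	int wakeups;              /* stand-in for wake_up_interruptible()/IPI */
};

/* KVM_IRQ_LINE side: set or clear the line, then kick the vcpu if needed. */
static void kvm_irq_line(struct vcpu *vcpu, int bit, bool level)
{
	if (level) {
		/*
		 * atomic_fetch_or returns the old value, which gives the
		 * cmpxchg-style optimization Avi mentions: if FIQ (higher
		 * priority) was already pending, the vcpu has already been
		 * kicked and raising IRQ needs no further wakeup.
		 */
		unsigned long old = atomic_fetch_or(&vcpu->irq_lines, 1UL << bit);
		if (bit == IRQ_BIT && (old & (1UL << FIQ_BIT)))
			return;        /* FIQ already pending, skip wakeup */
		vcpu->wakeups++;       /* would be wake_up_interruptible() */
	} else {
		atomic_fetch_and(&vcpu->irq_lines, ~(1UL << bit));
	}
}

/* vcpu thread side: stop waiting as soon as any line is raised. */
static void vcpu_check_interrupts(struct vcpu *vcpu)
{
	if (atomic_load(&vcpu->irq_lines))
		vcpu->wait_for_interrupts = false;
}
```

The atomic read-modify-write plays the role of the set_bit()/smp_mb() pair in the sketch above: the line must be visible before the vcpu is woken, or the vcpu thread could go back to sleep having seen a stale zero mask.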
>> >> @@ -522,47 +573,42 @@ static int init_hyp_mode(void)
>> >>               return -ENOMEM;
>> >>
>> >>       /*
>> >> -      * Allocate stack page for Hypervisor-mode
>> >> +      * Allocate stack pages for Hypervisor-mode
>> >>        */
>> >> -     kvm_arm_hyp_stack_page = (void *)__get_free_page(GFP_KERNEL);
>> >> -     if (!kvm_arm_hyp_stack_page) {
>> >> -             err = -ENOMEM;
>> >> -             goto out_free_pgd;
>> >> -     }
>> >> +     for_each_possible_cpu(cpu) {
>> >> +             void *stack_page;
>> >>
>> >> -     hyp_stack_ptr = (unsigned long)kvm_arm_hyp_stack_page + PAGE_SIZE;
>> >> +             stack_page = (void *)__get_free_page(GFP_KERNEL);
>> >
>> > Best to allocate this (and other per-cpu state) on the cpu's node.
>> >
>>
>> why, for performance reasons?
>
> Yes, I'm assuming that all multi-socket A15s will be numa?
>

I have no idea. Marc, Peter, Catalin?

>> The code gets slightly more complicated,
>> since we have to pass the return value through the argument so we have
>> to pass an opaque pointer to the struct or something like that.
>
> Don't see why, just use alloc_pages_node().
>
got it; I thought you wanted to issue the actual allocation from each
cpu so as to parallelize the work. Now I understand.

>> >>
>> >> +     for_each_online_cpu(cpu) {
>> >> +             smp_call_function_single(cpu, cpu_init_hyp_mode,
>> >> +                                      (void *)(long)init_phys_addr, 1);
>> >> +     }
>> >
>> > Need similar code for cpu hotplug.  See kvm_cpu_hotplug() and
>> > kvm_arch_hardware_enable() which do all this for you.
>> >
>> so just to be sure, this will only be called for cpus that are
>> hotplugged right? we still call the cpu_init_hyp_mode for each cpu
>> that's online at this point.
>
> The infrastructure will call kvm_arch_hardware_enable() for all
> currently online cpus and any future hotplugged cpu.  Just follow
> kvm_init() and fill in the arch callbacks.  You do have a call to
> kvm_init() somewhere, yes?
>
>> Do we need some locking to make sure the two don't overlap (like
>> should I grab the kvm_lock here)?
>
> Let kvm_init() do the driving and relax.
>
This sounds good (actually looking into this was on my todo list, so
now is a good time I guess).
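The callback flow Avi describes can be modelled like this. A user-space sketch: the function names mirror the generic KVM ones, but the online-cpu bookkeeping here is invented for illustration; in the real code kvm_init() registers a hotplug notifier and the arch only fills in the enable callback.

```c
#include <stdbool.h>
#include <assert.h>

#define NR_CPUS 4          /* placeholder */

static bool cpu_online[NR_CPUS];
static bool hyp_mode_on[NR_CPUS];

/*
 * Arch callback: generic code invokes this on every cpu it brings up;
 * on ARM it would run cpu_init_hyp_mode() for that cpu.
 */
static void kvm_arch_hardware_enable(int cpu)
{
	hyp_mode_on[cpu] = true;
}

/* Generic kvm_init(): enables every currently online cpu. */
static void kvm_init(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_online[cpu])
			kvm_arch_hardware_enable(cpu);
}

/*
 * Generic hotplug path: a newly onlined cpu gets the same callback,
 * so the arch code never tracks hotplug itself and needs no locking
 * against init.
 */
static void kvm_cpu_hotplug(int cpu)
{
	cpu_online[cpu] = true;
	kvm_arch_hardware_enable(cpu);
}
```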
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

