Re: [PATCH 0/7] Various cleanup/fixes

On 18/10/12 15:12, Christoffer Dall wrote:
> On Thu, Oct 18, 2012 at 10:09 AM, Alexander Graf <agraf@xxxxxxx> wrote:
>>
>> On 18.10.2012, at 16:05, Marc Zyngier wrote:
>>
>>> On 18/10/12 14:51, Christoffer Dall wrote:
>>>> On Thu, Oct 18, 2012 at 7:00 AM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
>>>>> On 17/10/12 21:09, Christoffer Dall wrote:
>>>>>> On Wed, Oct 17, 2012 at 1:22 PM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
>>>>>>> On 17/10/12 17:53, Christoffer Dall wrote:
>>>>>>>> On Wed, Oct 17, 2012 at 12:09 PM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
>>>>>>>>> On 17/10/12 16:50, Christoffer Dall wrote:
>>>>>>>>>>>>>  ARM: KVM: move MMIO handling to its own files
>>>>>>>>>>>>
>>>>>>>>>>>> this one I'll look at later today.
>>>>>>>>>>>
>>>>>>>>>>> OK. Let me know what you think. I have a couple of other patches on the
>>>>>>>>>>> same theme.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I will. Since the mmio handling is controversial, it's good that we
>>>>>>>>>> split that up.
>>>>>>>>>>
>>>>>>>>>> Unless the other patches are *necessary* for an upstream merge, I
>>>>>>>>>> think we should announce a code freeze and target an upstream merge
>>>>>>>>>> asap for everyone's benefit.
>>>>>>>>>
>>>>>>>>> Depends on what you call necessary. A number of patches I've queued are
>>>>>>>>> related to moving accesses to HSR and friends into inline functions,
>>>>>>>>> making the code more readable - again, this could help the reviewers.
>>>>>>>>> They are mostly one-liners.
>>>>>>>>>
>>>>>>>>
>>>>>>>> necessary as in bugfixes or API stabilization.
>>>>>>>>
>>>>>>>> My whole point is that we can keep improving forever, but the more
>>>>>>>> cosmetics we change, the more changes need to be reviewed.
>>>>>>>
>>>>>>> I agree on the stabilization. But my point here is not to introduce new
>>>>>>> features, just to make the core code easier to review. One of the
>>>>>>> complaints I've heard so far is that the code is hard to read, which is
>>>>>>> not surprising given that there's a lot of it and that the problems it
>>>>>>> tackles are not simple.
>>>>>>>
>>>>>>> I'll post these patches as an RFC, and you're free to take them or not.
>>>>>>>
>>>>>>
>>>>>> ok, thanks, I'll have a look.
>>>>>
>>>>> Incoming.
>>>>>
>>>>>>>>>> It seems to me that we have a bug on restart to fix and
>>>>>>>>>
>>>>>>>>> Care to elaborate on this one?
>>>>>>>>>
>>>>>>>>
>>>>>>>> just fire up a guest and execute "reboot" in there and see the guest
>>>>>>>> kernel crash when it comes back up. If you can't reproduce, we should
>>>>>>>> talk more :)
>>>>>>>
>>>>>>> Interesting. It looks like the guest is taking a timer interrupt before
>>>>>>> being ready to handle it... Probably because the timer has been disabled
>>>>>>> while something is still pending. Investigating.
>>>>>>>
>>>>>>
>>>>>> yeah, but a reset should mask interrupts, right? So I'm not sure.
>>>>>> Anyway, cool if you have cycles to look into it.
>>>>>
>>>>> Reset? Which reset? We do not have a mechanism to propagate QEMU's reset
>>>>> into the VM. I think that is part of the problem, but that would be
>>>>> papering over a real bug hiding somewhere. Either in the vgic code or in
>>>>> the timer.
>>>>>
>>>>
>>>> I actually assumed that a reboot would generate a virtual reset of the
>>>> CPU, but I haven't looked into this at all. What exactly happens on
>>>> the guest kernel side when you call reboot?
>>>
>>> You hit some special VE device that causes the VCPUs to be reset (Peter,
>>> can you be more specific than I am?), but we don't signal anything to
>>> the VM itself - hence the guest restarting with timers ticking and the
>>> GIC in some arbitrary state (interrupts still queued in the list
>>> registers, for example...).
>>
>> If you ever want to do live migration, you need to be able to get/set the state of your GIC from user space anyway. So what would usually happen is that, on reset, QEMU would just set the state to a known good reset state.
>>
> hmm, Marc, how would we expose the GIC state? That's more than just
> the list registers, because we can have things queued on the kernel
> side as well, right? oh no...
> 

It should be the LRs, the CPU interface state, and the distributor.
But with a bit of massaging, we can get back to the distributor only,
having undone whatever is stored at the CPU level.

vgic_update_state() should be able to restore a consistent state from there.
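
To make this a little more concrete, here is a rough sketch of the three
pieces of state involved. This is purely illustrative: the structure
names, the field layout and the VGIC_NR_IRQS value are assumptions, not
an existing kernel or user space interface - only the GICH_*/GICD_*
register names come from the GICv2 architecture.

	#include <linux/types.h>

	#define VGIC_NR_IRQS	256	/* assumed total, for illustration only */

	/* Per-VCPU state (hypothetical): the hypervisor-side view */
	struct vgic_vcpu_state {
		u32	lr[64];		/* GICH_LRn: interrupts in flight (64 is the
					 * architectural maximum; implementations
					 * typically have far fewer) */
		u32	vmcr;		/* GICH_VMCR: virtual CPU interface control */
		u32	apr;		/* GICH_APR: active priorities */
	};

	/* Per-VM distributor state (hypothetical) */
	struct vgic_dist_state {
		u32	ctlr;					/* GICD_CTLR */
		u32	irq_enabled[VGIC_NR_IRQS / 32];		/* GICD_ISENABLERn */
		u32	irq_pending[VGIC_NR_IRQS / 32];		/* GICD_ISPENDRn */
		u32	irq_active[VGIC_NR_IRQS / 32];		/* GICD_ISACTIVERn */
		u8	irq_priority[VGIC_NR_IRQS];		/* GICD_IPRIORITYRn */
		u8	irq_target[VGIC_NR_IRQS];		/* GICD_ITARGETSRn */
	};

The "massaging" would then amount to folding whatever currently sits in
the LRs and the CPU interface back into the pending/active state of the
distributor, so that only the second structure has to be exposed.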

	M.

-- 
Jazz is not dead. It just smells funny...


_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

