On Saturday 16 Oct 2021 at 13:25:45 (+0100), Marc Zyngier wrote:
> At this stage, the old thread may have been destroyed and the memory
> recycled. What happens if, in the interval, that memory gets shared
> again in another context? My gut feeling is that either the sharing
> fails, or the unshare above breaks something else (the refcounting
> doesn't save us if the sharing is not with HYP).

Aha, yes, that's a good point. The problematic scenario would be: a vcpu runs in the context of task A, then blocks. Task A is then destroyed, but the vcpu isn't (possibly because userspace intends to run it in the context of another task, or something along those lines). But the thread_info and fpsimd_state of task A remain shared with the hypervisor until the next vcpu run, even though the memory has been freed by the host, and possibly reallocated to another guest in the meantime. So yes, at that point sharing/donating this memory range with a new guest will fail, and confuse the host massively :/

> In any case, I wonder whether we need a notification from the core
> code that a thread for which we have a HYP mapping is gone so that we
> can mop up the mapping at that point. That's similar to what we have
> for MMU notifiers and S2 PTs.
>
> This is doable by hooking into fpsimd_release_task() and extending
> thread_struct to track the sharing with HYP.

I've been looking into this, but struggled to find a good way to avoid all the races. Specifically, the case where a vcpu and the task which last ran it are destroyed at the same time isn't easy to handle: you end up having to maintain pointers from the task to the vcpu and vice versa, and I see no obvious way to update them both in a non-racy manner (other than one big lock to serialize all those changes, but that would mean serializing all task destructions, which really doesn't scale).
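For illustration, the stale-sharing scenario above can be modeled in plain userspace C. All names here (hyp_share(), hyp_unshare(), the shared-page table) are hypothetical stand-ins for the real pKVM host/hyp interfaces, just to show why re-sharing the recycled page fails:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model: the hyp tracks which host pages are currently shared
 * with it. Sharing an already-shared page fails, as it would in the
 * real host/hyp protocol.
 */
#define NR_PAGES 4

static bool shared_with_hyp[NR_PAGES];

static int hyp_share(int pfn)
{
	if (shared_with_hyp[pfn])
		return -1;		/* page already shared: refuse */
	shared_with_hyp[pfn] = true;
	return 0;
}

static int hyp_unshare(int pfn)
{
	if (!shared_with_hyp[pfn])
		return -1;
	shared_with_hyp[pfn] = false;
	return 0;
}

/* Task A's thread_info/fpsimd_state live in page 0 in this model. */
static int stale_share_scenario(void)
{
	int pfn = 0;

	/* vcpu runs in task A's context: its state is shared with hyp. */
	assert(hyp_share(pfn) == 0);

	/*
	 * Task A is destroyed, but nothing unshares the page: the host
	 * frees and recycles the memory while the hyp still considers
	 * it shared.
	 */

	/* Sharing the recycled page for another guest now fails. */
	return hyp_share(pfn);
}
```

Running stale_share_scenario() returns -1 on the second share, which is the point at which the real host would get confused.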
Another option is to take a refcount on 'current' from kvm_arch_vcpu_run_map_fp() before sharing thread-specific structs with the hyp, and to release the refcount on the previous task after unsharing. But that means we'll also have to drop the refcount when the vcpu gets destroyed, as well as explicitly unshare at that point. Shouldn't be too bad, I think. Thoughts?

Thanks,
Quentin
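As a postscript, here is a minimal userspace sketch of that refcounting scheme: pin the task whose structs are shared with the hyp, and drop the pin (plus the share) either on the next run from a different task or on vcpu destroy. All names (struct task, vcpu_run_map_fp(), etc.) are hypothetical stand-ins for the kernel ones, not a proposed implementation:

```c
#include <assert.h>
#include <stddef.h>

struct task {
	int refcount;
	int shared_with_hyp;	/* thread_info/fpsimd_state shared? */
};

struct vcpu {
	struct task *fp_owner;	/* task whose FP state the hyp may touch */
};

static void get_task(struct task *t) { t->refcount++; }
static void put_task(struct task *t) { t->refcount--; }

/* Called on vcpu run, before sharing 'current' with the hyp. */
static void vcpu_run_map_fp(struct vcpu *v, struct task *current)
{
	if (v->fp_owner == current)
		return;
	if (v->fp_owner) {
		v->fp_owner->shared_with_hyp = 0;	/* unshare previous task */
		put_task(v->fp_owner);			/* its memory may now be freed */
	}
	get_task(current);				/* keep 'current' alive while shared */
	current->shared_with_hyp = 1;
	v->fp_owner = current;
}

/* On vcpu destroy, explicitly unshare and drop the pin. */
static void vcpu_destroy(struct vcpu *v)
{
	if (v->fp_owner) {
		v->fp_owner->shared_with_hyp = 0;
		put_task(v->fp_owner);
		v->fp_owner = NULL;
	}
}
```

The key property is that a task's memory is never freed while still marked shared: the share and the reference are always dropped together.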