From: Michael Kelley <mhklinux@xxxxxxxxxxx> Sent: Friday, March 7, 2025 3:21 PM
>
> From: Nuno Das Neves <nunodasneves@xxxxxxxxxxxxxxxxxxx> Sent: Friday, March 7, 2025
> 2:07 PM
> > [snip]
> > >> @@ -485,6 +504,17 @@ int hv_common_cpu_init(unsigned int cpu)
> > >>  		*outputarg = (char *)mem + HV_HYP_PAGE_SIZE;
> > >>  	}
> > >>
> > >> +	if (hv_root_partition()) {
> > >> +		synic_eventring_tail = (u8 **)this_cpu_ptr(hv_synic_eventring_tail);
> > >> +		*synic_eventring_tail = kcalloc(HV_SYNIC_SINT_COUNT,
> > >> +						sizeof(u8), flags);
> > >> +
> > >> +		if (unlikely(!*synic_eventring_tail)) {
> > >> +			kfree(mem);
> > >> +			return -ENOMEM;
> > >> +		}
> > >> +	}
> > >> +
> > >
> > > Adding this code under the "if(!*inputarg)" implicitly ties the lifecycle of
> > > synic_eventring_tail to the lifecycle of hyperv_pcpu_input_arg and
> > > hyperv_pcpu_output_arg. Is there some logical relationship between the
> > > two that warrants tying the lifecycles together (other than just both being
> > > per-cpu)? hyperv_pcpu_input_arg and hyperv_pcpu_output_arg have an
> > > unusual lifecycle management in that they aren't freed when a CPU goes
> > > offline, as described in the comment in hv_common_cpu_die(). Does
> > > synic_eventring_tail also need that same unusual lifecycle?
> > >
> > I thought about it, and no, I don't think it shares the same exact lifecycle.
> > It's only used by the mshv_root driver - it just needs to remain present
> > whenever there's a chance the module could be re-inserted and expect it to
> > be there.
> >
> > > Assuming there's no logical relationship, I'm thinking synic_eventring_tail
> > > should be managed independently of the other two. If it does need the
> > > unusual lifecycle, make sure to add a comment in hv_common_cpu_die()
> > > explaining why. If it doesn't need the unusual lifecycle, maybe just do
> > > the normal thing of allocating it in hv_common_cpu_init() and freeing
> > > it in hv_common_cpu_die().
> > >
> > Yep, I suppose it should just be freed normally then, assuming
> > hv_common_cpu_die() is only called when the hypervisor is going to reset
> > (or remove) the synic pages for this partition. Is that the case here?
> >
>
> Yes, it is the case here. A particular vCPU can be taken offline
> independent of other vCPUs in the VM (such as by writing "0"
> to /sys/devices/system/cpu/cpu<nn>/online). When that happens,
> the vCPU going offline runs hv_synic_cleanup() first, and then it
> runs hv_cpu_die(), which calls hv_common_cpu_die(). So by the
> time hv_common_cpu_die() runs, the synic_message_page and
> synic_event_page will have been unmapped and the pointers set
> to NULL.
>
> On arm64, there is no hv_cpu_init()/die(), and the "common"
> versions are called directly. Perhaps at some point in the future there
> will be arm64-specific things to be done, and hv_cpu_init()/die()
> will need to be added. But the ordering is the same and
> hv_synic_cleanup() runs first.
>
> So, yes, since synic_eventring_tail is tied to the synic, it sounds like
> the normal lifecycle could be used, like with the VP assist page that
> is handled in hv_cpu_init()/die() on x86.
>

One more thought: Perhaps there's more affinity with synic code than
with generic per-cpu memory, and it would be even better to allocate
and free the synic_eventring_tail memory for each vCPU in
hv_synic_init()/cleanup(), or hv_synic_enable/disable_regs(). There's
potentially some interaction with hibernate suspend/resume, which I
assume isn't a valid scenario for the root partition. But I haven't
thought through all the details.

Michael
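For reference, the "normal" per-CPU lifecycle being recommended above (allocate on CPU online, free on CPU offline) can be sketched roughly as follows. This is a userspace approximation, not the actual kernel code: cpu_init()/cpu_die() stand in for hv_common_cpu_init()/hv_common_cpu_die(), SINT_COUNT for HV_SYNIC_SINT_COUNT, and a plain array stands in for the per-cpu variable reached via this_cpu_ptr():

```c
#include <stdlib.h>

/* Stand-ins for HV_SYNIC_SINT_COUNT and the number of possible CPUs. */
#define SINT_COUNT 16
#define MAX_CPUS   8

/* Per-CPU pointer to the per-SINT eventring tail array. The kernel
 * reaches this through a per-cpu variable; a flat array models it here. */
static unsigned char *eventring_tail[MAX_CPUS];

/* Models hv_common_cpu_init(): allocate the tail array, zeroed, when a
 * CPU comes online. Returns nonzero on failure (-ENOMEM in the kernel). */
int cpu_init(unsigned int cpu)
{
	if (cpu >= MAX_CPUS)
		return -1;
	eventring_tail[cpu] = calloc(SINT_COUNT, sizeof(unsigned char));
	if (!eventring_tail[cpu])
		return -1;
	return 0;
}

/* Models hv_common_cpu_die(): by this point hv_synic_cleanup() has
 * already torn down the synic pages for this vCPU, so the tail array
 * can simply be freed rather than kept around like the input/output
 * arg pages. */
void cpu_die(unsigned int cpu)
{
	if (cpu >= MAX_CPUS)
		return;
	free(eventring_tail[cpu]);
	eventring_tail[cpu] = NULL;
}
```

The same allocate/free pairing would apply if the calls moved into hv_synic_init()/cleanup() instead; only the hook point changes, not the ownership model.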