On Wed, May 12, 2021, Krish Sadhukhan wrote:
>
> On 5/12/21 9:01 AM, Jim Mattson wrote:
> > On Tue, May 11, 2021 at 7:37 PM Krish Sadhukhan
> > <krish.sadhukhan@xxxxxxxxxx> wrote:
> > > Add the following per-VCPU statistic to KVM debugfs to show if a given
> > > VCPU is running a nested guest:
> > >
> > >     nested_guest_running
> > >
> > > Also add this as a per-VM statistic to KVM debugfs to show the total
> > > number of VCPUs running a nested guest in a given VM.
> > >
> > > Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@xxxxxxxxxx>
> > This is fine, but I don't really see its usefulness. OTOH, one
>
> Two potential uses:
>
> 1. If Live Migration of L2 guests is broken/buggy, this can be used to
> determine a safer time to trigger Live Migration of L1 guests.

This seems tenuous.  The stats are inherently racy, so userspace would still
need to check for "guest mode" after retrieving state.  And wouldn't you want
to wait until L1 turns VMX/SVM _off_?  If migrating L2 is broken, simply
waiting until L2 exits likely isn't going to help all that much.

> 2. This can be used to create a time-graph of the load of L1 and L2 in a
> given VM as well as across the host.

Hrm, I like the idea of being able to observe how much time a vCPU is spending
in L1 vs. L2, but cross-referencing guest time with "in L2" seems difficult
and error prone.  I wonder if we can do better, i.e. explicitly track L1 vs.
L2+ usage.  I think that would also grant Jim's wish of being able to more
precisely track nested virtualization utilization.

> > statistic I would really like to see is how many vCPUs have *ever* run
> > a nested guest.