On 05/05/2011 12:32 PM, Jan Kiszka wrote:
>>>> If the former, we simply do the reset operation per-cpu.  It's the
>>>> natural thing anyway.
>>>
>>> Quite wasteful /wrt to memory given that the majority will be identical.
>>
>> We're talking a few hundred bytes per cpu.  If you want to save memory,
>> look at the PhysPageDesc array, it takes up 0.4% of guest memory, so 4MB
>> for a 1GB guest.
>
> I know (that's fixable, BTW). But that should not excuse needless memory
> wasting elsewhere.
IMO a few hundred bytes is worth the correctness here.
>>>>> Nevertheless, the qemu-kvm code is already unneeded today and can
>>>>> safely be removed IMHO.
>>>>
>>>> I don't follow?  Won't it cause a regression?
>>>
>>> Not at all. We use the "individual care" pattern upstream now,
>>> specifically for those MSRs (kvmclock) for which the qemu-kvm code was
>>> introduced.
>>
>> I mean a future regression with current+patch qemu and a new kernel.
>
> For sane scenarios, such a combination should never expose new (ie.
> unknown from qemu's POV) MSRs to the guest. Thus not clearing them
> cannot cause any harm.
The problem is with hardware MSRs (PV MSRs are protected by cpuid, and always disable themselves when zeroed).
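To spell that out (a toy model only; the struct, field and function names here are invented, this is not the real KVM code): a PV MSR like kvmclock's legacy MSR_KVM_SYSTEM_TIME only exists for the guest if the corresponding KVM CPUID feature bit was exposed, and bit 0 of the written value is the enable bit, so writing 0 just switches the feature off.

#include <stdbool.h>
#include <stdint.h>

#define KVM_FEATURE_CLOCKSOURCE  (1 << 0)   /* bit 0 of CPUID leaf 0x40000001 EAX */
#define MSR_KVM_SYSTEM_TIME      0x12       /* legacy kvmclock MSR */

struct toy_vcpu {                           /* invented for the example */
    uint32_t pv_features;       /* KVM PV feature bits as seen by the guest */
    uint64_t system_time_msr;
    bool     kvmclock_enabled;
};

static bool toy_wrmsr_system_time(struct toy_vcpu *vcpu, uint64_t data)
{
    /* no CPUID bit -> the MSR is simply not there for the guest */
    if (!(vcpu->pv_features & KVM_FEATURE_CLOCKSOURCE))
        return false;                       /* would be a #GP */

    vcpu->system_time_msr = data;
    vcpu->kvmclock_enabled = data & 1;      /* bit 0 clear: feature disabled */
    return true;
}

Hardware MSRs have no such guard, which is why blindly writing 0 to them is risky.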
> BTW, you also do not know if 0 will be the right reset value for these
> to-be-invented MSRs. That could cause regression as well.
What I suggested wasn't zeroing them, but writing the value we read just after vcpu creation.
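Roughly like this (an illustrative sketch, not the actual qemu-kvm code; the tracked MSR list and helper names are made up): read the values with KVM_GET_MSRS right after KVM_CREATE_VCPU, keep them per vcpu, and feed them back through KVM_SET_MSRS on reset instead of writing zeros.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* example entries only -- whatever list qemu ends up tracking */
static const uint32_t tracked_msrs[] = {
    0x174,      /* MSR_IA32_SYSENTER_CS */
    0x277,      /* MSR_IA32_CR_PAT */
};
#define NR_TRACKED_MSRS (sizeof(tracked_msrs) / sizeof(tracked_msrs[0]))

struct msr_snapshot {
    struct kvm_msrs info;
    struct kvm_msr_entry entries[NR_TRACKED_MSRS];
};

/* right after KVM_CREATE_VCPU: remember what the MSRs start out as */
static int msr_snapshot_init(int vcpu_fd, struct msr_snapshot *snap)
{
    unsigned int i;

    memset(snap, 0, sizeof(*snap));
    snap->info.nmsrs = NR_TRACKED_MSRS;
    for (i = 0; i < NR_TRACKED_MSRS; i++) {
        snap->entries[i].index = tracked_msrs[i];
    }
    /* KVM fills in .data for every entry */
    return ioctl(vcpu_fd, KVM_GET_MSRS, &snap->info);
}

/* on system reset: write the creation-time values back, no zeroing */
static int msr_snapshot_restore(int vcpu_fd, struct msr_snapshot *snap)
{
    return ioctl(vcpu_fd, KVM_SET_MSRS, &snap->info);
}

Per vcpu that is one struct msr_snapshot, i.e. the few hundred bytes mentioned above.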
We had a regression when we started supporting PAT. Zeroing it causes the cache to be disabled, making everything ridiculously slow. We now special case it; my proposed solution would have taken care of it.
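For reference, the PAT case looks roughly like this (illustrative only; msr_reset_value() is a made-up helper, not a function from qemu):

#include <stdint.h>

/* Writing 0 sets every PAT entry to UC, i.e. the guest effectively runs
 * with caching disabled; the architectural power-on value must be used. */
#define MSR_IA32_CR_PAT      0x277
#define PAT_POWER_ON_VALUE   0x0007040600070406ULL   /* WB, WT, UC-, UC (x2) */

static uint64_t msr_reset_value(uint32_t index)      /* made-up helper */
{
    switch (index) {
    case MSR_IA32_CR_PAT:
        return PAT_POWER_ON_VALUE;   /* the special case mentioned above */
    default:
        return 0;                    /* the old blanket zeroing */
    }
}

With the snapshot approach no such per-MSR knowledge is needed; the value read at vcpu creation already is the correct reset value.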
--
error compiling committee.c: too many arguments to function