> On 1 Jun 2022, at 10:03, Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> wrote:
>
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
>
>> On Tue, May 31, 2022 at 02:52:04PM +0000, Durrant, Paul wrote:
>
> ...
>
>>>
>>> I'll bite... What's ludicrous about wanting to run a guest at a lower
>>> CPU freq to minimize observable change in whatever workload it is
>>> running?
>>
>> *why* would you want to do that? Everybody wants their stuff done
>> faster.
>>
>
> FWIW, I can see a valid use-case: imagine you're running some software
> which calibrates itself at startup to run at some desired real-time
> speed, but then the VM running it has to be migrated to a host with
> faster (newer) CPUs. I don't have a real-world example off the top of
> my head, but I remember some old DOS-era games were impossible to play
> on newer CPUs because everything was happening too fast. Maybe that's
> the case :-)

The PC version of Alpha Waves was such an example, but Frédérick Raynal,
who did the port, said it was the last time he made that mistake. That
was in 1990 :-)

More seriously, what about mitigating timing-based remote attacks by
arbitrarily changing the CPU frequency and injecting noise into the
timing? That could be a valid use case, no? Although I can think of
about a million other ways of doing this more efficiently…

> --
> Vitaly