On 15.01.20 15:43, milanpa@xxxxxxxxxx wrote:
On 1/15/20 3:04 PM, Alexander Graf wrote:
On 15.01.20 14:43, Milan Pandurov wrote:
KVM exposes debug counters through individual debugfs files.
Monitoring these counters requires debugfs to be enabled and accessible
to the application, which might not always be the case.
Additionally, periodically monitoring multiple debugfs files from
userspace requires repeated file open/read/close calls plus atoi
conversions, which is not very efficient.
Let's expose a new interface to userspace for gathering these
statistics with one ioctl.
Two new ioctl methods are added:
- KVM_GET_SUPPORTED_DEBUGFS_STAT : Returns list of available counter
names. Names correspond to the debugfs file names
- KVM_GET_DEBUGFS_VALUES : Returns a list of u64 values, each
corresponding to a counter described by KVM_GET_SUPPORTED_DEBUGFS_STAT.
A userspace application can read the counter descriptions once using
KVM_GET_SUPPORTED_DEBUGFS_STAT and then periodically invoke
KVM_GET_DEBUGFS_VALUES to get updated values.
Signed-off-by: Milan Pandurov <milanpa@xxxxxxxxx>
Thanks a lot! I really love the idea to get stats directly from the
kvm vm fd owner. This is so much better than poking at files from a
random unrelated debug or stat file system.
I have a few comments overall though:
1)
This is an interface that requires a lot of logic and buffers from
user space to retrieve individual, explicit counters. What if I just
wanted to monitor the number of exits on every user space exit?
If we want to cover such latency-sensitive use cases, solution b) you
suggested, with mmap'ed structs, would be the way to go, IMO.
Also, we're suddenly making the debugfs names a full ABI, because
through this interface we only identify the individual stats through
their names. That means we cannot remove stats or change their names,
because people may rely on them, no? Thinking about this again, maybe
they already are an ABI, because people rely on them in debugfs?
I see two alternatives to this approach here:
a) ONE_REG
We can just add a new DEBUG arch in ONE_REG and expose ONE_REG per VM
as well (if we really have to?). That gives us explicit identifiers
for each stat, with an explicit path to introduce new ones under
unique identifiers.
That would give us a very easily structured approach to retrieve
individual counters.
b) part of the mmap'ed vcpu struct
We could simply move the counters into the shared struct that we
expose to user space via mmap. IIRC we only have that per-vcpu, but
then again most counters are per-vcpu anyway, so we would push the
aggregation to user space.
For per-vm ones, maybe we can just add another mmap'ed shared page for
the vm fd?
2) vcpu counters
Most of the counters are accounted at vcpu granularity, but debugfs
only gives us a full-VM view. Whatever we do to improve the situation,
we should
definitely try to build something that allows us to get the counters
per vcpu (as well).
The main purpose of these counters is monitoring. It can be quite
important to know that only a single vCPU is going wild, compared to
all of them for example.
I agree, exposing per-vcpu counters can be useful. I guess it didn't
make much sense to expose them through debugfs, so aggregation was done
in the kernel. However, if we choose approach 1-b), mmap'ing the
counters struct into userspace, we could do this.
We could do it in any approach, even with statfs if we do directories
per vcpu :).
The reason I dislike the debugfs/statfs approach is that it creates a
completely separate permission and access path to the stats. That's
great for full-system monitoring, but really bad when you have multiple
individual tenants on a single host.
Alex
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879