On Thu, May 12, 2011 at 5:13 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 05/12/2011 04:36 PM, Dhaval Giani wrote:
>>
>> Hi,
>>
>> As part of some of the work for my project, I have been looking at
>> tracing some of the events in the guest from inside the host. In my
>> use case, I have been looking to correlate the time of a network
>> packet arrival in the guest with that in the host. ftrace makes such
>> arbitrary use quite simple, so I went ahead and extended this
>> functionality in terms of a hypercall. There are still a few issues
>> with this patch.
>>
>> 1. For some reason, the first time the hypercall is called, it works
>> just fine, but the second invocation refuses to happen. I am still
>> clueless about it (and am looking for hints :-) ).
>> 2. I am not very sure whether I got the demarcation between the guest
>> and the host code right. Someone more experienced than me should take
>> a look at the code as well :-)
>> 3. This adds a new paravirt call.
>> 4. This has been implemented just for x86 as of now. If there is
>> enough interest, I will look to make it generic enough to be used
>> across other architectures. It should be quite easy to do.
>> 5. It still does not have all the fancy ftrace features, but again,
>> depending on the interest, I can add those in.
>> 6. Create a config option for this feature.
>>
>> I think such a feature is useful for debugging purposes and might
>> make sense to carry upstream.
>
> I guess it could help things like virtio/vhost development and
> profiling.
>

Exactly what I am using it for.

> I think that one hypercall per trace is too expensive. Tracing is
> meant to be lightweight! I think the guest can log to a buffer, which
> is flushed on overflow or when a vmexit occurs. That gives us
> automatic serialization between a vcpu and the cpu it runs on, but
> not between a vcpu and a different host cpu.
>

Hmm.
So, basically, log all of these events, and then send them to the host
either on an exit, or when the buffer fills up. There is one problem
with this approach, though. One of the reasons I wanted a hypercall per
event was because I wanted to correlate the guest and the host times
(which is why I kept it synchronous). I lose that information with what
you suggest. However, I see your point about the overhead. I will think
about this a bit more.

>>
>> +int kvm_pv_ftrace(struct kvm_vcpu *vcpu, unsigned long ip, gpa_t addr)
>> +{
>> +	int ret;
>> +	char *fmt = (char *) kzalloc(PAGE_SIZE, GFP_KERNEL);
>> +
>> +	ret = kvm_read_guest(vcpu->kvm, addr, fmt, PAGE_SIZE);
>> +
>> +	trace_printk("KVM instance %p: VCPU %d, IP %lu: %s",
>> +			vcpu->kvm, vcpu->vcpu_id, ip, fmt);
>> +
>> +	kfree(fmt);
>> +
>> +	return 0;
>> +}
>
> A kmalloc and printf seem expensive here. I'd prefer to log the
> arguments and format descriptor instead. Similarly the guest should
> pass unformatted parameters.
>

>> +int kvm_ftrace_printk(unsigned long ip, const char *fmt, ...)

trace_printk() is actually quite cheap (IIRC), but I guess Steve is the
best person to let us know about that. We can avoid the kzalloc
overhead, though.

Dhaval