Re: [PATCH] BUG in pv_clock when overflow condition is detected

On Fri, Feb 17, 2012 at 04:25:04PM +0100, Igor Mammedov wrote:
> On 02/16/2012 03:03 PM, Avi Kivity wrote:
> >On 02/15/2012 07:18 PM, Igor Mammedov wrote:
> >>>On 02/15/2012 01:23 PM, Igor Mammedov wrote:
> >>>>>>   static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
> >>>>>>   {
> >>>>>>-    u64 delta = native_read_tsc() - shadow->tsc_timestamp;
> >>>>>>+    u64 delta;
> >>>>>>+    u64 tsc = native_read_tsc();
> >>>>>>+    BUG_ON(tsc < shadow->tsc_timestamp);
> >>>>>>+    delta = tsc - shadow->tsc_timestamp;
> >>>>>>       return pvclock_scale_delta(delta, shadow->tsc_to_nsec_mul,
> >>>>>>                      shadow->tsc_shift);
> >>>>>
> >>>>>Maybe a WARN_ON_ONCE()?  Otherwise a relatively minor hypervisor
> >>>>>bug can
> >>>>>kill the guest.
> >>>>
> >>>>
> >>>>An attempt to print from this place is risky, since it often leads
> >>>>to a recursive call into this very function and hangs there anyway.
> >>>>But if you insist I'll re-post it with WARN_ON_ONCE; it won't make
> >>>>much difference, because the guest will hang/stall due to the
> >>>>overflow anyway.
> >>>
> >>>Won't a BUG_ON() also result in a printk?
> >>Yes, it will. But the stack will still record the failure point, and
> >>poking at the core with crash/gdb will always show where it BUGged.
> >>
> >>If it manages to print the dump somehow (I saw that a couple of times
> >>in ~30 test cycles), the console log or the kernel message buffer
> >>(again, by poking with gdb) will show where it was called from.
> >>
> >>If WARN* is used, it will still totally screw up the clock and the
> >>"last value", and the system will become unusable, requiring a look
> >>at the core with gdb/crash anyway.
> >>
> >>So I've just used the more reliable failure point, which leaves a
> >>trace everywhere it can (maybe in the console log, but for sure on
> >>the stack). With WARN it might leave a trace on the console or not,
> >>and it probably won't reflect the failure point on the stack either,
> >>leaving only the kernel message buffer as a clue.
> >>
> >
> >Makes sense.  But do get an ack from the Xen people to ensure this
> >doesn't break for them.
> >
> Konrad, Ian
> 
> Could you please review the patch from the Xen point of view?
> The whole thread can be found here: https://lkml.org/lkml/2012/2/13/286
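For reference, the patched helper quoted above can be modeled in plain userspace C roughly as follows. This is a sketch only: `assert()` stands in for `BUG_ON()`, the struct is abbreviated from the kernel's pvclock code, and `pvclock_scale_delta` is reproduced here with a GCC/Clang 128-bit multiply in place of the kernel's arch-specific version.

```c
#include <assert.h>
#include <stdint.h>

/* Abbreviated model of the kernel's shadow-time record (sketch). */
struct pvclock_shadow_time {
    uint64_t tsc_timestamp;   /* guest TSC at the last hypervisor update */
    uint32_t tsc_to_nsec_mul; /* 32.32 fixed-point multiplier */
    int      tsc_shift;       /* power-of-two pre-scale */
};

/* delta is pre-scaled by tsc_shift, then multiplied by the 32.32
 * fixed-point fraction; the >> 32 drops the fractional part. */
static uint64_t pvclock_scale_delta(uint64_t delta, uint32_t mul_frac,
                                    int shift)
{
    if (shift < 0)
        delta >>= -shift;
    else
        delta <<= shift;
    return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

/* The patched helper: trap a TSC that appears to run backwards instead
 * of silently producing a huge unsigned delta. assert() stands in for
 * BUG_ON(); tsc is passed in where the kernel calls native_read_tsc(). */
static uint64_t pvclock_get_nsec_offset(const struct pvclock_shadow_time *shadow,
                                        uint64_t tsc)
{
    uint64_t delta;
    assert(tsc >= shadow->tsc_timestamp); /* BUG_ON(tsc < shadow->tsc_timestamp) */
    delta = tsc - shadow->tsc_timestamp;
    return pvclock_scale_delta(delta, shadow->tsc_to_nsec_mul,
                               shadow->tsc_shift);
}
```

With `tsc_to_nsec_mul = 0x80000000` (0.5 in 32.32) and `tsc_shift = 1`, the scaling cancels out and the offset equals the raw TSC delta, which makes the arithmetic easy to check by hand.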

What are the conditions under which this happens? You should probably
include that in the git commit description as well. Is this something
that happens often? If there is an overflow, can you synthesize a value
instead of crashing the guest?
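One possible shape for the "synthesize a value" alternative asked about above, sketched in userspace C under the same assumptions as before: a backwards TSC is clamped to a zero delta (time briefly appears frozen rather than jumping), and a flag stands in for `WARN_ON_ONCE` bookkeeping. The function name `get_nsec_offset_clamped` is hypothetical, not kernel code.

```c
#include <assert.h>
#include <stdint.h>

/* Abbreviated model of the kernel's shadow-time record (sketch). */
struct pvclock_shadow_time {
    uint64_t tsc_timestamp;
    uint32_t tsc_to_nsec_mul; /* 32.32 fixed-point multiplier */
    int      tsc_shift;
};

static int pvclock_warned; /* stands in for WARN_ON_ONCE bookkeeping */

static uint64_t pvclock_scale_delta(uint64_t delta, uint32_t mul_frac,
                                    int shift)
{
    if (shift < 0)
        delta >>= -shift;
    else
        delta <<= shift;
    return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

/* Hypothetical non-fatal variant: warn once and clamp the delta to 0
 * instead of BUG()ing the guest on a backwards TSC. */
static uint64_t get_nsec_offset_clamped(const struct pvclock_shadow_time *shadow,
                                        uint64_t tsc)
{
    uint64_t delta = 0; /* synthesized value: clock freezes instead of jumping */
    if (tsc < shadow->tsc_timestamp)
        pvclock_warned = 1; /* WARN_ON_ONCE(1) in real kernel code */
    else
        delta = tsc - shadow->tsc_timestamp;
    return pvclock_scale_delta(delta, shadow->tsc_to_nsec_mul,
                               shadow->tsc_shift);
}
```

As the thread notes, this trades a hard stop for a silently wrong clock until the next hypervisor update, which is exactly the behavior Igor argues is harder to debug afterwards.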

Hm, so are you asking for review of this patch or of
http://www.spinics.net/lists/kvm/msg68440.html ?

(which would also entail an early_percpu_clock_init implementation
in the Xen code, naturally).


