On 02/15/2012 07:18 PM, Igor Mammedov wrote:
> > On 02/15/2012 01:23 PM, Igor Mammedov wrote:
> > >>>   static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time
> > >>> *shadow)
> > >>>   {
> > >>> -	u64 delta = native_read_tsc() - shadow->tsc_timestamp;
> > >>> +	u64 delta;
> > >>> +	u64 tsc = native_read_tsc();
> > >>> +	BUG_ON(tsc < shadow->tsc_timestamp);
> > >>> +	delta = tsc - shadow->tsc_timestamp;
> > >>>   	return pvclock_scale_delta(delta, shadow->tsc_to_nsec_mul,
> > >>> 				   shadow->tsc_shift);
> > >>
> > >> Maybe a WARN_ON_ONCE()?  Otherwise a relatively minor hypervisor
> > >> bug can kill the guest.
> > >
> > > An attempt to print from this place is not perfect, since it often
> > > leads to a recursive call into this very function, and it hangs
> > > there anyway. But if you insist, I'll re-post it with WARN_ON_ONCE.
> > > It won't make much difference, because the guest will hang/stall
> > > due to the overflow anyway.
> >
> > Won't a BUG_ON() also result in a printk?
>
> Yes, it will. But the stack will still keep the failure point, and
> poking at the core with crash/gdb will always show where it BUGged.
>
> In case it manages to print the dump somehow (I saw it a couple of
> times in ~30 test cycles), logs from the console or from the kernel
> message buffer (again, poking with gdb) will show where it was called
> from.
>
> If WARN* is used, it will still totally screw up the clock and the
> "last value", and the system will become unusable, requiring a look at
> the core with gdb/crash anyway.
>
> So I've just used the more stable failure point, which leaves a trace
> everywhere it can (maybe in the console log, but for sure on the
> stack). With WARN it might leave a trace on the console or not, and it
> probably won't reflect the failure point in the stack either, leaving
> only the kernel message buffer as a clue.

Makes sense.  But do get an ack from the Xen people to ensure this
doesn't break for them.

-- 
error compiling committee.c: too many arguments to function
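
For context on why the thread treats this case as fatal: with the unchecked
u64 subtraction, a TSC reading even slightly behind the shadow timestamp
wraps around to a value near 2^64, which then scales to a nanosecond offset
centuries in the future, so the guest clock is unusable from that point on.
Below is a minimal userspace sketch of that failure mode; scale_delta() here
is a simplified stand-in for the kernel's pvclock_scale_delta() (not the
exact implementation), the variable values are made up, and it assumes a
compiler with unsigned __int128 support (e.g. GCC/Clang on x86-64):

	#include <stdint.h>
	#include <stdio.h>

	/* Simplified stand-in for the kernel's pvclock_scale_delta():
	 * pre-shift the delta, then multiply by a 32.32 fixed-point
	 * factor and take the high part. */
	static uint64_t scale_delta(uint64_t delta, uint32_t mul_frac,
				    int shift)
	{
		if (shift < 0)
			delta >>= -shift;
		else
			delta <<= shift;
		/* (delta * mul_frac) >> 32, via a 128-bit intermediate. */
		return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
	}

	int main(void)
	{
		uint64_t shadow_tsc_timestamp = 1000000;
		uint64_t tsc = 999990;	/* TSC "behind" the snapshot */
		uint32_t tsc_to_nsec_mul = 1U << 31;	/* ~0.5 ns/cycle */
		int tsc_shift = 0;

		/* Unsigned subtraction wraps to a near-2^64 delta. */
		uint64_t delta = tsc - shadow_tsc_timestamp;

		printf("delta       = %llu\n", (unsigned long long)delta);
		printf("nsec offset = %llu\n",
		       (unsigned long long)scale_delta(delta, tsc_to_nsec_mul,
							tsc_shift));
		return 0;
	}

With these made-up numbers the TSC is only 10 cycles behind, yet the wrapped
delta scales to roughly 2^63 ns, about 292 years, which is the "totally screw
up the clock" behavior described above, whether the check is a BUG_ON or a
WARN_ON_ONCE that lets execution continue.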