On 2021-08-26 10:30 a.m., Rolf Eike Beer wrote:
> On 2021-03-14 13:08, Rolf Eike Beer wrote:
>> On Sunday, 14 March 2021, 12:16:11 CET, you wrote:
>>> On 3/14/21 10:47 AM, Rolf Eike Beer wrote:
>>> > On Wednesday, 3 March 2021, 15:29:42 CET, Helge Deller wrote:
>>> >> On 3/1/21 7:44 PM, Rolf Eike Beer wrote:
>>> >>> On Monday, 1 March 2021, 17:49:42 CET, Rolf Eike Beer wrote:
>>> >>>> On Monday, 1 March 2021, 17:25:18 CET, Rolf Eike Beer wrote:
>>> >>>>> After upgrading to 5.11 I get this multiple times per second on my C8000:
>>> >>>>>
>>> >>>>> [   36.998702] WARNING: timekeeping: Cycle offset (29) is larger than
>>> >>>>> allowed by the 'jiffies' clock's max_cycles value (10): time overflow danger
>>> >>>>> [   36.998705] timekeeping: Your kernel is sick, but tries
>>> >>>>> to cope by capping time updates
>>> >>
>>> >> I know I have seen this at least once with a 32-bit kernel in qemu as well....
>>> >>
>>> >>>> Not 5.11, but 5.10.11. 5.10.4 is fine. It could be a bad upgrade attempt;
>>> >>>> I'll retry once I have built a proper 5.11 kernel.
>>> >>>
>>> >>> Ok, it's there also in 5.11.2:
>>> >>
>>> >> You don't see it in 5.11, but in 5.11.2.
>>> >> Sadly, none of the changes between those versions seem related
>>> >> to this problem.
>>> >>
>>> >> Do you still see this?
>>> >> I'd like to get it analyzed/fixed.
>>> >
>>> > Me too. What do you need?
>>>
>>> I actually don't know.
>>> First of all, it would be great if we could reproduce it.
>>> Right now I don't see this issue any longer, so I have nowhere to start from.
>>
>> I get it every time I boot that kernel.
>
> I still see it in 5.13.12, so I'm now in the bad situation that I no longer
> have a working kernel on that machine and need to think about how to
> restore it. While the latter is exactly my problem, I would still love to
> see the kernel problem solved.

I don't see this either. So, it probably has to do with your config.
Since it fails consistently on your system, you could use git bisect to find the change which introduced the problem.

Dave

-- 
John David Anglin
dave.anglin@xxxxxxxx