Re: Possible sandybridge livelock issue

* James Bottomley <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx> wrote:

> > Can you figure out better what the kswapd is doing?
> 
> We have ... it was the thread in the first email.  We don't need a fix for 
> the kswapd issue; what we're warning about is a potential sandybridge 
> problem.
> 
> The facts are that only sandybridge systems livelocked in the kswapd problem 
> ... no other systems could reproduce it, although they did see heavy CPU time 
> accumulate in kswapd.  And this is with a gang of mm people trying to 
> reproduce the problem on non-sandybridge systems.
> 
> On the sandybridge systems that livelocked, it was sometimes possible to 
> release the lock by pushing kswapd off the CPU it was hogging.

It's not uncommon at all to see certain races (or even livelocks) only with the 
latest and greatest CPUs.

I have a first-gen CPU system that, when I got it a couple of years ago, 
triggered something like a dozen Linux kernel races and bugs that were 
theoretically possible on all other CPUs but had never been reported on any 
other Linux system up to that point, *ever* - and some of those bugs were 
many years old.

> If you think the theory about why this happened to be wrong, fine ... come up 
> with another one.  The facts are as above, and only sandybridge systems seem 
> to be affected.

I can see at least four other plausible hypotheses, all matching the facts as 
you laid them out:

 - it could be a bug/race in the kswapd code.

 - it could be that the race window needs a certain level of instruction 
   parallelism - which occurs with a higher likelihood on Sandybridge.

 - it could be that Sandybridge CPUs keep dirty cachelines owned a bit longer 
   than other CPUs, making an existing livelock bug in the kernel code easier 
   to trigger.

 - a hardware bug: if cacheline ownership is not arbitrated fairly (enough) 
   between nodes/CPUs/cores, a specific CPU could monopolize a cacheline for 
   a very long time simply by modifying it in an aggressive enough kswapd 
   loop - see the sketch below.
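
To make that last hypothesis concrete, here is a minimal userspace sketch 
(my own illustration, not the kswapd code - the names and the measurement 
approach are assumptions): a "hog" thread keeps dirtying a shared cacheline 
in a tight loop while a victim thread on another core tries to update the 
same line. On hardware with fair arbitration the victim merely slows down; 
the hypothesis is that unfair arbitration could stretch that slowdown 
toward livelock.

/* cacheline_hog.c - build with: gcc -O2 -pthread cacheline_hog.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

/* The contended counter, aligned so it sits in its own cacheline. */
static _Alignas(64) atomic_ulong shared_line;
static atomic_int stop;

/* The "kswapd-like" hog: dirties the line as fast as it can, so the
 * line tends to stay exclusively owned by the hog's core. */
static void *hog(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&stop, memory_order_relaxed))
		atomic_fetch_add_explicit(&shared_line, 1,
					  memory_order_relaxed);
	return NULL;
}

/* Time how long the victim needs for n updates of the shared line. */
static double victim_seconds(unsigned long n)
{
	struct timespec t0, t1;
	unsigned long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < n; i++)
		atomic_fetch_add(&shared_line, 1);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	pthread_t t;

	/* Baseline: the victim has the cacheline to itself. */
	printf("uncontended: %.3f s\n", victim_seconds(10000000UL));

	/* Contended: the hog fights for ownership of the same line. */
	pthread_create(&t, NULL, hog, NULL);
	printf("contended:   %.3f s\n", victim_seconds(10000000UL));

	atomic_store(&stop, 1);
	pthread_join(t, NULL);
	return 0;
}

Pinning the two threads to cores on different sockets (taskset, or 
pthread_setaffinity_np) makes the ownership ping-pong easier to observe; 
how badly the victim degrades under contention is exactly the fairness 
question raised above.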

Note that since each of these hypotheses has a non-zero chance of being the 
objective truth, your hypothesis might in the end turn out to be the right 
one and might turn into a proven scientific theory: CPU and scheduler bugs 
do happen, after all.

The other hypotheses I outlined have non-zero chances as well: kswapd bugs do 
happen too, and various CPU timing differences do tend to occur.

But above you seem to be confused about how supporting facts and hypotheses 
relate to each other: you seem to imply that because your facts support your 
hypothesis, the ball is somehow in the other court. As things stand now we 
clearly need more facts, to exclude more of the many possibilities.

So I wanted to clear up these basics of science first, before any of us 
wastes too much time on writing mails and such. Oh ... never mind ;-)

Thanks,

	Ingo
