(switched to email. Please respond via emailed reply-to-all, not via the bugzilla web interface).

On Wed, 18 Jan 2012 09:22:12 GMT bugzilla-daemon@xxxxxxxxxxxxxxxxxxx wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=42578

Stuart has an 8GB x86_32 machine. It has large amounts of NTFS pagecache in highmem, and NTFS is using 512-byte buffer_heads. All of the machine's lowmem is being consumed by struct buffer_heads which are attached to the highmem pagecache, and the machine is dead in the water, getting a storm of ooms.

A regression, I think. A box-killing one, on a pretty simple workload on a not uncommon machine.

We used to handle this by scanning highmem even when there was plenty of free highmem and the request was for lowmem pages. We have made a few changes in this area and I guess that's what broke it.

I think a suitable fix here would be to extend the buffer_heads_over_limit special case. If buffer_heads_over_limit is true, both direct reclaimers and kswapd should scan the highmem zone regardless of the incoming gfp_mask and regardless of the highmem free page count. In this mode we only scan the file LRU. We should perform writeback as well, because the buffer_heads might be dirty.

[aside: if all of a page's buffer_heads are dirty, we can in fact reclaim them and mark the entire page dirty. Even if only some of the buffer_heads are dirty and the others are uptodate, we can still reclaim them and mark the entire page dirty, at the cost of extra I/O later. But try_to_release_page() doesn't do these things.]

I think it was always wrong that we only strip buffer_heads when moving pages to the inactive list. What happens if those 600MB of buffer_heads are all attached to inactive pages?

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@xxxxxxxxx. For more info on Linux MM, see: http://www.linux-mm.org/ .