On Thu, 4 Aug 2005, Paul wrote:

> On Thu, 2005-08-04 at 21:43 -0400, Dave Jones wrote:
> > On Fri, Aug 05, 2005 at 09:22:55AM +0800, Ian Kent wrote:
> > > I also find it hard to understand why it is such a problem having a
> > > larger stack. As you point out, as software evolves it ultimately
> > > becomes more complex. If the developer's design needs it and the
> > > software is reliable and efficient (aka performs well), then why not?
> > >
> > > A quick calculation:
> > >
> > > 2000 * 4k is about 8M out of, say, 1G at least.
> > >
> > > Not a large percentage overhead, I think.
> >
> > Now try finding 2000 _contiguous_ pairs of pages after the machine
> > has been up for a while, under load. Memory fragmentation makes
> > this a really nasty problem, and the VM eats its own head after
> > repeatedly scanning every page in the system.
>
> I thought I heard that there was some work being done in the upstream
> kernel to have a process "defrag" memory in the background. This would
> help alleviate this problem on systems with long up-times.

I'm afraid I have to agree with Dave on this. Scanning pagelists really
needs to be reduced to a minimum wherever possible.

Ian
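
For context on why "contiguous pairs of pages" is the hard part: an 8k
stack cannot be assembled from two scattered 4k pages; it has to come
from the buddy allocator as a single order-1 block (two physically
contiguous, naturally aligned pages). A minimal sketch of the kind of
allocation involved, using the 2.6-era page allocator API --
alloc_task_stack/free_task_stack are hypothetical helpers for
illustration, not the actual kernel code path:

	#include <linux/gfp.h>

	/*
	 * Hypothetical helper: with 8k stacks, every task needs one
	 * order-1 allocation, i.e. 2^1 = 2 physically contiguous pages.
	 */
	static unsigned long alloc_task_stack(void)
	{
		/*
		 * Order 1: this fails (or forces reclaim) whenever no
		 * free contiguous 8k chunk exists, no matter how many
		 * isolated 4k pages are sitting on the free lists.
		 */
		return __get_free_pages(GFP_KERNEL, 1);
	}

	static void free_task_stack(unsigned long stack)
	{
		free_pages(stack, 1);	/* return both pages at once */
	}

The 2000-task figure above therefore means 2000 such order-1
allocations held live simultaneously, which is exactly where
fragmentation on long-uptime machines bites.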