Thanks! I just figured it out this morning (Brazilian time). It is nice to confirm that my hypothesis holds. Thanks again.

Edésio

On Thu, Sep 18, 2008 at 01:19:32PM -0400, Rik van Riel wrote:
> On Wed, 17 Sep 2008 17:52:00 -0300
> Edesio Costa e Silva <edesio@xxxxxxxxxxxxxxxx> wrote:
>
> > I am running 32-bit Linux (2.6.26.5-bigsmp) on a virtual machine with 64 GB
> > of RAM. When I try to create a large file, for example "dd if=/dev/zero
> > of=/tmp/huge.file bs=1024k count=65536", the machine hangs when the file
> > hits the 22 GB mark. I instrumented the kernel with the debug options, and
> > before the hang I got the message "lowmem_reserve[]: 0 0 0 0". Any hints on
> > how to tune Linux to handle this configuration?
>
> The problem is that the kernel needs buffer heads and all
> kinds of other metadata to keep track of the page cache.
>
> This metadata needs to be addressable by kernel functions,
> which means it has to live in lowmem.
>
> With 64 GB of total memory, the 896 MB of lowmem fills
> up quickly and the kernel locks up.
>
> > P.S.: Switching to a 64-bit kernel is NOT an option.
>
> Switching to a 64-bit kernel is one of only two options.
>
> The second option is to rewrite part of the VM so that less
> metadata is kept for the page cache. Specifically, you would
> have to write code to reclaim buffer heads from page
> cache pages.
>
> --
> All rights reversed.

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ
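Rik's point about lowmem filling up can be sanity-checked with a back-of-envelope calculation. This is only a sketch under assumed figures: roughly 32 bytes per struct page and 56 bytes per struct buffer_head on a 2.6-era 32-bit kernel, 4 KiB pages, and a 4 KiB filesystem block size (one buffer head per cached page) — the exact sizes vary by kernel version and config.

```python
# Back-of-envelope: why a 32-bit highmem kernel with 64 GB of RAM runs
# short of lowmem around the 22 GB mark of a big streaming write.
# The structure sizes below are rough assumptions, not exact figures.

PAGE = 4096                 # 4 KiB pages (4 KiB fs blocks assumed, too)
STRUCT_PAGE = 32            # assumed size of struct page on 32-bit
BUFFER_HEAD = 56            # assumed size of struct buffer_head on 32-bit
LOWMEM = 896 * 2**20        # directly mapped kernel memory on 32-bit x86

ram = 64 * 2**30
# The struct page array (mem_map) describes every physical page and
# must itself live in lowmem.
mem_map = (ram // PAGE) * STRUCT_PAGE

cached = 22 * 2**30
# One buffer head per cached 4 KiB block, also allocated from lowmem.
bufheads = (cached // PAGE) * BUFFER_HEAD

print("mem_map:  %4d MiB" % (mem_map // 2**20))    # 512 MiB
print("bufheads: %4d MiB" % (bufheads // 2**20))   # 308 MiB
print("lowmem:   %4d MiB" % (LOWMEM // 2**20))     # 896 MiB
```

Under these assumptions, the struct page array for 64 GB of RAM alone pins about 512 MiB of the 896 MiB of lowmem, and ~22 GB of cached file data adds roughly another 300 MiB of buffer heads — which lines up with the hang at the 22 GB mark before other kernel allocations are even counted.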