Quoting Les Mikesell <lesmikesell@xxxxxxxxx>:
Just because you have some memory to spare doesn't mean that malloc() can hand you 600 megs of contiguous memory. And I've lost track of what the fashionable way of handling VM is now. Is it the slow but sure "check first and fail if impossible" or the fast and dirty "always succeed and worry about paging later"? Or is it user-configurable now? Usually I just allocate 2 gigs of swap on the theory that if it goes that far the machine will be so unresponsive I'll think it's dead anyway.
I believe it is the "lazy" system, where a request for memory allocation always succeeds, and the app gets killed later if there turns out not to be enough backing store. I know this is a configurable parameter in Tru64 and Solaris. Not sure if it is a configurable parameter in Linux.
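Rough sketch of what I mean by "lazy" (just an illustration, assuming glibc malloc and the overcommit behaviour I described above; the 600 MB figure is borrowed from Les's example). malloc() itself tends to succeed, and the trouble only shows up once the pages are actually touched:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      size_t size = (size_t)600 * 1024 * 1024;  /* 600 MB, as in the example above */

      char *p = malloc(size);    /* under lazy overcommit this rarely fails up front */
      if (p == NULL) {
          fprintf(stderr, "malloc failed immediately (strict accounting)\n");
          return 1;
      }

      printf("malloc succeeded; now touching the pages...\n");
      memset(p, 0, size);        /* only here does the kernel have to find real pages
                                    or backing store; if it can't, the OOM killer may
                                    pick a victim instead of the malloc() failing */
      printf("all pages touched\n");

      free(p);
      return 0;
  }

So the failure, when it comes, doesn't arrive as a polite NULL return; it arrives as a process getting killed.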
However, the thing is, there was enough free swap space to handle it. More than 1 gig of free swap, in fact. There was enough free swap to move *everything* that was in physical RAM onto it, if needed. So it wasn't a case of "ran out of free swap". It was a clear failure of the VM to use the resources it had (real memory + swap space). The VM should have put things on hold until enough pages were swapped out, not killed processes or denied memory to other kernel modules (which in this case caused file system corruption).
--
NOTICE: If you are not intended recipient, you are hereby notified that by reading this message you agreed not to disturb frogs during mating season. For more info, visit http://www.8-P.ca/