On Sunday 26 February 2012 19.59.07 Tomas Vondra wrote:
> ...
> i.e. about 200 MB of free memory, but apache fails because of segfaults
> when forking a child process:
>
> [16:49:51 2012] [error] (12)Cannot allocate memory: fork: Unable to
> fork new process
> [16:51:17 2012] [notice] child pid 2577 exit signal Segmentation
> fault (11)

In general, things can get quite bad with relatively high memory pressure
and no swap. That said, one thing that comes to mind is stack size: when
forking, the Linux kernel needs the process's current stack size to be
available as (free + free swap). Also, just because you see Y bytes free
doesn't mean you can successfully malloc() that much (fragmentation,
memory zones, etc.).

/Peter

> or when processing requests:
>
> [26 16:30:16 2012] [error] [client 66.249.72.1] PHP Fatal error: Out
> of memory (allocated 262144) (tried to allocate 523800 bytes) in
> Unknown on line 0
>
> The memory_limit in PHP is set to 32MB, so that's not the limit being
> hit. Similar issues happen with PostgreSQL:
>
> 16:42:01 CET pid%04 db=xxxxxx-drupal user=xxxxxx FATAL: out of
> memory
> 16:42:01 CET pid%04 db=xxxxxx-drupal user=xxxxxx DETAIL: Failed on
> request of size 2488.
> 16:42:01 CET pid$38 db= user= LOG: could not fork new process for
> connection: Nelze alokovat paměť [Czech: cannot allocate memory]
> 16:42:01 CET pid$38 db= user= 4f4a5247.986:21 LOG: could not fork
> new process for connection: cannot allocate memory
>
> I have absolutely no clue what's causing this or how to fix it.
> According to free/vmstat there is about 200 MB of free RAM at all
> times, so I have no idea why the alloc calls fail.
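FWIW, the limits and overcommit policy mentioned above can be inspected directly on the affected box. A quick sketch (standard Linux interfaces; the values shown will of course differ per machine, and nothing here is a recommended setting):

```shell
# Per-process stack size limit, in kB (or "unlimited").
# Under strict accounting, fork() needs roughly this much free+swap
# to be reservable for the child.
ulimit -s

# Kernel overcommit policy: 0 = heuristic, 1 = always overcommit,
# 2 = strict accounting. With mode 2 and no swap, allocations and
# forks can start failing while free(1) still shows memory available.
cat /proc/sys/vm/overcommit_memory

# How much memory is committed vs. how much the kernel will allow.
grep -E '^Commit' /proc/meminfo
```

If CommitLimit is close to Committed_AS, that would explain fork() and malloc() failing despite 200 MB showing as "free".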
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos