On Thu, May 4, 2006 3:45 pm, Oz wrote:
> After lots of testing I can say: it helped.
> I still do not know exactly why forking failed, but it seems to be
> somehow related to the high number of processes.

I'm pretty confident that it is directly related to the high number of
processes.

Your operating system has a compiled-in (or possibly configurable,
somewhere under /etc/) limit on the number of processes it will allow.
There are very good reasons for this limit, not least of which is
catching malicious and/or runaway infinite "fork" code (aka a
"fork bomb").

You might be able to tweak the limit (much) higher. You may even be
able to "remove" the limit, in principle, and just suffer the
consequences if you fork-bomb yourself and crash the machine. There
may even be N-tier fail-safe roll-over applications where that is the
"right" answer, as weird as it seems.

> I also do not
> understand why this is a *fatal* error, since it could easily be
> handled by returning -1 (like in the PHP docs).

You can (and probably should) use http://php.net/set_error_handler to
catch the error and do whatever you think is appropriate (rough sketch
at the end of this message). For most cases a failed fork probably
*should* be fatal, though, so that's unlikely to change.

> First I wanted to create a queue for tasks; instead of forking
> directly, only a limited number of processes should be run from the
> queue, and when one finishes another should start. But I decided not
> to do this, because the queue can easily grow to reach the memory
> limit.
>
> In the end I decided to simply pause the script just before forking,
> if a maximum number of processes has been reached, until one has
> finished. Not optimal, since the parent process has a higher-priority
> task, but at least it's stable now.

I'm not seeing how this solves the problem of the queue overflowing...
Or perhaps the parent process is getting its tasks from somewhere else
and building the queue from that?... (There's a sketch of this
"pause before forking" pattern at the end of this message, too.)

You MAY want to consider doing away with this controlling process that
queues up tasks, and instead have some kind of "pid" column on each
source item that is NULL while the item is unprocessed; a child sets it
to its own pid when it grabs the item to be processed:
http://php.net/getmypid

In other words, instead of the parent getting the data and queueing it
up, just have the child get the data and mark it "in process" in a
single get/mark operation (again, sketch at the end). It's only one
less process, but this arrangement generally makes the application
simpler. It depends on your application, of course, so it's just an
idea to consider.

> To me it appears to be impossible to track if it was the maximum
> number of processes or a lack of some system resource.

I think the "resource" in question is basically "your computer" rather
than a specific resource like open file handles.

You could test this rather quickly by writing an infinite "fork"
script (last sketch at the end) and seeing at what point it gives you
error 11 (EAGAIN). If it fails at around the same number of processes
as your application does, you know that's it.

ps auxwww | grep -c yourscriptnamehere

will (I think) give you a count of how many child processes are
running, give or take the grep matching itself, or you could hack some
kind of minimal counter into the fork loop.
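To make the set_error_handler() idea concrete, here's a minimal sketch.
It assumes the fork failure arrives as a catchable error; a truly fatal
E_ERROR can't be intercepted this way, which is why it also checks
pcntl_fork()'s -1 return value directly, per the PHP docs. The handler
and the back-off policy are only illustrative:

<?php
// Log fork failures and back off instead of dying outright.
function fork_error_handler($errno, $errstr, $errfile, $errline)
{
    if (stripos($errstr, 'fork') !== false) {
        error_log("fork failed: $errstr");
        return true;   // handled; suppress PHP's default handling
    }
    return false;      // anything else: fall through to PHP
}
set_error_handler('fork_error_handler');

$pid = pcntl_fork();
if ($pid == -1) {
    // Out of processes (error 11 / EAGAIN): wait and retry, or bail.
    sleep(1);
} elseif ($pid == 0) {
    // Child: do the work, then exit so it doesn't fall through.
    exit(0);
}
// Parent continues here; $pid holds the child's process id.
?>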
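Here's a rough sketch of the "pause just before forking" approach,
using pcntl_wait() to block until a child exits before starting the
next one. $tasks and do_task() are hypothetical stand-ins for however
you actually fetch and process the work:

<?php
$max_children = 10;   // tune to stay under your OS process limit
$children     = 0;

foreach ($tasks as $task) {
    if ($children >= $max_children) {
        pcntl_wait($status);   // block until any child exits
        $children--;
    }
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("fork failed\n");
    } elseif ($pid == 0) {
        do_task($task);        // child does its one task...
        exit(0);               // ...and exits
    }
    $children++;               // parent just keeps count
}

while ($children-- > 0) {
    pcntl_wait($status);       // reap any stragglers
}
?>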
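The single get/mark operation might look something like this, assuming
the items live in a MySQL table; the table and column names are made up
for illustration. The point is that the UPDATE claims a row atomically,
so two children can never grab the same item and no in-memory queue is
needed:

<?php
// Hypothetical table: items(id INT, pid INT NULL, payload TEXT)
// pid IS NULL means nobody has claimed the item yet.
$mypid = getmypid();

// Claim one unprocessed item by stamping our pid on it -- the
// get and the mark happen in a single atomic statement.
mysql_query("UPDATE items SET pid = $mypid WHERE pid IS NULL LIMIT 1");

// Fetch whatever we just claimed (possibly nothing).
$result = mysql_query("SELECT id, payload FROM items WHERE pid = $mypid");
while ($row = mysql_fetch_assoc($result)) {
    // process $row['payload'] here...
}
?>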
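And the quick test for where the process limit actually sits: fork
until it fails and count. This is a deliberate (if polite) fork bomb,
so run it as an unprivileged user on a machine you don't mind wedging:

<?php
$count = 0;
while (true) {
    $pid = @pcntl_fork();
    if ($pid == -1) {
        // We hit the wall; this should be your error 11.
        echo "fork failed after $count children\n";
        break;
    }
    if ($pid == 0) {
        sleep(300);   // child sticks around so it keeps its slot
        exit(0);
    }
    $count++;
}
// The children exit on their own once the sleep expires.
?>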