I have set my siege concurrency level a bit lower (20 users), and that seems to have resolved the segfault issue. It's strange that I hadn't read anywhere else that a lack of resources could cause that, but there it is. I guess running Debian 8, Apache 2.4.10, php-fpm, and MariaDB was just a bit too much to ask of my single-core 512 MB VPS?
On Fri, Aug 21, 2015 at 6:14 PM, Daryl King <allnatives.online@xxxxxxxxx> wrote:
> Thanks Ryan. Strangely, when running "ulimit -n" it returns 65536 in an ssh session, but 1024 in Webmin? Which one would be correct?

Limits set by the ulimit command (and the setrlimit syscall) are correct if they are high enough to allow a correctly functioning program to perform its task. They are incorrect if set too low for the needs of a correctly functioning program, or so high that a malfunctioning program is able to adversely affect the functioning of other processes. So the answer to your question is: it depends.

Having said that, it is very unusual these days for "ulimit -n" to be set too high. Supporting thousands of open files in a single process is normally pretty cheap in terms of kernel memory, CPU cycles, etc. So if you have reason to think your program (e.g., httpd) has a legitimate need to have more than 1024 files open simultaneously, go ahead and increase "ulimit -n" (which is the setrlimit RLIMIT_NOFILE parameter) to a higher value.

However, in my experience it is unusual for a too-low limit on the number of open files to result in a segmentation fault, especially in a well-written program like Apache HTTPD. A well-written program will normally check whether open (or any syscall that returns a file descriptor) failed and refuse to use the -1 value as if it were a valid file descriptor number. So I would be surprised if increasing that value resolved the segmentation fault.

--
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank
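For anyone wanting to experiment, below is a minimal C sketch (my own illustration, not code from Apache or from this thread) of the two points above: the value reported by "ulimit -n" is the soft RLIMIT_NOFILE resource limit, which a process can inspect and raise with getrlimit/setrlimit, and a careful program checks open(2) for a -1 return before using the descriptor. The file path is only a placeholder.

    /* rlimit_sketch.c - illustrative only, assumes a Linux/POSIX system. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* "ulimit -n" reports the soft RLIMIT_NOFILE value. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("open files: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* An unprivileged process may raise its soft limit up to the
         * hard limit; raising the hard limit requires privilege. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");

        /* A well-written program checks for -1 instead of passing a
         * bogus descriptor to later calls. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd == -1) {
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return 1;
        }
        close(fd);
        return 0;
    }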