Re: apache segfault debugging with gdb - need advice

> However, in my experience it is unusual for a too-low limit on the number of open files to result in a segmentation fault, especially in a well-written program like Apache HTTPD. A well-written program will normally check whether the open (or any syscall which returns a file descriptor) failed and refuse to use the -1 value as if it were a valid file descriptor number. So I would be surprised if increasing that value resolved the segmentation fault.

Kurtis: I think Daryl's issue here is with the resource intensity of siege, not so much his LAMP stack. But generally speaking I agree: tweaking the ulimits is a hack and should not be necessary for most mature software. That said, I have seen posts (not necessarily here) where people mention raising the ulimit in the Apache environment to allow higher concurrency. Personally, though, I wouldn't do it in a production environment.

> I have set my siege concurrency level a bit lower (20 users) and that seems to have resolved the segfault issue. It's strange that I hadn't read anywhere else that a lack of resources could cause that, but there it is. I guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just a bit too much to ask of my single-core 512 MB VPS?

Daryl: Siege is a hobby project of one individual (at least, last I checked), and while it is a valuable tool, it needs some optimization and is highly resource intensive. It simulates concurrency by spawning a virtual user session for each client, and this is where available memory and the ulimit can become factors. In my case, I ended up spinning up a separate virtual machine with 16GB of RAM just to run siege against my development LAMP stack. I adjusted the security limits in /etc/security/limits.conf to allow for higher concurrency, to the point that in the end the ulimit wasn't the issue at all: the server ran out of memory due to all of the virtual sessions spawned by siege.

It does sound as though you are tight on resources, though you could certainly run your LAMP stack acceptably on that server depending on the expected traffic load. If a light load is expected, you should be OK. But for load testing, I would definitely recommend running siege from another machine if you have the architecture available; if you run both locally, siege and your LAMP stack will obviously be competing for resources.


Cheers,

Ryan

On Sat, Aug 22, 2015 at 9:58 AM, Daryl King <allnatives.online@xxxxxxxxx> wrote:

I have set my siege concurrency level a bit lower (20 users) and that seems to have resolved the segfault issue. It's strange that I hadn't read anywhere else that a lack of resources could cause that, but there it is. I guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just a bit too much to ask of my single-core 512 MB VPS?

On Aug 22, 2015 1:13 PM, "Kurtis Rader" <krader@xxxxxxxxxxxxx> wrote:
On Fri, Aug 21, 2015 at 6:14 PM, Daryl King <allnatives.online@xxxxxxxxx> wrote:
Thanks Ryan. Strangely, running "ulimit -n" returns 65536 in an ssh session, but 1024 in Webmin? Which one would be correct?

Limits set by the ulimit command (and the setrlimit syscall) are correct if they are high enough to allow a correctly functioning program to perform its task. They are incorrect if set too low for the needs of a correctly functioning program or so high that a malfunctioning program is able to adversely affect the functioning of other processes. So the answer to your question is: it depends.

Having said that, it is very unusual these days for "ulimit -n" to be set too high. Supporting thousands of open files in a single process is normally pretty cheap in terms of kernel memory, CPU cycles, etc. So if you have reason to think your program (e.g., httpd) has a legitimate need to have more than 1024 files open simultaneously, go ahead and increase the "ulimit -n" (which is the setrlimit RLIMIT_NOFILE parameter) to a higher value.
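For what it's worth, "ulimit -n" maps onto RLIMIT_NOFILE at the syscall level. A minimal sketch of inspecting and raising it from within a process looks roughly like this (the 4096 target here is just an arbitrary example value, not a recommendation):

/* Sketch: inspect and raise RLIMIT_NOFILE (the "ulimit -n" limit).
 * The target value 4096 is only an example. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* An unprivileged process may raise its soft limit up to the hard limit. */
    rl.rlim_cur = rl.rlim_max < 4096 ? rl.rlim_max : 4096;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}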

However, in my experience it is unusual for a too-low limit on the number of open files to result in a segmentation fault, especially in a well-written program like Apache HTTPD. A well-written program will normally check whether the open (or any syscall which returns a file descriptor) failed and refuse to use the -1 value as if it were a valid file descriptor number. So I would be surprised if increasing that value resolved the segmentation fault.
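To be concrete, the kind of check I have in mind looks roughly like this minimal sketch (the filename is just a placeholder):

/* Sketch: handle open() failure instead of using -1 as a descriptor.
 * "example.conf" is only a placeholder filename. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.conf", O_RDONLY);
    if (fd == -1) {
        /* EMFILE here means the per-process RLIMIT_NOFILE was reached. */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n == -1)
        fprintf(stderr, "read failed: %s\n", strerror(errno));

    close(fd);
    return 0;
}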

--
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank

