Re: How does Prefork work?

It was thus said that the Great Doug Bell once stated:
> On May 28, 2009, at 2:55 PM, CrystalCracker wrote:
> 
> >I have at least 20 active apache threads (ps -ef | grep httpd); the
> >average is about 40 and it goes up to 70 at peak.  Does the above
> >setting sound reasonable?
> 
> MaxClients at 250 means that potentially 250 httpd processes can be  
> running at once. Do you have the memory to support that many without  
> swapping? Swapping usually ends up killing a system, especially if  
> it's already handling a lot of disk i/o.

  Under a modern Unix system, the amount of memory a process like Apache
consumes is hard to pin down.  Say that when started, Apache takes 5M of
memory.  It does not follow that 100 Apache processes will then consume
500M of memory.  The executable (and any libraries it uses) is shared
between all copies of Apache, as is any data that hasn't been changed
yet: when a process calls fork() [1], the writable pages are marked
read-only so they can be shared between parent and child; if either one
then attempts to write to such a page, the OS makes a private copy for
the writer and marks that copy read-write.  This is known as
copy-on-write, and again, it's a means of keeping memory consumption
down.
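
  To make the copy-on-write behavior concrete, here's a minimal C sketch
(mine, not from the original post): the child writes to a buffer it
inherited across fork(), and the parent still sees its own value,
because the kernel hands the child a private copy of the page at the
moment of that first write.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
      char *buf = malloc(4096);     /* roughly one page of data */
      pid_t pid;

      if (buf == NULL)
        return EXIT_FAILURE;
      strcpy(buf, "parent data");

      pid = fork();                 /* pages now shared, marked read-only */
      if (pid < 0)
        return EXIT_FAILURE;

      if (pid == 0)
      {
        strcpy(buf, "child data");  /* first write: kernel copies the page */
        printf("child  sees: %s\n", buf);
        _exit(EXIT_SUCCESS);
      }

      wait(NULL);
      printf("parent sees: %s\n", buf);  /* still "parent data" */
      free(buf);
      return EXIT_SUCCESS;
    }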

  About the only segment of memory that doesn't stay shared is the stack
(each process writes to its own stack almost immediately), and its limit
can be checked with "ulimit -s" (on my system here, it reports that the
default stack size is 10M).  So the stack, plus any data that actually
gets written to, are about the only things that contribute to the
per-process size of a running Apache.
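
  (If you want that limit from inside a program rather than from the
shell, getrlimit() reports the same number; a quick sketch, assuming a
POSIX system:)

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
      struct rlimit rl;

      if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 1;
      if (rl.rlim_cur == RLIM_INFINITY)
        printf("stack limit: unlimited\n");
      else                          /* same figure "ulimit -s" prints */
        printf("stack limit: %luK\n", (unsigned long)(rl.rlim_cur / 1024));
      return 0;
    }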

  I picked one Apache process on my development server and checked the
size of the actual code and initial data of the Apache program and every
library it uses [2]; it seems there's 19M of actual code that is shared,
plus a few megabytes (say, 6M) of data that could potentially be shared
(until it's changed).  I have 9 Apache processes running, so given a
default stack size of 10M each, and assuming that none of the 6M of data
is shared, I'm probably consuming about 163M of RAM (90M for stack, 54M
for data, 19M of executable code) instead of the 435M that ps reports
back (the "size" of each process) [3].  Even 163M is an upper bound,
since the 10M stack limit is a ceiling, not what each process actually
touches.

  So, how much "memory" Apache consumes isn't an easy question to answer (at
least, using the prefork MPM).  About the only way to know just how much
your server can handle is to crank the settings up until you see swap space
being used [4] and then crank down a bit.
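
  If you'd rather script that check than keep an eye on "free", the
same numbers show up in /proc/meminfo; a rough sketch (Linux-specific,
mine, not from the original post):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
      FILE *fp = fopen("/proc/meminfo", "r");   /* Linux-only interface */
      char  line[128];
      long  total = -1, freekb = -1;            /* values are in kB     */

      if (fp == NULL)
        return 1;
      while (fgets(line, sizeof line, fp) != NULL)
      {
        if (strncmp(line, "SwapTotal:", 10) == 0)
          sscanf(line + 10, "%ld", &total);
        else if (strncmp(line, "SwapFree:", 9) == 0)
          sscanf(line + 9, "%ld", &freekb);
      }
      fclose(fp);

      if (total >= 0 && freekb >= 0)
        printf("swap in use: %ldK of %ldK\n", total - freekb, total);
      return 0;
    }

  Once that "swap in use" figure starts climbing under load, you've gone
past what the box can handle and it's time to back the settings off.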

  -spc (I should mention that everything I've done here is under Linux,
	by the way ... )

[1]	The only way to create a new process under Unix, by the way.

[2]	GenericRootPrompt> cd /proc/<pid>
	GenericRootPrompt> size -t `awk '{print $6}' maps | sort -u | grep -v -e /dev -e '\['`
	   text    data     bss     dec     hex filename
		... output snipped ... 
	19036351        1829176  816840 21682367        14ad8bf (TOTALS)

	(The grep drops device files and pseudo-mappings like [heap] and
	[stack], which size can't read.)

[3]	The "resident set size" column of ps adds up to 215M, but quite a
	bit of even that is shared among the processes.

[4]	Under Linux, use the "free" command to see this.

