HTTP server scalability

Dear Group,
 
How web servers achieve scalability has been bothering me for a long time.  My understanding is that an application can open one and only one listening socket through four system calls (socket, bind, listen, and accept).  It is at the 'listen' call that a server specifies how many connections can queue up on that socket, and my understanding is that subsequent requests will simply be queued, so no concurrency is achieved, thus defeating the purpose of scalability.
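
To make my mental model concrete, here is a minimal sketch of the call sequence I am describing (error handling left out; the port 8080 and the backlog of 5 are just placeholders I picked):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* one listening socket for the whole server */
        int lfd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);   /* placeholder port */
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));

        listen(lfd, 5);   /* backlog: how many pending connections may queue */

        for (;;) {
            /* accept() returns a new descriptor for each incoming client */
            int cfd = accept(lfd, NULL, NULL);
            /* ... serve the request on cfd, then ... */
            close(cfd);
        }
    }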
 
Q. 1.  Can someone tell me how many connections a server can queue up in present Unix/Linux OSes?
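
(For reference, I tried to look up that limit on my own machine with something like the following; I am assuming the /proc path below exists on my distribution, and SOMAXCONN is the compile-time constant from <sys/socket.h>:)

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* compile-time default ceiling for the listen() backlog */
        printf("SOMAXCONN = %d\n", SOMAXCONN);

        /* runtime value on Linux, which may differ from the header constant */
        FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
        if (f) {
            int n;
            if (fscanf(f, "%d", &n) == 1)
                printf("net.core.somaxconn = %d\n", n);
            fclose(f);
        }
        return 0;
    }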
 
While reading some literature on the Apache HTTP server, I came across the following:
 
"In 1.3.x, the server can allow 150 client connections, and in the 2.0 version the server can create 50 'threadsperchild'."
 
Both parameters are configurable in their respective versions (i.e. the 150 client connections and the 50 'threadsperchild').
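
If I read the documentation correctly, these settings appear in httpd.conf roughly as below; this is only my reading of the docs, not a configuration I have tested:

    # Apache 1.3.x (one process handles one connection at a time)
    MaxClients 150

    # Apache 2.0 with the worker MPM (threads inside each child process)
    ThreadsPerChild 50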
 
Q. 2.  What exactly is a client connection?  My understanding is that 150 'connect' system calls will queue up on the same SINGLE socket, so no concurrency can be achieved.
 
Q. 3.  I also do not understand 'threadsperchild'.  In this case, what constitutes a child, and what will each thread's task be?
 
Can someone who knows about this issue please shed some light on it?
 
Thanks in advance.

--
Thanks

Nagrik
