Re: Persistent PHP web application?

Josh Whiting wrote:
> My web application (an online classifieds server) requires a set of
> fairly large global arrays containing vital information that nearly
> all the page scripts rely on, such as the category list, which
> fields belong to each category, and so on.

For this, you should look into:

Putting all this stuff in MySQL, but fetching only the data you NEED at
any given moment.  This will probably be slower, but it will give you a
good baseline benchmark, and prepare you for the next step.
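
Something like this, just to sketch the idea (untested, and the table
and column names are invented):

<?php
// Sketch only: the "categories" table and its columns are made-up names.
// The point is to ask MySQL for the one category this page needs,
// instead of building the entire global array on every request.
$db = mysql_connect('localhost', 'youruser', 'yourpass')
    or die('Could not connect: ' . mysql_error());
mysql_select_db('classifieds', $db);

$cat_id = (int) $_GET['cat_id'];
$result = mysql_query("SELECT id, name, field_list
                       FROM categories WHERE id = $cat_id", $db);
$category = mysql_fetch_assoc($result);
?>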

Moving all that into LDAP. http://php.net/ldap/

However, given your programming philosophy so far, and the fact that you
are worried about 7ms and that you specifically requested some kind of
shared memory space in PHP, you should Read This:
http://us4.php.net/manual/en/ref.sem.php

That pretty much is exactly what you asked for.

Be forewarned that few PHP scripters have needed this stuff, and it's not
anywhere near as hammered on (read: debugged) as the other options above.
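
For the flavor of it, here's roughly what the shared memory functions on
that page look like in use (untested sketch; the key, segment size, and
variable slot are arbitrary, and build_category_list() is a stand-in for
your existing init code):

<?php
// Rough sketch of the sysvshm functions from ref.sem.php.
// ftok() derives a System V IPC key from a file path.
$shm = shm_attach(ftok(__FILE__, 'a'), 1024 * 1024, 0666);

$categories = @shm_get_var($shm, 1);
if ($categories === false) {
    // First request on this box: build the arrays once and stash them.
    $categories = build_category_list(); // stand-in for your init code
    shm_put_var($shm, 1, $categories);
    // In real life you'd guard this with sem_get()/sem_acquire() so two
    // requests don't both try to build it at the same time.
}
shm_detach($shm);
?>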

> Additionally, there are
> a large number of function definitions (more than 13,000 lines of code
> in all just for these global definitions).

You should look into breaking these up into groups of functionality -- I'm
willing to bet you could segment these and have many pages that only call
in a few functions.
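
Something like (file names invented, obviously):

<?php
// Instead of one 13,000-line include on every page, pull in only the
// groups of functions a given page actually uses:
require_once 'lib/categories.inc.php';  // every page needs these
require_once 'lib/listings.inc.php';    // only on pages showing ads
// a search page would add:  require_once 'lib/search.inc.php';
?>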

> These global arrays and functions never change between requests.
> However, the PHP engine destroys and recreates them every time. After
> having spent some serious time doing benchmarking (using Apache Bench),
> I have found that this code takes at least 7ms to parse per request on
> my dual Xeon 2.4ghz server (Zend Accelerator in use*). This seriously
> cuts into my server's peak capacity, reducing it by more than half.

I'm not sure I'm right, but several things in this paragraph tweak my gut...

You're never going to get that 7ms to go down *TOO* far if you insist
on loading all the data and all the functions for your entire site for
pages that don't really need *ALL* of that...

> My question is: is there a way to define a global set of variables and
> functions ONCE per Apache process, allowing each incoming hit to run a
> handler function that runs within a persistent namespace? OR, is it
> possible to create some form of shared variable and function namespace
> that each script can tap?

Variables would be easy:
http://us4.php.net/manual/en/ref.sem.php

Functions, not so much...
Though I suppose if you created all of this as classes, and instantiated a
class and dumped a singleton into shared memory, you MIGHT trick PHP into
keeping all its class definitions and whatnot in RAM instead of loading
from the hard drive...
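
Very roughly (untested, and note the catch in the comments):

<?php
// shm_put_var() serializes whatever you hand it, so you *can* park an
// instantiated object in shared memory.  The catch: only the object's
// DATA survives between requests.  The class definition is code, so
// each request still has to include the class file before pulling the
// object back out, or you get a __PHP_Incomplete_Class instead.
require_once 'lib/SiteData.class.php';  // invented name

$shm = shm_attach(ftok(__FILE__, 's'), 1024 * 1024, 0666);
$site = @shm_get_var($shm, 1);
if (!is_object($site)) {
    $site = new SiteData();      // the expensive constructor runs once
    shm_put_var($shm, 1, $site);
}
shm_detach($shm);
?>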

You could also patch Apache to get all your data into your Apache
$_SERVER space, or you could (I think) hack your Apache user's shell
startup to get all the data into the shell environment and $_ENV.  Those
would be a bit more hacky than using shared memory, but they technically
fit what you describe...
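
A lighter variant of the $_SERVER idea that doesn't need an Apache
patch: stock mod_env lets you put
SetEnv SITE_CATEGORIES "cars,jobs,housing"
in httpd.conf (variable name invented), and then:

<?php
// Reads the value Apache's mod_env injected into the environment.
// Only sane for small, truly static data -- not 13,000 lines' worth.
$categories = explode(',', $_SERVER['SITE_CATEGORIES']);
?>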

> PHP/FastCGI) and create a persistent PHP *application*... any
> suggestions?

This part here suggests an entirely different approach...

Depending on your application, you could also consider running a loop in
PHP which responds to requests on sockets.

http://us4.php.net/manual/en/ref.sockets.php

You could then define your own "server protocol" -- Kind of like making up
your own HTTP/FTP/RPC rules for your own specific application.

So if what your application mostly does is load in all this data and
respond to requests, you could write a *SINGLE* PHP application which
listened on port 12345 (or whatever port you like) and responded with the
data requested.  Like writing your own web-server, only it's a
_________-server where you get to fill in the blank with whatever your
application does.
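
The skeleton of that would be something like (very rough, untested; the
port and one-line "protocol" are made up, and build_category_list() is
again a stand-in for your existing init code):

<?php
// A single long-running PHP process: pay the parse/build cost once at
// startup, then answer requests forever on a socket.
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '127.0.0.1', 12345);
socket_listen($server, 5);

$categories = build_category_list();  // stand-in for your 7ms of setup

while (true) {
    $client  = socket_accept($server);           // block until a request
    $request = trim(socket_read($client, 1024));
    // One-line made-up protocol: "GETCAT <id>"
    if (preg_match('/^GETCAT (\d+)$/', $request, $m)) {
        socket_write($client, serialize($categories[$m[1]]));
    } else {
        socket_write($client, "ERR unknown command\n");
    }
    socket_close($client);
}
?>

Your ordinary page scripts would then connect to port 12345, send a
request, and unserialize the answer -- no re-building of the big arrays
per hit.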

> * - Please note that I am using the Zend Accelerator (on Redhat
> Enterprise with Apache 1.3) to cache the intermediate compiled PHP code.
> My benchmarks (7ms+) are after the dramatic speedup provided by the
> accelerator. I wouldn't even bother benchmarking this without the
> compiler cache, but it is clear that a compiler cache does not prevent
> PHP from still having to run the (albeit precompiled) array and function
> definition code itself.

Actually, Zend and others could be interested in your comparisons of
"with" and "without" cache...

To Summarize:

The solution that most closely approximates what you think you want is
"shared memory" features in PHP.

The solution that might lead to a much better application would be to
segment the data and functions needed into smaller files, and only suck in
the ones you REALLY need.

An alternative solution that MIGHT fit what you think you want, if you
feel up to defining your own XYZ protocol, and if your only goal is to
provide the data/functions to respond to requests, is to write a single
PHP application that loads the data/functions once, and then sits there
listening on a socket to respond to requests.

Hope that helps...  Almost feel like I ought to invoice you at this point :-)

-- 
Like Music?
http://l-i-e.com/artists.htm
