Re: Basic Database Performance


 



We are running a prototype of a system on PHP/PostgreSQL on an Intel Xeon 2 GHz server, 1 GB RAM, 40 GB hard drive,

	I think this is a decent server...

	Now, I guess you are using Apache and PHP like everyone.

	Know these facts:

- A client connection means an Apache process (think HTTP 1.1 Keep-Alives...).
- The PHP interpreter in mod_php stays active for the whole time it takes to receive the request, parse it, generate the dynamic page, and send it to the client down to the last byte (because it is sent streaming). So a PHP page that takes 10 ms to generate will actually hog an interpreter for between 200 ms and 1 second, depending on client ping time and other network latency.
- This is actually on-topic for this list, because it also hogs a Postgres connection and server process for all that time. Thus, it will most probably be slow and unscalable.

	The solutions I use are simple:

First, use lighttpd instead of Apache. Not only is it simpler to use and configure, it uses far less RAM and fewer resources, and it is faster and lighter, thanks to its asynchronous model. On my server, a crappy Celeron, it pushes about 100 hits/s while sitting at 4% CPU and 18 MB of RAM in top. It's practically impossible to overload unless you benchmark it over gigabit LAN with 100-byte files.

Then, plug PHP in using the FastCGI protocol. Basically, PHP spawns a process pool, and you choose the size of this pool; say you spawn 20 PHP interpreters, for instance.

When a PHP page is requested, lighttpd asks the process pool to generate it. Then, a PHP interpreter from the pool does the job, and hands the page over to lighttpd. This is very fast. lighttpd handles the slow transmission of the data to the client, while the PHP interpreter goes back to the pool to service another request.
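For what it's worth, a minimal lighttpd FastCGI setup along these lines could look like the sketch below; the binary path, socket, and pool sizes are illustrative (4 spawned processes x 5 PHP children = the 20 interpreters mentioned above):

    server.modules += ( "mod_fastcgi" )

    # Hand .php requests to a pool of PHP FastCGI processes.
    fastcgi.server = ( ".php" =>
      (( "bin-path"        => "/usr/bin/php-cgi",      # path is illustrative
         "socket"          => "/tmp/php-fastcgi.socket",
         "max-procs"       => 4,                       # FastCGI processes spawned by lighttpd
         "bin-environment" => (
           "PHP_FCGI_CHILDREN"     => "5",             # PHP children per process: 4 x 5 = 20 interpreters
           "PHP_FCGI_MAX_REQUESTS" => "1000"           # recycle each child after 1000 requests
         )
      ))
    )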

This gives you database connection pooling for free, actually. The connections are limited to the number of processes in the pool, so you won't get hundreds of them all over the place. You can use PHP's persistent connections without worries; you don't need to configure a separate connection pool. It just works (TM).
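As an illustration (connection parameters and query are placeholders), a page script only needs pg_pconnect() to take advantage of this:

    <?php
    // Reuse (or open) a persistent connection owned by this FastCGI child.
    // Connection parameters below are placeholders.
    $db = pg_pconnect('host=localhost dbname=mydb user=myuser password=secret');
    if ($db === false) {
        die('Could not connect to PostgreSQL');
    }

    // Ordinary query; the connection stays open for the next request
    // served by this interpreter.
    $res = pg_query($db, 'SELECT now()');
    $row = pg_fetch_row($res);
    echo $row[0];
    ?>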

Also, you might want to use eAccelerator with your PHP. It caches the compiled form of your PHP pages, so you don't lose time reparsing them on every request. Page generation time on my site went from 50-200 ms to 5-20 ms just by installing it. It's free.
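Enabling it usually comes down to a few php.ini lines, roughly like the sketch below; the extension path and cache size are illustrative, and whether it loads via extension or zend_extension depends on how it was built:

    ; php.ini sketch, paths and sizes are illustrative
    zend_extension = "/usr/lib/php/modules/eaccelerator.so"
    eaccelerator.enable    = "1"
    eaccelerator.shm_size  = "16"                       ; MB of shared memory for cached scripts
    eaccelerator.cache_dir = "/var/cache/eaccelerator"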

	Try this and you might realize that, after all, Postgres was fast enough!




