Re: DB calls vs Session solution


 



I just realized I must be evil ;-)

firstly, I'm top posting.

secondly, I take 'cache your db results in the session' to an
extreme, by shoving serialized ResultSet objects (with all results
in tow, of course) into shared memory by way of the APC
extension... this is of course evil because Rasmus says "don't
stick objects in the APC cache, you fools" (I paraphrase ;-).
The server in question is a single-site machine with lots of crazy
marketing-induced queries; by caching _all_ SELECT queries I have
reduced the average load from 5+ to around 0.5.
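
roughly what that looks like, for the curious (the helper name is
mine, and this sketch caches plain arrays rather than the objects
themselves, which is the less evil variant):

<?php
// cached_select() is a hypothetical helper: cache a full result set
// in APC, keyed by the query string.
function cached_select($sql, $ttl = 300)
{
    $key = 'sql_' . md5($sql);

    // apc_fetch() returns false on a cache miss
    $rows = apc_fetch($key);
    if ($rows !== false) {
        return $rows;
    }

    // miss: run the query and pull every row into an array
    $result = mysql_query($sql);
    $rows = array();
    while ($row = mysql_fetch_assoc($result)) {
        $rows[] = $row;
    }

    // storing plain arrays sidesteps the objects-in-APC warning
    apc_store($key, $rows, $ttl);
    return $rows;
}
?>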

out of interest, Curt, when you say 25 requests a second, exactly what
kind of requests? I mean, 25 requests each second on a PHP file that
queries a DB with lots of LEFT JOINs, etc. is going to require a pretty
heavy piece of iron, no? (at least more powerful than anything I can
get my hands on in general.)

PS: Curt, thanks for the braindump - you got me thinking again!

Curt Zirzow wrote:
On Fri, Mar 10, 2006 at 02:10:54PM +0200, Thomas wrote:

Hi there,

Quick question on performance:

I have got some expensive SQL queries (doing lots of LEFT JOINs) that
return anything between 10 and 10,000 results. I need to do paging on
those. So far I have used a LIMIT solution, which will not do anymore
because you don't get the total count.

The next couple of paragraphs assume you are using MySQL.

A little side note: there is a SELECT option called
SQL_CALC_FOUND_ROWS, which will let you know the total number of rows
even though you specify a LIMIT clause.  Be careful with this though:
MySQL has to perform the query fully before doing anything, so it can
be harmful with large result sets.
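
Something like this (the table and column names are invented for the
example):

<?php
// fetch one page of results, but still learn the full match count
$sql = "SELECT SQL_CALC_FOUND_ROWS id, name
          FROM products
         WHERE category = 'widgets'
         LIMIT 40, 20";  // rows 41-60 of the full result
$result = mysql_query($sql);

// FOUND_ROWS() reports how many rows the previous query would have
// returned without its LIMIT clause.
$total = (int) mysql_result(mysql_query("SELECT FOUND_ROWS()"), 0);
?>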


The first thing I would look at is whether there is a way to rewrite
the query so it does not need a LEFT JOIN, and to make sure that
indexes are all properly used.

Another helpful suggestion is the SQL_CACHE option; this takes the
logic of caching data off your hands.  Be careful, again, because if
you are dealing with a heavily modified table it will bite you in the
ass.
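
For example (again with made-up names):

<?php
// SQL_CACHE asks the server to store and serve this result from the
// MySQL query cache; with query_cache_type=2 (DEMAND) only queries
// marked this way are cached at all.
$result = mysql_query(
    "SELECT SQL_CACHE id, name FROM products WHERE active = 1");
?>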


Now, the idea is to store the results (or the sql query string?) in the
session to avoid too many db calls (on paging).
The question now is: is the serializing/unserializing more expensive than
another db call?


Well, it depends.

At first you might execute a query that takes a long time, say 4
seconds, before a response happens; you then store the resulting data
in a session array variable.  On the next page you just jump to the
proper record in the array you have stored in the session, and the
page loads much quicker, probably in less than 2 seconds (of course
depending on how many results you're dealing with).
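
In code, the idea looks roughly like this (the query, session key and
page size are made up for illustration):

<?php
session_start();

$per_page = 50;
$page     = isset($_GET['page']) ? (int) $_GET['page'] : 0;

if (!isset($_SESSION['search_results'])) {
    // first request: run the expensive query once, keep every row
    $result = mysql_query(
        "SELECT p.id, p.name, c.name AS category
           FROM products p
      LEFT JOIN categories c ON c.id = p.category_id");
    $rows = array();
    while ($row = mysql_fetch_assoc($result)) {
        $rows[] = $row;
    }
    $_SESSION['search_results'] = $rows;
}

// later pages slice the cached array instead of hitting the DB again
$page_rows = array_slice($_SESSION['search_results'],
                         $page * $per_page, $per_page);
$total     = count($_SESSION['search_results']);
?>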


The problem with this approach is that you are shifting the cost from
the CPU to other resources, like disk usage and memory usage, as well
as taking on the management of the session data yourself.

If we take a result set of 5000 records, with the above method those
records will have to be stored in the session as some sort of array,
data that may or may not ever be used again.  And if you get a medium
load, say 25 requests per second, all of this data is being written to
and read from a local disk.  Now your bottleneck is your hard drive.


I guess what I'm saying is: try to find a solution on how to do this
at the database level rather than trying to solve it with PHP, even if
it means you have to flatten your table structure for the searches you
are doing.
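
For instance, a flattened search table might look like this (the
schema is purely illustrative):

-- pre-join ("flatten") the data into one search table so the paging
-- query needs no LEFT JOINs at all
CREATE TABLE product_search (
    product_id INT NOT NULL,
    name       VARCHAR(255),
    category   VARCHAR(64),
    price      DECIMAL(10,2),
    INDEX idx_cat_name (category, name)
);

-- paging then becomes a cheap indexed query
SELECT SQL_CALC_FOUND_ROWS product_id, name, price
  FROM product_search
 WHERE category = 'widgets'
 ORDER BY name
 LIMIT 40, 20;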


Curt.


