From a quick perusal of the article Chris mentions, I'd generally agree
with its view of table optimization -- I'm not an expert on Postgres,
but the recommendations seem to apply to MySQL as well.
My basic view is that, if you are routinely doing a select on millions
of rows, you probably need to take a step back and consider your general
structure.
Short of revising the structure, and beyond the indexing Chris
suggested, a couple of off-the-cuff ideas: if you can live without
transactions and crash recovery on that table, use MyISAM tables rather
than InnoDB; try stored procedures (MySQL >= 5.0).
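Rough sketches of both ideas, using a made-up "orders" table and column
names since the real schema isn't shown:

    -- 1) Switch the storage engine (only if you can do without
    --    transactions; MyISAM also locks the whole table on writes):
    ALTER TABLE orders ENGINE = MyISAM;

    -- 2) Wrap the hot query in a stored procedure (MySQL >= 5.0):
    DELIMITER //
    CREATE PROCEDURE get_orders_by_status(IN p_status VARCHAR(20))
    BEGIN
        SELECT id, total
        FROM orders
        WHERE status = p_status;
    END //
    DELIMITER ;

    -- Then call it from the client or from PHP:
    CALL get_orders_by_status('shipped');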
While it isn't always the case, in my experience a table with a million
rows or more is usually a problem created by the initial assumption
that the table would never grow that large, so the overall data
structure was never fully thought through.
Google is capable of handling searches through billions of rows of data
not because it uses supercomputers but because of its data structure.
Just my two centavos,
Jeff
Chris wrote:
Shelley wrote:
Hi all,
I made a post a week ago asking for ideas on the fastest way to get
the total records from a table.
Fyi,
http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
Look at the time even a 'count(1)' took.
Then you can imagine how long a query such as "select a,b from
table_name where c='d'" will take.
I have a lot of tables like that. So my question is:
What's your practice for optimizing tables like that?
I pretty much follow what I've said in this article:
http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database
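For what it's worth, a minimal sketch of that kind of index against the
example query Shelley quoted (table_name, a, b, c are just the
placeholders from her post):

    -- index the column the WHERE clause filters on:
    CREATE INDEX idx_c ON table_name (c);

    -- check that the optimizer actually uses it:
    EXPLAIN SELECT a, b FROM table_name WHERE c = 'd';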