On Dec 6, 2006, at 2:34 PM, Wei Weng wrote:

> On Tue, 2006-12-05 at 15:56 -0500, Wei Weng wrote:
>> I have a table that has roughly 200,000 entries and many columns.
>> The query is very simple:
>>
>> SELECT Field1, Field2, Field3... FieldN FROM TargetTable;
>>
>> TargetTable has an index on Field1.
>
> I think I have discovered the reason why the query runs so slow: the
> original query has an ORDER BY Field1 clause that I forgot to put in
> my email. So the query looks like this:
>
> SELECT Field1, Field2, Field3... FieldN FROM TargetTable ORDER BY Field1 DESC;
>
> What is an effective way to optimize this query (or the system) so
> that it runs faster than it does now?
>
> Thanks, and I really appreciate all the help I've gotten so far.

Clustering the table on the index used for the sort might help, but
performance of the above is probably worse than your original example
because of the added random-access overhead of the index scan (assuming
the sort uses an index). If the table changes infrequently, you could
consider creating an in-order copy of the data (using INSERT INTO) so
that you don't need an index to order it.
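
For example, a rough sketch assuming PostgreSQL, with placeholder
names (targettable, its index targettable_field1_idx, and the copy
targettable_sorted stand in for your actual objects):

-- Physically reorder the table along the sort index (one-shot; the
-- order decays as rows are updated). 8.x syntax shown; newer releases
-- spell it CLUSTER targettable USING targettable_field1_idx.
CLUSTER targettable_field1_idx ON targettable;
ANALYZE targettable;

-- Or, if the data changes infrequently, keep a pre-sorted copy:
CREATE TABLE targettable_sorted (LIKE targettable);
INSERT INTO targettable_sorted
  SELECT * FROM targettable ORDER BY field1 DESC;
-- A plain sequential scan of the copy then returns rows in roughly
-- this physical order, with no index scan or explicit sort (strictly,
-- row order without ORDER BY is not guaranteed).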
If you need the full list of results before the application can do
anything (so cursors won't help), then you'll either need to reduce
the amount of data scanned and returned (optimize fields for byte
size, move any fields not needed by this query to another table,
reduce the number of rows in the table, etc.) or scan it faster
(faster disks and CPU).
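
As a sketch of the "move fields to another table" idea (assuming
PostgreSQL and a hypothetical primary key column id; fieldx and fieldy
stand for columns this particular query never touches):

-- Move the unneeded columns into a side table keyed by id.
CREATE TABLE targettable_extra (
  id     integer PRIMARY KEY REFERENCES targettable (id),
  fieldx text,
  fieldy text
);
INSERT INTO targettable_extra (id, fieldx, fieldy)
  SELECT id, fieldx, fieldy FROM targettable;
ALTER TABLE targettable DROP COLUMN fieldx;
ALTER TABLE targettable DROP COLUMN fieldy;
-- The narrower targettable scans fewer pages; join to
-- targettable_extra only in the queries that need those columns.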
If you have multiple CPUs on the database side, and you are not I/O
bound, you could consider using table partitioning to break the table
up into pieces, each containing a fraction of the whole (say one
quarter). Then issue four concurrent queries from the application and
reassemble the results on that end. You might not need table
partitioning at all if you can efficiently determine the "cleave"
points at run time; then you would just use a WHERE clause to select
the proper range, as sketched below.
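
A rough sketch of the run-time "cleave" approach (assuming PostgreSQL
and the ~200,000-row table above; :b1, :b2, :b3 are bind parameters
your application fills in with the boundary values it picked; all
names are placeholders):

-- Pick rough quartile boundaries using the index on field1, e.g.:
SELECT field1 FROM targettable ORDER BY field1 DESC LIMIT 1 OFFSET 50000;   -- b1
SELECT field1 FROM targettable ORDER BY field1 DESC LIMIT 1 OFFSET 100000;  -- b2
SELECT field1 FROM targettable ORDER BY field1 DESC LIMIT 1 OFFSET 150000;  -- b3

-- Then run these four queries concurrently, one per connection, and
-- concatenate the result sets in this order on the application side:
SELECT field1, field2 /* ... */ FROM targettable
  WHERE field1 > :b1 ORDER BY field1 DESC;
SELECT field1, field2 /* ... */ FROM targettable
  WHERE field1 <= :b1 AND field1 > :b2 ORDER BY field1 DESC;
SELECT field1, field2 /* ... */ FROM targettable
  WHERE field1 <= :b2 AND field1 > :b3 ORDER BY field1 DESC;
SELECT field1, field2 /* ... */ FROM targettable
  WHERE field1 <= :b3 ORDER BY field1 DESC;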
-Casey