On Thu, Mar 28, 2013 at 02:03:42PM -0700, Kevin Grittner wrote:
> kelphet xiong <kelphet@xxxxxxxxx> wrote:
>
> > When I use postgres and issue a simple sequential scan for a
> > table inventory using query "select * from inventory;", I can see
> > from "top" that postmaster is using 100% CPU, which limits the
> > query execution time. My question is that, why CPU is the
> > bottleneck here and what is postmaster doing? Is there any way to
> > improve the performance? Thanks!
>
> > explain analyze select * from inventory;
> >
> >  Seq Scan on inventory  (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=11745000 loops=1)
> >  Total runtime: 1750.889 ms
>
> So it is reading and returning 11.7 million rows in about 1 second,
> or about 88 nanoseconds (billionths of a second) per row. You
> can't be waiting for a hard drive for many of those reads, or it
> would take a lot longer, so the bottleneck is the CPU pushing the
> data around in RAM. I'm not sure why 100% CPU usage would surprise
> you. Are you wondering why the CPU works on the query straight
> through until it is done, rather than taking a break periodically
> and letting the unfinished work sit there?
>
> --
> Kevin Grittner
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

Alternatively, purchase a faster CPU if CPU is the bottleneck, as it is in this case, or partition the work into parallel queries that can each use a processor.

Regards,
Ken

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
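A quick sketch of the two points above: Kevin's ~88 ns/row figure follows directly from the plan's numbers, and Ken's parallel-query suggestion could look like one modulo-partitioned query per worker. This is only an illustration, not code from the thread; the integer "id" column used for partitioning is an assumption, since the inventory table's schema was never shown.

```python
# Back-of-envelope check of the per-row cost from the EXPLAIN ANALYZE output.
rows = 11_745_000      # rows reported by the Seq Scan node
scan_ms = 1030.403     # upper "actual time" bound of the Seq Scan node

ns_per_row = scan_ms * 1e6 / rows
print(f"{ns_per_row:.1f} ns per row")  # roughly 88 ns: far too fast for disk, so CPU-bound

def partition_queries(table, key, workers):
    """Generate one query per worker; each returns a disjoint slice of the rows.

    Assumes `key` is an integer column (hypothetical here) so that
    `key % workers` splits the table evenly across workers.
    """
    return [
        f"SELECT * FROM {table} WHERE {key} % {workers} = {i}"
        for i in range(workers)
    ]

for q in partition_queries("inventory", "id", 4):
    print(q)
```

Note that each modulo-filtered query still sequentially scans the whole table on its own backend; the win is that four connections use four CPUs concurrently instead of one. Splitting by a range predicate on an indexed key would avoid the redundant scanning.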