I would also like to add that I am very suspicious of a table with 80 columns. Offhand, it sounds like poor database design where someone was trying to put all the eggs in one basket (figuratively speaking). Further, what was the exact query? Queries of the form SELECT * are inherently slow on tables with many columns. Ideally, select only the columns you actually need.

On 11/10/15, Jim Nasby <Jim.Nasby@xxxxxxxxxxxxxx> wrote:
> On 11/10/15 9:39 AM, Mammarelli, Joanne T wrote:
>> Hi – same rookie user as before.
>>
>> We have one table:
>>
>> 100,000 rows
>>
>> 80 columns
>>
>> When we try to retrieve the data (select * from table) using pgAdmin,
>> we get a 193456 ms retrieve time.
>>
>> When I ran a query analyze at the command prompt, we get a 316 ms
>> retrieve time.
>
> You mean EXPLAIN ANALYZE?
>
>> ... and finally, when we retrieve the data from the command line, we
>> get a 5720 ms retrieve time.
>
> What was psql doing with the output?
>
> Basically, pgAdmin and psql aren't meant for users to deal with huge
> data sets, because humans don't deal well with huge data sets.
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>
> --
> Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you wish to share my fantasy is entirely up to you.
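
P.S. A quick sketch of the advice above (the table and column names here are hypothetical, not from the original poster's schema):

```sql
-- Instead of pulling all 80 columns across the wire:
-- SELECT * FROM big_table;

-- ...select only the columns the application actually needs:
SELECT id, name, created_at
FROM big_table;

-- Note that EXPLAIN ANALYZE times execution on the server only; it does
-- NOT include the cost of transferring rows to the client or rendering
-- them in pgAdmin, which is likely where most of the 193456 ms went:
EXPLAIN ANALYZE SELECT * FROM big_table;
```

In psql you can also turn on \timing to see the total round-trip time, including transfer, for comparison with the server-side EXPLAIN ANALYZE number.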