On Friday 10 June 2005 10:03 am, Lee Wu wrote:
> That is exactly what I did:
>
> \o a_lot_room_to_hold_my_result
> select * from a_table
>
> and got either
> 1. out of memory for query result
> 2. killed
> 3. crashed PG
>
> "If you have a very large table you can exhaust memory on the
> client side unless you are writing the data directly to a file."
> How, besides "\o" and pg_dump?
>
> We have 4G RAM and shared_buffers = 32768; it is a dedicated test
> box, and the table is about 2G.

Something to try (I don't know if it will work, because I don't know the exact internals of the PG libraries, but it's something I've observed): set the output format to unaligned (\pset format unaligned).

I once had a situation where selecting a few thousand rows exhausted my RAM. The problem was that one item in a text column was ~8k long, which meant that every other row carried ~8k of padding. This was true for more than one column, so the pager had to swallow gobs of data, most of which was blank space.

I wrote the output to a file and ran some tests: switching to unaligned output reduced the size by orders of magnitude. Depending on the nature of your data, this may help or do nothing at all.

Of course, you will hit RAM or disk limits on any given machine eventually. Cursors are there for your use, and your project may require them.

Cheers,
Steve
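
P.S. A rough sketch of both suggestions, for the archives. The table and file names below are placeholders, and this assumes a recent psql; batch size is arbitrary:

```sql
-- Unaligned output straight to a file (no column padding, no pager):
\pset format unaligned
\pset tuples_only on
\o /tmp/a_table.out
SELECT * FROM a_table;
\o

-- Or fetch in batches with a cursor, so the client never has to
-- buffer the whole 2G result set at once (cursors need a transaction):
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM a_table;
FETCH 10000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;
```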