Hello,

I'm executing this query:

SELECT x, y, another_field
FROM generate_series(1, 10) x,
     generate_series(1, 10) y,
     my_table

The field 'another_field' belongs to 'my_table', which has 360000 entries.

On a 64-bit machine with 4 GB of RAM, Ubuntu 10.10 and PostgreSQL 8.4.7, the query works fine. But on a 32-bit machine with 1 GB of RAM, Ubuntu 9.10 and PostgreSQL 8.4.7, the query process is killed after consuming about 80% of the available memory. On the 64-bit machine the query also takes about 60-70% of the available memory, but it completes.

This happens even if I select only x and y:

SELECT x, y
FROM generate_series(1, 10) x,
     generate_series(1, 10) y,
     my_table

Is this normal? I understand PostgreSQL has to deal with millions of rows, but shouldn't it start swapping instead of crashing? Is it a question of PostgreSQL configuration?

Thanks in advance,

--
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arevalo@xxxxxxxxxxxxxxxx
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
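[Editorial note: as a quick sanity check on the result size, the query above is an unconstrained cross join, so its cardinality is the product of the three inputs. A minimal sketch of that arithmetic, assuming my_table has exactly 360000 rows as stated in the post:]

```python
# Cardinality of an unconstrained cross join is the product of the
# row counts of its inputs.
# Assumption (from the post): my_table has 360000 rows.
series_x = 10            # generate_series(1, 10)
series_y = 10            # generate_series(1, 10)
my_table_rows = 360_000  # stated size of my_table

total_rows = series_x * series_y * my_table_rows
print(total_rows)  # 36000000 rows in the result set
```

A result set of 36 million rows is buffered client-side by default (e.g. by psql or libpq), which is a plausible source of the memory exhaustion independent of the server itself.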