Hi!

I thought that the values being sorted are stored entirely in work_mem, but in fact it works somehow differently. Can anyone suggest how this works?

For example, I created this 512MB incompressible file and test table:

tr -dcs '[:print:]' '[:print:]' < /dev/urandom | tr -d '"'\' | dd bs=1K count=512K of=asciidump

create unlogged table t1 (v text);
insert into t1 select pg_read_file('asciidump') from generate_series(1, 10);

select pg_column_size(v), octet_length(v) from t1 limit 1;
 pg_column_size | octet_length
----------------+--------------
      536870912 |    536870912

set work_mem to '64MB';

Now, I thought 64MB would not be enough to sort such large values and that Postgres would use temp files, but in fact it does not:

select temp_files, temp_bytes from pg_stat_database where datname = current_catalog;
 temp_files | temp_bytes
------------+------------
          0 |          0

explain (analyze, verbose, buffers) select v from t1 order by v;
                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Sort  (cost=94.38..97.78 rows=1360 width=32) (actual time=6433.138..6433.140 rows=10 loops=1)
   Output: v
   Sort Key: t1.v
   Sort Method: quicksort  Memory: 25kB
   Buffers: shared hit=543881 read=679794 written=118012
   ->  Seq Scan on public.t1  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.007..0.009 rows=10 loops=1)
         Output: v
         Buffers: shared hit=1
 Planning Time: 0.035 ms
 Execution Time: 6433.155 ms

> Sort Method: quicksort  Memory: 25kB

select temp_files, temp_bytes from pg_stat_database where datname = current_catalog;
 temp_files | temp_bytes
------------+------------
          0 |          0

WOW! How does it work?! :-)

--
Sergey Burladyan
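One plausible reading of the numbers above (a sketch of the idea, not PostgreSQL source code): each 512MB text value is stored out of line in the TOAST table, so the tuples that reach the sort contain only small TOAST pointers. Ten pointers fit comfortably in kilobytes (hence "quicksort Memory: 25kB" and zero temp files), while each comparison has to fetch the big values back, which would explain the huge shared-buffer traffic and the 6-second runtime. The Python analogy below illustrates sorting by reference with an expensive "detoast" at compare time; the class and names are invented for illustration only.

```python
from functools import cmp_to_key

class ToastPointer:
    """Stand-in for a TOAST pointer: a tiny handle to an out-of-line value."""
    def __init__(self, store, oid):
        self.store = store  # plays the role of the TOAST table
        self.oid = oid      # identifies which big value we refer to

    def detoast(self):
        # In PostgreSQL this is the expensive step: reading the value
        # back from TOAST (the shared-buffer reads in the EXPLAIN output).
        return self.store[self.oid]

def compare(a, b):
    # Every comparison detoasts both operands; only the small
    # pointers themselves ever sit in the sort's memory.
    x, y = a.detoast(), b.detoast()
    return (x > y) - (x < y)

# Three "huge" values living out of line; only tiny pointers get sorted.
store = {1: "b" * 10, 2: "a" * 10, 3: "c" * 10}
rows = [ToastPointer(store, oid) for oid in (1, 2, 3)]

rows.sort(key=cmp_to_key(compare))
print([p.oid for p in rows])  # -> [2, 1, 3]
```

The design point of the analogy: the sort's memory footprint is proportional to the number and size of the handles, not the payloads, which is why work_mem is never exceeded here.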