Thank you for your answer.
Instead of launching a VACUUM FULL ANALYZE on the "big" table, I will
try to use the "ALTER TABLE SET STATISTICS" command.
I will do this in a second phase, during the optimization of the
benchmark.
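For example, something like this (the column name l_orderkey is only
assumed from the TPC-H lineitem schema, and the target of 500 is just
an illustrative value, not a recommendation):

    -- Raise the per-column statistics target on an assumed TPC-H
    -- column, then re-sample the table so the planner sees it.
    ALTER TABLE lineitem ALTER COLUMN l_orderkey SET STATISTICS 500;
    ANALYZE lineitem;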
Best regards,
Alexandra DANTE
Tom Lane wrote:
DANTE ALEXANDRA <ALEXANDRA.DANTE@xxxxxxxx> writes:
Is this correct? Why did ANALYZE take less than one minute?
That's what it's supposed to do --- ANALYZE just samples the table,
so it's fast even on very large tables.
If you think you need better statistics, you could increase the
statistics target for ANALYZE (see ALTER TABLE SET STATISTICS),
but VACUUM FULL per se is not going to improve them at all.
VACUUM and ANALYZE have entirely different purposes --- it's only
historical happenstance that there is a combination command to do
both.
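For instance, the two can be run separately; a minimal sketch on the
lineitem table from this thread:

    -- VACUUM reclaims space from dead rows; it does not touch
    -- the planner statistics at all.
    VACUUM lineitem;
    -- ANALYZE samples the table and updates the planner statistics.
    ANALYZE lineitem;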
regards, tom lane
DANTE ALEXANDRA wrote:
Thank you for your answer.
To give you more details: first, I launched the ANALYZE command on
this big table, but the elapsed time was very short. As my queries on
this table were very slow, I then tried VACUUM FULL ANALYZE to update
the statistics.
Is this correct? Why did ANALYZE take less than one minute?
Thanks for your help.
Regards
Alexandra DANTE
Tom Lane wrote:
DANTE ALEXANDRA <ALEXANDRA.DANTE@xxxxxxxx> writes:
Then, on each table, I have launched the "VACUUM FULL ANALYZE"
command as a non-root user.
This command failed on the last table, the biggest, called
"lineitem", which contains 1799989091 rows (nearly 300 GB of data).
VACUUM FULL requires workspace proportional to the number of blocks in
the table. You probably need to bump up the kernel's per-process memory
limit (see ulimit and so on) if you want to VACUUM FULL such a large
table.
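As a rough sketch, the block count the planner last recorded gives an
idea of the scale of that workspace:

    -- relpages is the last-known block count (8 kB per block by
    -- default); it is only as fresh as the last VACUUM/ANALYZE.
    SELECT relname, relpages
    FROM pg_class
    WHERE relname = 'lineitem';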
My advice: you shouldn't be using VACUUM FULL anyway. Quite aside from
the memory issue, it's likely to take forever and a day.
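A minimal sketch of the usual alternative, assuming routine
maintenance rather than a one-off table rewrite is the goal:

    -- A plain VACUUM marks dead-row space reusable without rewriting
    -- the table, and works within maintenance_work_mem instead of
    -- building a per-block list; VERBOSE reports what it did.
    VACUUM VERBOSE lineitem;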
regards, tom lane