Re: Continuing issues... Can't vacuum!

Jeff,

Thank you very much. I didn't know about this command. There are lots of times it would have saved me a trip to the documentation. I've got a couple of different instances of Postgres running, and I always have to check the docs to find out where each config file is.

Well, the table I told everyone about yesterday won't let me run pg_dump against it. My hesitancy comes from not knowing what the result of dumping the other databases will be. I've had the experience of dumping my "can of worms" out and not getting them back into the same "can" when I was done with them. =)

Carol

On May 23, 2008, at 2:02 PM, Jeff Frost wrote:

Carol Walter wrote:

> vacuumdb: vacuuming database "km"
> NOTICE: number of page slots needed (2275712) exceeds max_fsm_pages (200000)
> HINT: Consider increasing the configuration parameter "max_fsm_pages" to a value over 2275712.
>
> The problem is that I've found the max_fsm_pages parameter in the postgresql.conf file, but changing it doesn't seem to be having any effect. So I'm going to ask some questions that are probably pretty silly, but I hope you'll help me. First of all, the postgresql.conf file that I am editing may not be the one that is being read when the database server starts. There are several postgresql.conf files on this system. How do I tell which one is being read?

Carol,

Do the following in psql:

show config_file;

You should get some output like so:

            config_file
-------------------------------------
/var/lib/pgsql/data/postgresql.conf
(1 row)

Go forth and edit that file. :-)
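
Once you've made the change and restarted the server, you can confirm it took effect the same way (just a quick sanity check, assuming you're connected to the same instance):

show max_fsm_pages;

If that still reports 200000, then either the server is reading a different postgresql.conf or it hasn't been restarted yet -- max_fsm_pages only takes effect at server start, not on a reload.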
> The max_fsm_pages parameter says what the minimum setting is in the file. Is there also a maximum?

I'm not sure I understand the question. max_fsm_pages is the maximum number of pages to be used in the free space map. See Jim Nasby's article here: http://decibel.org/~decibel/pervasive/fsm.html
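
As a rough starting point (my guess, going from the hint in your vacuumdb output rather than anything measured), you could set it a bit above the number the NOTICE reported, e.g. in postgresql.conf:

max_fsm_pages = 2500000        # NOTICE reported 2275712 page slots needed

and then restart the server so the new value is picked up.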
> I can't back up my database right now because of the error. I have a backup but I'm afraid to restore it. I feel like I have many indices that are bloated, and I want to get the databases vacuumed before I start trying to create and load a new database. I'm concerned that there won't be enough index space for the system to sort and copy the indices.
What's the error? All I see in this email is a warning about max_fsm_pages and that should not stop you from doing a pg_dump.
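
If you want to test the waters before dumping everything, one low-risk way (a sketch -- "km" comes from your vacuumdb output, and "some_table" is just a placeholder to replace with a real table name) is to dump a single table first:

pg_dump -t some_table km > km_some_table.sql

and for the whole database the custom format keeps your restore options open:

pg_dump -Fc km > km.dump

Neither of those should be affected by the max_fsm_pages notice.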

--
Jeff Frost, Owner 	<jeff@xxxxxxxxxxxxxxxxxxxxxx>
Frost Consulting, LLC 	http://www.frostconsultingllc.com/
Phone: 650-780-7908	FAX: 650-649-1954



