Scott,
Purely idle. I compared these transactions with our other "healthy" databases, and they checked out.
On Fri, Jul 12, 2013 at 4:25 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
Prepared transactions that are sitting still do the same thing, and show no connections.
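
For anyone checking this on their own server: orphaned prepared transactions
do not show up as connections in pg_stat_activity, but they do hold back
vacuum, so it is worth ruling them out with something like:

    -- lists any prepared transactions still hanging around
    SELECT gid, prepared, owner, database
    FROM pg_prepared_xacts
    ORDER BY prepared;

Stale entries in that list can be cleaned up with ROLLBACK PREPARED.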

On Fri, Jul 12, 2013 at 2:25 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:

Idle in Transaction? Or plain Idle? Idle in Transaction stops vacuum from reclaiming space and is indicative of a broken application.
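
A quick way to tell the two apart on the versions in this thread (8.4 and
9.1; in 9.2+ the columns changed to pid/state) is something like:

    -- sessions showing '<IDLE> in transaction' block vacuum;
    -- plain '<IDLE>' sessions are harmless
    SELECT procpid, datname, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE current_query LIKE '<IDLE>%'
    ORDER BY xact_start;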

On Fri, Jul 12, 2013 at 9:39 AM, Bradley McCune <bradley.mccune@xxxxxxxx> wrote:

The only transactions present were "<IDLE>" for current_query. I even stopped the remote services and restarted the PostgreSQL server (so presumably no transactions were running), then performed another VACUUM FULL followed by a REINDEX because of the known VACUUM FULL index bloat in pre-9.0 PostgreSQL versions.
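
For reference, using the table named later in the thread, that sequence is
simply:

    -- VACUUM FULL on pre-9.0 servers tends to bloat indexes,
    -- so a REINDEX afterwards is the usual follow-up
    VACUUM FULL VERBOSE ANALYZE timeseries;
    REINDEX TABLE timeseries;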

On Fri, Jul 12, 2013 at 11:28 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:

Did you have a long running transaction? Especially a prepared
transaction, blocking the vacuum from reclaiming the space?
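
A minimal check for long-running transactions (an open transaction pins the
xmin horizon, so vacuum cannot reclaim anything deleted after it began):

    SELECT procpid, usename, now() - xact_start AS xact_age, current_query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 10;
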
On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune <bradley.mccune@xxxxxxxx> wrote:
> David,
>
> (As a preface, I have already gone forward with completely rebuilding the
> database which seems to have finally fixed the problem. Rebuilding the
> table itself had no effect, and I couldn't wait much longer to move
> forward.)
>
> Yes, this seems similar, however, the key difference being that VACUUM FULL
> did not alleviate the problem. The extra "bloated" disk space was still
> considered "in use" by the data server, and so it was never returned to the
> system. I have a suspicion that the server was storing the table data in
> pages in an inefficient manner (by unknown means) because we had roughly 5x
> the number of pages used on that TOAST table to store the same number of
> tuples compared to other similar databases.
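
The page-to-tuple ratio is easy to compare directly from the catalogs
(relpages/reltuples are estimates, so ANALYZE first; "timeseries" is the
table from the original post):

    -- physical pages vs. estimated rows for the table and its TOAST relation
    SELECT c.relname, c.relpages, c.reltuples,
           t.relname AS toast_name, t.relpages AS toast_pages, t.reltuples AS toast_tuples
    FROM pg_class c
    LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relname = 'timeseries';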
>
> Depending on how often you have to use VACUUM FULL, you might want to
> consider tweaking the autovacuum to be more aggressive on that hot table to
> keep it in check. (That way the free space is recycled inside the table
> rather than handed back to the operating system only to be reallocated to
> the database again.)
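
On 8.4 and later the per-table knobs can be set directly; a sketch, where
the table name and thresholds are only placeholders, not tuned values:

    -- make autovacuum kick in after far fewer dead rows on the hot table
    ALTER TABLE hot_table SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold = 1000
    );
    -- the TOAST relation has its own settings
    ALTER TABLE hot_table SET (
        toast.autovacuum_vacuum_scale_factor = 0.01
    );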
>
>
> On Fri, Jul 12, 2013 at 4:09 AM, David Welton <davidw@xxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> I have a very similar problem... details below.
>>
>> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles <paul.tilles@xxxxxxxx> wrote:
>> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order
>> > to take advantage of autovacuum features. This server exists in a very
>> > closed environment (isolated network, limited root privileges; this
>> > explains the older software in use) and runs on RHEL5.5 (i686). After
>> > the upgrade, the database has constantly been growing to the tune of
>> > 5-6 GB a day. Normally, the database as a whole is ~20GB; currently, it
>> > is ~89GB. We have a couple of other servers which run equivalent
>> > databases and actually synchronize the records to each other via a 3rd
>> > party application (one whose inner workings I do not have access to).
>> > The other databases are ~20GB, as they should be.
>>
>> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
>> system:
>>
>> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>>
>> > Running the following SQL, it's fairly obvious there's an issue with a
>> > particular table, and, more specifically, its TOAST table.
>>
>> Same thing here: we have a table with around 2-3 megs of data that is
>> blowing up to *10 gigs*.
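
For anyone who wants to make the same check, a query along these lines (not
necessarily the one from the original post) shows total on-disk size, TOAST
and indexes included, next to the heap alone:

    SELECT c.oid::regclass AS table_name,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
           pg_size_pretty(pg_relation_size(c.oid)) AS heap_size
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 10;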
>>
>> > This TOAST table is for a table called "timeseries" which saves large
>> > records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the
>> > records in timeseries yields ~16GB for that column. There should be
>> > *no reason* this table's TOAST table should be as large as it is.
>>
>> Similar situation: it's a bytea column that gets "a lot" of updates, on
>> the order of tens of thousands a day.
>>
>> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum
>> > runs
>> > to completion with no errors.
>>
>> VACUUM FULL fixes the problem for us by recouping all the wasted disk
>> space. I don't have the knowledge to investigate much further on my
>> own, but I'd be happy to try out a few things. The database is,
>> unfortunately, sensitive data that I can't share, but I could probably
>> script a similar situation...
>>
>> --
>> David N. Welton
>>
>> http://www.dedasys.com/
>
>
>
>
> --
> Bradley D. J. McCune
--
Bradley D. J. McCune
NOAA/OCWWS/HSD
Community Hydrologic Prediction System - Support
CHPS FogBugz Administrator
Office phone: (301) 713-1625 x160