> Probably your biggest issue will be temporary files created by temporary tables, sorts that spill to disk, etc.
Is there any way to monitor this so I can estimate how much space to reserve?
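Would something along these lines be reasonable? Just a sketch, assuming the stock 8.x layout where spill files land under $PGDATA/base/<dboid>/pgsql_tmp (the PGDATA path below is a placeholder for ours):

  #!/bin/sh
  # Rough temp-file watcher: sum the space currently held by temporary
  # files across all databases. Run it from cron and keep the high-water
  # mark over a few busy days to size the reserve.
  PGDATA=${PGDATA:-/var/lib/pgsql/data}
  du -sk "$PGDATA"/base/*/pgsql_tmp 2>/dev/null |
    awk '{ total += $1 } END { printf "%d KB in temp files\n", total }'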
> What I'm confused by is the concern about disk space in the first place.
We provide a device to customers that must run unattended for as long as the hardware holds up. So, regardless of the disk size, they will run out at some point. Some of our installations grow by 3.5 GB per day, and 6 months of history is not an unreasonable expectation - at that rate, roughly 3.5 GB x 180 days, or about 630 GB. We've simply decided we want to keep as much history as possible given the space limitations.
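To calibrate the pruning threshold against real growth rather than guesses, we've been considering a daily cron entry along these lines (a sketch only - pg_database_size and pg_size_pretty need 8.1+, and the database name and log path are placeholders):

  #!/bin/sh
  # Log the database's size once a day so the actual growth rate is
  # measured before committing to a pruning threshold.
  psql -d history -t -A -c \
    "SELECT now(), pg_size_pretty(pg_database_size(current_database()))" \
    >> /var/log/db_growth.log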
-----Original Message-----
From: Jim C. Nasby [mailto:jnasby@xxxxxxxxxxxxx]
Sent: Tue 3/28/2006 10:19 AM
To: Mark Liberman
Cc: pgsql-admin@xxxxxxxxxxxxxx
Subject: Re: [ADMIN] Feedback on auto-pruning approach
On Mon, Mar 27, 2006 at 06:32:42PM -0800, Mark Liberman wrote:
> So, I have finally completed this auto-pruning solution. It has proven effective at keeping the size of the db under whatever threshold I set, in an unattended fashion.
>
> I have one final question. If my goal is to maximize the amount of historical data we can keep - i.e., set the db size limit as large as possible - how much disk space should I reserve for standard Postgres operations, e.g. sort space, WAL, etc.? I'm sure this depends on our configuration, but if someone can point me toward the factors I should consider, I'd greatly appreciate it.
Probably your biggest issue will be temporary files created by temporary
tables, sorts that spill to disk, etc.
What I'm confused by is the concern about disk space in the first place.
Drives are very cheap; people are normally much more concerned about I/O
bandwidth.
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@xxxxxxxxxxxxx
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461