df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             132G   99G   34G  75% /
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/sda1              74M   16M   54M  23% /boot
Is there another dump tool that writes blobs (or everything) as binary
content, i.e. not as INSERT statements, maybe directly as database blocks?
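For context, pg_dump's custom archive format already stores table data in a compressed binary form rather than as SQL statements; a minimal sketch, assuming a database named mydb (the name is a placeholder):

    # dump the whole database as a binary custom-format archive
    pg_dump -Fc -f mydb.dump mydb

    # restore it with pg_restore
    pg_restore -d mydb mydb.dump

Note that pg_dump still reads the data through COPY on the server side, so the custom format alone does not avoid the out-of-memory failure discussed below.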
Marcelo Costa wrote:
To decrease shared_buffers you need to restart your PostgreSQL server.
Please run df -h and send the result.
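A typical sequence looks like this (the data directory path and the value are placeholders; in 8.1 shared_buffers is counted in 8 kB buffers):

    # in postgresql.conf, lower the setting, e.g.:
    #   shared_buffers = 16384    # 16384 x 8 kB = 128 MB
    # then restart the server:
    pg_ctl restart -D /var/lib/postgresql/8.1/main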
2006/12/15, Thomas Markus <t.markus@xxxxxxxxxxxxx>:
Hi,
Free disk space is 34 GB (underlying XFS); the complete db dump is 9 GB.
free -tm says 6 GB free RAM and 6 GB unused swap space.
Can I decrease shared_buffers without a pg restart?
thx
Thomas
Shoaib Mir wrote:
> Looks like with 1.8 GB in use there is not much left for the dump to get
> the required chunk of memory. Not sure if it will help, but try
> increasing the swap space...
>
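Adding a swap file is usually enough to test that theory (path and size are placeholders):

    # create and enable a 4 GB swap file
    dd if=/dev/zero of=/swapfile bs=1M count=4096
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile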
> -------------
> Shoaib Mir
> EnterpriseDB (www.enterprisedb.com)
>
> On 12/15/06, Thomas Markus <t.markus@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> For the logfile content, see http://www.rafb.net/paste/results/cvD7uk33.html
> - cat /proc/sys/kernel/shmmax is 2013265920
> - ulimit is unlimited
> - kernel is 2.6.15-1-em64t-p4-smp, pg version is 8.1.0 (32 bit)
> - the postmaster process is currently using 1.8 GB of RAM
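If SHMMAX turns out to be the limit, it can be raised on the fly and persisted (the value is just an example):

    # raise the kernel shared memory ceiling to ~4 GB
    sysctl -w kernel.shmmax=4294967295
    # persist the setting across reboots
    echo "kernel.shmmax = 4294967295" >> /etc/sysctl.conf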
>
> thx
> Thomas
>
>
> Shoaib Mir wrote:
> > Can you please show the db server logs and syslog from the time
> > when it goes out of memory...
> >
> > Also, how much available RAM do you have, and what is SHMMAX set to?
> >
> > ------------
> > Shoaib Mir
> > EnterpriseDB (www.enterprisedb.com)
> >
> > On 12/15/06, Thomas Markus <t.markus@xxxxxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > I'm running pg 8.1.0 on a Debian Linux (64 bit) box (dual Xeon, 8 GB RAM).
> > pg_dump raises an error when exporting a large table with blobs
> > (the largest blob is 180 MB).
> >
> > The error is:
> > pg_dump: ERROR: out of memory
> > DETAIL: Failed on request of size 1073741823.
> > pg_dump: SQL command to dump the contents of table "downloads" failed:
> > PQendcopy() failed.
> > pg_dump: Error message from server: ERROR: out of memory
> > DETAIL: Failed on request of size 1073741823.
> > pg_dump: The command was: COPY public.downloads ... TO stdout;
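The failed request of size 1073741823 is about 1 GB, far larger than the 180 MB blob, so checking the actual value sizes can help narrow this down (the column name bytea_col and the database name mydb are placeholders):

    # report the largest value in the blob column, in bytes
    psql -c "SELECT max(length(bytea_col)) FROM public.downloads;" mydb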
> >
> > If I try pg_dump with -d, the dump runs with all formats (c, t, p),
> > but I can't restore (out of memory error, or corrupt tar header at ...).
> >
> > How can I back up (and restore) such a db?
> >
> > kr
> > Thomas
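One workaround is to split the dump so the problem table is handled on its own; a sketch, again assuming the database is named mydb:

    # schema only
    pg_dump -Fc -s -f schema.dump mydb
    # data for the large table alone, as a binary archive
    pg_dump -Fc -a -t downloads -f downloads.dump mydb

    # restore: schema first, then the table data
    pg_restore -d mydb schema.dump
    pg_restore -d mydb downloads.dump

The remaining tables can be dumped the same way with further per-table runs.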
> >
>
>
---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
--
Marcelo Costa