The PostgreSQL default configuration is very conservative, so as to ensure it will work on almost any system.
However, based on your latest information, you should definitely adjust:
shared_buffers = 4GB
maintenance_work_mem = 512MB
Note that you will need to restart PostgreSQL for this to take effect.
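If it helps, here is a minimal sketch of applying those two settings and restarting. The data directory path and service name below are assumptions for a stock PGDG 9.4 install on RHEL 6 and may differ on your system:

# run as root (or as the postgres user for the conf edits); path is an assumption
echo "shared_buffers = 4GB" >> /var/lib/pgsql/9.4/data/postgresql.conf
echo "maintenance_work_mem = 512MB" >> /var/lib/pgsql/9.4/data/postgresql.conf
# shared_buffers cannot be changed without a restart
service postgresql-9.4 restart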
On Thu, Oct 15, 2015 at 4:35 PM, anj patnaik <patna73@xxxxxxxxx> wrote:
Hello all,

I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy less disk space?

Below is the memory info:

[root@onxl5179 tmp]# cat /proc/meminfo
MemTotal: 16333720 kB
MemFree: 187736 kB
Buffers: 79696 kB
Cached: 11176616 kB
SwapCached: 2024 kB
Active: 11028784 kB
Inactive: 4561616 kB
Active(anon): 3839656 kB
Inactive(anon): 642416 kB
Active(file): 7189128 kB
Inactive(file): 3919200 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 33456120 kB
SwapFree: 33428960 kB
Dirty: 33892 kB
Writeback: 0 kB
AnonPages: 4332408 kB
Mapped: 201388 kB
Shmem: 147980 kB
Slab: 365380 kB
SReclaimable: 296732 kB
SUnreclaim: 68648 kB
KernelStack: 5888 kB
PageTables: 37720 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 41622980 kB
Committed_AS: 7148392 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 179848 kB
VmallocChunk: 34359548476 kB
HardwareCorrupted: 0 kB
AnonHugePages: 3950592 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10240 kB
DirectMap2M: 16766976 kB

# CPUs=8
RHEL 6.5

The PG shared memory settings are the defaults, as I've not touched the .conf file. I am not a DBA, just a test tools developer who needs to back up the table efficiently. I am fairly new to PG and not an expert at Linux.

Also, if there are any backup scripts or cron jobs that you recommend, please point me to them.

Thanks!!

On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead <scottm@xxxxxxxxxxx> wrote:

On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guillaume@xxxxxxxxxxxx> wrote:

2015-10-15 20:40 GMT+02:00 anj patnaik <patna73@xxxxxxxxx>:

It's a Linux machine with 8 CPUs. I don't have the other details.

I get "archive member too large for tar format".

Is there a recommended command/options when dealing with very large tables, i.e. 150K rows where roughly half of the rows have ~22MB of data being inserted?

Don't use the tar format? I never understood the interest in that one. You'd be better off using the custom format.

+1

Use -F c
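For what it's worth, a minimal sketch of that (table and database names taken from the command above; the output path is just an example, and this is untested here):

./pg_dump -t RECORDER -F c -Z 9 postgres -f /tmp/recorder.dump
# the custom format is already compressed (assuming a zlib-enabled build), so the extra gzip step can be dropped
# -Z 9 trades CPU time for a smaller file
# restore later with pg_restore, e.g.:
# ./pg_restore -d postgres /tmp/recorder.dump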
--
Scott Mead
Sr. Architect
OpenSCG
PostgreSQL, Java & Linux Experts

-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w > /tmp/dump
pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: no password supplied
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t > /tmp/dump
Password:
pg_dump: [tar archiver] archive member too large for tar format
-bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
-bash: pg_dumpall: command not found
-bash: tmpdb.out-2015101510.gz: Permission denied
-bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz

Thank you so much for replying and accepting my post to this NG.

On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6925@xxxxxxxxx> wrote:

What are the values of shared_buffers & maintenance_work_mem in postgresql.conf?
What other applications are running simultaneously with pg_dump?
In addition, what exactly do you mean by "a long time" to pg_dump 77K rows of your table?
What is your O/S and how much memory is on your system?
How many CPUs are in your system?
Also, what is your hard disk configuration?

On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:

On 10/14/2015 06:39 PM, anj patnaik wrote:
Hello,
I recently downloaded Postgres 9.4, and I have a client application written in Tcl that inserts into the db and fetches records.
For the majority of the time, the app will connect to the server to do
insert/fetch.
For occasional use, we want to remove the requirement to have a server
db and just have the application retrieve data from a local file.
I know I can use pg_dump to export the tables. The questions are:
1) Is there an in-memory or file-based db instance I can create that is loaded from the dump file? That way the app code doesn't have to change.
No.
2) Does PG support an embedded db?
No.
3) Or is my best option to convert the dump to SQLite, import it, and have the app read that embedded db?
SQLite tends to follow Postgres conventions, so you might be able to use the pg_dump output directly if you use --inserts or --column-inserts:
http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
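A rough sketch of that route (untested; type and schema differences between Postgres and SQLite may still require hand-editing the SQL, and the paths here are just examples):

./pg_dump --data-only --column-inserts -t RECORDER postgres > /tmp/recorder.sql
# create a matching table in SQLite first (or dump the schema separately and adapt it), then:
sqlite3 /tmp/local.db < /tmp/recorder.sql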
Finally, I am noticing pg_dump takes a lot of time to create a dump of
my table. Right now, the table has 77K rows. Are there any ways to
create automated batch files to create dumps overnight, and to do so quickly?
Define long time.
What is the pg_dump command you are using?
Sure, use a cron job.
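For instance, a minimal sketch of a nightly crontab entry for the postgres user (assuming pg_dump is on that user's PATH, a ~/.pgpass file supplies the password, and /var/backups is writable; all of those are assumptions, adjust as needed):

# run at 2:00 am every day; % must be escaped inside crontab
0 2 * * * pg_dump -F c -t RECORDER postgres -f /var/backups/recorder-`date +\%Y\%m\%d`.dump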
Thanks for your inputs!
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.