Re: question

On 10/15/2015 01:35 PM, anj patnaik wrote:
Hello all,
I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy
less disk space?

Yes, do not double compress. -Fc already compresses the file.

This information and a lot more can be found here:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
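
For example, a single-compression run might look like this (a sketch; -Fc
compresses by default, and -Z adjusts the compression level):

    ./pg_dump -t RECORDER -Fc postgres > /tmp/dump         # already compressed, no gzip
    ./pg_dump -t RECORDER -Fc -Z 9 postgres > /tmp/dump    # smallest output, more CPU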


Below is memory info:

[root@onxl5179 tmp]# cat /proc/meminfo
MemTotal:       16333720 kB
MemFree:          187736 kB
Buffers:           79696 kB
Cached:         11176616 kB
SwapCached:         2024 kB
Active:         11028784 kB
Inactive:        4561616 kB
Active(anon):    3839656 kB
Inactive(anon):   642416 kB
Active(file):    7189128 kB
Inactive(file):  3919200 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      33456120 kB
SwapFree:       33428960 kB
Dirty:             33892 kB
Writeback:             0 kB
AnonPages:       4332408 kB
Mapped:           201388 kB
Shmem:            147980 kB
Slab:             365380 kB
SReclaimable:     296732 kB
SUnreclaim:        68648 kB
KernelStack:        5888 kB
PageTables:        37720 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    41622980 kB
Committed_AS:    7148392 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      179848 kB
VmallocChunk:   34359548476 kB
HardwareCorrupted:     0 kB
AnonHugePages:   3950592 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10240 kB
DirectMap2M:    16766976 kB


# CPUs=8
RHEL 6.5

The PG shared memory settings are the defaults, as I've not touched the .conf
file. I am not a DBA, just a test-tools developer who needs to back up the
table efficiently. I am fairly new to PG and not an expert at Linux.

Also, if there are backup scripts or cron setups that you recommend,
please point me to them.

Thanks!!

On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead <scottm@xxxxxxxxxxx> wrote:


    On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guillaume@xxxxxxxxxxxx> wrote:

        2015-10-15 20:40 GMT+02:00 anj patnaik <patna73@xxxxxxxxx>:

            It's a Linux machine with 8 CPUs. I don't have the other
            details.

            I get "archive member too large for tar format."

            Is there a recommended command or set of options for very
            large tables? This one has 150K rows, and roughly half of
            the rows hold ~22 MB of data each.


        Don't use the tar format? I never understood the interest of
        that one. You'd be better off using the custom format.


    + 1

      Use -F c
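
    A custom-format dump is later restored with pg_restore rather than
    psql. A sketch, using the table name from this thread
    (/tmp/recorder.dump is an arbitrary file name):

        ./pg_dump -t RECORDER -Fc postgres > /tmp/recorder.dump
        # restore just that table; add --clean if it already exists
        ./pg_restore -d postgres -t RECORDER /tmp/recorder.dump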


    --
    Scott Mead
    Sr. Architect
    /OpenSCG/
    PostgreSQL, Java & Linux Experts


    http://openscg.com


            -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  >
            /tmp/dump
            pg_dump: [archiver (db)] connection to database "postgres"
            failed: fe_sendauth: no password supplied
            -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   >
            /tmp/dump
            Password:
            pg_dump: [tar archiver] archive member too large for tar format
            -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date
            +\%Y\%m\%d\%H`.gz
            -bash: pg_dumpall: command not found
            -bash: tmpdb.out-2015101510.gz: Permission denied
            -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date
            +\%Y\%m\%d\%H`.gz
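
            The fe_sendauth failure above comes from -w, which forbids
            the password prompt; a ~/.pgpass file lets non-interactive
            runs authenticate. A minimal sketch ('secret' is a
            placeholder password):

                # each line is hostname:port:database:username:password
                echo 'localhost:5432:postgres:postgres:secret' >> ~/.pgpass
                chmod 600 ~/.pgpass   # libpq ignores the file if it is group/world readable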


            Thank you so much for replying and accepting my post to this NG.

            On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6925@xxxxxxxxx> wrote:

                In addition to clarifying exactly what you mean by "a long
                time" to pg_dump 77K rows of your table:

                What is your O/S and how much memory is on your system?
                How many CPUs are in your system?
                Also, what is your hard disk configuration?
                What other applications are running simultaneously with
                pg_dump?
                What is the value of shared_buffers &
                maintenance_work_mem in postgresql.conf?

                On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:

                    On 10/14/2015 06:39 PM, anj patnaik wrote:

                        Hello,

                        I recently downloaded postgres 9.4 and I have a
                        client application that
                        runs in Tcl that inserts to the db and fetches
                        records.

                        For the majority of the time, the app will
                        connect to the server to do
                        insert/fetch.

                        For occasional use, we want to remove the
                        requirement to have a server
                        db and just have the application retrieve data
                        from a local file.

                        I know I can use pg_dump to export the tables.
                        The questions are:

                        1) is there an in-memory or file-based db
                        instance I can create that is loaded with the
                        dump file? That way the app code doesn't have
                        to change.


                    No.


                        2) does pg support embedded db?


                    No.

                        3) Or is my best option to convert the dump to
                        SQLite, then import the SQLite file and have
                        the app read that embedded db?


                    Sqlite tends to follow Postgres conventions, so you
                    might be able to use the pg_dump output directly if
                    you use --inserts or --column-inserts:

                    http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
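
                    For example (a sketch; local.db is an arbitrary
                    SQLite file name, and the Postgres-specific SET
                    statements at the top of the dump will likely need
                    trimming before SQLite accepts the rest):

                        ./pg_dump -t RECORDER --inserts postgres > /tmp/recorder.sql
                        sqlite3 local.db < /tmp/recorder.sql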


                        Finally, I am noticing pg_dump takes a lot of
                        time to create a dump of my table. Right now,
                        the table has 77K rows. Are there any ways to
                        create automated batch files to create dumps
                        overnight, and to do so quickly?


                    Define "a long time".

                    What is the pg_dump command you are using?

                    Sure, use a cron job.
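
                    A minimal crontab sketch for a nightly dump, assuming
                    pg_dump is on the crontab owner's PATH and a ~/.pgpass
                    entry handles authentication (cron requires % to be
                    escaped):

                        # custom-format dump of one table, every night at 02:00
                        0 2 * * * pg_dump -t RECORDER -Fc postgres > /tmp/recorder-$(date +\%Y\%m\%d).dump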


                        Thanks for your inputs!



                    --
                    Adrian Klaver
                    adrian.klaver@xxxxxxxxxxx






                --
                *Melvin Davidson*
                I reserve the right to fantasize.  Whether or not you
                wish to share my fantasy is entirely up to you.





        --
        Guillaume.
        http://blog.guillaume.lelarge.info
        http://www.dalibo.com





--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx




