In addition to clarifying exactly what you mean by "a long time" for pg_dump to dump the 77K rows of your table:
What is your O/S and how much memory is on your system? How many CPUs are in your system?
Also, what is your hard disk configuration?

On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 10/14/2015 06:39 PM, anj patnaik wrote:
Hello,
I recently downloaded postgres 9.4 and I have a client application that
runs in Tcl that inserts to the db and fetches records.
For the majority of the time, the app will connect to the server to do
insert/fetch.
For occasional use, we want to remove the requirement to have a server
db and just have the application retrieve data from a local file.
I know I can use pg_dump to export the tables. The questions are:
1) is there an in-memory db instance or file based I can create that is
loaded with the dump file? This way the app code doesn't have to change.
No.
2) does pg support embedded db?
No.
3) Or is my best option to convert the dump to SQLite, then import it and have the app read that embedded db?
SQLite tends to follow Postgres conventions, so you might be able to use the pg_dump output directly if you use --inserts or --column-inserts:
http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
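For example, a minimal sketch along those lines (the table and database names are placeholders; adding --data-only is an extra assumption here, since SQLite will likely reject the Postgres-specific SET statements and DDL at the top of a full dump, so you'd create the table in SQLite yourself first):

    # Dump one table's rows as portable INSERT statements (names are placeholders)
    pg_dump --data-only --column-inserts -t mytable mydb > mytable.sql
    # Load them into a local SQLite file; the table must already exist there
    sqlite3 local.db < mytable.sql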
Finally, I am noticing pg_dump takes a lot of time to create a dump of my table. Right now, the table has 77K rows. Are there any ways to create automated batch files to create dumps overnight, and do so quickly?
Define "long time".
What is the pg_dump command you are using?
Sure, use a cron job.
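For example, a minimal crontab sketch (the schedule, database name, and backup path are assumptions):

    # Nightly dump at 2:00 AM; note that % must be escaped as \% inside crontab
    0 2 * * * pg_dump mydb > /var/backups/mydb_$(date +\%Y\%m\%d).sql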
Thanks for your inputs!
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.