Hi,
I use PostgreSQL often, but I'm not very familiar with how it works internally.
I've made a small script to back up files from different computers to a
PostgreSQL database.
Sort of a versioned, networked backup system.
It stores the files as large objects (an oid column in a table points to
each large object), which I import using psycopg.
It works, but it's slow.
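Roughly, the import part of the script looks like this (simplified; the
connection string, chunk size and table/column names are just placeholders
for what I actually use):

import psycopg2

# Placeholder connection settings, just to show the idea.
conn = psycopg2.connect("dbname=backup host=server user=backup")

# Stream the file into a new large object in 1 MB chunks.
lobj = conn.lobject(0, "wb")
with open("/path/to/file", "rb") as f:
    while True:
        chunk = f.read(1024 * 1024)
        if not chunk:
            break
        lobj.write(chunk)
oid = lobj.oid
lobj.close()

# Link the large object's oid to the file's metadata.
cur = conn.cursor()
cur.execute("INSERT INTO files (path, data_oid) VALUES (%s, %s)",
            ("/path/to/file", oid))
conn.commit()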
The database (9.2.9) on the server (FreeBSD 10) runs on a ZFS mirror.
If I copy a file to the mirror using scp, I get about 37 MB/s.
My script achieves something like 7 or 8 MB/s on large (100 MB+) files.
I've never used PostgreSQL for something like this. Is there anything I
can do to speed things up?
It's not a huge problem, as only the initial run takes a while
(after that, most files are already in the db).
Still, it would be nice if it were a little faster.
The CPU on the server is mostly idle; the filesystem is at 100%.
This is a separate PostgreSQL server (I've used FreeBSD profiles to run
two PostgreSQL servers), so I can change its setup to work better for
this application.
I've read various suggestions online, but I'm unsure which is best;
they all deal with files of only a few KB, not 100 MB or bigger.
PS: English is not my native language.
thx
Bram