On Mon, Jun 24, 2013 at 06:03:40PM +0000, Stuart Ford wrote:
> On 24/06/2013 17:18, "Bruce Momjian" <bruce@xxxxxxxxxx> wrote:
>
> >On Mon, Jun 24, 2013 at 03:25:44PM +0000, Stuart Ford wrote:
> >> On 24/06/2013 14:00, "Bruce Momjian" <bruce@xxxxxxxxxx> wrote:
> >>
> >> >Looking further, here is the command that is executed:
> >> >
> >> >    SELECT pg_catalog.lo_create(t.loid)
> >> >    FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS t;
> >> >
> >> >If you have created _new_ large objects since the upgrade, the script
> >> >might throw an error, as there is already metadata for those large
> >> >objects.  You might need to delete the rows in pg_largeobject_metadata
> >> >before running the script; this will reset all the large object
> >> >permissions to default.
> >>
> >> There doesn't appear to be, if this command, which returns 0, is correct:
> >>
> >>    select count(*) from pg_catalog.pg_largeobject_metadata;
> >>
> >> So it's OK to go ahead and run at any time?
> >
> >Yep.  If it fails for some reason, just delete the contents of
> >pg_largeobject_metadata and run it again.
>
> Do you know if not running this script would explain the fact that our
> dump file sizes have been much smaller than expected?

It might be possible, if the lack of pg_largeobject_metadata rows causes
your large objects not to be dumped; I have not tested this.

--
  Bruce Momjian  <bruce@xxxxxxxxxx>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
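
[Editor's sketch of the recovery sequence discussed above. This only restates
the statements quoted in the thread and the "delete the contents of
pg_largeobject_metadata" advice as literal SQL; it has not been tested against
any particular PostgreSQL release, so treat it as an assumption, not a
verified procedure.]

    -- Check whether any large object metadata already exists;
    -- the thread expects 0 if the script has never been run.
    SELECT count(*) FROM pg_catalog.pg_largeobject_metadata;

    -- If the script fails because metadata rows already exist, clear them
    -- first (this resets all large object permissions to default) ...
    DELETE FROM pg_catalog.pg_largeobject_metadata;

    -- ... then re-run the statement the script executes:
    SELECT pg_catalog.lo_create(t.loid)
    FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS t;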