John, all,

* John R Pierce (pierce@xxxxxxxxxxxx) wrote:
> On 12/5/2017 2:09 PM, Martin Mueller wrote:
> > Time is not really a problem for me, if we talk about hours rather
> > than days. On a roughly comparable machine I’ve made backups of
> > databases less than 10 GB, and it was a matter of minutes. But I
> > know that there are scale problems. Sometimes programs just hang
> > if the data are beyond some size. Is that likely in Postgres if
> > you go from ~10 GB to ~100 GB? There isn’t any interdependence
> > among my tables beyond queries I construct on the fly, because I
> > use the database in a single-user environment.
>
> Another factor is restore time. Restores have to create indexes, and
> creating indexes on multi-million-row tables can take a while. (Hint:
> be sure to set maintenance_work_mem to 1GB before doing this!)

I'm sure you're aware of this, John, but for others following along,
just to be clear: indexes have to be recreated when restoring from a
*logical* (e.g., pg_dump-based) backup. Indexes don't have to be
recreated for *physical* (e.g., file-based) backups.

Neither pg_dump nor the various physical-backup utilities should hang
or have issues with larger data sets.

Thanks!

Stephen
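
P.S. To make John's maintenance_work_mem hint concrete, here is one way
to apply it (a minimal sketch; the 1GB figure is John's suggestion, so
tune it to your available RAM, and the table, index, column, and
database names below are placeholders):

    -- Per session, before building a large index by hand:
    SET maintenance_work_mem = '1GB';
    CREATE INDEX idx_big ON big_table (some_col);

For an entire pg_restore run, the same setting can be pushed through
the environment:

    PGOPTIONS='-c maintenance_work_mem=1GB' pg_restore -d mydb mydb.dump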
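
P.P.S. And to make the logical-vs-physical distinction concrete, the
two kinds of backup look roughly like this ('mydb', the dump file, and
the backup path are all placeholders):

    # Logical: a SQL-level dump; indexes are rebuilt during restore
    pg_dump -Fc -f mydb.dump mydb
    pg_restore -j 4 -d mydb mydb.dump

    # Physical: a file-level copy of the running cluster; existing
    # index files are copied as-is, so there is nothing to rebuild
    pg_basebackup -D /backups/base -Ft -z -P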