On Wed, 2024-06-05 at 08:22 -0400, Ron Johnson wrote:
> On Wed, Jun 5, 2024 at 5:52 AM Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
> > On Tue, 2024-06-04 at 11:32 -0400, Ron Johnson wrote:
> > > > I would not set "fsync" to off.  It won't make a measurable difference for
> > > > pg_restore, and it may break your database.  The same applies to "full_page_writes".
> >
> > The *cluster* would be corrupted.
> > You'd have to start with a new "initdb".
>
> This is a brand new instance with zero databases.
>
> If the pg_restore which creates that one database fails for some reason, I just
> restart the pg_restore.  In the unlikely event that the VM crashes or becomes corrupt,
> doing an initdb and creating all the users again adds 10 minutes to the 7 hour pg_restore.
> That's "line noise" compared to the variability in performance of the VM, SAN and network.

Then go ahead and do it.  But I tell you that you won't see a noteworthy
performance gain with a large "pg_restore".

> > > > What might make a difference is if you use the --single-transaction option of
> > > > pg_restore.
> > >
> > > A single 4+TB transaction??
> >
> > Yes, sure.  Why not?
>
> "Giant transactions bad, small transactions good", right?  It's been drilled
> into me for 35 years.

It is more "long transactions bad, short transactions good" in PostgreSQL,
but that amounts to the same thing in your case.

There are certainly negative effects of a large transaction, but I thought
you wanted to optimize the performance of a "pg_restore".  If you optimize
one thing, you will certainly pessimize some other things.  In the case at
hand, you shouldn't run a heavy data-modifying workload in the same database
concurrently with the large pg_restore.

The mere size of a transaction can be a problem as such in other databases
like Oracle, but not in PostgreSQL.

Yours,
Laurenz Albe
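
[Editor's sketch, not part of the original mail: the option under discussion
would be used roughly like this, assuming a custom-format dump file
"mydb.dump" and an existing empty database "mydb" (both names are
placeholders).]

```shell
# Restore the whole dump inside one transaction: on any error everything
# rolls back, so a failed run leaves the target database untouched and
# can simply be restarted.
pg_restore --single-transaction -d mydb mydb.dump
```

Note that --single-transaction cannot be combined with a parallel restore
(-j/--jobs greater than 1), so choosing it trades away pg_restore's
parallelism for the all-or-nothing rollback behavior.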