Bryan Field-Elliot <bryan_lists@xxxxxxxxxxx> writes:

> We have a huge database which must be backed up every day with pg_dump.
> The problem is, it takes around half an hour to produce the dump file, and
> all other processes on the same box are starved for cycles (presumably due
> to I/O) during the dump. It's not just an inconvenience, it's now evolved
> into a serious problem that needs to be addressed.

You should probably use 'top' and 'vmstat' or 'iostat' to make sure the
problem is what you think it is. Guessing is usually a bad idea. :) That
said, I/O starvation is the most likely candidate.

> Is there any mechanism for running pg_dump with a lower priority? I don't
> mind if the backup takes two hours instead of half an hour, as long as
> other processes were getting their fair share of cycles.

Unfortunately, changing the CPU priority with 'nice' doesn't generally
affect I/O bandwidth (since an I/O-bound process doesn't use much CPU).
I think there has been some work on I/O priorities in the Linux kernel,
but I'm not sure where that stands.

Are you putting the dump file on the same device the database lives on?
If so, moving it to a different device/controller would take some of the
write load off your database disk.

You could also send the dump file over the network to another machine
rather than saving it locally, which would do the above and also
(probably) slow down the whole dump process, depending on the relative
speeds of your disk and your network.

-Doug
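
For concreteness, here is roughly what the suggestions above look like on
the command line. These are sketches only; the database name 'mydb', the
host name 'backuphost', and the paths are placeholders, not anything from
the original post:

    # watch I/O while the dump runs, to confirm the disk really is the bottleneck
    vmstat 5
    iostat -x 5

    # write the dump to a different disk/controller than the database uses
    pg_dump mydb > /mnt/backupdisk/mydb.dump

    # or pipe it to another machine, compressing as it goes
    pg_dump mydb | gzip | ssh backuphost 'cat > /backups/mydb.dump.gz'

The ssh variant keeps the dump's writes off the local disks entirely, and a
slower network link will naturally pace the dump, which is the
slower-but-friendlier trade-off described above.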