With design requirements like these, it is worth considering hardware failure and recovery as well. Even SSDs can and do fail; it is not just a matter of speed. A RAID array of some kind, depending on the budget, is worth the effort.

-----Original Message-----
From: Alvaro Herrera <alvherre@xxxxxxxxxxxxxxx>
Sent: 2019 July 26 22:39
To: Arya F <arya6000@xxxxxxxxx>
Cc: Tom Lane <tgl@xxxxxxxxxxxxx>; Ron <ronljohnsonjr@xxxxxxxxx>; pgsql-general@xxxxxxxxxxxxxxxxxxxx
Subject: Re: Hardware for writing/updating 12,000,000 rows per hour

On 2019-Jul-26, Arya F wrote:

> I think I can modify my application to do a batch update. Right now
> the server has an HDD and it really can't handle a lot of updates and
> inserts per second. Would changing to a regular SSD be able to easily
> do 3000 updates per second?

That's a pretty hard question in isolation -- you need to consider how
many indexes there are to update, whether the updated columns are
indexed or not, what the datatypes are, how much locality of access
you'll have ... I'm probably missing some other important factors.
(Of course, you'll have to tune various PG server settings to find
your sweet spot.)

I suggest that you should be measuring instead of trying to guess. A
reasonably cheap way is to rent a machine somewhere with the type of
hardware you think you'll need, and run your workload there for long
enough, making sure to carefully observe important metrics such as
table size, accumulated bloat, checkpoint regime, overall I/O
activity, and so on.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
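
To make the batch-update idea concrete, here is a minimal sketch of
collapsing many per-row statements into one multi-row UPDATE. The
"addresses" table and its columns are hypothetical, purely for
illustration:

    -- Hypothetical table: addresses(id int primary key, street text).
    -- One round trip carrying the whole batch, instead of thousands
    -- of individual "UPDATE ... WHERE id = ..." statements.
    UPDATE addresses AS a
       SET street = v.street
      FROM (VALUES
              (1, '12 Oak St'),
              (2, '99 Elm St'),
              (3, '7 Pine Ave')
           ) AS v(id, street)
     WHERE a.id = v.id;

Besides saving round trips, a batch like this commits as a single
transaction, so you pay one WAL flush for the whole batch rather than
one per row -- which is exactly where an HDD falls over.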
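
For the "measure instead of guess" part, a custom pgbench script is a
reasonably cheap way to approximate the update workload on candidate
hardware. This sketch assumes the same hypothetical addresses table,
already populated and indexed on id:

    -- update-bench.sql: one random single-row update per transaction.
    \set id random(1, 1000000)
    UPDATE addresses SET street = 'x' || :id WHERE id = :id;

Run it with something like

    pgbench -n -f update-bench.sql -c 8 -j 4 -T 300 mydb

(-n skips the vacuum of pgbench's own tables, which don't exist here)
and see whether the reported tps stays above the ~3,300/sec that
12,000,000 rows per hour works out to, while keeping an eye on table
bloat and checkpoint activity as Alvaro suggests.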