Thank you all - Karsten, Benjamin, Pavel, and the PostgreSQL team.
I've discussed all your input with our developers and they came up with a solution to this problem, which has already been agreed (at a high level) by our auditor. I am adding it here so it can inspire others who may find themselves in the same situation.

Process For Managing Secure Data With PostgreSQL

This sets out the process we have developed for managing secure data with PostgreSQL. Firstly, for any technique to work we assume that you are using a filesystem and media compliant with NIST 800-88, in the sense that:
So the problem to be solved is that data which is no longer used must be securely erased, even if it was stored encrypted. Imagine a transaction log used for settling transactions with batch entities overnight (standard UK processing); once you have finished with those card numbers, which are held encrypted, they must be securely erased from the system. Another use case is the expiry of keys no longer used by an application. Here we don't want to destroy the entire table or database, but only a partition of the data. We propose that data is stored in two ways:
For scenario 1, where the data is finished with, the whole table is sent to the "Secure Delete" process. For scenario 2, where only part of the data is finished with, the remaining rows are copied to a new instance of the table. Imagine a view sitting over tables active and inactive (so, for example, a key_store view sits over key_store_active and key_store_inactive): you send inactive to the "Secure Delete" process, then recreate inactive by select * into inactive from active, then swap active and inactive, and finally "Secure Delete" the new inactive table. The secure dropping of a table would operate as follows:
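For illustration, the SQL side of that active/inactive rotation could be sketched roughly as below. The table and view names follow the key_store example; the exact statements are my assumption, not our verbatim implementation. Note the view has to be rebuilt, because PostgreSQL views bind to table OIDs rather than names:

```sql
-- SQL side of the cutover; the OS-level shredding happens outside these
-- statements, in the "Secure Delete" process.
BEGIN;
DROP VIEW key_store;            -- rebuilt below; views bind to table OIDs
DROP TABLE key_store_inactive;  -- its files are shredded by "Secure Delete"
-- Recreate inactive from the rows that are still live:
CREATE TABLE key_store_inactive AS SELECT * FROM key_store_active;
-- Swap the two tables:
ALTER TABLE key_store_active   RENAME TO key_store_swap;
ALTER TABLE key_store_inactive RENAME TO key_store_active;
ALTER TABLE key_store_swap     RENAME TO key_store_inactive;
CREATE VIEW key_store AS
    SELECT * FROM key_store_active
    UNION ALL
    SELECT * FROM key_store_inactive;
COMMIT;
-- The new key_store_inactive (the old active table, whose pages may still
-- contain dead rows) is then handed to "Secure Delete" in its turn.
```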
A separate process, with permissions to access the underlying data files (probably running as the postgres user), then does the following:
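As a sketch of what that postgres-user process might do (the ordering and the use of shred(1) are my reading of the approach, not a confirmed implementation), the files backing a relation can be located from SQL before the OS-level overwrite:

```sql
-- Locate the file backing the table to be destroyed:
SELECT current_setting('data_directory')
       || '/' || pg_relation_filepath('key_store_inactive');
-- Note: relations over 1 GB are split into segments (<path>, <path>.1, ...)
-- and have companion forks (<path>_fsm, <path>_vm), which would all need
-- the same treatment.
-- The postgres-user process would then overwrite those files in place
-- (e.g. with shred(1), effective only on media/filesystems meeting the
-- NIST 800-88 assumption above) before the table is finally dropped:
DROP TABLE key_store_inactive;
```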
In this way we have enabled:
The limitation is that data isn't securely erased until the above process has run. Where expiry is row based rather than table based, your exposure is therefore limited by how often the cutover process is run.

Disclaimers:
- All credit to our principal architect (CD), who put this together; I am just the messenger here, as he prefers to stay in the background.
- Feel free to comment, but implementation on our side has already commenced.

Kind Regards,
Jan
CTO - EFTlab

On 2019-06-06 18:14:39+10:00 karsten.hilbert@xxxxxxx wrote: