As with base backups, the easiest way to produce a standalone hot backup is to use the pg_basebackup tool. If you include the -X
parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup.
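As a sketch, a standalone hot backup with the WAL bundled in might look like this (the target directory `/srv/backup` is a hypothetical example):

```shell
# Take a base backup into /srv/backup, including the required WAL (-X fetch),
# as a compressed tar (-F tar -z), with progress reporting (-P).
pg_basebackup -D /srv/backup -X fetch -F tar -z -P
```

The resulting archive can be unpacked and started directly, with no separate WAL archive needed for recovery.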
Well, I think my earlier question strayed from what I actually intended, because of my poor understanding and phrasing :(
Actually, I have 1 TB of data and hardware capable of handling that volume, but the analysis requires too many join operations and is running too slowly right now.
I've searched and found that a graph model fits network data, such as social data, well in terms of query performance.
If your data is hierarchical, then storing it in a network database is perfectly reasonable. I'm not sure, though, that there are many network databases for Linux. Raima is the only one I can think of.
Should I change my DB (I mean the DB I use for analysis), or do I need some other solution or extension?
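One option worth knowing before switching databases: traversals that would otherwise need one self-join per hop can be written as a single recursive CTE, which PostgreSQL supports via `WITH RECURSIVE`. A minimal sketch, using SQLite in-memory for illustration and a hypothetical `edges` table:

```python
import sqlite3

# Hypothetical edge list representing a small network (e.g. social links).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4), (5, 6)])

# Find every node reachable from node 1 without writing one JOIN per hop;
# the same WITH RECURSIVE syntax works in PostgreSQL.
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edges e JOIN reachable r ON e.src = r.node
    )
    SELECT node FROM reachable ORDER BY node
""").fetchall()
print([r[0] for r in rows])  # → [1, 2, 3, 4]
```

Whether this is fast enough at 1 TB depends heavily on indexing and the depth of the traversals; it is an alternative to explore, not a guarantee.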
Thanks