
Do you have any best practices for managing large files' metadata with Postgres?

Greetings,
I have some large files (a few gigabytes or more) that I need to keep on disk, and I'm using Postgres to manage metadata about these files.

The Postgres wiki covers the basics of the topic here: http://wiki.postgresql.org/wiki/BinaryFilesInDB#Storing_Meta_data_and_symbolic_link_in_the_database_to_where_a_Binary_file_is_located

My concern is making sure that the metadata and the file stay in sync. As far as I can see, introducing transactional support for combined file + DB operations requires another software layer: a transaction manager that I can (hopefully) use to keep metadata and file updates consistent.
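To make the concern concrete, here is a minimal application-level sketch of the usual compensation pattern: write the file first, then commit the metadata, and delete the file if the metadata commit fails, so the two never diverge. The `metadata_store` dict and `store_file` helper are hypothetical stand-ins for the real Postgres table and application code, not anything Postgres provides.

```python
import os
import tempfile

metadata_store = {}  # stand-in for a Postgres metadata table

def store_file(dirpath, name, data, commit_metadata):
    """Write the file first, then commit metadata; if the metadata
    commit fails, remove the file as a compensating action."""
    path = os.path.join(dirpath, name)
    # Write to a temp file and rename, so readers never see a partial file.
    fd, tmp = tempfile.mkstemp(dir=dirpath)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, path)  # atomic within one filesystem on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
    try:
        # In the real setup this would be an INSERT/UPDATE + COMMIT.
        commit_metadata(name, path, len(data))
    except BaseException:
        os.unlink(path)  # compensate: no metadata row, no file
        raise
    return path
```

The weak spot of this pattern is the window after the metadata commit fails but before the compensating delete runs (e.g. a crash), which is exactly why a separate transaction manager, or a recovery sweep, comes up.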

I've been thinking about moving the file-modification code into Postgres functions and wrapping the whole mechanism in Postgres transactions. It's just a high-level idea, but I wanted to ask for your input before looking into it. Is it too stupid to attempt to use Postgres functions as a transaction manager for file operations and DB updates?

I'm going to make calls to the DB for metadata updates anyway, and my updates to the large files are small and incremental in nature. So I thought: why not get rid of a higher-level transaction layer and do it all from a function in Postgres?
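Since the updates are small and incremental, one alternative to doing the file I/O inside Postgres is a write-ahead "intent" journal: record the intended change in the same transaction as the metadata, apply it to the file, then mark it done; a recovery sweep replays anything still pending after a crash. The sketch below uses a plain dict as the journal; in the real setup `journal` would be a Postgres table, and `begin_update`, `apply_patch`, and `recover` are hypothetical names.

```python
import os

journal = {}   # stand-in for a Postgres journal table
next_id = 0

def begin_update(path, offset, data):
    """Record the intended file change before touching the file
    (in Postgres: same transaction as the metadata update)."""
    global next_id
    next_id += 1
    journal[next_id] = {"path": path, "offset": offset,
                        "data": data, "state": "pending"}
    return next_id

def apply_patch(entry_id):
    """Apply the recorded change; writing at a fixed offset is
    idempotent, so replaying after a crash is safe."""
    e = journal[entry_id]
    with open(e["path"], "r+b") as f:
        f.seek(e["offset"])
        f.write(e["data"])
        f.flush()
        os.fsync(f.fileno())
    e["state"] = "done"

def recover():
    """Replay every change the journal still marks as pending."""
    for entry_id, e in journal.items():
        if e["state"] == "pending":
            apply_patch(entry_id)
```

One caveat worth flagging for the Postgres-function idea: filesystem writes made from an untrusted procedural language run inside the DB transaction but are not rolled back with it, so a journal like this (or an equivalent cleanup step) is still needed even if the code lives in a Postgres function.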

Other than my probably stupid idea, any advice on managing this setup would be much appreciated.

Seref

