On Sat, 2009-02-28 at 21:46 -0800, bruce wrote:
> Hi.
>
> Got a bit of a question/issue that I'm trying to resolve. I'm asking this
> of a few groups, so bear with me.
>
> I'm considering a situation where I have multiple processes running, and
> each process is going to access a number of files in a dir. Each process
> accesses a unique group of files, and then writes the group of files to
> another dir. I can easily handle this by using a form of locking, where I
> have the processes lock/read a file and only access the group of files in
> the dir based on the open/free status of the lockfile.
>
> However, the issue with this approach is that it's somewhat synchronous.
> I'm looking for something more asynchronous/parallel, in that I'd like to
> have multiple processes each access a unique group of files from the
> given dir as fast as possible.
>
> So... any thoughts/pointers/comments would be greatly appreciated. Any
> pointers to academic research, etc. would be useful.
>
> thanks

You could do it in one of several ways:

1. Have the files actually written to a subversion/git repository, and let
that handle the differences.
2. Store the files in a database as blobs.
3. Do something clever with filename suffixes to indicate which process
owns a file (see the sketch below).
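On the third idea, one variant is to have each process "claim" a file by
rename()-ing it out of the shared dir before touching it. rename() within a
single filesystem is atomic, so two processes can never claim the same file,
and nothing blocks on a central lockfile. A rough sketch follows; the paths,
the per-process work dir, and the one-file-at-a-time loop are just
placeholders, and how you batch files into groups is up to you:

<?php
// Rough sketch of option 3 (all paths here are made-up placeholders):
// each worker claims a file by atomically rename()-ing it into its own
// work dir. rename() within one filesystem is atomic, so two workers
// can never claim the same file, and no global lockfile serializes them.

$inDir   = '/path/to/incoming';             // shared dir the files land in
$workDir = '/path/to/work/' . getmypid();   // private staging area per process
$outDir  = '/path/to/outgoing';             // where finished files go

if (!is_dir($workDir) && !mkdir($workDir, 0777, true)) {
    die("could not create $workDir\n");
}

foreach (glob($inDir . '/*') as $file) {
    $claimed = $workDir . '/' . basename($file);

    // If another worker already grabbed this file, rename() fails;
    // skip it and try the next candidate instead of blocking.
    if (!@rename($file, $claimed)) {
        continue;
    }

    // ... do whatever processing you need on $claimed ...

    rename($claimed, $outDir . '/' . basename($file));
}
?>

Every worker runs the same loop; a worker that loses the race for a given
file just moves on to the next one, so they all stay fully parallel.

Ash
www.ashleysheridan.co.uk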