On 3/23/07, Richard Lynch <ceo@xxxxxxxxx> wrote:
On Fri, March 23, 2007 7:52 pm, Yvan Strahm wrote:
> I am confused with the flock function and its usage. I have jobs which
> are stored in a database, and these jobs are run by a series of
> job_runner scripts, but sometimes the job_runners stop (server or PHP
> crash). So I put a job_controller in crontab to check regularly that
> the runners are running. But after a while I have a bunch of
> job_controllers running, so to avoid that I tried to use flock.
>
> I tried putting this in the job_controller:
>
> $wouldblock=1;
> $f=fopen("controller.lock", "r");
> flock($f, LOCK_EX+LOCK_NB, $wouldblock) or die("Error! can't lock!");
>
> hoping that as long as the first job_controller is running, or hasn't
> closed the file handle, a second job_controller won't be able to lock
> the controller.lock file and will die, but it didn't work.
>
> I also tried this:
>
> $wouldblock=1;
> $f=fopen("controller.php", "r");
> flock($f, LOCK_EX+LOCK_NB, $wouldblock) or die("Error! can't lock!");
>
> hoping the first job_controller would lock itself, but it didn't work.
>
> I also thought of writing the PID of the first job_controller into the
> lock file and then comparing it, and dying if it doesn't match, but my
> main concern is: if the server crashes, the "surviving" lock file will
> prevent any job_controller from starting.
>
> So how can I prevent multiple instances of the same script? Is flock
> the best way?

You can do it with flock, but sooner or later you end up with a locked
file left over from an exit() or a killed script, and then you have to
know to remove locks older than X minutes.

You could also just do a mkdir() for your lock, and check its
filemtime(). You could even use touch() within the loop of the script
to make sure the script is still going, and safely assume that any
lock older than X seconds is stale and can be ignored/removed.

A final option is to use exec() to figure out if another process is
running already:

//bail out if it's already running:
$pid = getmypid();
$command = "/bin/ps auxwwww | grep " . __FILE__ . " | grep -v grep ";
exec($command, $existing, $error);
if ($error) die("OS Error: $error\n" . implode("\n", $existing) . "\n");
$other_count = 0;
foreach ($existing as $procline) {
  if (!strstr($procline, " $pid ")) $other_count++;
}
if ($other_count) exit;

This allows you to be sure there is always one, and only one, running
process for this file, with no assumptions about lock files maybe
being stale.

I use different ones at different times, depending on what the process
needs to do, and how critical it is that it runs frequently.

--
Some people have a "gift" link here.
Know what I want?
I want you to buy a CD from some indie artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?
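[Editor's note: a minimal sketch of the mkdir()-based lock described
above might look like the following. The lock directory name, the
300-second staleness window and the run_jobs() placeholder are
assumptions for illustration, not part of the original posts.]

<?php
// Sketch of the mkdir() + filemtime() lock with stale-lock cleanup.
$lockdir = dirname(__FILE__) . '/controller.lock';   // assumed name
$stale   = 300; // seconds without a touch() before the lock is treated as dead

// A leftover lock whose timestamp is too old most likely belongs to a
// crashed controller, so remove it and carry on.
if (is_dir($lockdir) && (time() - filemtime($lockdir)) > $stale) {
    rmdir($lockdir);
}

// mkdir() is atomic: only one process can create the directory, so a
// second controller started by cron simply exits here.
if (!@mkdir($lockdir)) {
    exit("Another job_controller appears to be running.\n");
}

// Inside the main loop, refresh the lock's mtime so other instances can
// see this controller is still alive (touch() on a directory works on
// most Unix filesystems; touching a file inside it is an alternative).
while (run_jobs()) {   // run_jobs() is a placeholder for the real work
    touch($lockdir);
    sleep(10);
}

rmdir($lockdir);   // clean up on a normal exit
?>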
Thanks very much for the code, it works nicely. I just had to adjust
the command: __FILE__ returns the absolute path to the script, but ps
shows only the relative path, so the $existing array was empty.

Thanks again,
yvan
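[Editor's note: one way to make the grep match regardless of how the
script was invoked is to search for the script's basename rather than
its full path. A minimal sketch of that adjustment, assuming the
filename is distinctive enough not to match unrelated processes:]

<?php
// Grep for basename(__FILE__) so the match works whether ps shows the
// command with an absolute or a relative path.
$pid     = getmypid();
$script  = basename(__FILE__);
$command = "/bin/ps auxwwww | grep " . escapeshellarg($script) . " | grep -v grep";

exec($command, $existing, $error);
if ($error) {
    die("OS Error: $error\n" . implode("\n", $existing) . "\n");
}

// Count matching processes other than this one.
$other_count = 0;
foreach ($existing as $procline) {
    if (!strstr($procline, " $pid ")) {
        $other_count++;
    }
}
if ($other_count) {
    exit; // another copy is already running
}
?>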