"Jay Soffian" <jaysoffian@xxxxxxxxx> writes:

> On Thu, Mar 13, 2008 at 7:14 PM, Petr Baudis <pasky@xxxxxxx> wrote:
> > ...
> >
> > +	if ($cache_lifetime and -f $cache_file) {
> > +		# Postpone timeout by two minutes so that we get
> > +		# enough time to do our job.
> > +		my $time = time() - $cache_lifetime + 120;
> > +		utime $time, $time, $cache_file;
> > +	}
>
> Race condition. I don't see any locking. Nothing keeps multiple
> instances from regenerating the cache concurrently...
>
> > +	@projects = git_get_projects_details($projlist, $check_forks);
> > +	if ($cache_lifetime and open (my $fd, '>'.$cache_file)) {
>
> ...and then clobbering each other here. You have two choices:
>
> 1) Use a lock file for the critical section.
>
> 2) Assume the race condition is rare enough, but you still need to
> account for it. In that case, you want to write to a temporary file
> and then rename it to the cache file name. The rename is atomic, so
> though N instances of gitweb may regenerate the cache (at some
> CPU/IO overhead), at least the cache file won't get corrupted.

What should the code for this look like? Like below?

	use File::Temp qw(tempfile);

	my ($fh, $temp_file) = tempfile();
	...
	close $fh;
	rename $temp_file, $cache_file;

(Note: File::Temp does not export tempfile() by default, so it has to
be requested explicitly.)

-- 
Jakub Narebski
Poland
ShadeHawk on #git
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
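[A minimal sketch of option (2), the write-then-rename approach, fleshed
out a bit. The helper name write_cache_atomically is hypothetical, not
from gitweb. One detail worth noting: the temporary file must be created
in the same directory as the cache file, because rename(2) is only
atomic within a single filesystem; tempfile() defaults to a system
temp directory, which may be a different mount.]

	use strict;
	use warnings;
	use File::Temp qw(tempfile);
	use File::Basename qw(dirname);

	# Write $contents to a temp file in the cache file's own
	# directory, then rename it over the cache file.  Concurrent
	# writers may duplicate the regeneration work, but readers
	# always see either the old or the new complete file, never
	# a partially written one.
	sub write_cache_atomically {
		my ($cache_file, $contents) = @_;
		my ($fh, $temp_file) = tempfile(DIR => dirname($cache_file));
		print {$fh} $contents;
		close $fh
			or die "close $temp_file: $!";
		rename $temp_file, $cache_file
			or die "rename $temp_file -> $cache_file: $!";
	}

[Each racing gitweb instance would call this with its freshly generated
project list; whichever rename lands last wins, and the loser's output
is simply overwritten wholesale.]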