On Thu, Mar 13, 2008 at 7:14 PM, Petr Baudis <pasky@xxxxxxx> wrote:

> diff --git a/gitweb/gitweb.css b/gitweb/gitweb.css
> index 8e2bf3d..673077a 100644
> --- a/gitweb/gitweb.css
> +++ b/gitweb/gitweb.css
> @@ -85,6 +85,12 @@ div.title, a.title {
>  	color: #000000;
>  }
>
> +div.stale_info {
> +	display: block;
> +	text-align: right;
> +	font-style: italic;
> +}
> +
>  div.readme {
>  	padding: 8px;
>  }

What does this have to do with it?

> diff --git a/gitweb/gitweb.perl b/gitweb/gitweb.perl
> index bcb6193..0eee195 100755
> --- a/gitweb/gitweb.perl
> +++ b/gitweb/gitweb.perl
> @@ -122,6 +122,15 @@ our $fallback_encoding = 'latin1';
...
> +	if ($cache_lifetime and -f $cache_file) {
> +		# Postpone timeout by two minutes so that we get
> +		# enough time to do our job.
> +		my $time = time() - $cache_lifetime + 120;
> +		utime $time, $time, $cache_file;
> +	}

Race condition. I don't see any locking. Nothing keeps multiple
instances from regenerating the cache concurrently...

> +	@projects = git_get_projects_details($projlist, $check_forks);
> +	if ($cache_lifetime and open (my $fd, '>'.$cache_file)) {

...and then clobbering each other here.

You have two choices:

1) Use a lock file for the critical section.

2) Assume the race condition is rare enough, but you still need to
account for it. In that case, you want to write to a temporary file
and then rename to the cache file name. The rename is atomic, so
though N instances of gitweb may regenerate the cache (at some CPU/IO
overhead), at least the cache file won't get corrupted.

Out of curiosity, repo.or.cz isn't running this as a CGI, is it? If
so, wouldn't running it as FastCGI or mod_perl be a vast improvement?

j.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
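P.S. A minimal sketch of option 1 (a lock file around the critical
section), in case it helps. This is not gitweb's actual code;
with_cache_lock() and the regeneration callback are hypothetical
stand-ins:

```perl
#!/usr/bin/perl
# Sketch of option 1: serialize cache regeneration with flock() on a
# separate lock file.  The first process to take the exclusive lock
# regenerates the cache; the others block until it is done (LOCK_NB
# could be used instead to skip regeneration and serve stale data).
# $cache_file and the $regenerate callback are hypothetical stand-ins
# for gitweb's real variables and code.
use strict;
use warnings;
use Fcntl qw(:flock);

sub with_cache_lock {
	my ($cache_file, $regenerate) = @_;
	my $lock_file = "$cache_file.lock";

	open(my $lock_fh, '>', $lock_file)
		or die "open $lock_file: $!";
	flock($lock_fh, LOCK_EX)
		or die "flock $lock_file: $!";

	# Critical section: only one process writes the cache at a time.
	$regenerate->($cache_file);

	# The lock is also released implicitly on close or process exit.
	flock($lock_fh, LOCK_UN);
	close($lock_fh);
}

with_cache_lock('/tmp/gitweb.cache.lock-demo', sub {
	my ($file) = @_;
	open(my $fh, '>', $file) or die "open $file: $!";
	print $fh "cached project list\n";
	close($fh);
});
```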
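And a sketch of option 2 (write to a temp file, then rename() into
place). Again, write_cache_atomically() and the file names are
hypothetical, not gitweb's actual code:

```perl
#!/usr/bin/perl
# Sketch of option 2: write the cache to a temporary file in the same
# directory, then rename() it into place.  rename() is atomic when
# source and destination are on the same filesystem, so concurrent
# writers may waste CPU/IO regenerating the cache, but a reader can
# never see a half-written cache file.
use strict;
use warnings;
use File::Temp qw(tempfile);
use File::Basename qw(dirname);

sub write_cache_atomically {
	my ($cache_file, @lines) = @_;

	# Create the temp file in the cache file's directory so that
	# rename() stays on one filesystem (cross-device rename fails).
	my ($fh, $tmp_name) = tempfile('cache.XXXXXX',
	                               DIR => dirname($cache_file));
	print $fh "$_\n" for @lines;
	close($fh) or die "close $tmp_name: $!";

	# Atomic publish: readers see either the old cache or the new
	# one in its entirety, never a partial file.
	rename($tmp_name, $cache_file)
		or die "rename $tmp_name -> $cache_file: $!";
}

write_cache_atomically('/tmp/gitweb.cache.rename-demo',
                       'project1', 'project2');
```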