Re: [RFC PATCH 10/10] gitweb: Show appropriate "Generating..." page when regenerating cache (WIP)

On 01/25/2010 05:56 AM, Petr Baudis wrote:
> On Mon, Jan 25, 2010 at 02:48:26PM +0100, Jakub Narebski wrote:
>> Now those patches (mine and J.H. both) make gitweb use locking
>> (it is IIRC configurable in J.H. patch) to make only one process
>> generate the page if it is missing from cache, or is stale.  Now
>> if it is missing, we have to wait until it is generated in full
>> before being able to show it to client.  While it is possible to
>> "tee" output (using PerlIO::tee, or Capture::Tiny, or tie like
>> CGI::Cache) writing it simultaneously to browser and to cache for 
>> the process that is generating data, it is as far as I understand
>> it impossible for processes which are waiting for data.  Therefore
>> the need for "Generating..." page, so the user does not think that
>> web server hung or something, and is not generating output.
> 
> Ah, ok, so the message is there to cover up for a technical problem. ;-)
> I didn't quite realize. Then, it would be great to tweak the mechanisms
> so that the user does not really have to wait.

No, that is an incorrect assumption about how the 'Generating...' page
works, and you're missing a bit of the point.

(1) The 'Generating...' message itself is a cue to the user that
something is happening and that the browser is not actually hanging.
Web users have reached the point where, if content does not appear
almost instantly, they will either browse away completely or hit the
refresh button incessantly until it does.  While the page is usually
only seen for about a second, and I'll admit it can be annoying, it's
nothing more than a 'sit tight a second'.  For things like the front
page, generation can take upwards of 7 seconds for a single user, which
is a lot to ask of someone staring at no response at all.

(2) It prevents the stampeding herd problem, which HPA and I discussed
quite vehemently 4 years ago, and which roughly boils down to this:

When a single user comes into the site, in particular the front page, it
kicks off a process that starts generating it, causing a huge number of
git requests into individual repositories and a lot of disk i/o.  A
second user then comes in and the same requests start over from the
beginning, and so on, until you basically kill the machine: the disk i/o
climbs high enough that it can never service the requests fast enough.

This does 2 things in the end:

1) It means only one copy of the page is ever being generated, so there
is no extraneous and dangerous disk i/o going on on the system.

2) It prevents users from reporting that the site is broken, by giving
them a visual cue that things aren't.
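The single-generator locking described above can be sketched roughly
like this (a Python illustration, not the actual Perl gitweb code; the
cache paths, the fixed 'Generating...' body, and the flock-based scheme
are my assumptions, not the patch's exact mechanism):

```python
import fcntl
import os
import tempfile

# Hypothetical cache location; real gitweb caching keys by CGI parameters.
CACHE_DIR = tempfile.mkdtemp(prefix="gitweb-cache-")
CACHE_FILE = os.path.join(CACHE_DIR, "summary.html")
LOCK_FILE = CACHE_FILE + ".lock"

def fetch_page(generate):
    """Serve from cache, letting only one process regenerate at a time."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return f.read()

    with open(LOCK_FILE, "w") as lock:
        # Try to become the single generator for this page.
        try:
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            # Someone else is already generating: show the
            # 'Generating...' page instead of piling on more disk i/o.
            return "<html><body>Generating...</body></html>"

        page = generate()
        tmp = CACHE_FILE + ".tmp"
        with open(tmp, "w") as f:
            f.write(page)
        os.replace(tmp, CACHE_FILE)  # atomic publish of the new cache entry
        return page
```

Every process that loses the non-blocking lock race returns the cheap
placeholder immediately, so the expensive generation happens exactly
once regardless of how many requests arrive.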


> So, I wonder about two things:
> 
> (i) How often does it happen that two requests for the same page are
> received? Has anyone measured it? Or is at least able to make
> a minimally educated guess? IOW, isn't this premature optimization?

For most pages, not many, but it happens more often than you would
think.  The data I have is much too old to be useful now, but the front
page could, at times, have up to 30 people waiting on it without
caching.  Believe it or not, this is a very important patch: a site the
size of kernel.org cannot exist without it.

But here's a quick stat: in 36 hours git.kernel.org has had 156099
accesses worldwide, or about 1.2 accesses a second.

android.git.kernel.org, in the same time period, has had 115818 accesses.

If the first request takes 7 seconds to generate, by the time it's done
there are 3 additional requests already running.  If the next one again
takes 7 seconds, another 3 requests stack up behind it, and so on.  Very
quickly you've got so much i/o running that the box is more or less
useless.
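Those figures can be sanity-checked with a bit of arithmetic.  Note that
the overall rate covers every page on the site, so only a fraction of
the arrivals during one generation are front-page hits, which is
consistent with the ~3 additional front-page requests mentioned above:

```python
# All figures are from the stats quoted in this mail.
accesses = 156099           # git.kernel.org hits in the window
window_s = 36 * 3600        # 36 hours, in seconds
rate = accesses / window_s  # ~1.2 requests/second, site-wide

gen_time_s = 7              # uncached front-page generation time
arrivals_during_gen = rate * gen_time_s  # site-wide arrivals per generation

print(f"{rate:.2f} req/s, ~{arrivals_during_gen:.1f} arrivals per 7s generation")
```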

> (ii) Can't the locked gitwebs do the equivalent of tail -f?

Not really going to help much.  Most gitweb operations won't output much
of anything beyond the header until they've collected all of the data
they need, and then there is a flurry of output.  It would also mean
that the 'Generating...' page only works with caching schemes that a
tail can actually read out of; I'm not sure it would work at all with
something like memcached, or any non-custom caching layer where we don't
necessarily have direct access to the file being written.

At least the way I had it (and I'll admit I haven't read through Jakub's
re-working of my patches, so I don't know if it's still there), with
background caching you only get the 'Generating...' page if the page is
new or the content is grossly out of date.  If it's a popular page and
it's not grossly out of date, you are shown the 'stale' data while the
new content is generated in the background, and you are only locked out
while the new file is being written.  Or at least that's how I had it.
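The stale-while-regenerating behaviour described above boils down to a
three-way decision on the cache entry's age.  A minimal sketch in Python
(the thresholds are made-up numbers for illustration; the real patch
presumably makes them configurable):

```python
# Hypothetical freshness thresholds, in seconds.
FRESH_S = 300            # younger than this: serve the cached copy as-is
GROSSLY_STALE_S = 3600   # older than this: make the user wait

def decide(cache_age):
    """Pick what to show for a cached page of the given age (seconds).

    Returns one of:
      'fresh'      - serve the cached copy; nothing else to do
      'stale'      - serve the cached ('stale') copy immediately, while
                     one process regenerates it in the background
      'generating' - page is new (age is None) or grossly out of date:
                     show the 'Generating...' page while it is rebuilt
    """
    if cache_age is None or cache_age > GROSSLY_STALE_S:
        return "generating"
    if cache_age > FRESH_S:
        return "stale"
    return "fresh"
```

So for a popular page the cache entry rarely ages past the "grossly
stale" threshold, and visitors almost never see the 'Generating...'
page at all.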

- John 'Warthog9' Hawley
