Re: [RFC/PATCH] gitweb: Paginate project list


 



[The original email by Lars didn't reach the git mailing list because of
 the lack of quotes around J.H. in the "J.H." <warthog19@xxxxxxxxxxxxxx>
 email address in Cc:]

On Sunday, 11 May 2008 at 08:56, Lars Hjemli wrote:
> On 5/11/08, J.H. <warthog19@xxxxxxxxxxxxxx> wrote:
>>  On Sun, 2008-05-11 at 00:32 +0200, Jakub Narebski wrote:
>>>
>>> First, when using $projectslist file with new (second patch in series,
>>> "gitweb: Allow project description in project_index file") most of data
>>> (well, all except age) would be filled by parsing single file.
>>>
>>> Second, the idea is to cache results of filled in @$projlist e.g. using
>>> Storable, i.e. cache Perl data and not final HTML output.
>>
>> I approve of that plan, caching all the html is kinda expensive *hides
>>  the 20G of gitweb cache he has*
> 
> It seems to me that "projectlist in a single file" and "cache results
> of filled in @$projlist" are different solutions to the same problem:
> rapidly filling a perl datastructure.

Well, yes and no.  "Projectlist in a single file" is about _static_ data
(which changes only when projects are added or deleted, or their
descriptions change; these are usually rare events), and is mainly about
avoiding I/O rather than CPU (scanning the filesystem for repositories,
reading config and description files, etc.).

"Cache data" is about caching _variable_ data, such as "Last changed"
information for project.  Caching data instead of caching output
(caching HTML) allows to share cache for different presentation of
the very same data (e.g. 'history'/'shortlog' vs 'rss').  And for some
pages, like project search results, caching HTML output doesn't make
much sense, while caching data has it.
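To illustrate the idea: cache the filled-in project data once (gitweb
would use Perl's Storable for this; the Python sketch below uses pickle
as a stand-in, and all names in it are hypothetical), then render any
number of views from the same cached structure:

```python
import os
import pickle
import tempfile

def load_or_build(cache_file, build):
    # Return the cached data structure if present; otherwise build it
    # once and serialize it for later requests.
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            return pickle.load(f)
    data = build()
    with open(cache_file, "wb") as f:
        pickle.dump(data, f)
    return data

def render_html(projects):
    # One presentation of the cached data...
    return "\n".join("<li>%s: %s</li>" % (p["name"], p["desc"])
                     for p in projects)

def render_rss_titles(projects):
    # ...and another, sharing the very same cache.
    return "\n".join("<title>%s</title>" % p["name"] for p in projects)
```

The expensive step (scanning repositories, reading refs) happens once in
`build()`; both views are cheap re-renderings of the cached data.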

> In cgit I've chosen "projectlist in a single file" and "cache html
> output". This makes it cheap (in terms of cpu and io) to both generate
> and serve the cached page (and the cache works for all pages).

As I said, for some pages, like search results, caching output
doesn't make sense, while caching data does.

> This used to be expensive in terms of cache size (similar to k.orgs
> 20G), but current cgit solves this by treating the cache as a hash
> table; cgitrc has an option to set the cache size (number of files),
> each filename is generated as `hash(url) % cachesize` and each file
> contains the full url (to detect hash collisions) followed by the
> cached content for that url (see
> http://hjemli.net/git/cgit/tree/cache.c for the details).
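The scheme described above can be sketched roughly as follows.  This is
not cgit's actual code (that lives in cache.c and is C); it is a minimal
Python rendering of the description, with hypothetical names, assuming
SHA-1 as the hash:

```python
import hashlib
import os

CACHE_SIZE = 1024  # number of cache slots, like cgitrc's cache size option

def slot_path(cache_dir, url):
    # Filename is hash(url) % cachesize, so the cache never grows past
    # CACHE_SIZE files; colliding URLs simply overwrite each other.
    h = int(hashlib.sha1(url.encode("utf-8")).hexdigest(), 16) % CACHE_SIZE
    return os.path.join(cache_dir, str(h))

def cache_store(cache_dir, url, content):
    os.makedirs(cache_dir, exist_ok=True)
    with open(slot_path(cache_dir, url), "w") as f:
        f.write(url + "\n")   # first line: full URL, for collision detection
        f.write(content)

def cache_lookup(cache_dir, url):
    # Returns the cached content, or None on a miss (no file, or the
    # slot currently holds a different URL).
    try:
        with open(slot_path(cache_dir, url)) as f:
            if f.readline().rstrip("\n") != url:
                return None   # hash collision: treat as a cache miss
            return f.read()
    except FileNotFoundError:
        return None
```

A collision evicts the previous entry for that slot, which is what keeps
the cache bounded at the cost of occasional re-generation.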

I guess that is the simplest solution, but I don't think it is
the best way to implement a size-limited cache.  For example, the CPAN
Perl module Cache::SizeAwareCache and its derivatives use the following
algorithm:

  The default cache size limiting algorithm works by removing cache
  objects in the following order until the desired limit is reached:

    1) objects that have expired
    2) objects that are least recently accessed
    3) objects that expire next

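The ordering quoted above can be sketched like this (a simplification,
not Cache::SizeAwareCache's actual code; it folds steps 2 and 3 into one
sort, using soonest expiry as the tie-breaker for equally recently
accessed objects):

```python
def eviction_order(entries, now):
    """entries: dicts with 'expires' and 'last_access' timestamps.
    Returns entries in the order they would be removed until the
    desired size limit is reached."""
    # 1) objects that have already expired go first
    expired = [e for e in entries if e["expires"] <= now]
    live = [e for e in entries if e["expires"] > now]
    # 2) then least recently accessed objects; 3) ties broken by
    # which object expires next
    live.sort(key=lambda e: (e["last_access"], e["expires"]))
    return expired + live
```

The caller would then delete entries from the front of this list until
the cache is back under its size limit.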

BTW, if the majority of your clients support transparent compression
(J.H., could you check this for kernel.org?  Pasky, could you check it
for repo.or.cz?), then you can reduce cache size by storing pages
compressed.
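A sketch of that idea (hypothetical names; assumes clients advertise
support via the standard Accept-Encoding request header): store each
cached page gzip-compressed, pass the compressed bytes straight through
to clients that accept gzip, and decompress only for the rest.

```python
import gzip

def store_page(path, html):
    # Keep cached pages gzip-compressed on disk to reduce cache size.
    with gzip.open(path, "wt", encoding="utf-8") as f:
        f.write(html)

def serve_page(path, accept_encoding=""):
    # Returns (body, content_encoding). Clients that accept gzip get
    # the stored bytes unchanged; others get the decompressed page.
    raw = open(path, "rb").read()
    if "gzip" in accept_encoding:
        return raw, "gzip"
    return gzip.decompress(raw).decode("utf-8"), None
```

If nearly all clients accept gzip, the common case does no compression
work at serve time at all.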

> Btw: gitweb and cgit seem to acquire the same features these days:
> cgit recently got pagination + search on the project list.

I haven't checked what features cgit has lately...

Gitweb development seems a bit stalled; I got no response to the latest
round of the gitweb TODO and wishlist...
-- 
Jakub Narebski
Poland
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
