Re: [RFC/PATCH] gitweb: Paginate project list

On 5/12/08, Jakub Narebski <jnareb@xxxxxxxxx> wrote:
> [The original email by Lars didn't get to git mailing list because of
>   lack of quotes around J.H. in "J.H." <warthog19@xxxxxxxxxxxxxx>
>   email address in Cc:]
Gaah, bad gmail...

>  On Sunday, 11 May 2008 08:56, Lars Hjemli wrote:
>
>  > It seems to me that "projectlist in a single file" and "cache results
>  > of filled in @$projlist" are different solutions to the same problem:
>  > rapidly filling a perl datastructure.
>
> Well, yes and no.  "Projectlist in a single file" is about _static_ data
>  (which changes only if projects are added or deleted, or their
>  descriptions change; those are usually rare events), and about avoiding
>  mainly I/O rather than CPU (scanning the filesystem for repositories,
>  reading config and description, etc.).
>
>  "Cache data" is about caching _variable_ data, such as the "Last changed"
>  information for a project.  Caching data instead of caching output
>  (caching HTML) allows sharing the cache between different presentations
>  of the very same data (e.g. 'history'/'shortlog' vs 'rss').  And for some
>  pages, like project search results, caching HTML output doesn't make
>  much sense, while caching data does.
While I agree that caching search result output almost never makes
sense, I think it's more important that cache hits require minimal
processing. This is why I've chosen to cache the final result instead
of an intermediate state, but both solutions obviously have their pros
and cons.
>  > This used to be expensive in terms of cache size (similar to k.org's
>  > 20G), but current cgit solves this by treating the cache as a hash
>  > table; cgitrc has an option to set the cache size (number of files),
>  > each filename is generated as `hash(url) % cachesize`, and each file
>  > contains the full url (to detect hash collisions) followed by the
>  > cached content for that url (see
>  > http://hjemli.net/git/cgit/tree/cache.c for the details).
>
> I guess that is the simplest solution, but I don't think it is
>  the best solution for a size-limited cache.  For example, the CPAN Perl
>  module Cache::SizeAwareCache and its derivatives use the following
>  algorithm:
>
>   The default cache size limiting algorithm works by removing cache
>   objects in the following order until the desired limit is reached:
>
>     1) objects that have expired
>     2) objects that are least recently accessed
>     3) objects that expire next
Again, minimal processing is the goal of cgit's cache implementation,
hence the simple solution.
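
For reference, a cache hit boils down to roughly the following (a
simplified sketch of the scheme quoted above, not the actual cache.c
code; hash_str() and the fixed buffer sizes are made up for
illustration):

#include <stdio.h>
#include <string.h>

/* placeholder hash; the real choice only affects slot distribution */
static unsigned long hash_str(const char *s)
{
	unsigned long h = 5381;
	while (*s)
		h = h * 33 + *s++;
	return h;
}

/* Return a FILE* positioned at the cached content on a hit, or NULL
 * on a miss (no slot file yet, or another url occupies the slot). */
static FILE *cache_lookup(const char *cache_root, unsigned int cache_size,
			  const char *url)
{
	char path[4096], stored_url[4096];
	FILE *f;

	/* the slot filename is simply hash(url) % cachesize */
	snprintf(path, sizeof(path), "%s/%lu",
		 cache_root, hash_str(url) % cache_size);
	f = fopen(path, "r");
	if (!f)
		return NULL;

	/* the first line of the slot holds the full url, so a hash
	 * collision is detected by a plain string compare */
	if (!fgets(stored_url, sizeof(stored_url), f)) {
		fclose(f);
		return NULL;
	}
	stored_url[strcspn(stored_url, "\n")] = '\0';
	if (strcmp(stored_url, url)) {
		fclose(f);
		return NULL;
	}
	return f;	/* the rest of the file is the cached page */
}

On a hit, the rest of the slot file is just streamed to the client,
which is about as little work as a hit can possibly need.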
>  > Btw: gitweb and cgit seem to acquire the same features these days:
>  > cgit recently got pagination + search on the project list.
>
> I haven't checked what features cgit has lately...
>
>  Gitweb development seems a bit stalled; I got no response to the latest
>  turn of the gitweb TODO and wishlist...
Well, I for one found the wishlist interesting; I've been pondering
implementing a graphic log in cgit (inspired by git-forest and
git-graph), but I refuse to perform a topo-sort ;-)
Hopefully I can exploit the fact that cgit never uses more than one
commit as the starting point for log traversal, combined with
heuristics on commit date, to enable a fast graphic log that will be
correct for all but the most pathological cases.
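
Something along these lines is what I have in mind (only a sketch of
the heuristic, not cgit code; the struct, the fixed-size pending array
and MAX_PENDING are placeholders, and a real version would mark
commits as seen so an ancestor reachable through several parents isn't
emitted twice):

#include <time.h>

struct commit {
	time_t date;		/* committer date */
	int nparents;
	struct commit **parents;
};

#define MAX_PENDING 1024

/* remove and return the pending commit with the newest date */
static struct commit *pop_newest(struct commit **pending, int *n)
{
	int i, best = 0;
	struct commit *c;

	for (i = 1; i < *n; i++)
		if (pending[i]->date > pending[best]->date)
			best = i;
	c = pending[best];
	pending[best] = pending[--(*n)];
	return c;
}

/* show up to 'rows' commits starting from the single commit 'head' */
static void show_log(struct commit *head, int rows,
		     void (*show)(struct commit *))
{
	struct commit *pending[MAX_PENDING];
	int n = 0, i;

	pending[n++] = head;
	while (n > 0 && rows-- > 0) {
		struct commit *c = pop_newest(pending, &n);
		show(c);
		for (i = 0; i < c->nparents && n < MAX_PENDING; i++)
			pending[n++] = c->parents[i];
	}
}

As long as no parent carries a later commit date than its children,
popping the newest pending commit should yield the same order a
topo-sort would, without the upfront cost of walking the whole
history first.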
--
larsh

