Re: Enable caching

Hello Alex,

Thanks for the help.
I have written a simple Perl script that dumps the metadata stored in the slots of a rock database. Maybe it will be useful for other Squid users.
rock_cache_dump.pl

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use Sys::Mmap;

my ($file) = @ARGV;
die "Usage: $0 <cache_dir>/rock\n" if not $file;

my @TLVS = ('VOID', 'KEY_URL', 'KEY_SHA', 'KEY_MD5', 'URL', 'STD', 'HITMETERING', 'VALID', 'VARY_HEADERS', 'STD_LFS', 'OBJSIZE', 'STOREURL', 'VARY_ID', 'END');

open my $fh, '<', $file or die "Can't open $file: $!\n";
my (%H, $mmap);
mmap($mmap, 0, PROT_READ, MAP_SHARED, $fh) or die "Can't mmap: $!";

my $slots = length($mmap)/0x4000 - 1; # 16 KiB slots; slot 0 of the file holds the db header
my($slot, $empty_slots, $used_slots, $start_time) = (0, 0, 0, time);

while($slot<$slots){
        # 40-byte DbCellHeader, then the swap meta marker byte and the 4-byte TLV header length
        my($key, $total_len, $current_len, $version, $first_slot, $next_slot, $meta_ok, $tlvheader_len) = unpack('H32QLLLLCL', substr($mmap, ($slot+1)<<14, 45));
        if($first_slot){$used_slots++}else{$empty_slots++}
        process_slot($slot, $version, $tlvheader_len) if($next_slot && $meta_ok == 3 && $first_slot == $slot); # 3 == STORE_META_OK
        $slot++
}

print Dumper(\%H);
printf STDERR "File size: \t%d MB\nTotal slots: \t%d\nEmpty slots: \t%d\nUsed  slots: \t%d\nUsage: \t\t%d %%\nProcess time:\t%d s\n\n",
    length($mmap)/1024/1024, $empty_slots + $used_slots, $empty_slots, $used_slots,
    $used_slots*100/($empty_slots + $used_slots), time - $start_time;

sub process_slot{
  my($slot, $version, $tlvheader_len) = @_;
  $tlvheader_len -= 5; # drop the meta marker byte and the 4-byte length field
  $H{$slot}={VERSION => "".localtime($version)};

  my $remain=substr($mmap, (($slot+1)<<14)+45, $tlvheader_len);

  my($type, $len, $value);
  while($remain){

        ($type, $len,$remain) = unpack('cLa*', $remain);
        ($value, $remain) = unpack("a$len"."a*", $remain);
        $H{$slot}{$TLVS[$type]} = $value;
        $H{$slot}{$TLVS[$type]} = unpack("A*", $H{$slot}{$TLVS[$type]}) if($type == 4 || $type == 8); #URL || VARY_HEADERS
        $H{$slot}{$TLVS[$type]} = unpack("H*", $H{$slot}{$TLVS[$type]}) if($type == 3 ); #KEY_MD5
        $H{$slot}{$TLVS[$type]} = unpack("Q*", $H{$slot}{$TLVS[$type]}) if($type == 10); #OBJSIZE
        $H{$slot}{$TLVS[$type]} = parse_STD(unpack("qqqqQSH4", $H{$slot}{$TLVS[$type]})) if($type == 9); #STD_LFS
  }

}

sub parse_STD{
   my($timestamp, $lastref, $expires, $lastmod) = map {"".localtime($_)} @_[0..3];
   my($swap_file_sz, $refcount, $flags)  = ($_[4], $_[5], $_[6]);
   return {timestamp => $timestamp, lastref => $lastref, expires => $expires, lastmod => $lastmod, swap_file_sz => $swap_file_sz, refcount => $refcount, flags => "0x".$flags};
}

Maybe you could include it in a future Squid release.

Kind regards,
       Ankor.

Thu, Apr 6, 2023 at 20:35, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx>:
On 4/6/23 09:08, Andrey K wrote:

> Could you tell me if there is a way to view the objects (URLs) and their
> statuses stored in the rock file?

There is no visualization software for rock db storage. One can
obviously use xxd and similar generic tools to look at raw db bytes,
even in a running Squid instance, but your needs are probably different.
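
For example, here is one (hypothetical) way to peek at the raw bytes of a given slot with od (xxd -s/-l works the same way), assuming the 16 KiB slot size with the db header in the first slot; the snippet fabricates a stand-in all-zero file, so point ROCK at a real <cache_dir>/rock instead:

```shell
# Peek at the raw bytes of one db slot. Slots are 16 KiB, and slot 0 of the
# file holds the db header, so data slot N starts at offset (N + 1) * 16384.
# A stand-in all-zero file is fabricated here; use a real <cache_dir>/rock.
ROCK=$(mktemp)
dd if=/dev/zero of="$ROCK" bs=16384 count=4 2>/dev/null
SLOT=0                               # data slot to inspect
OFFSET=$(( (SLOT + 1) * 16384 ))
od -v -A x -t x1 -j "$OFFSET" -N 48 "$ROCK"
rm -f "$ROCK"
```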


> I tried unsuccessfully to find this information using squidclient in
> mgr:menu.

Cache manager queries are currently ineffective for analyzing individual
rock cache_dir objects because the cache manager code relies on the legacy
in-memory, worker-specific store index, while rock uses shared-memory
structures common to all workers.


> You gave me a very useful link:
> https://wiki.squid-cache.org/Features/LargeRockStore
> Maybe there is a more detailed description of the internal rock data
> structures?

IIRC, the next level of detail is available in source code only. For
starting points, consider src/fs/rock/RockDbCell.h and the end of
Rock::SwapDir::create() that writes an all-zeroes db file header.
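
As a rough illustration of that starting point (a sketch based on the description above, not Squid's own tooling), one can verify that a freshly created db file begins with an all-zeroes 16 KiB header slot:

```shell
# Check that the first 16 KiB slot of a db file is all zeroes, as written at
# the end of Rock::SwapDir::create(). A stand-in file is fabricated here;
# point ROCK at a real <cache_dir>/rock instead.
ROCK=$(mktemp)
dd if=/dev/zero of="$ROCK" bs=16384 count=1 2>/dev/null
NONZERO=$(od -v -A n -t x1 -N 16384 "$ROCK" | tr -d ' 0\n')
[ -z "$NONZERO" ] && echo "header slot is all zeroes"
rm -f "$ROCK"
```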


HTH,

Alex.


> I could try to write a script that reads the necessary information from
> the cache_dir file.
>
> Kind regards,
>       Ankor.
>
>
> Wed, Apr 5, 2023 at 16:27, Alex Rousskov
> <rousskov@xxxxxxxxxxxxxxxxxxxxxxx>:
>
>     On 4/5/23 06:07, Andrey K wrote:
>
>      > Previously, caching was disabled on our proxy servers. Now we
>     need to
>      > cache some content (files about 10 MB in size).
>      > So we changed the squid.conf:
>
>      > cache_dir ufs /data/squid/cache 32000 16 256 max-size=12000000
>      >
>      > We have 24 workers on each proxy.
>
>     UFS-based cache_dirs are not supported in multi-worker configurations
>     and, in most cases, should not be used in such configurations. The
>     combination will violate basic HTTP caching rules and may crash Squid
>     and/or corrupt responses.
>
>
>      > We saw that some requests were taken from the cache, and some
>     were not.
>      > The documentation says:
>      > "In SMP configurations, cache_dir must not precede the workers
>     option
>      > and should use configuration macros or conditionals to give each
>     worker
>      > interested in disk caching a dedicated cache directory."
>
>     The official documentation quoted above is stale and very misleading in
>     modern Squids. Ignore it. I will try to find the time to post a PR to
>     fix this.
>
>
>      > So we switched to a rock cache_dir:
>      > cache_dir rock /data/squid/cache 32000 max-size=12000000
>      >
>      > Now everything seems to be working fine in the test environment,
>     but I
>      > found limitations on the RockStore
>      > (https://wiki.squid-cache.org/Features/RockStore):
>      > "Objects larger than 32,000 bytes cannot be cached when
>     cache_dirs are
>      > shared among workers."
>
>     The Feature/RockStore page is stale and can easily mislead. In general,
>     Feature/Foo wiki pages are often development-focused and get stale over
>     time. They cannot be relied on as Squid feature documentation.
>
>
>      > Does this mean that RockStore is not suitable for caching large
>     files?
>
>     No, it does not. Rock storage has evolved since that Feature page was
>     written. You can look at the following wiki page, which discusses the
>     evolved rock storage design, though it probably has some stale info as well:
>     https://wiki.squid-cache.org/Features/LargeRockStore
>
>
>      > Should I switch back to the UFS and configure 24 cache_dirs
>
>     If everything is "working fine", then you should not. Otherwise, I
>     recommend discussing specific problems before switching to that
>     unsupported and dangerous hack.
>
>
>     HTH,
>
>     Alex.
>
>     _______________________________________________
>     squid-users mailing list
>     squid-users@xxxxxxxxxxxxxxxxxxxxx
>     http://lists.squid-cache.org/listinfo/squid-users
>

