Re: Q. cache in squashfs?

J. R. Okajima wrote:

O: no-fragments x inner ext3

A: frag=3 x without -no-fragments
B: frag=3 x with -no-fragments

C: frag=100 x without -no-fragments
-: frag=100 x with -no-fragments

	cat10		cache_get		read		zlib
	(sec,cpu)	(meta,frag,data)	(meta,data)	(meta,data)
	----------------------------------------------------------------------
O	.06, 35%	92, -, 41		3, 44		2, 3557
A	.09, 113%	12359, 81, 22		4, 90		6, 6474
B	.07, 104%	12369, -, 109		3, 100		5, 3484
C	.06, 112%	12381, 80, 35		4, 53		6, 3650


OK,

I've done some tests of my own, and I can report that there is no issue
with Squashfs.  Squashfs on its own is performing better than ext3 on
Squashfs. The reason why your tests suggest otherwise is because your
testing methodology is *broken*.

In your first row (O, the ext3-on-Squashfs case), only a small amount of
the overall cost is accounted to the 'cat10' command; the bulk of the
work is accounted to the kernel 'loop1' thread, and this does not show
up. In the other cases (Squashfs only) the entire cost is accounted to
the 'cat10' command.  The results are therefore completely bogus, and
incorrectly show higher CPU usage for Squashfs.
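To see this kind of accounting directly: /usr/bin/time only sums the CPU of its own children, so ticks billed to a kernel thread such as loop1 never reach it, but per-task CPU can always be read from /proc/PID/stat (fields 14 and 15 are utime and stime, in clock ticks). A minimal sketch, inspecting the running shell itself for illustration (any PID, such as loop1's, could be substituted):

```shell
#!/bin/sh
# Hedged sketch: read a task's accumulated CPU straight from /proc.
# Fields 14 and 15 of /proc/PID/stat are utime and stime in clock ticks
# (see proc(5)).  We inspect this shell's own PID purely as an example.
pid=$$
ticks=$(awk '{print $14 + $15}' "/proc/$pid/stat")
echo "pid $pid has used $ticks clock ticks of CPU"
```

Sampling this for the loop1 thread before and after a run recovers exactly the cost that the time command loses.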

The following should illustrate this (all tests done under kvm):

1. Squashfs native

The following shell script, sqsh.sh, was used:

#!/bin/sh
for i in `seq 2`; do
	mount -t squashfs /data/comp/bin.sqsh /mnt -o loop
	find /mnt -type f | xargs wc > /dev/null 2>&1
	umount /mnt
done

bin.sqsh is a copy of /usr/bin, without any fragments.
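For reference, an image like bin.sqsh can be built with mksquashfs and its -no-fragments option. A hedged sketch, run here against a small throw-away directory instead of /usr/bin (the paths are illustrative, not the ones used in the test):

```shell
#!/bin/sh
# Hedged sketch: build a fragment-free Squashfs image the way bin.sqsh
# was made.  -no-fragments disables fragment (tail-end packing) blocks;
# -noappend overwrites any existing image rather than appending to it.
command -v mksquashfs >/dev/null 2>&1 || { echo "squashfs-tools not installed"; exit 0; }
src=$(mktemp -d)
echo "hello" > "$src/file"
mksquashfs "$src" /tmp/demo.sqsh -no-fragments -noappend > /dev/null
ls -l /tmp/demo.sqsh
rm -rf "$src"
```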

root@slackware:/data/blame-game/data# /usr/bin/time ./test-sqsh.sh
5.51user 12.70system 0:18.72elapsed 97%CPU (0avgtext+0avgdata 5712maxresident)k

High CPU usage; however, this should not be surprising: on an otherwise
idle system there is no reason not to use all of the CPU.

A snapshot from top while running confirms this:

top - 01:59:30 up  1:13,  2 users,  load average: 0.49, 0.23, 0.10
 Tasks:  58 total,   2 running,  56 sleeping,   0 stopped,   0 zombie
 Cpu(s): 36.0%us, 64.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Mem:   2023364k total,  1342200k used,   681164k free,   127316k buffers
 Swap:        0k total,        0k used,        0k free,  1134448k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  3214 root      20   0  4696 1124  552 R 97.1  0.1   0:05.33 wc

The system is running fully occupied, with 0 % idle.

Note overall elapsed time (from time command) running Squashfs native: 18.72 s

2. ext3 on squashfs

The following shell script, ext3.sh, was used:

#!/bin/sh
for i in `seq 2`; do
	mount -t squashfs /data/comp/ext3.sqsh /mnt2 -o loop
	mount -t ext3 /mnt2/ext3.img /mnt -o loop
	find /mnt -type f | xargs wc > /dev/null 2>&1
	umount /mnt
	umount /mnt2
done
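An aside on the redirections in these scripts: order matters. `2>&1 > /dev/null` duplicates stderr onto the *old* stdout before stdout is moved, so stderr still escapes to the caller, whereas `> /dev/null 2>&1` discards both streams (and `2&>1` is not a redirection at all: it passes `2` as an argument and sends both streams to a file named `1`). A quick illustration:

```shell
#!/bin/sh
# Hedged illustration of redirection ordering, unrelated to Squashfs.
noisy() { echo out; echo err >&2; }
a=$(noisy 2>&1 > /dev/null)   # stderr was joined to the capture pipe first
b=$(noisy > /dev/null 2>&1)   # both streams discarded
echo "a=$a b=$b"
```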

ext3.img is an ext3 fs containing /usr/bin.

# /usr/bin/time ext3.sh

5.70user 5.11system 0:20.28elapsed 53%CPU (0avgtext+0avgdata 5712maxresident)k
0inputs+0outputs (0major+5346minor)pagefaults 0swaps

Much lower CPU, but this is bogus.

A snapshot from top shows:

top - 02:04:29 up  1:18,  2 users,  load average: 0.44, 0.18, 0.10
Tasks:  61 total,   2 running,  59 sleeping,   0 stopped,   0 zombie
Cpu(s): 33.0%us, 67.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2023364k total,  1637056k used,   386308k free,   143416k buffers
Swap:        0k total,        0k used,        0k free,  1410636k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3241 root       0 -20     0    0    0 S 52.8  0.0   0:03.19 loop1
 3248 root      20   0  4696 1148  576 R 44.5  0.1   0:02.38 wc

Again the system is running fully occupied with 0 % idle.

The major difference is 52.8 % of the CPU is being accounted to the
'loop1' kernel thread, and this does not show up in the time command.
To make that clear: all of the cost of reading the loop device's backing
file (ext3.img) is accounted to the loop1 kernel thread, and therefore no
decompression overhead shows up in the time command.  As decompression
is the majority of the overhead of reading compressed data, it is
little wonder the CPU usage reported by time is only 53 %.

In fact as the 53 % CPU figure only includes time spent in user-space and
ext3 (and excludes decompression cost), it is surprising it is so *high*.
On this basis Squashfs is using only 47 % of CPU or less to decompress
the data.  Which is *good*, and a complete reversal of your bogus results.
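One way to make the hidden loop1 cost visible is to sample the system-wide counters in /proc/stat around the workload; unlike /usr/bin/time, these include work done by kernel threads. A hedged sketch (the workload below is a trivial placeholder, not the Squashfs test):

```shell
#!/bin/sh
# Hedged sketch: system-wide CPU accounting from /proc/stat captures
# ticks burned by kernel threads (like loop1) that /usr/bin/time
# attributes to nobody.  Fields 2-4 of the "cpu" line are user, nice
# and system clock ticks (see proc(5)).
busy() { awk '/^cpu /{print $2 + $3 + $4}' /proc/stat; }
before=$(busy)
seq 100000 | wc -l > /dev/null    # placeholder workload
after=$(busy)
echo "system-wide busy ticks during workload: $((after - before))"
```

Wrapping either test script this way would attribute the decompression cost correctly in both configurations.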

Note overall elapsed time (from time command) running ext3 on Squashfs: 20.28 s.

3. Overall conclusion

In my tests both Squashfs native and ext3 on Squashfs use 100 % CPU.  However,
Squashfs native is faster: 18.72 seconds versus 20.28 seconds.

Cheers

Phillip
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

