Re: [PATCH v3 0/6] Composefs: an opportunistically sharing verified image filesystem

On 2/1/23 5:46 PM, Alexander Larsson wrote:
> On Wed, 2023-02-01 at 12:28 +0800, Jingbo Xu wrote:
>> Hi all,
>>
>> There are some updated performance statistics with different
>> combinations on my test environment if you are interested.
>>
>>
>> On 1/27/23 6:24 PM, Gao Xiang wrote:
>>> ...
>>>
>>> I've made a version and did some test, it can be fetched from:
>>> git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
>>> -b
>>> experimental
>>>
>>
>> Setup
>> ======
>> CPU: x86_64 Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz
>> Disk: 6800 IOPS upper limit
>> OS: Linux v6.2 (with composefs v3 patchset)
> 
> For the record, what was the filesystem backing the basedir files?
> 
>> I build erofs/squashfs images following the scripts attached on [1],
>> with each file in the rootfs tagged with "metacopy" and "redirect"
>> xattr.
>>
>> The source rootfs is from the docker image of tensorflow [2].
>>
>> The erofs images are built with mkfs.erofs with support for sparse
>> file
>> added [3].
>>
>> [1]
>> https://lore.kernel.org/linux-fsdevel/5fb32a1297821040edd8c19ce796fc0540101653.camel@xxxxxxxxxx/
>> [2]
>> https://hub.docker.com/layers/tensorflow/tensorflow/2.10.0/images/sha256-7f9f23ce2473eb52d17fe1b465c79c3a3604047343e23acc036296f512071bc9?context=explore
>> [3]
>> https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git/commit/?h=experimental&id=7c49e8b195ad90f6ca9dfccce9f6e3e39a8676f6
>>
>>
>>
>> Image size
>> ===========
>> 6.4M large.composefs
>> 5.7M large.composefs.w/o.digest (w/o --compute-digest)
>> 6.2M large.erofs
>> 5.2M large.erofs.T0 (with -T0, i.e. w/o nanosecond timestamp)
>> 1.7M large.squashfs
>> 5.8M large.squashfs.uncompressed (with -noI -noD -noF -noX)
>>
>> (large.erofs.T0 is built without nanosecond timestamps, so that we get
>> a smaller on-disk inode size (same as squashfs).)
>>
>>
>> Runtime Perf
>> =============
>>
>> The "uncached" column is tested with:
>> hyperfine -p "echo 3 > /proc/sys/vm/drop_caches" "ls -lR $MNTPOINT"
>>
>>
>> While the "cached" column is tested with:
>> hyperfine -w 1 "ls -lR $MNTPOINT"
>>
>>
>> erofs and squashfs are mounted with loopback device.
>>
>>
>>                                   | uncached(ms)| cached(ms)
>> ----------------------------------|-------------|-----------
>> composefs (with digest)           | 326         | 135
>> erofs (w/o -T0)                   | 264         | 172
>> erofs (w/o -T0) + overlayfs       | 651         | 238
>> squashfs (compressed)             | 538         | 211
>> squashfs (compressed) + overlayfs | 968         | 302
> 
> 
> Clearly erofs with sparse files is the best fs now for the ro-fs +
> overlay case. But still, we can see that the additional cost of the
> overlayfs layer is not negligible. 
> 
> According to Amir this could be helped by a special composefs-like mode
> in overlayfs, but it's unclear what performance that would reach, and
> we're then talking net new development that further complicates the
> overlayfs codebase. It's not clear to me which alternative is easier to
> develop/maintain.
> 
> Also, the difference between cached and uncached here is less than in
> my tests. Probably because my test image was larger. With the test
> image I use, the results are:
> 
>                                   | uncached(ms)| cached(ms)
> ----------------------------------|-------------|-----------
> composefs (with digest)           | 681         | 390
> erofs (w/o -T0) + overlayfs       | 1788        | 532
> squashfs (compressed) + overlayfs | 2547        | 443
> 
> 
> I gotta say it is weird though that squashfs performed better than
> erofs in the cached case. May be worth looking into. The test data I'm
> using is available here:
>   
> https://my.owndrive.com/index.php/s/irHJXRpZHtT3a5i
> 
> 

Hi,

I also tested with the rootfs you provided.


Setup
======
CPU: x86_64 Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz
Disk: 11800 IOPS upper limit
OS: Linux v6.2 (with composefs v3 patchset)
FS of backing objects: xfs


Image size
===========
8.6M large.composefs (with --compute-digest)
7.6M large.composefs.wo.digest (w/o --compute-digest)
8.9M large.erofs
7.4M large.erofs.T0 (with -T0, i.e. w/o nanosecond timestamp)
2.6M large.squashfs.compressed
8.2M large.squashfs.uncompressed (with -noI -noD -noF -noX)


Runtime Perf
=============

The "uncached" column is tested with:
hyperfine -p "echo 3 > /proc/sys/vm/drop_caches" "ls -lR $MNTPOINT"


While the "cached" column is tested with:
hyperfine -w 1 "ls -lR $MNTPOINT"
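(For machines without hyperfine, a rough stand-in for the two invocations
above can be scripted with date arithmetic — a sketch, with run counts and
output format of my own choosing; cache dropping needs root and is skipped
otherwise:)

```shell
#!/bin/sh
# Rough stand-in for the hyperfine runs above.
# Usage: bench.sh <mountpoint>   (defaults to the current directory)
MNTPOINT=${1:-.}

walk() {   # one timed "ls -lR" pass; prints elapsed milliseconds
    t0=$(date +%s%N)
    ls -lR "$MNTPOINT" > /dev/null
    t1=$(date +%s%N)
    echo $(( (t1 - t0) / 1000000 ))
}

# cold cache (hyperfine's "-p" step); writing drop_caches needs root
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
echo "uncached: $(walk) ms"

ls -lR "$MNTPOINT" > /dev/null        # warm-up pass (hyperfine's "-w 1")
echo "cached: $(walk) ms"
```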


erofs and squashfs are mounted via loopback devices.
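(For reference, the mounts look roughly like the following — a sketch only;
the image names are from above, while the mountpoints and the /objects
backing-store path are assumptions on my part. A lowerdir-only overlay is
implicitly read-only:)

```shell
# Loopback-mount the read-only image (erofs shown; squashfs is analogous).
mkdir -p /mnt/erofs /mnt/ovl
mount -t erofs -o loop,noacl large.erofs /mnt/erofs

# The "+ overlayfs" rows stack overlayfs on top, so that the
# "metacopy"/"redirect" xattrs in the image resolve into the backing
# object store.
mount -t overlay overlay \
      -o metacopy=on,redirect_dir=on,lowerdir=/mnt/erofs:/objects \
      /mnt/ovl
```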

                                  | uncached(ms)| cached(ms)
----------------------------------|-------------|-----------
composefs                         | 408         | 176
erofs                             | 308         | 190
erofs + overlayfs                 | 1097        | 294
erofs.hack                        | 298         | 187
erofs.hack + overlayfs            | 524         | 283
squashfs (compressed)             | 770         | 265
squashfs (compressed) + overlayfs | 1600        | 372
squashfs (uncompressed)           | 646         | 223
squashfs (uncompressed)+overlayfs | 1480        | 330

- all erofs mounted with "noacl"
- composefs: using large.composefs
- erofs: using large.erofs
- erofs.hack: using large.erofs.hack, where each file in the erofs layer
redirects to the same lower object, e.g.
"/objects/00/02bef8682cac782594e542d1ec6e031b9f7ac40edcfa6a1eb6d15d3b1ab126",
to evaluate the potential gain of a composefs-like "lazy lookup"
optimization in overlayfs
- squashfs (compressed): using large.squashfs.compressed
- squashfs (uncompressed): using large.squashfs.uncompressed
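(The hack image could be reproduced with something like the following
sketch — "rootfs/" is an assumed staging directory, trusted.* xattrs need
root, and whether an empty metacopy xattr is sufficient depends on the
overlayfs version:)

```shell
# Point every regular file at the same backing object, then build the image.
OBJ="/objects/00/02bef8682cac782594e542d1ec6e031b9f7ac40edcfa6a1eb6d15d3b1ab126"
find rootfs -type f | while read -r f; do
    setfattr -n trusted.overlay.metacopy "$f"            # mark metadata-only
    setfattr -n trusted.overlay.redirect -v "$OBJ" "$f"  # shared data object
done
mkfs.erofs large.erofs.hack rootfs
```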


-- 
Thanks,
Jingbo


