Re: dm overlaybd: targets mapping OverlayBD image

On 2023/5/26 03:26, Du Rui wrote:
> Hi Alexander,
>
>> all the lvm volume changes and mounts during runtime caused
>> weird behaviour (especially at scale) that was painful to manage (just
>> search the docker issue tracker for devmapper backend). In the end
>> everyone moved to a filesystem based implementation (overlayfs based).
>
> Yes, we had exactly the same experience. This is another reason why
> this proposal is for dm and lvm, not for containers.
> (BTW, we are using TCMU and ublk for overlaybd in production. They are awesome.)


>> This solution doesn't even allow page cache sharing between shared
>> layers (like current containers do), much less between independent
>> layers.
>
> Page cache sharing can be realized with DAX support of the dm targets
> (and the inner file system), together with a virtual pmem device backend.

First, I'd suggest learning some kernel background on what DAX is and
what the page cache is before explaining this to a kernel mailing
list.  For example, DAX memory cannot be reclaimed at all.

Block drivers have nothing to do with filesystem page cache behaviour,
and your current approach has nothing to do with pmem either.  (If you
must mention "DAX" to support your "page cache sharing" proposal,
please write down your detailed design here first and explain to us
how it could work, if you really want to pursue this.)
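
Just to make clear what DAX actually does, here is a minimal sketch
(the pmem device and mountpoint below are only illustrative): with
"-o dax" the filesystem maps persistent memory directly and bypasses
the page cache entirely, and that memory is not reclaimable the way
page cache is:

    mkfs.ext4 /dev/pmem0                 # hypothetical DAX-capable pmem device
    mount -o dax /dev/pmem0 /mnt/pmem    # file I/O now bypasses the page cache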

Apart from being unable to share page cache among filesystems, with
your approach all I/Os are also duplicated among your qcow2-like layers.

For example, suppose there are 3 qcow2-like layers A, B and C:

filesystem 1:  A + B
filesystem 2:  A + B + C

Filesystems 1 and 2 are independent at runtime, and your block driver
cannot help here: both I/Os and page cache are duplicated for any data
and metadata of layers A and B.

If there are even more container layers (dozens or hundreds), your
approach becomes even more inefficient due to duplicated I/Os.
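
As a rough illustration (device names, image paths and the file below
are all made up, and the two layer stacks are simplified into single
loop images), two independent filesystems built over the same layers
read and cache the same data separately:

    losetup /dev/loop0 stack-AB.img       # filesystem 1: layers A + B
    losetup /dev/loop1 stack-ABC.img      # filesystem 2: layers A + B + C
    mount /dev/loop0 /mnt/fs1
    mount /dev/loop1 /mnt/fs2
    cat /mnt/fs1/bin/busybox > /dev/null  # I/O + page cache for fs1
    cat /mnt/fs2/bin/busybox > /dev/null  # the same layer-A data is read and
                                          # cached again for fs2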

You could implement some internal block cache, but a block-level cache
is not as flexible as the page cache when it comes to kernel memory
reclaim and page migration.


>> Erofs already has some block-level support for container images
>
> It is interesting. Erofs runs on a block device in the first place,
> like many file systems do. But do you know why it implements another
> "some block-level support" by itself?


That is funny, honestly.  As for container image use cases, although
an OCI image tgz is unseekable, ext4 and btrfs images are actually
seekable and on-demand loading could be done with these raw images
directly.  In principle, you could dump your container image data from
tgz to raw ext4, btrfs, erofs, whatever.  Or, if you like, you could
dump to some widely-used format such as "qcow2", "vhdx" or "vmdk";
their ecosystem is more mature, but none of the above helps with page
cache sharing.
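
For example, here is a rough sketch of dumping one layer tgz to
seekable raw images (filenames are made up, mke2fs needs to be new
enough for -d, and qemu-img is only one way to get a qcow2):

    mkdir layer && tar -xzf layer.tgz -C layer      # extract the unseekable tgz
    mkfs.erofs layer.erofs.img layer                # raw erofs image
    truncate -s 1G layer.ext4.img
    mkfs.ext4 -F -d layer layer.ext4.img            # raw ext4 image populated from the tree
    qemu-img convert -f raw -O qcow2 layer.ext4.img layer.qcow2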

Please don't say "I like erofs" and at the same time ask "why it
implements another 'some block-level support' by itself".  Local
filesystems must do their block mapping themselves: ext4 (extents or
block maps), XFS (extents), etc.
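
You can see such a mapping directly; for instance, "filefrag -v" on any
file (the path below is just an example) dumps the logical-to-physical
extent list that the filesystem itself maintains:

    filefrag -v /usr/bin/ls    # prints the file's extent (block-mapping) list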

As a kernel developer I've already explained this to your team
internally multiple times; personally, I don't want to keep repeating
it here.

Thanks,
Gao Xiang

>> And this new approach doesn't help
>
> No. It is intended for dm and lvm.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel



