Re: Initial patches for Incremental FS

On Wed, May 22, 2019 at 9:25 PM Miklos Szeredi <miklos@xxxxxxxxxx> wrote:

> What would benefit many fuse applications is to let the kernel
> transfer data to/from a given location (i.e. offset within a file).
> So instead of transferring data directly in the READ/WRITE messages,
> there would be a MAP message that would return information about where
> the data resides (list of extents+extra parameters for
> compression/encryption).  The returned information could be generic
> enough for your needs, I think.  The fuse kernel module would cache
> this mapping, and could keep the mapping around for possibly much
> longer than the data itself, since it would require orders of
> magnitude less memory. This would not only be saving memory copies,
> but also the number of round trips to userspace.

Yes, this was _exactly_ our first plan, and it does mitigate the read
performance issue. The reason we didn't move forward with it is that once
we worked through all the other requirements, each one turned out to need
yet another change in FUSE, to the point where half of the FUSE interface
would be dedicated to our specific goal:
1. The MAP message would have to support data compression (with different
algorithms) and hash verification (likewise), including hash streaming,
because even the Merkle tree for a 5 GB file is huge and can't be
preloaded all at once.
  1.1. Mapping memory usage can get out of hand pretty quickly: each block
needs at least (offset + size + compression type + hash location + hash
size + hash kind). I'm not even considering multiple storage files here.
For that 5 GB file (a debug APK for an Android game we're targeting) we
have 1.3M blocks, so ~16 bytes × 1.3M = 20 MB for the index alone, before
any overhead for the lookup table.
If the kernel code owns and manages its own on-disk data store and format,
it can load and discard this index on demand.

2. We need the same kind of MAP message for the directory structure and
for stat(2) calls - Android does way too many of these and has no
intention of fixing that. These caches need to be dynamically sized as
well (as I said, the standard kernel caches don't hold anything long
enough on Android, because running apps routinely consume all available
memory).

3. Several smaller features would have to be added, again each with its
own interface and specific code in FUSE.
  3.1. E.g. collecting a log of all block reads: we're planning a ring
buffer of configurable size there, plus a way to request its contents from
user space. This doesn't look useful for other FUSE users, and may
actually be a serious security hole for them. We wouldn't need it at all
if FUSE called into user space on each read, so here we're almost fighting
ourselves, making two opposing changes in FUSE.

4. All these features are much easier to implement for a read-only
filesystem (cache invalidation is a big deal). But if we limited them in
FUSE to read-only mode, we'd dedicate half of its interface to an even
narrower use case.

> There's also work currently ongoing in optimizing the overhead of
> userspace roundtrip.  The most promising thing appears to be matching
> up the CPU for the userspace server with that of the task doing the
> request.  This can apparently result in  60-500% speed improvement.

That sounds almost too good to be true, and it would be really cool.
Do you have any patches or a git remote in a compilable state, so I can
try the optimization out? Android has a rather complicated hardware
configuration, and I want to see how this works, especially with our
model where several processes may send requests into the same
filesystem FD.

> Understood.  Did you re-enable readahead for the case when the file
> has been fully downloaded?

Yes, and it doesn't really help: readahead wants a bunch of blocks
together, but those are scattered around the backing image because they
arrived separately and at different times. So the usermode process still
has to issue multiple read commands to respond to a single FUSE
read(ahead) request, which is still slow. It's even worse if the CPU was
idling at a reduced frequency at the time (which is normal for mobile):
it takes a couple hundred ms to ramp it back up, and during that window
the latency is huge (milliseconds per block).

Overall, I can see that it is possible to change FUSE in a way that meets
our needs, but I'm not sure that kind of change keeps the FUSE interface
friendly for all existing and new users. The set of requirements is so
big, and the mobile platform constraints so harsh, that _as efficient as
possible_ and _generic_ do, unfortunately, contradict each other.

Please tell me if you see it differently, or if you have better ideas on
how to change FUSE in a simpler way.
--
Thanks, Yurii
