On Thu, May 2, 2019 at 12:03 AM <ezemtsov@xxxxxxxxxx> wrote:

> +Design alternatives
> +===================
> +
> +Why isn't incremental-fs implemented via FUSE?
> +----------------------------------------------
> +TLDR: FUSE-based filesystems add 20-80% of performance overhead for target
> +scenarios, and increase power use on mobile beyond acceptable limit
> +for widespread deployment. A custom kernel filesystem is the way to overcome
> +these limitations.

The 80% performance overhead sounds bad. As fuse maintainer I'd really
be interested in finding out the causes.

> +
> +From the theoretical side of things, FUSE filesystem adds some overhead to
> +each filesystem operation that’s not handled by OS page cache:
> +
> + * When an IO request arrives to FUSE driver (D), it puts it into a queue
> +   that runs on a separate kernel thread

The queue is run on a *user* thread, there's no intermediate kernel
thread involved.

> + * Then another separate user-mode handler process (H) has to run,
> +   potentially after a context switch, to read the request from the queue.

Yes. How is it different from the data loader doing read(2) on .cmd?

> +   Reading the request adds a kernel-user mode transition to the handling.
> + * (H) sends the IO request to kernel to handle it on some underlying storage
> +   filesystem. This adds a user-kernel and kernel-user mode transition
> +   pair to the handling.
> + * (H) then responds to the FUSE request via a write(2) call.
> +   Writing the response is another user-kernel mode transition.
> + * (D) needs to read the response from (H) when its kernel thread runs
> +   and forward it to the user

Again, you've just described exactly the same thing for the data loader
and .cmd. Why is the fuse case different?

Thanks,
Miklos
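
P.S. To make the per-request transition count concrete, below is a schematic
sketch of the userspace handler loop that both a FUSE daemon and the
incremental-fs data loader reduce to. The request/response structs and the
file paths are invented for illustration only (this is not the real FUSE wire
format, nor the .cmd protocol); the sketch just marks where the user/kernel
crossings listed above occur.

/*
 * Schematic userspace handler loop.  All structs and paths are
 * hypothetical; the only point is to count the user/kernel mode
 * transitions per served request.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct req  { uint64_t id; uint64_t offset; uint32_t size; };  /* hypothetical */
struct resp { uint64_t id; uint32_t size; char data[65536]; }; /* hypothetical */

int main(void)
{
	/* hypothetical control file; stands in for /dev/fuse or .cmd */
	int ctl = open("/path/to/control", O_RDWR);
	/* backing storage the handler reads the actual data from */
	int backing = open("/path/to/backing-file", O_RDONLY);

	if (ctl < 0 || backing < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		struct req r;
		struct resp out;
		ssize_t n;

		/* crossing 1: kernel->user, handler reads the queued request */
		if (read(ctl, &r, sizeof(r)) != (ssize_t)sizeof(r))
			break;

		if (r.size > sizeof(out.data))
			r.size = sizeof(out.data);

		/* crossings 2+3: user->kernel->user, I/O on the underlying fs */
		n = pread(backing, out.data, r.size, r.offset);
		if (n < 0)
			n = 0;

		out.id = r.id;
		out.size = (uint32_t)n;

		/* crossing 4: user->kernel, handler writes the reply back */
		if (write(ctl, &out, sizeof(out) - sizeof(out.data) + n) < 0)
			break;
	}
	return 0;
}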