Re: [PATCH v4 5/5] virtiofs: propagate sync() to file server

On Mon, Aug 16, 2021 at 10:11 PM Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
>
> On Mon, Aug 16, 2021 at 09:57:08PM +0300, Amir Goldstein wrote:
> > On Mon, Aug 16, 2021 at 6:29 PM Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> > >
> > > On Sun, Aug 15, 2021 at 05:14:06PM +0300, Amir Goldstein wrote:
> > > > Hi Greg,
> > > >
> > > > Sorry for the late reply, I have some questions about this change...
> > > >
> > > > On Fri, May 21, 2021 at 9:12 AM Greg Kurz <groug@xxxxxxxx> wrote:
> > > > >
> > > > > Even if POSIX doesn't mandate it, Linux users legitimately expect
> > > > > sync() to flush all data and metadata to physical storage when it
> > > > > is located on the same system. This isn't happening with virtiofs,
> > > > > though: sync() inside the guest returns right away even though
> > > > > data still needs to be flushed from the host page cache.
> > > > >
> > > > > This is easily demonstrated by doing the following in the guest:
> > > > >
> > > > > $ dd if=/dev/zero of=/mnt/foo bs=1M count=5K ; strace -T -e sync sync
> > > > > 5120+0 records in
> > > > > 5120+0 records out
> > > > > 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 5.22224 s, 1.0 GB/s
> > > > > sync()                                  = 0 <0.024068>
> > > > > +++ exited with 0 +++
> > > > >
> > > > > and start the following in the host when the 'dd' command completes
> > > > > in the guest:
> > > > >
> > > > > $ strace -T -e fsync /usr/bin/sync virtiofs/foo
> > > > > fsync(3)                                = 0 <10.371640>
> > > > > +++ exited with 0 +++
> > > > >
> > > > > There is actually no good reason not to honor the expected
> > > > > behavior of sync(): it gives the unrealistic impression that
> > > > > virtiofs is super fast and that data has safely landed on HW,
> > > > > which obviously isn't the case.
> > > > >
> > > > > Implement a ->sync_fs() superblock operation that sends a new
> > > > > FUSE_SYNCFS request type for this purpose. Provision a 64-bit
> > > > > placeholder for possible future extensions. Since the file
> > > > > server cannot handle the wait == 0 case, we skip it to avoid a
> > > > > gratuitous roundtrip. Note that this is per-superblock: a
> > > > > FUSE_SYNCFS is sent for the root mount and for each submount.
> > > > >
> > > > > Like with FUSE_FSYNC and FUSE_FSYNCDIR, lack of support for
> > > > > FUSE_SYNCFS in the file server is treated as permanent success.
> > > > > This ensures compatibility with older file servers : the client
> > > > > will get the current behavior of sync() not being propagated to
> > > > > the file server.
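
(For context, the client side of this series boils down to a new
->sync_fs() op along the lines of the sketch below. This is a trimmed
illustration, not the literal patch; the "remember that the server
lacks support" bookkeeping is omitted and -ENOSYS is simply mapped to
success inline.)

static int fuse_sync_fs(struct super_block *sb, int wait)
{
        struct fuse_mount *fm = get_fuse_mount_super(sb);
        struct fuse_syncfs_in inarg;
        FUSE_ARGS(args);
        int err;

        /* The file server cannot handle wait == 0, skip the roundtrip. */
        if (!wait)
                return 0;

        memset(&inarg, 0, sizeof(inarg));       /* 64-bit placeholder field */
        args.opcode = FUSE_SYNCFS;
        args.nodeid = get_node_id(sb->s_root->d_inode);
        args.in_numargs = 1;
        args.in_args[0].size = sizeof(inarg);
        args.in_args[0].value = &inarg;

        err = fuse_simple_request(fm, &args);
        if (err == -ENOSYS)     /* older server: treat as permanent success */
                err = 0;
        return err;
}
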
> > > >
> > > > I wonder - even if the server does not support SYNCFS or if the kernel
> > > > does not trust the server with SYNCFS, fuse_sync_fs() can wait
> > > > until all pending requests up to this call have been completed, either
> > > > before or after submitting the SYNCFS request. No?
> > >
> > > >
> > > > Does virtiofsd track all requests prior to SYNCFS request to make
> > > > sure that they were executed on the host filesystem before calling
> > > > syncfs() on the host filesystem?
> > >
> > > Hi Amir,
> > >
> > > I don't think virtiofsd has any such notion. I would think that
> > > the client should make sure all pending writes have completed and
> > > then send the SYNCFS request.
> > >
> > > Looking at sync_filesystem(), I am assuming the VFS will take care
> > > of flushing out all dirty pages and then call ->sync_fs.
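
(For reference, on the VFS side sync_filesystem() does roughly the
following per superblock; trimmed sketch, see fs/sync.c for the real
thing:)

static int __sync_filesystem(struct super_block *sb, int wait)
{
        /* Flush (and for wait != 0, wait on) dirty inode data first. */
        if (wait)
                sync_inodes_sb(sb);
        else
                writeback_inodes_sb(sb, WB_REASON_SYNC);

        /* Only then give the filesystem a chance to sync its own state. */
        if (sb->s_op->sync_fs)
                sb->s_op->sync_fs(sb, wait);

        return __sync_blockdev(sb->s_bdev, wait);
}
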
> > >
> > > Having said that, I think fuse queues writeback requests internally
> > > and signals completion of writeback to the mm (end_page_writeback()).
> > > And that's why fuse_fsync() has a notion of waiting for all pending
> > > writes to finish on an inode (fuse_sync_writes()).
> > >
> > > So I think you have raised a good point. That is, if there are pending
> > > writes at the time of syncfs(), we don't seem to have a notion of
> > > first waiting for all these writes to finish before we send the
> > > FUSE_SYNCFS request to the server.
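
(The per-inode wait mentioned above is roughly this pattern in
fuse_fsync(); heavily trimmed sketch. The open question is whether
->sync_fs() needs an equivalent wait across all inodes of the
superblock before sending FUSE_SYNCFS:)

static int fuse_fsync(struct file *file, loff_t start, loff_t end,
                      int datasync)
{
        struct inode *inode = file->f_mapping->host;
        int err;

        inode_lock(inode);

        /* Start writeback and wait for the page cache side of it. */
        err = file_write_and_wait_range(file, start, end);
        if (err)
                goto out;

        /*
         * fuse ends page writeback as soon as the request is queued
         * internally, so also wait for those queued writes to actually
         * complete before sending FUSE_FSYNC.
         */
        fuse_sync_writes(inode);

        err = fuse_fsync_common(file, start, end, datasync, FUSE_FSYNC);
out:
        inode_unlock(inode);
        return err;
}
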
> >
> > Maybe, but I was not referring to inode writeback requests.
> > I had assumed that those were handled correctly.
> > I was referring to pending metadata requests.
> >
> > ->sync_fs() in a local fs also takes care of flushing metadata
> > (e.g. the journal). I assumed that virtiofsd implements the FUSE_SYNCFS
> > request by calling syncfs() on the host fs,
>
> Yes virtiofsd calls syncfs() on host fs.
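
(I.e. something along the lines of the sketch below on the server
side; a minimal illustration loosely modelled on the libfuse lowlevel
API and syncfs(2), with a hypothetical open_inode_fd() helper, not the
actual virtiofsd code:)

#define _GNU_SOURCE
#define FUSE_USE_VERSION 34
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <fuse_lowlevel.h>

/* Handle FUSE_SYNCFS by calling syncfs() on any fd that lives on the
 * exported host filesystem. */
static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
{
        /* Hypothetical helper: returns an O_RDONLY fd for the inode,
         * or a negative errno on failure. */
        int fd = open_inode_fd(req, ino, O_RDONLY);
        int err = 0;

        if (fd < 0) {
                err = -fd;
        } else {
                if (syncfs(fd) == -1)
                        err = errno;
                close(fd);
        }
        fuse_reply_err(req, err);
}
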
>
> > but if it does that, then
> > there is no guarantee that all metadata requests have reached the
> > host fs from virtiofs unless the client or server takes care of waiting
> > for all pending metadata requests before issuing FUSE_SYNCFS.
>
> We don't have any journal in virtiofs. In fact, we don't seem to
> cache any metadata, except probably in the "-o writeback" case,
> where we can trust local time stamps.
>
> If "-o writeback" is not enabled, I am not sure what metadata
> we will be caching that we need to worry about. Do you have
> something specific in mind? (At least from the virtiofs point of
> view, I can't think of any metadata we are caching that we need
> to worry about.)

No, I don't see a problem.
I guess I was confused by the semantics.
Thanks for clarifying.

Amir.


