Re: [PATCH 19/26] netfs: New writeback implementation

On 29/03/2024 10:34, Naveen Mamindlapalli wrote:
-----Original Message-----
From: David Howells <dhowells@xxxxxxxxxx>
Sent: Thursday, March 28, 2024 10:04 PM
To: Christian Brauner <christian@xxxxxxxxxx>; Jeff Layton <jlayton@xxxxxxxxxx>;
	Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx>; Dominique Martinet <asmadeus@xxxxxxxxxxxxx>
Cc: David Howells <dhowells@xxxxxxxxxx>; Matthew Wilcox <willy@xxxxxxxxxxxxx>;
	Steve French <smfrench@xxxxxxxxx>; Marc Dionne <marc.dionne@xxxxxxxxxxxx>;
	Paulo Alcantara <pc@xxxxxxxxxxxxx>; Shyam Prasad N <sprasad@xxxxxxxxxxxxx>;
	Tom Talpey <tom@xxxxxxxxxx>; Eric Van Hensbergen <ericvh@xxxxxxxxxx>;
	Ilya Dryomov <idryomov@xxxxxxxxx>; netfs@xxxxxxxxxxxxxxx;
	linux-cachefs@xxxxxxxxxx; linux-afs@xxxxxxxxxxxxxxxxxxx;
	linux-cifs@xxxxxxxxxxxxxxx; linux-nfs@xxxxxxxxxxxxxxx;
	ceph-devel@xxxxxxxxxxxxxxx; v9fs@xxxxxxxxxxxxxxx;
	linux-erofs@xxxxxxxxxxxxxxxx; linux-fsdevel@xxxxxxxxxxxxxxx;
	linux-mm@xxxxxxxxx; netdev@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
	Latchesar Ionkov <lucho@xxxxxxxxxx>; Christian Schoenebeck <linux_oss@xxxxxxxxxxxxx>
Subject: [PATCH 19/26] netfs: New writeback implementation

The current netfslib writeback implementation creates writeback requests of
contiguous folio data and then separately tiles subrequests over the space
twice, once for the server and once for the cache.  This creates a few
issues:

  (1) Every time there's a discontiguity, or a switch between writing to
      only one destination and writing to both, a new request must be
      created.  This makes it harder to do vectored writes.

  (2) The folios don't have the writeback mark removed until the end of the
      request - and a request could be hundreds of megabytes.

  (3) In future, I want to support a larger cache granularity, which will
      require aggregation of some folios that contain unmodified data (which
      only need to go to the cache) and some which contain modifications
      (which need to be uploaded and stored to the cache) - but, currently,
      these are treated as discontiguous.

There's also a move to get everyone to use writeback_iter() to extract
writable folios from the pagecache (the loop shape it imposes is sketched
after this list).  That said, writeback_iter() currently has some issues
that make it less than ideal:

  (1) there's no way to cancel the iteration, even if you find a "temporary"
      error that means the current folio and all subsequent folios are going
      to fail;

  (2) there's no way to filter the folios being written back - something
      that will impact Ceph with its ordered snap system;

  (3) and if you get a folio you can't immediately deal with (say you need
      to flush the preceding writes), you are left with a folio hanging in
      the locked state for the duration, when really we should unlock it and
      relock it later.
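
For reference, this is the loop shape writeback_iter() imposes on a
->writepages() implementation (a minimal sketch; example_write_folio() is
an invented stand-in for the per-folio work, not part of the API):

#include <linux/pagemap.h>
#include <linux/writeback.h>

static int example_write_folio(struct folio *folio,
			       struct writeback_control *wbc)
{
	/* Invented per-folio work: writeback_iter() has already cleared
	 * the dirty flag for I/O, so start writeback, drop the folio
	 * lock and (for this no-op sketch) immediately end writeback.
	 */
	folio_start_writeback(folio);
	folio_unlock(folio);
	folio_end_writeback(folio);
	return 0;
}

static int example_writepages(struct address_space *mapping,
			      struct writeback_control *wbc)
{
	struct folio *folio = NULL;
	int error = 0;

	/* Each pass hands back the next dirty folio, locked; the previous
	 * folio is passed back in to advance.  The loop cannot be broken
	 * out of early - complaint (1) - and offers no way to skip or
	 * filter folios - complaint (2).
	 */
	while ((folio = writeback_iter(mapping, wbc, folio, &error)))
		error = example_write_folio(folio, wbc);

	return error;
}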

In this new implementation, I use writeback_iter() to pump folios,
progressively creating two parallel, but separate streams and cleaning up
the finished folios as the subrequests complete.  Either or both streams
can contain gaps, and the subrequests in each stream can be of variable
size, don't need to align with each other and don't need to align with the
folios.

Indeed, subrequests can cross folio boundaries, may cover several folios
or a folio may be spanned by multiple subrequests, e.g.:

          +---+---+-----+-----+---+----------+
Folios:   |   |   |     |     |   |          |
          +---+---+-----+-----+---+----------+

            +------+------+     +----+----+
Upload:     |      |      |.....|    |    |
            +------+------+     +----+----+

          +------+------+------+------+------+
Cache:    |      |      |      |      |      |
          +------+------+------+------+------+

The progressive subrequest construction permits the algorithm to be
preparing both the next upload to the server and the next write to the
cache whilst the previous ones are already in progress.  Throttling can be
applied to control the rate of production of subrequests - and, in any
case, we probably want to write them to the server in ascending order,
particularly if the file will be extended.

Content crypto can also be prepared at the same time as the subrequests and
run asynchronously, with the prepped requests being stalled until the
crypto catches up with them.  This might also be useful for transport
crypto, but that happens at a lower layer, so probably would be harder to
pull off.

The algorithm is split into three parts:

  (1) The issuer.  This walks through the data, packaging it up, encrypting
      it and creating subrequests.  The part of this that generates
      subrequests only deals with file positions and spans and so is usable
      for DIO/unbuffered writes as well as buffered writes.

  (2) The collector.  This asynchronously collects completed subrequests,
      unlocks folios, frees crypto buffers and performs any retries.  It
      runs in a work queue so that the issuer can return to the caller for
      writeback (so that the VM can have its kswapd thread back) or async
      writes; a sketch of this hand-off follows the list.

  (3) The retryer.  This pauses the issuer, waits for all outstanding
      subrequests to complete and then goes through the failed subrequests
      to reissue them.  This may involve reprepping them (with cifs, the
      credits must be renegotiated, and a subrequest may need splitting),
      and doing RMW for content crypto if there's a conflicting change on
      the server.
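
As a rough illustration of how part (2) detaches collection from the
issuer via a work item (a sketch only; the example_* names are invented
and are not the names used by this patch):

#include <linux/workqueue.h>

struct example_write_request {
	struct work_struct work;	/* runs the collector */
	/* ... stream lists, subrequest queues, folio ranges ... */
};

/* Workqueue context: reap completed subrequests in submission order,
 * clear the writeback mark on and unlock the folios they cover, free
 * crypto buffers and queue any retries.
 */
static void example_collect_work(struct work_struct *work)
{
	struct example_write_request *wreq =
		container_of(work, struct example_write_request, work);

	/* Collection and retry logic elided in this sketch. */
	(void)wreq;
}

static void example_init_request(struct example_write_request *wreq)
{
	INIT_WORK(&wreq->work, example_collect_work);
}

/* Called as subrequests complete: punt collection to the workqueue so
 * the issuer (e.g. kswapd doing writeback) can return immediately.
 */
static void example_subrequest_done(struct example_write_request *wreq)
{
	queue_work(system_unbound_wq, &wreq->work);
}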

[!] Note that some of the functions are prefixed with "new_" to avoid
clashes with existing functions.  These will be renamed in a later patch
that cuts over to the new algorithm.

Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
cc: Jeff Layton <jlayton@xxxxxxxxxx>
cc: Eric Van Hensbergen <ericvh@xxxxxxxxxx>
cc: Latchesar Ionkov <lucho@xxxxxxxxxx>
cc: Dominique Martinet <asmadeus@xxxxxxxxxxxxx>
cc: Christian Schoenebeck <linux_oss@xxxxxxxxxxxxx>
cc: Marc Dionne <marc.dionne@xxxxxxxxxxxx>
cc: v9fs@xxxxxxxxxxxxxxx
cc: linux-afs@xxxxxxxxxxxxxxxxxxx
cc: netfs@xxxxxxxxxxxxxxx
cc: linux-fsdevel@xxxxxxxxxxxxxxx

[..snip..]

+/*
+ * Begin a write operation for writing through the pagecache.
+ */
+struct netfs_io_request *new_netfs_begin_writethrough(struct kiocb *iocb, size_t len)
+{
+	struct netfs_io_request *wreq = NULL;
+	struct netfs_inode *ictx = netfs_inode(file_inode(iocb->ki_filp));
+
+	mutex_lock(&ictx->wb_lock);
+
+	wreq = netfs_create_write_req(iocb->ki_filp->f_mapping, iocb->ki_filp,
+				      iocb->ki_pos, NETFS_WRITETHROUGH);
+	if (IS_ERR(wreq))
+		mutex_unlock(&ictx->wb_lock);
+
+	wreq->io_streams[0].avail = true;
+	trace_netfs_write(wreq, netfs_write_trace_writethrough);

Missing mutex_unlock() before return.


mutex_unlock() happens in new_netfs_end_writethrough()
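
That is, on the success path the lock is deliberately held across the
whole writethrough window and released by the matching end call, along
these lines (a self-contained sketch; the example_* names are invented
and are not the netfs API):

#include <linux/mutex.h>

struct example_inode_ctx {
	struct mutex wb_lock;	/* serialises writeback on this inode */
};

/* Success path: returns with wb_lock still held. */
static void example_begin_writethrough(struct example_inode_ctx *ictx)
{
	mutex_lock(&ictx->wb_lock);
	/* ... set up the write request ... */
}

/* Every successful begin must be paired with this to drop the lock. */
static void example_end_writethrough(struct example_inode_ctx *ictx)
{
	/* ... wait for / collect the write request ... */
	mutex_unlock(&ictx->wb_lock);
}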

Thanks,
Naveen





