Re: [PATCH 3/3] ceph: do not update snapshot context when there is no new snapshot

On Tue, 2022-02-15 at 20:23 +0800, xiubli@xxxxxxxxxx wrote:
> From: Xiubo Li <xiubli@xxxxxxxxxx>
> 
> There is no need to update the snapshot context in either of the
> following two cases:
> 1: my context seq matches the realm's seq and the realm has no parent.
> 2: my context seq equals or exceeds my parent's; this works because
>    rebuild_snap_realms() works _downward_ in the hierarchy after
>    each update.
> 
> This fix avoids needlessly calling ceph_queue_cap_snap() for inodes
> where it makes no sense, for example:
> 
> Take 6 directories like:
> 
> /dir_X1/dir_X2/dir_X3/
> /dir_Y1/dir_Y2/dir_Y3/
> 
> First, make a snapshot under /dir_X1/dir_X2/.snap/snap_X2, then
> make a root snapshot under /.snap/root_snap. From then on, every
> time a snapshot is made under /dir_Y1/..., the kclient will rebuild
> the snap context for the snap_X2 realm and then try to queue cap
> snaps for dir_Y2 and dir_Y3, which makes no sense.
> 
> That's because snap_X2's seq is 2 and root_snap's seq is 3. When
> creating a new snapshot under /dir_Y1/..., the new seq will be 4,
> and the MDS will send the kclient a snapshot backtrace _downward_
> in the hierarchy: seqs 4, 3. ceph_update_snap_trace() then always
> rebuilds starting from the last realm, i.e. root_snap. So later,
> when rebuilding the snap context, it always rebuilds the snap_X2
> realm as well and tries to queue cap snaps for all the inodes in
> the snap_X2 realm, and we see logs like:
> 
> "ceph:  queue_cap_snap 00000000a42b796b nothing dirty|writing"
> 
> URL: https://tracker.ceph.com/issues/44100
> Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> ---
>  fs/ceph/snap.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
> index d075d3ce5f6d..1f24a5de81e7 100644
> --- a/fs/ceph/snap.c
> +++ b/fs/ceph/snap.c
> @@ -341,14 +341,16 @@ static int build_snap_context(struct ceph_snap_realm *realm,
>  		num += parent->cached_context->num_snaps;
>  	}
>  
> -	/* do i actually need to update?  not if my context seq
> -	   matches realm seq, and my parents' does to.  (this works
> -	   because we rebuild_snap_realms() works _downward_ in
> -	   hierarchy after each update.) */
> +	/* do i actually need to update? No need when any of the following
> +	 * two cases:
> +	 * #1: if my context seq matches realm's seq and realm has no parent.
> +	 * #2: if my context seq equals or is larger than my parent's, this
> +	 *     works because we rebuild_snap_realms() works _downward_ in
> +	 *     hierarchy after each update.
> +	 */
>  	if (realm->cached_context &&
> -	    realm->cached_context->seq == realm->seq &&
> -	    (!parent ||
> -	     realm->cached_context->seq >= parent->cached_context->seq)) {
> +	    ((realm->cached_context->seq == realm->seq && !parent) ||
> +	     (parent && realm->cached_context->seq >= parent->cached_context->seq))) {
>  		dout("build_snap_context %llx %p: %p seq %lld (%u snaps)"
>  		     " (unchanged)\n",
>  		     realm->ino, realm, realm->cached_context,

I've never had a good feel for the snaprealm handling code, so I'll
leave it to others who do to comment on whether your logic makes sense.

Either way, I don't think this patch depends on the earlier two, does
it? The comment is a nice addition though.

Acked-by: Jeff Layton <jlayton@xxxxxxxxxx>
