Re: [PATCH v3 4/4] ceph: add truncate size handling support for fscrypt

[...]

@@ -2473,7 +2621,23 @@ int __ceph_setattr(struct inode *inode, struct iattr *attr, struct ceph_iattr *c
  		req->r_args.setattr.mask = cpu_to_le32(mask);
  		req->r_num_caps = 1;
  		req->r_stamp = attr->ia_ctime;
+		if (fill_fscrypt) {
+			err = fill_fscrypt_truncate(inode, req, attr);
+			if (err)
+				goto out;
+		}
+
+		/*
+		 * The truncate will return -EAGAIN when someone has
+		 * updated the last block before the MDS holds the
+		 * xlock for the FILE lock. We need to retry in that
+		 * case.
+		 */
  		err = ceph_mdsc_do_request(mdsc, NULL, req);
+		if (err == -EAGAIN) {
+			dout("setattr %p result=%d (%s locally, %d remote), retry it!\n",
+			     inode, err, ceph_cap_string(dirtied), mask);
+			goto retry;
+		}
The rest looks reasonable. We may want to cap the number of retries in
case something goes really wrong, or in case of a livelock with a
competing client. I'm not sure what a reasonable number of tries would
be though -- 5? 10? 100? We may want to benchmark how long this rmw
operation takes and then use that to determine a reasonable number of
tries.

<7>[  330.648749] ceph:  setattr 00000000197f0d87 issued pAsxLsXsxFsxcrwb
<7>[  330.648752] ceph:  setattr 00000000197f0d87 size 11 -> 2
<7>[  330.648756] ceph:  setattr 00000000197f0d87 mtime 1635574177.43176541 -> 1635574210.35946684
<7>[  330.648760] ceph:  setattr 00000000197f0d87 ctime 1635574177.43176541 -> 1635574210.35946684 (ignored)
<7>[  330.648765] ceph:  setattr 00000000197f0d87 ATTR_FILE ... hrm!
...

<7>[  330.653696] ceph:  fill_fscrypt_truncate 00000000197f0d87 size dropping cap refs on Fr
...

<7>[  330.697464] ceph:  setattr 00000000197f0d87 result=0 (Fx locally, 4128 remote)

From the timestamps above, the whole operation takes around 50 ms.

Shall we retry 20 times?
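
At ~50 ms per attempt, 20 tries would bound the worst case at roughly
20 x 50 ms ≈ 1 second before we give up.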


If you run out of tries, you could probably just return -EAGAIN in that
case. That's not listed in the truncate(2) manpage, but it seems like a
reasonable way to handle that sort of problem.
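
Putting the two suggestions together, here's a rough sketch of what
that could look like on top of this patch. CEPH_TRUNC_MAX_RETRIES and
the "tries" counter are invented names here, purely for illustration:

	/* Hypothetical cap; tune once the rmw cost is benchmarked. */
	#define CEPH_TRUNC_MAX_RETRIES	20

		int tries = 0;
	retry:
		/* ... build and fill the setattr request as above ... */
		err = ceph_mdsc_do_request(mdsc, NULL, req);
		if (err == -EAGAIN) {
			if (++tries >= CEPH_TRUNC_MAX_RETRIES) {
				/* Out of tries: let -EAGAIN propagate
				 * to userspace so the caller can retry. */
				goto out;
			}
			dout("setattr %p result=%d, retry %d\n",
			     inode, err, tries);
			goto retry;
		}

That way a livelock with a competing client degrades into a bounded
delay plus an error the caller can act on, rather than the kernel
spinning indefinitely.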

[...]



