Hi Jeff,

This could resolve the issue I mentioned in the fscrypt mail thread:

cp: cannot access './dir___683': No buffer space available
cp: cannot access './dir___686': No buffer space available
cp: cannot access './dir___687': No buffer space available
cp: cannot access './dir___688': No buffer space available
cp: cannot access './dir___689': No buffer space available
cp: cannot access './dir___693': No buffer space available
...

[root@lxbceph1 kcephfs]# diff ./dir___997 /data/backup/kernel/dir___997
diff: ./dir___997: No buffer space available

The dmesg logs:

<7>[ 1256.918228] ceph: do_getattr inode 0000000089964a71 mask AsXsFs mode 040755
<7>[ 1256.918232] ceph: __ceph_caps_issued_mask ino 0x100000009be cap 0000000014f1c64b issued pAsLsXsFs (mask AsXsFs)
<7>[ 1256.918237] ceph: __touch_cap 0000000089964a71 cap 0000000014f1c64b mds0
<7>[ 1256.918250] ceph: readdir 0000000089964a71 file 00000000065cb689 pos 0
<7>[ 1256.918254] ceph: readdir off 0 -> '.'
<7>[ 1256.918258] ceph: readdir off 1 -> '..'
<4>[ 1256.918262] fscrypt (ceph, inode 1099511630270): Error -105 getting encryption context
<7>[ 1256.918269] ceph: readdir 0000000089964a71 file 00000000065cb689 pos 2
<4>[ 1256.918273] fscrypt (ceph, inode 1099511630270): Error -105 getting encryption context
<7>[ 1256.918288] ceph: release inode 0000000089964a71 dir file 00000000065cb689
<7>[ 1256.918310] ceph: __ceph_caps_issued_mask ino 0x1 cap 00000000aa2afb8b issued pAsLsXsFs (mask Fs)
<7>[ 1257.574593] ceph: mdsc delayed_work
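Error -105 here is -ENOBUFS, the same "No buffer space available" that cp and
diff report above. A minimal sketch of the suspected mechanism, using
hypothetical names rather than the real ceph structures: a member that an
older MDS never fills in keeps whatever stale bytes the unzeroed pages held,
and a later consumer trips over the garbage value.

/*
 * Hypothetical sketch, not the real ceph code: a reply member that an
 * older peer never fills in keeps stale page contents unless the
 * buffer is zeroed at allocation time.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/types.h>

struct demo_entry {		/* hypothetical stand-in for a reply entry */
	u32 extra_len;		/* newer-protocol member an old MDS never sets */
};

static int demo_check(gfp_t flags)
{
	struct demo_entry *e = (void *)__get_free_pages(flags, 0);
	int ret;

	if (!e)
		return -ENOMEM;
	/*
	 * Without __GFP_ZERO, e->extra_len holds whatever the page last
	 * contained, so a consumer that validates it may fail with an
	 * error like -ENOBUFS; with GFP_KERNEL | __GFP_ZERO it reads 0.
	 */
	ret = e->extra_len ? -ENOBUFS : 0;
	free_pages((unsigned long)e, 0);
	return ret;
}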
On 2/17/22 4:15 PM, xiubli@xxxxxxxxxx wrote:

From: Xiubo Li <xiubli@xxxxxxxxxx>

This could potentially cause a bug in the future when running against an
old ceph version, since some members may be skipped and left
uninitialized in handle_reply.

Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
---
 fs/ceph/mds_client.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 93e5e3c4ba64..c3b1e73c5fbf 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -2286,7 +2286,8 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
 	order = get_order(size * num_entries);
 	while (order >= 0) {
 		rinfo->dir_entries = (void*)__get_free_pages(GFP_KERNEL |
-							     __GFP_NOWARN,
+							     __GFP_NOWARN |
+							     __GFP_ZERO,
 							     order);
 		if (rinfo->dir_entries)
 			break;
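For readers without the tree handy, here is the allocation pattern the hunk
lands in, as a minimal sketch (the function name is made up, and the fallback
tail after "break;" is assumed from the usual pattern rather than quoted from
the hunk): try one large physically contiguous buffer first, then retry at
ever smaller orders. With __GFP_ZERO the page allocator hands back
zero-filled pages, so any member handle_reply skips reads as 0 instead of
stale data.

/*
 * Minimal sketch, hypothetical function name.  The loop after "break;"
 * is reconstructed, not quoted from the patch context.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static void *alloc_reply_buffer(size_t entry_size, unsigned int num_entries)
{
	int order = get_order(entry_size * num_entries);

	while (order >= 0) {
		void *buf = (void *)__get_free_pages(GFP_KERNEL |
						     __GFP_NOWARN |	/* high orders may fail; skip the OOM splat */
						     __GFP_ZERO,	/* unset members now read as 0 */
						     order);
		if (buf)
			return buf;
		order--;	/* retry with half the size */
	}
	return NULL;
}

Asking the allocator for zeroed pages keeps the fix in one place, which seems
less error-prone than adding a memset() to every reply-parsing path.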