Re: Read operation gets EOF return when there is multi-client read/write after linux 5.16-rc1

On 1/24/24 11:25, Frank Hsiao 蕭法宣 wrote:
When multiple Ceph kernel clients perform read/write on the same file, the read
operation (ceph_sync_read) returns EOF (ret = 0) even though the file has been
written by another client.

My environment uses Ceph Quincy (v17.2.6) and mounts CephFS via the Ceph kernel
client. On the client side, I use Samba (v4.18.8) to export the folder as an SMB
share and test it with smbtorture. The test case is smb2.rw.rw1, with the
following failure message:

test: samba4.smb2.rw.rw1
Checking data integrity over 10 ops
read failed(NT_STATUS_END_OF_FILE)
failure: samba4.smb2.rw.rw1 [
Exception: read 0, expected 440
]

After some testing, I figured out that the failure only happens on Linux kernel
versions >= 5.16-rc1, specifically after commit
c3d8e0b5de487a7c462781745bc17694a4266696. Kernel logs below (on 5.16-rc1):


[Wed Jan 10 09:44:56 2024] [153221] ceph_read_iter:1559: ceph:  aio_sync_read
00000000789dccee 100000010ef.fffffffffffffffe 0~440 got cap refs on Fr
[Wed Jan 10 09:44:56 2024] [153221] ceph_sync_read:852: ceph:  sync_read on file
00000000d9e861fb 0~440
[Wed Jan 10 09:44:56 2024] [153221] ceph_sync_read:913: ceph:  sync_read 0~440 got 440 i_size 0
[Wed Jan 10 09:44:56 2024] [153221] ceph_sync_read:966: ceph:  sync_read result 0 retry_op 2

...

[Wed Jan 10 09:44:57 2024] [153221] ceph_read_iter:1559: ceph:  aio_sync_read
00000000789dccee 100000010ef.fffffffffffffffe 0~440 got cap refs on Fr
[Wed Jan 10 09:44:57 2024] [153221] ceph_sync_read:852: ceph:  sync_read on file
00000000d9e861fb 0~0

Locally I tried to reproduce it but failed; the following are my logs:

<7>[ 3524.212187] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_read_iter: sync 00000000b6ea7952 10000000000.fffffffffffffffe 0~8192 got cap refs on Fr
<7>[ 3524.212194] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_sync_read: on file 00000000a06cac30 0~2000
<7>[ 3524.212201] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: on inode 00000000b6ea7952 10000000000.fffffffffffffffe 0~2000
<7>[ 3524.212209] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: orig 0~8192 reading 0~8192
<7>[ 3524.213008] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: 0~8192 got 97 i_size 90
<7>[ 3524.213100] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: result 90 retry_op 2
<7>[ 3524.213107] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_read_iter: 00000000b6ea7952 10000000000.fffffffffffffffe dropping cap refs on Fr = 90
...

<7>[ 3524.213151] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_do_getattr: inode 00000000b6ea7952 10000000000.fffffffffffffffe mask Fsr mode 0100644
<7>[ 3524.213159] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_caps_issued: 00000000b6ea7952 10000000000.fffffffffffffffe cap 0000000067201935 issued pAsLsXsFr
<7>[ 3524.213193] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_mdsc_do_request: do_request on 0000000008917ba4
<7>[ 3524.213201] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_mdsc_submit_request: submit_request on 0000000008917ba4 for inode 0000000000000000
...

<7>[ 3524.345996] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_fill_inode: 00000000b6ea7952 10000000000.fffffffffffffffe mode 0100644 uid.gid 1000.1000
<7>[ 3524.346004] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_fill_file_size: size 90 -> 97
...

<7>[ 3524.346235] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_do_getattr: result=0
<7>[ 3524.346236] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_check_caps:  mds0 cap 0000000067201935 used - issued pAsLsXsFr implemented pAsLsXsFscr revoking Fsc
<7>[ 3524.346241] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] ceph_read_iter: hit hole, ppos 90 < size 97, reading more
...
<7>[ 3524.346307] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: on inode 00000000b6ea7952 10000000000.fffffffffffffffe 5a~1f9f
<7>[ 3524.346313] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: orig 90~8095 reading 90~8095
...

<7>[ 3524.370645] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: 90~8095 got 7 i_size 97
<7>[ 3524.370776] ceph: [49172eb5-aa13-4408-956b-e10a6d02b0f6 4228] __ceph_sync_read: result 7 retry_op 2

As you can see, the first read returns only 90 bytes even though the actual file size is 97. ceph_read_iter() then calls __ceph_do_getattr() to fetch the real i_size from the MDS, and finally retries the read and gets the last 7 bytes.
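That successful retry can be modeled with a short standalone sketch (plain userspace C, not the kernel code; the function names and parameters here are made up for illustration). The first read is capped at the stale cached i_size, and once the getattr has refreshed i_size the retry picks up the remaining bytes:

```c
#include <assert.h>

/* Hypothetical model of one capped read: the OSD returns up to
 * osd_bytes, but the result is clamped to the cached i_size. */
long sync_read_capped(long pos, long want, long osd_bytes, long i_size)
{
    long got = osd_bytes < want ? osd_bytes : want;
    if (pos + got > i_size)         /* cap at the (possibly stale) i_size */
        got = i_size - pos;
    return got > 0 ? got : 0;
}

/* First pass uses a stale i_size; a getattr then refreshes it and
 * the read is retried for the remaining bytes ("reading more"). */
long read_with_retry(long want, long osd_bytes,
                     long stale_i_size, long real_i_size)
{
    long pos = sync_read_capped(0, want, osd_bytes, stale_i_size);
    if (pos < real_i_size)          /* "hit hole, ppos < size" */
        pos += sync_read_capped(pos, want - pos,
                                osd_bytes - pos, real_i_size);
    return pos;                     /* total bytes returned to the caller */
}
```

With the numbers from the logs above (an 8192-byte request, 97 bytes on the OSD, cached i_size 90), the first pass returns 90 and the retry adds the final 7.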

Locally my test succeeded.

Could you upload more detailed debug logs? Did you see ceph_read_iter() being retried?

Thanks

- Xiubo





The logs indicate that:
1. ceph_sync_read may read data while i_size is stale in a simultaneous read/write situation.
2. The commit in 5.16-rc1 caps ret to i_size and sets retry_op = CHECK_EOF.
3. On retry, ceph_sync_read gets len = 0, since the iov count has already been consumed by copy_page_to_iter.
4. ceph_read_iter returns 0.
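A minimal standalone sketch of the sequence above (illustrative userspace C under my reading of the code; the struct and the iov_left counter are made up, not the kernel's real data structures):

```c
#include <assert.h>

/* Hypothetical model: iov_left stands in for the caller's iov_iter. */
struct read_state {
    long iov_left; /* bytes remaining in the iov_iter */
    long i_size;   /* cached inode size (stale under concurrent writes) */
};

/* ceph_sync_read-like step (offset 0 for simplicity): pages copied by
 * copy_page_to_iter consume the iov even when the final result is then
 * capped to the stale i_size. */
long sync_read_step(struct read_state *st, long osd_bytes)
{
    long want = st->iov_left;
    long got  = osd_bytes < want ? osd_bytes : want;
    st->iov_left -= got;        /* iov count modified by the copy */
    if (got > st->i_size)       /* the 5.16-rc1 commit caps ret   */
        got = st->i_size;       /* -> ret = 0, retry_op = CHECK_EOF */
    return got;
}

/* ceph_read_iter-like wrapper: on ret == 0 it refreshes i_size and
 * retries, but by then the iov is empty, so the retry reads 0~0. */
long read_iter_model(long osd_bytes, long stale_i_size, long real_i_size)
{
    struct read_state st = { osd_bytes, stale_i_size };
    long ret = sync_read_step(&st, osd_bytes);
    if (ret == 0) {
        st.i_size = real_i_size;              /* getattr from the MDS */
        ret = sync_read_step(&st, osd_bytes); /* len is now 0 (0~0)   */
    }
    return ret; /* 0: the caller sees EOF even though the data exists */
}
```

With the values from the failing logs (440 bytes available, cached i_size 0), the model returns 0, matching the NT_STATUS_END_OF_FILE result; with a fresh i_size it returns the full 440.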

I'm not sure if my understanding is correct. As a reference, here is my simple
patch; I would welcome comments. The purpose of the patch is to prevent the
sync read handler from copying pages when ret > i_size.

Thanks.


diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 220a41831b46..5897f52ee998 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -926,6 +926,9 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,

                 idx = 0;
                 left = ret > 0 ? ret : 0;
+               if (left > i_size) {
+                       left = i_size;
+               }
                 while (left > 0) {
                         size_t len, copied;
                         page_off = off & ~PAGE_MASK;
@@ -952,7 +955,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
                         break;
         }

-       if (off > iocb->ki_pos) {
+       if (off > iocb->ki_pos || i_size == 0) {
                 if (off >= i_size) {
                         *retry_op = CHECK_EOF;
                         ret = i_size - iocb->ki_pos;




