Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> > ++		netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false);
>
> So here, I have
>
> ++		netfs_read_subreq_terminated(&rdata->subreq, rdata->result, true);
>
> with the third argument being 'true' instead of 'false' as in yours.
>
> The reason? That's what commit a68c74865f51 ("cifs: Fix SMB1
> readv/writev callback in the same way as SMB2/3") did when it moved
> the (originally) netfs_subreq_terminated() into the worker, and it
> changed the 'was_async' argument from "false" to a "true".

As part of these changes, the callback to netfslib from the SMB1 transport
variant is now delegated to a separate worker thread by cifs_readv_callback()
rather than being run in the cifs network-processing thread (e.g. as is done
by smb2_readv_worker() for SMB2/3 in smb2pdu.c), so it's better to pass
"false" here.

All that argument does is tell netfslib whether it may do cleanup processing
and retrying in the calling thread (if "false") or whether it needs to
offload that to another thread (if "true").  I should probably rename the
argument from "was_async" to something more explanatory.

Passing "true" here causes the already-offloaded processing to be offloaded
a second time, unnecessarily.  It shouldn't break things, though.

> > +		rdata->subreq.transferred += rdata->got_bytes;
> > -		netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false);
> > ++		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress);
>
> where did this trace_netfs_sreq() come from?

It got copied across with other lines when syncing the code with
smb2_readv_callback() whilst attempting the merge resolution.  It's something
that got missed when porting the changes I'd made to SMB2/3 over to SMB1.
It should have been deferred to a follow-up patch.
> > --- a/fs/smb/client/smb2pdu.c
> > +++ b/fs/smb/client/smb2pdu.c
> > @@@ -4614,6 -4613,10 +4613,8 @@@ smb2_readv_callback(struct mid_q_entry
> >  				    server->credits, server->in_flight,
> >  				    0, cifs_trace_rw_credits_read_response_clear);
> >  	rdata->credits.value = 0;
> > +	rdata->subreq.transferred += rdata->got_bytes;
> > -	if (rdata->subreq.start + rdata->subreq.transferred >= rdata->subreq.rreq->i_size)
> > -		__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
> > +	trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress);
>
> And where did this conflict resolution come from? I'm not seeing why
> it removes that NETFS_SREQ_HIT_EOF bit logic..

A fix that went upstream via SteveF's tree rather than Christian's tree added
the NETFS_SREQ_HIT_EOF handling separately:

	1da29f2c39b67b846b74205c81bf0ccd96d34727
	netfs, cifs: Fix handling of short DIO read

The code that was added to twiddle NETFS_SREQ_HIT_EOF is in the source, just
above the lines in the hunk quoted above:

	if (rdata->result == -ENODATA) {
		__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
		rdata->result = 0;
	} else {
		size_t trans = rdata->subreq.transferred + rdata->got_bytes;
		if (trans < rdata->subreq.len &&
		    rdata->subreq.start + trans == ictx->remote_i_size) {
			__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
			rdata->result = 0;
		}
	}

The two lines removed in the example resolution are therefore redundant and
should have been removed, but weren't.

David