Hi Rohith,

Take a look at the patches at the top of my cifs-experimental branch. I put in a ->clamp_length() method and used that to chop up read requests into subrequests according to credits and rsize, rather than trying to limit the size of a read in ->issue_read(). It behaves a lot better now: I see the VM creating 8M requests that get broken up into pairs of 4M read RPCs issued in parallel. (There's a rough sketch of the shape of this at the end of this mail.)

I've taken the idea of allowing the netfs to propose larger allocations for the request and subrequest structs and, in effect, merged the cifs_readdata struct into the netfs_io_subrequest struct, reducing the overhead a bit. I moved the cifsFileInfo field out of the subrequest and into a wrapper around the request struct, so that it's carried in common for all the subreqs (also sketched below). Possibly some other readdata fields could be eliminated too, as being superfluous to the fields in the netfs_io_subrequest struct: offset, got_bytes, bytes and result, for example.

There are a couple of problems with splice write, though, at least one of which is probably due to the iteratorisation.

Firstly, xfstests now gets as far as generic/013 and then reaches a point where one of the threads just sits there sending the same splice write over and over again and getting a zero result back. Running splice by hand, however, shows no problem, and because it's in fsstress, I think it's really hard to work out how it gets into this state :-/.

The other issue is that splicing to an empty file works, but running another splice to the same file results in the server giving STATUS_ACCESS_DENIED when cifs_write_begin() tries to read from the file:

    7 0.009485249 192.168.6.2 → 192.168.6.1 SMB2 183 Read Request Len:65536 Off:0 File: x
    8 0.009674245 192.168.6.1 → 192.168.6.2 SMB2 143 Read Response, Error: STATUS_ACCESS_DENIED

Actually, that might be because the file is only 65536 bytes long, the first splice having finished short. (There's a stand-alone sketch of that scenario at the end of this mail too.)
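
To make the struct merging concrete, here's roughly the shape of the wrapping. This is illustrative rather than lifted from the branch: the cifs_io_request/cifs_io_subrequest names and the io_request_size/io_subrequest_size fields by which the filesystem tells the netfs core how much to allocate are assumptions.

	/* The netfs struct must come first so that the netfs core's
	 * allocation and the filesystem's container_of() both operate
	 * on the same pointer.
	 */
	struct cifs_io_request {
		struct netfs_io_request	rreq;	/* must be first */
		struct cifsFileInfo	*cfile;	/* carried in common for all subreqs */
	};

	/* What's left of cifs_readdata after merging it into the
	 * subrequest; fields duplicating subreq.start, subreq.len,
	 * subreq.transferred and subreq.error (formerly offset, bytes,
	 * got_bytes and result) could be dropped.
	 */
	struct cifs_io_subrequest {
		struct netfs_io_subrequest subreq;	/* must be first */
		struct cifs_credits	credits;
		/* ... remaining readdata fields ... */
	};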
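
And the clamping hook itself, again as a minimal sketch rather than the branch code: it assumes a bool (*clamp_length)(struct netfs_io_subrequest *) op and the wrapper structs above, and the credit handling is simplified, with most error handling elided.

	static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
	{
		struct netfs_io_request *rreq = subreq->rreq;
		struct cifs_io_subrequest *rdata =
			container_of(subreq, struct cifs_io_subrequest, subreq);
		struct cifs_io_request *req =
			container_of(rreq, struct cifs_io_request, rreq);
		struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
		struct TCP_Server_Info *server =
			tlink_tcon(req->cfile->tlink)->ses->server;
		unsigned int rsize;

		/* Wait for enough credits to cover up to rsize bytes; the
		 * server may grant credits for less than we asked for.
		 */
		if (server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
						  &rsize, &rdata->credits) < 0)
			return false;

		/* Chop the subrequest down to what the credits will cover.
		 * The netfs core emits further subrequests for the rest of
		 * the request, which is how an 8M request becomes a pair of
		 * 4M read RPCs going out in parallel.
		 */
		subreq->len = min_t(size_t, subreq->len, rsize);
		return true;
	}

	static const struct netfs_request_ops cifs_req_ops = {
		.io_request_size	= sizeof(struct cifs_io_request),
		.io_subrequest_size	= sizeof(struct cifs_io_subrequest),
		.clamp_length		= cifs_clamp_length,
		/* ... */
	};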
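
Finally, in case you want to poke at the second splice problem, a stand-alone sketch of the two-splice sequence described above; the target path is made up, so adjust it to a file on the mount.

	/* Splice 64K from a pipe to the same file twice.  The first
	 * pass hits an empty file; the second pass hits the now
	 * non-empty file, which is the case that failed above.
	 */
	#define _GNU_SOURCE
	#include <sys/types.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define TARGET "/mnt/cifs/x"	/* path on the cifs mount; adjust */
	#define CHUNK  65536

	static void splice_once(void)
	{
		static char buf[CHUNK];
		loff_t off = 0;
		int pfd[2], fd;
		ssize_t n;

		memset(buf, 'x', sizeof(buf));
		if (pipe(pfd) == -1 ||
		    (fd = open(TARGET, O_WRONLY | O_CREAT, 0644)) == -1) {
			perror("setup");
			exit(1);
		}
		/* 64K fits in the default pipe buffer, so this won't block */
		if (write(pfd[1], buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			exit(1);
		}
		n = splice(pfd[0], NULL, fd, &off, sizeof(buf), 0);
		if (n == -1)
			perror("splice");
		else
			printf("spliced %zd bytes\n", n);
		close(pfd[0]);
		close(pfd[1]);
		close(fd);
	}

	int main(void)
	{
		splice_once();	/* splice to the empty file */
		splice_once();	/* splice to the same file again */
		return 0;
	}

David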