Hi Pavel,

In receive_encrypted_standard() as well, server->total_read is updated
properly before decrypt_raw_data() is called, so there is no need to
update the same field again after decryption. I have checked all the
places where decrypt_raw_data() is used and did not find any issue.
(A small stand-alone illustration of the race on server->total_read is
appended at the end of this mail, below the quoted thread.)

Regards,
Rohith

On Thu, Oct 15, 2020 at 4:18 AM Pavel Shilovsky <piastryyy@xxxxxxxxx> wrote:
>
> Hi Rohith,
>
> Thanks for catching the problem and proposing the patch!
>
> I think there is a problem with simply removing the server->total_read
> updates inside decrypt_raw_data():
>
> The same function is used in receive_encrypted_standard(), which then
> calls cifs_handle_standard(). The latter uses server->total_read in at
> least two places: in server->ops->check_message() and cifs_dump_mem().
>
> There may be other places in the code that assume server->total_read
> is correct. I would avoid simply removing the update in all code paths
> and would rather make a more specific fix for the offloaded reads.
>
> --
> Best regards,
> Pavel Shilovsky
>
> On Thu, Oct 8, 2020 at 1:36 PM Steve French <smfrench@xxxxxxxxx> wrote:
> >
> > Fixed up 2 small checkpatch warnings and merged into cifs-2.6.git for-next
> >
> > On Thu, Oct 8, 2020 at 9:40 AM Rohith Surabattula
> > <rohiths.msft@xxxxxxxxx> wrote:
> > >
> > > Hi All,
> > >
> > > With the "esize" mount option, I observed data corruption and cifs
> > > reconnects during performance tests.
> > >
> > > The TCP server info field server->total_read is modified in parallel
> > > by the demultiplex thread and the decrypt offload worker thread.
> > > server->total_read is used to calculate how much of a PDU that was
> > > not read into memory still needs to be discarded.
> > >
> > > Because of the parallel modification, the server->total_read value
> > > got corrupted, and instead of discarding only the remaining data,
> > > some valid data from the next PDU was discarded as well.
> > >
> > > server->total_read is already updated properly during the read from
> > > the socket, so there is no need to update the same field again after
> > > decryption.
> > >
> > > Regards,
> > > Rohith
> >
> >
> > --
> > Thanks,
> >
> > Steve
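
The snippet below is not cifs code; it is just a minimal userspace sketch
of the kind of unsynchronized shared-field update described above. The
names (total_read, offload_worker) and the sizes are made up for
illustration, and the race may or may not fire on any given run depending
on scheduling. Build with something like "gcc -pthread race-demo.c".

/*
 * Toy demonstration (not cifs code) of two threads updating one shared
 * counter without synchronization: the "receive" loop uses total_read
 * to compute how many bytes of the current PDU remain to be discarded,
 * while an "offload" thread overwrites the same counter concurrently.
 * The data race here is intentional; it is the bug being illustrated.
 */
#include <pthread.h>
#include <stdio.h>

static unsigned int total_read;                /* shared, unsynchronized */
static const unsigned int pdu_len = 4096;      /* made-up PDU size */

static void *offload_worker(void *arg)
{
	(void)arg;
	/* Stands in for the decrypt worker writing the shared counter. */
	for (int i = 0; i < 1000000; i++)
		total_read = 512;
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, offload_worker, NULL);

	for (int i = 0; i < 1000000; i++) {
		total_read = 1024;             /* bytes read so far */
		unsigned int to_discard = pdu_len - total_read;
		if (to_discard != pdu_len - 1024)
			printf("corrupted discard length: %u\n", to_discard);
	}

	pthread_join(t, NULL);
	return 0;
}

On a run where the two loops overlap, the computed discard length comes
out too large, which is the analogue of throwing away valid data from the
next PDU.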