---------- Forwarded message ----------
From: Pavel Shilovsky <piastryyy@xxxxxxxxx>
Date: 2010/9/8
Subject: CIFS data coherency problem
To: Steve French <smfrench@xxxxxxxxx>, Jeff Layton <jlayton@xxxxxxxxx>
Cc: linux-cifs@xxxxxxxxxxxxxxx

Hello!

I ran into a problem with incorrect CIFS cache behavior while adapting the CIFS VFS client to work with an application that uses the file system to store data and coordinate parallel access from several clients.

If we look at the CIFS code, we can see that it goes through the kernel cache all the time (do_sync_read, do_sync_write, etc.) and delegates all data validation to the cifs_revalidate call. cifs_revalidate uses the QueryInfo protocol command to check mtime and file size. I noticed that the server doesn't update mtime on every write, so we can't rely on it. On the other hand, the CIFS spec says the client must not use its cache if it doesn't hold an oplock; if we don't follow the spec, we can run into other problems. Worse still: with a Windows server and mandatory locking, we can currently read from a range locked by another client (if we have that data in cache), which is wrong.

As a solution, I suggest following the spec in full: do cached writes and reads when we hold an Exclusive oplock, do cached reads when we hold a Level II oplock, and in all other cases use direct operations against the server.

I attached a test (cache_problem.py) that demonstrates the problem. What do you think? I have code that does reads/writes according to the spec, but I want to discuss this question before posting the patch because I think it's rather important.

--
Best regards,
Pavel Shilovsky.
Attachment:
cache_problem.py
Description: Binary data
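[Editor's note: the attachment is not reproduced in the archive. Below is a minimal sketch of the kind of two-client coherency test the message describes, not the actual cache_problem.py. It assumes the same CIFS share is mounted twice (the mount points /mnt/cifs1 and /mnt/cifs2 and the file name cache_test.txt are placeholders), so the two mounts behave as independent clients.]

#!/usr/bin/env python
# Hypothetical reconstruction of a two-client CIFS cache coherency test.
import os
import sys

MOUNT1 = "/mnt/cifs1"   # assumption: first mount of the share
MOUNT2 = "/mnt/cifs2"   # assumption: second mount of the same share
NAME = "cache_test.txt"

path1 = os.path.join(MOUNT1, NAME)
path2 = os.path.join(MOUNT2, NAME)

# Client 1 creates the file with initial contents.
with open(path1, "w") as f:
    f.write("old data")

# Client 2 reads the file, populating its page cache.
with open(path2, "r") as f:
    f.read()

# Client 1 overwrites the file with new contents.
with open(path1, "w") as f:
    f.write("new data")

# Client 2 reads again. Without oplock-based invalidation it may be
# served stale "old data" from its cache, and mtime-based revalidation
# does not help if the server did not bump mtime on the second write.
with open(path2, "r") as f:
    data = f.read()

if data != "new data":
    print("COHERENCY PROBLEM: read %r, expected 'new data'" % data)
    sys.exit(1)
print("cache is coherent")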