Hi All,

We discussed the problem ($subject) in the mail thread [1]. Based on the comments and suggestions, I will summarize the design (made as points for simplicity).

1) As part of each fop, the top layer will generate a timestamp and pass it down along with the other parameters (a sketch of the stamping is appended below).

1.1) This introduces a dependency on NTP-synced clients as well as servers.

1.2) There can be a difference in time if the fop gets stuck in an xlator for various reasons, for example because of locks.

2) On the server, the posix layer stores the value in memory (inode ctx) and will periodically sync the data to disk as an extended attribute (a serialization sketch is appended below).

2.1) Of course, a sync call will also force it. And if a fop comes for an inode which is not yet linked, we do the sync immediately.

3) Each time an inode is created or initialized, we read the data from disk and store it in the inode ctx.

4) Before setting the value in the inode ctx, we compare the stored timestamp with the received one, and store the new value only if the stored value is older (a compare-and-store sketch is appended below).

5) So in the best case, the data will be stored in, and retrieved from, memory. We replace the values in the iatt with the values in the inode ctx.

6) File ops that change the parent directory's time attributes need to be consistent across all the distributed directories on the subvolumes. (For example, a create call changes the ctime and mtime of the parent directory.)

6.1) This has to be handled separately, because we only send the fop to the hashed subvolume.

6.2) If the file fop is successful on the hashed subvolume, we can asynchronously send a time-update setattr fop to the other subvolumes to change the values for the parent directory (a fan-out sketch is appended below).

6.3) This leaves a window where the times are inconsistent across the dht subvolumes. (Please provide your suggestions.)

7) Currently we have a couple of mount options for time attributes, like noatime, relatime, nodiratime, etc. But we do not explicitly handle those options, even when they are given as mount options for the gluster mount. [2]

7.1) We always rely on the behaviour of the backend storage layer: if you gave those mount options when you mounted your disk, you get that behaviour.

7.2) Now that we are taking the effort to fix the consistency issue, do we need to honour those options ourselves? (A relatime sketch is appended below.)

Please provide your comments and suggestions.

[1] http://lists.gluster.org/pipermail/gluster-devel/2016-January/048003.html

Regards,
Rafi KC
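
A minimal sketch of the stamping in point 1, assuming a hypothetical per-fop struct (it stands in for whatever call-frame context would actually carry the stamp):

#include <time.h>

struct fop_stamp {
        struct timespec ctime;  /* taken once, at fop entry */
};

static void
fop_stamp_init (struct fop_stamp *stamp)
{
        /* CLOCK_REALTIME is why point 1.1 needs NTP-synced clients
         * and servers: the stamp is compared across machines. */
        clock_gettime (CLOCK_REALTIME, &stamp->ctime);
}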
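
A sketch of the compare-and-store rule in points 3 and 4: apply an incoming stamp to the inode ctx only if it is newer than the stored one. struct mdata_ctx is hypothetical; real code would keep it behind the inode ctx get/set helpers:

#include <stdbool.h>
#include <time.h>

struct mdata_ctx {
        struct timespec ctime;  /* newest change time seen so far */
        bool            dirty;  /* not yet synced to the on-disk xattr */
};

static bool
ts_newer (const struct timespec *a, const struct timespec *b)
{
        return a->tv_sec > b->tv_sec ||
               (a->tv_sec == b->tv_sec && a->tv_nsec > b->tv_nsec);
}

/* Returns true if the incoming stamp was applied. */
static bool
mdata_update (struct mdata_ctx *ctx, const struct timespec *in)
{
        if (!ts_newer (in, &ctx->ctime))
                return false;   /* stored value is newer; drop it (point 4) */

        ctx->ctime = *in;
        ctx->dirty = true;      /* picked up by the periodic sync (point 2) */
        return true;
}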
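
A sketch of the periodic sync in point 2: serialize the in-memory timestamp into an extended attribute on the backend file. The xattr key "trusted.glusterfs.mdata" is an assumption for illustration, not a confirmed on-disk key:

#include <endian.h>
#include <stdint.h>
#include <sys/xattr.h>
#include <time.h>

static int
mdata_sync_to_disk (const char *real_path, const struct timespec *ts)
{
        /* fixed-width, big-endian encoding so bricks on different
         * architectures agree on the on-disk format */
        uint64_t buf[2];

        buf[0] = htobe64 ((uint64_t) ts->tv_sec);
        buf[1] = htobe64 ((uint64_t) ts->tv_nsec);

        /* "trusted.glusterfs.mdata" is a hypothetical key name */
        return lsetxattr (real_path, "trusted.glusterfs.mdata",
                          buf, sizeof (buf), 0);
}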
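
A sketch of the asynchronous parent-time update in point 6.2: once the fop succeeds on the hashed subvolume, fan a time-update setattr out to the remaining subvolumes. All names here (dht_subvols, send_setattr_async) are hypothetical stand-ins for the usual STACK_WIND plumbing:

#include <time.h>

struct dht_subvols {
        int    cnt;
        void **subvol;          /* one entry per dht subvolume */
};

/* assumed helper: winds a setattr carrying the new ctime/mtime */
extern void send_setattr_async (void *subvol, const char *parent_path,
                                const struct timespec *ts);

static void
dht_update_parent_time (struct dht_subvols *sv, const char *parent_path,
                        const struct timespec *ts, int hashed)
{
        for (int i = 0; i < sv->cnt; i++) {
                if (i == hashed)
                        continue;       /* already updated by the fop */
                /* fire-and-forget: until these complete, the times are
                 * inconsistent across subvolumes (the window in 6.3) */
                send_setattr_async (sv->subvol[i], parent_path, ts);
        }
}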
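
A sketch of what honouring relatime ourselves (point 7.2) could look like: skip the atime update unless atime has fallen behind mtime/ctime or is more than a day old. This mirrors the Linux kernel's relatime rule; whether we want the same policy in gluster is exactly the open question above:

#include <stdbool.h>
#include <time.h>

#define RELATIME_MAX_AGE (24 * 60 * 60)   /* one day, as in the kernel */

static bool
relatime_needs_update (const struct timespec *atime,
                       const struct timespec *mtime,
                       const struct timespec *ctime_,
                       time_t now)
{
        if (atime->tv_sec <= mtime->tv_sec)
                return true;    /* atime is not newer than mtime */
        if (atime->tv_sec <= ctime_->tv_sec)
                return true;    /* atime is not newer than ctime */
        if (now - atime->tv_sec >= RELATIME_MAX_AGE)
                return true;    /* refresh at least once a day */
        return false;
}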