On Wed, Feb 03, 2016 at 09:24:06AM -0500, Jeff Darcy wrote:
> > Problem is with workloads which know the files that need to be read
> > without readdir, like hyperlinks (webserver), swift objects etc. These
> > are two I know of which will have this problem, which can't be improved
> > because we don't have metadata, data co-located. I have been trying to
> > think of a solution for past few days. Nothing good is coming up :-/
>
> In those cases, caching (at the MDS) would certainly help a lot. Some
> variation of the compounding infrastructure under development for Samba
> etc. might also apply, since this really is a compound operation.

When a client is done modifying a file, the MDS would refresh its size and
mtime attributes by fetching them from the DS. As part of this refresh, the
DS could additionally send back the file content if the size falls within a
defined range, with the MDS persisting it and returning it on subsequent
lookup calls, as it does now for attributes. The content (on the MDS) can
be zapped once the file size crosses the defined limit.

But when there are open file descriptors on an inode (O_RDWR or O_WRONLY),
the size cannot be trusted (the MDS only learns the updated size after the
last close), and that would be the degraded case.

Thanks,
Venky

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
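
To make the idea above concrete, here is a minimal sketch (not actual GlusterFS code; the class, field names, and the 4 KiB threshold are all assumptions for illustration) of how an MDS-side inode could cache small-file content piggybacked on the post-close attribute refresh from the DS, and fall back to the degraded path while writers are open:

```python
# Hypothetical sketch of MDS-side small-file caching, as described above.
SMALL_FILE_LIMIT = 4096  # assumed threshold; a real limit would be tunable


class MDSInode:
    def __init__(self):
        self.size = 0
        self.mtime = 0
        self.cached_content = None  # file data, if small enough to co-locate
        self.open_writers = 0       # count of O_RDWR / O_WRONLY descriptors

    def refresh_from_ds(self, size, mtime, content=None):
        """Called after the last writer closes: the DS reports fresh
        size/mtime and may piggyback the content of a small file."""
        self.size = size
        self.mtime = mtime
        if size <= SMALL_FILE_LIMIT and content is not None:
            self.cached_content = content
        else:
            # Zap cached data once the file crosses the defined limit.
            self.cached_content = None

    def lookup(self):
        """Serve attributes (and cached content, if any) on lookup.
        Degraded case: with open writers the MDS's size is stale, so
        neither it nor the cached content can be trusted."""
        if self.open_writers > 0:
            return {"size": None, "content": None, "trusted": False}
        return {"size": self.size,
                "content": self.cached_content,
                "trusted": True}
```

The point of the sketch is that the cache is populated and invalidated only on the DS-driven refresh, so no extra round trip is needed on the lookup path itself.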