On Fri, Apr 22, 2016 at 1:46 AM, Mohammed Rafi K C <rkavunga@xxxxxxxxxx> wrote:
> comments are inline.
>
> On 04/22/2016 09:41 AM, Vijay Bellur wrote:
>> On Mon, Apr 18, 2016 at 3:28 AM, Mohammed Rafi K C <rkavunga@xxxxxxxxxx> wrote:
>>> But the problem comes when we use gfapi, where we don't have any
>>> control over client behavior. So to fix this issue we have to give
>>> stat information for all the entries.
>>>
>> Apart from Samba, what other consumers of gfapi have this problem?
>
> In nfs-ganesha, what I understand is that they are not sending
> readdirp, so there we are good. But any other app which always
> expects a valid response from readdirp will fail.

For such consumers that need strict readdirplus from tiering, we can
make this behavior optional. Exposing a tunable that can either be set
by administrators on the volume that the consumer acts on, or be set
by the application through glfs_set_xlator_option(), would be nice (a
rough sketch of the gfapi side is at the end of this mail).

>>
>>> 4. Revert to dht_readdirp and then, instead of taking all entries
>>> from the hot tier, take only the entries which have a T file in the
>>> cold tier. (We can delay deleting the data file after demotion, so
>>> that we will get the stat from the hot tier.)
>>>
>> Going by the architectural model of xlators, tier should provide the
>> right entries with attributes to the upper layers (xlators/vfs etc.).
>> Relying on a specific behavior from layers above us to mask a problem
>> in our layer does not seem ideal. I would go with something like 2 or
>> 3. If we want to retain the current behavior, we should make it
>> conditional, as I am not certain that this behavior is foolproof
>> either.
>
> If we make the changes in tier_readdirp, then it affects the
> performance of plain readdir (if md-cache is on). We may need to turn
> off the volume option "performance.force-readdirp". What do you think
> here?

If we make the behavior optional as I describe above, then tier would
not have to impact readdir/readdirplus performance for accesses
through fuse/NFS etc., and the current behavior could remain for them.

Regards,
Vijay
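
As a rough sketch of what the gfapi side could look like: the option
name "tier-strict-readdirp", the "*-tier-dht" match pattern, and the
volume/server names below are all placeholders, not existing tunables;
the real name would be whatever we end up exposing from the tier
xlator. The point is only that glfs_set_xlator_option() has to be
called before glfs_init():

#include <stdio.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        glfs_t *fs = glfs_new ("tiervol");
        if (!fs)
                return 1;

        glfs_set_volfile_server (fs, "tcp", "server1", 24007);

        /* Hypothetical option: ask the tier xlator in this client's
         * graph to fill valid stat information in readdirp replies.
         * Must be set before glfs_init(). */
        glfs_set_xlator_option (fs, "*-tier-dht",
                                "tier-strict-readdirp", "on");

        if (glfs_init (fs) < 0) {
                fprintf (stderr, "glfs_init failed\n");
                glfs_fini (fs);
                return 1;
        }

        /* ... readdirp/stat activity through gfapi ... */

        glfs_fini (fs);
        return 0;
}

The administrator-side equivalent would be a plain volume set for
whatever name we pick for the tier option, alongside the md-cache knob
Rafi mentions, e.g. "gluster volume set <volname>
performance.force-readdirp off".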