On Fri, 14 Feb 2014 02:14:56 +0000
"Suresh Jayaraman" <sjayaraman@xxxxxxxxxx> wrote:

> >>> On 2/14/2014 at 01:10 AM, Jeff Layton <jlayton@xxxxxxxxx> wrote:
> > On Thu, 13 Feb 2014 18:29:45 +0100
> > Gionatan Danti <g.danti@xxxxxxxxxx> wrote:
> >
> >> On 02/13/2014 12:37 PM, Jeff Layton wrote:
> >> >
> >> > Using cache=none sort of defeats the purpose. After all, Gionatan said
> >> > that he was doing this specifically to use fscache, and that won't work
> >> > with cache=none.
> >> >
> >>
> >> Surely my idea was to use FSCACHE to speed up remote access. Without it,
> >> the entire discussion is pointless...
> >>
> >> > But, let's leave that aside for a moment and consider whether this could
> >> > work at all. Assume we have samba set up to re-share a cifs mount:
> >> >
> >> > Client sends an open to samba and requests an oplock. Samba then opens
> >> > a file on the cifs mount, and does not request an oplock (because of
> >> > cache=none). We then attempt to set a lease, which will fail because we
> >> > don't have an oplock. Now you're no better off (and probably worse off)
> >> > since you have zero caching going on and are having to bounce each
> >> > request through an extra hop.
> >> >
> >> > So, suppose you disable "kernel oplocks" in samba in order to get samba
> >> > to hand out L2 oplocks in this situation. Another client then comes
> >> > along on the main (primary) server and changes a file. Samba is then
> >> > not aware of that change and hilarity (aka data corruption) ensues.
> >> >
> >>
> >> Would you give the same advice for low-frequency file changes (e.g. office
> >> files)?
> >>
> >> What about using NFS to export the fileserver directory, mount it (via
> >> mount.nfs) on the remote Linux box and then share it via Samba? Is it a
> >> horrible Frankenstein?
> >>
> >
> > You'll have similar problems with NFS.
> >
> > You can't acquire leases on NFS either, so with kernel oplocks enabled
> > on samba you won't ever get oplocks on there. If you turn them off (so
> > that oplocks are tracked internally) you won't be aware of changes that
> > occur outside of samba.
> >
> >> > I just don't see how re-sharing a cifs mount is a good idea, unless you
> >> > are absolutely certain that the data you're re-sharing won't ever
> >> > change. If that's the case, then you're almost certainly better off
> >> > keeping a local copy on the samba server and sharing that out.
> >> >
> >>
> >> After many tests, I tend to agree. Using a Fedora 20 test machine with
> >> fscache+cachefilesd as the remote Linux box, I had one kernel panic and
> >> multiple failed file copies (with Windows complaining about a "bad
> >> signature").
> >>
> >> I also found this: https://bugzilla.redhat.com/show_bug.cgi?id=646224
> >> Maybe the CIFS FSCACHE is not really production-grade on the latest
> >> distros either?
> >>
> >
> > I don't recall whether Suresh ever fixed those bugs. cifs+fsc is
>
> If you are referring to this oops:
> http://thread.gmane.org/gmane.linux.file-systems.cachefs.general/2961
> it was fixed by the below commit
>
> commit c902ce1bfb40d8b049bd2319b388b4b68b04bc27
> Author: David Howells <dhowells@xxxxxxxxxx>
> Date:   Thu Jul 7 12:19:48 2011 +0100
>
>     FS-Cache: Add a helper to bulk uncache pages on an inode
>
> I remember verifying it by running fsstress for many hours then. I'm not
> sure what other bugs you are referring to.
>

Ahh, thanks. I don't think we ever turned on CONFIG_CIFS_FSCACHE in
rhel6, so I'm not sure what sort of problem Gionatan was hitting.
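FWIW, the "set a lease" step described above is just fcntl(F_SETLEASE)
under the hood; that's what samba's kernel oplocks support relies on.
If anyone wants to check whether a given mount can hand out leases at
all, a quick probe along these lines should show it (untested sketch,
the path and error handling are only illustrative):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Lease probe: try to take a read lease on the given file, the same
 * way samba's "kernel oplocks" support does before granting a level2
 * oplock. Run it as the file's owner (or with CAP_LEASE).
 */
int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (fcntl(fd, F_SETLEASE, F_RDLCK) < 0)
		printf("F_SETLEASE failed on %s: %s\n", path, strerror(errno));
	else
		printf("read lease granted on %s\n", path);

	fcntl(fd, F_SETLEASE, F_UNLCK);	/* drop the lease before exiting */
	close(fd);
	return 0;
}

On a local ext4/xfs file the lease should be granted; on an NFS mount,
or a cifs mount where the client doesn't hold an oplock (e.g. with
cache=none), the request gets refused, which is why samba can't safely
hand out oplocks when re-sharing those mounts.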
> > certainly not widely used, and it wouldn't surprise me if it were still
> > horribly buggy.
>
> Just curious, why would you say so?
>

I haven't heard of many people using it, and features that don't get
widely used don't tend to be widely tested. Not a reflection on your
work, but more a statement that it's a niche feature that hasn't been
widely deployed.

I certainly could be wrong on that point, however. I haven't played with
it in quite some time.

-- 
Jeff Layton <jlayton@xxxxxxxxx>
--
To unsubscribe from this list: send the line "unsubscribe linux-cifs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html