On Fri, 2010-12-31 at 20:03 -0500, George Spelvin wrote:
> > ...and your point would be that an exponentially increasing addition to
> > the existing number of tests is an acceptable tradeoff in a situation
> > where the >99.999999999999999% case is that of sane servers with no
> > looping? I don't think so...
>
> 1) Look again; it's O(1) work per entry, or O(n) work for an n-entry
>    directory. And O(1) space. With very small constant factors, and
>    very little code. The only thing exponentially increasing is the
>    interval at which you save the current cookie for future comparison.
> 2) You said it *was* a problem, so it seemed worth presenting a
>    practical solution. If you don't think it's worth it, I'm not
>    going to disagree. But it's not impossible, or even difficult.

Yes. I was thinking about it this morning (after coffee).

One variant on those algorithms that might make sense here is to save
the current cookie each time we see that the result of a cookie search
is a filp->f_pos offset < the current filp->f_pos offset. That means we
will in general only detect the loop after going through an entire
cycle, but that should be sufficient...

Trond

--
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com
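The scheme described above (save a checkpoint cookie, compare each new cookie against it, and double the checkpoint interval each time it expires) can be sketched in plain userspace C. This is an illustrative sketch, not the actual NFS client code; the `loop_detector` struct and function names are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Brent-style cycle detection over a stream of readdir cookies:
 * remember one checkpoint cookie and compare every new cookie
 * against it; when the checkpoint interval expires, move the
 * checkpoint to the current cookie and double the interval.
 * O(1) space and O(1) work per entry; a cycle of length L is
 * caught within O(L) entries after it starts.
 */
struct loop_detector {
	uint64_t saved_cookie;	/* checkpoint cookie */
	uint64_t steps;		/* entries seen since last checkpoint */
	uint64_t limit;		/* current checkpoint interval */
};

static void loop_detector_init(struct loop_detector *ld, uint64_t first)
{
	ld->saved_cookie = first;
	ld->steps = 0;
	ld->limit = 1;
}

/* Feed the next cookie; returns true if a loop has been detected. */
static bool loop_detector_check(struct loop_detector *ld, uint64_t cookie)
{
	if (cookie == ld->saved_cookie)
		return true;
	if (++ld->steps == ld->limit) {
		/* Move the checkpoint and double the interval. */
		ld->saved_cookie = cookie;
		ld->steps = 0;
		ld->limit *= 2;
	}
	return false;
}
```

For example, on a cookie stream 1, 2, 3, 4, 5 followed by the cycle 3, 4, 5, 3, 4, ..., the detector sees the checkpoint cookie again within one extra pass around the cycle and reports a loop, without ever storing more than one saved cookie.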