On Wed, Feb 11, 2015 at 1:17 PM, Tom Haynes <thomas.haynes@xxxxxxxxxxxxxxx> wrote:
> On Wed, Feb 11, 2015 at 12:47:26PM -0500, Trond Myklebust wrote:
>> On Wed, Feb 11, 2015 at 12:39 PM, Marc Eshel <eshel@xxxxxxxxxx> wrote:
>> >
>> > A good hint that we are dealing with a sparse file is that the number of
>> > blocks doesn't add up to the reported file size.
>>
>> Sure, but that still adds up to an unnecessary inefficiency on each
>> READ_PLUS call to that file.
>>
>> My point is that the best way for the client to process this
>> information (and for the server too) is to read the sparseness map in
>> once as a bulk operation on a very large chunk of the file, and then
>> to use that map as a guide for when it needs to call READ.
>
> I thought we agreed that the sparseness map made no sense because
> the information was immediately stale?

Right now, we're caching the zero reads, aren't we? What makes caching
hole information so special?

> Anyway, the client could build that map if it wanted to using SEEK. I'm
> not arguing that it would be efficient, but that it could be done
> with a cycle of looking for holes.

Sure, but that's not necessarily useful as an optimiser either.
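
For reference, a minimal userspace sketch of the cycle Tom describes -- not
the client implementation, just an illustration. It walks the file with
lseek(SEEK_DATA)/lseek(SEEK_HOLE), which the Linux NFS client maps onto the
NFSv4.2 SEEK operation on an v4.2 mount, and uses the block-count heuristic
Marc mentions as a quick pre-check. The print_map helper name is made up,
and any map built this way is of course stale as soon as it is returned.

/* Sketch: build a coarse data/hole map for an open fd.
 * Hypothetical helper; the map is only a hint and may be stale. */

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void print_map(int fd)
{
        struct stat st;
        off_t data, hole, end;

        if (fstat(fd, &st) != 0)
                return;
        end = st.st_size;

        /* Marc's heuristic: fewer allocated blocks than the size implies
         * suggests the file is sparse and worth mapping at all. */
        if ((off_t)st.st_blocks * 512 >= end)
                printf("file looks fully allocated\n");

        data = 0;
        while (data < end) {
                /* Find the next data region, then the hole that ends it. */
                data = lseek(fd, data, SEEK_DATA);
                if (data < 0)
                        break;          /* ENXIO: nothing but hole to EOF */
                hole = lseek(fd, data, SEEK_HOLE);
                if (hole < 0)
                        hole = end;
                printf("data: %lld..%lld\n",
                       (long long)data, (long long)hole);
                data = hole;
        }
}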