J. Bruce Fields wrote:
On Thu, Aug 28, 2008 at 01:27:53PM -0700, Andrew Morton wrote:
(switched to email. Please respond via emailed reply-to-all, not via the
bugzilla web interface).
On Thu, 28 Aug 2008 11:41:08 -0700 (PDT)
bugme-daemon@xxxxxxxxxxxxxxxxxxx wrote:
An NFS client writes to a Sun Solaris 10 U4 server.
At some point, an empty region appears in the output file from the writer,
containing missing data (it shows up as NUL bytes on another NFS client
issuing a tail -f on the file being written).
It is confirmed that the file, as it exists on the NFS server, is sparse and
missing bytes (the gaps are not necessarily a multiple of 512 or 1024; one
sample is a gap of 3818 bytes, another 1895 bytes, another 423 bytes).
Seems like something that could happen if, for example, two write RPCs
got reordered on the network. That's not necessarily a bug--the NFS
client isn't required to wait for confirmation of every previous write
before sending the next one.
However, if the client isn't flushing dirty data to the server before
returning from close, then that's a violation of NFS's close-to-open
semantics:...
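For reference, a minimal sketch of what close-to-open coherence expects on
the writer's side (the path and data below are made up for illustration;
this is not the reporter's application):

/* Writer-side sketch: under close-to-open coherence the NFS client is
 * expected to flush all dirty pages to the server (and COMMIT them)
 * before close() returns, so that another client opening the file
 * afterwards sees the complete contents. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/nfs/output.log";	/* hypothetical mount */
	const char buf[] = "one record of output\n";
	int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1)) {
		perror("write");
		return 1;
	}
	/* An explicit fsync() forces the flush plus COMMIT even if the
	 * flush-on-close isn't happening as it should. */
	if (fsync(fd) < 0)
		perror("fsync");
	if (close(fd) < 0)	/* after this the data should be on the server */
		perror("close");
	return 0;
}

The fsync() before close() is only a belt-and-braces workaround; close()
alone should be enough if the client honors close-to-open.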
If you do a read of the entire file from the NFS client doing the writing, it
causes the unflushed writes to be flushed to the server immediately, followed
by an NFSv3 COMMIT operation. The data can then be seen on all other NFS
clients.
If you do an open of the file alone, there is no flush.
If you do an open and a close, there is no flush.
... so this "close, no flush" could be a bug (depending on who is doing
that close, and when--I don't completely understand the described situation).
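The read-back trigger described above is easy to exercise by hand; a rough
sketch, run on the writing client (again, the path is hypothetical):

/* Reading the whole file back on the *writing* client forces the
 * outstanding dirty pages out to the server (followed by an NFSv3
 * COMMIT), after which the other clients see the data. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[65536];
	ssize_t n;
	int fd = open("/mnt/nfs/output.log", O_RDONLY);	/* hypothetical path */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Read to EOF and discard; the point is the flush it provokes. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;
	if (n < 0)
		perror("read");
	close(fd);
	return 0;
}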
I suspect that this last behavior (no flush on a bare open or open/close)
might depend upon 1) what options were used when the file system was mounted
and 2) how the file was opened. The flush-on-close wouldn't be needed if the
file was opened read-only.
It seems a little odd that the holes aren't page-aligned or multiples of
the page size.
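One way to check that is to scan the file for runs of NUL bytes and look at
where each run starts relative to the 4096-byte page size. A rough sketch of
such a scanner (the name nulscan is made up, and note it cannot distinguish
a real hole from zeros that were actually written):

/* Report every run of NUL bytes in a file: start offset, length, and
 * the start offset modulo the 4096-byte page size. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	unsigned char buf[65536];
	long long off = 0, start = -1;
	ssize_t n;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		fprintf(stderr, "usage: nulscan <file>\n");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		for (ssize_t i = 0; i < n; i++, off++) {
			if (buf[i] == 0) {
				if (start < 0)
					start = off;
			} else if (start >= 0) {
				printf("NUL run at %lld, length %lld, start %% 4096 = %lld\n",
				       start, off - start, start % 4096);
				start = -1;
			}
		}
	}
	if (start >= 0)		/* run extends to end of file */
		printf("NUL run at %lld, length %lld, start %% 4096 = %lld\n",
		       start, off - start, start % 4096);
	close(fd);
	return 0;
}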
What application is being used to generate the file which is showing
these holes?
Thanx...
ps