These problems popped up when we made some bug fixes in our hashing algorithm (it was fine in the 1.4.0preX releases, and the fix went in starting with rc2). This behavior can affect anyone who used dht in rc1 and has since moved to a later rcX release.
Notice that the file name length is a multiple of 16 (32 in this case). To fix this, we have two approaches as of now. One is to have a separate '/mnt/debug' mountpoint with 'option lookup-unhashed yes' set in dht, and then stat, over this debug mountpoint, the files whose names are 16 characters long or a multiple of 16. This should fix the missing-file problem on your main mountpoint as well, since the lookup creates a proper linkfile on the proper hashed volume.
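The stat pass above could be scripted roughly as follows. This is a minimal sketch, not from the original mail: the script name, the `is_affected` helper, and the default '/mnt/debug' path are assumptions; it simply walks the debug mountpoint and stats every file whose basename length is a multiple of 16.

```shell
#!/bin/sh
# Sketch: stat files on the debug mountpoint (mounted with
# 'option lookup-unhashed yes') whose basename length is a
# multiple of 16 -- the lengths affected by the hash fix.
# The helper name and default path are illustrative assumptions.

is_affected() {
    # true (exit 0) when the file's basename length is a multiple of 16
    name=$(basename "$1")
    [ $(( ${#name} % 16 )) -eq 0 ]
}

MNT=${1:-/mnt/debug}
if [ -d "$MNT" ]; then
    find "$MNT" -type f | while IFS= read -r f; do
        if is_affected "$f"; then
            # the stat forces a fresh lookup, which should recreate
            # the linkfile on the correctly hashed subvolume
            stat "$f" > /dev/null
        fi
    done
fi
```

For example, 'result.tigrfam.TIGR02622.hmmhits' from the log below is 32 characters, so it would be statted; a name like 'short.txt' (9 characters) would be skipped.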
Sorry for the inconvenience.
Regards,
Amar
On Mon, Mar 9, 2009 at 10:40 PM, Dan Parsons <dparsons@xxxxxxxx> wrote:
I'm getting the error messages below in rc4. As with my previous email, there doesn't seem to be any pattern as to which server/client it happens on, and the errors are occurring fairly frequently.
2009-03-09 17:32:26 E [unify.c:585:unify_lookup] unify: returning ESTALE for /bio/data/fast-hmmsearch-all/tmpP986E__fast-hmmsearch-all_job/result.tigrfam.TIGR02622.hmmhits: file count is 1
2009-03-09 17:32:26 E [unify.c:591:unify_lookup] unify: /bio/data/fast-hmmsearch-all/tmpP986E__fast-hmmsearch-all_job/result.tigrfam.TIGR02622.hmmhits: found on unify-switch-ns
2009-03-09 17:32:26 W [fuse-bridge.c:301:need_fresh_lookup] fuse-bridge: revalidate of /bio/data/fast-hmmsearch-all/tmpP986E__fast-hmmsearch-all_job/result.tigrfam.TIGR02622.hmmhits failed (Stale NFS file handle)
2009-03-09 17:32:28 E [unify.c:360:unify_lookup_cbk] unify: child(dht0): path(/bio/data/fast-hmmsearch-all/tmpP986E__fast-hmmsearch-all_job/result.tigrfam.TIGR01420.hmmhits): No such file or directory
2009-03-09 17:32:28 E [unify.c:360:unify_lookup_cbk] unify: child(unify-switch-ns): path(/bio/data/fast-hmmsearch-all/tmpP986E__fast-hmmsearch-all_job/result.tigrfam.TIGR01420.hmmhits): No such file or directory

As you can see, there are two separate sets of errors for two different files, and both are troubling. This problem has persisted from rc2 to rc4, though I can't say for certain that it was introduced in rc2 (I think it was there prior to that as well). No matching errors in the server logs. Any suggestions? My configs are below.
Thanks!

CLIENT CONFIG:

volume unify-switch-ns
type protocol/client
option transport-type tcp
option remote-host 10.8.101.51
option remote-subvolume posix-unify-switch-ns
end-volume

#volume distfs01-ns-readahead
# type performance/read-ahead
# option page-size 1MB
# option page-count 8
# subvolumes distfs01-ns-brick
#end-volume

#volume unify-switch-ns
# type performance/write-behind
# option block-size 1MB
# option cache-size 3MB
# subvolumes distfs01-ns-readahead
#end-volume

volume distfs01-unify
type protocol/client
option transport-type tcp
option remote-host 10.8.101.51
option remote-subvolume posix-unify
end-volume

volume distfs02-unify
type protocol/client
option transport-type tcp
option remote-host 10.8.101.52
option remote-subvolume posix-unify
end-volume

volume distfs03-unify
type protocol/client
option transport-type tcp
option remote-host 10.8.101.53
option remote-subvolume posix-unify
end-volume

volume distfs04-unify
type protocol/client
option transport-type tcp
option remote-host 10.8.101.54
option remote-subvolume posix-unify
end-volume

volume distfs01-stripe
type protocol/client
option transport-type tcp
option remote-host 10.8.101.51
option remote-subvolume posix-stripe
end-volume

volume distfs02-stripe
type protocol/client
option transport-type tcp
option remote-host 10.8.101.52
option remote-subvolume posix-stripe
end-volume

volume distfs03-stripe
type protocol/client
option transport-type tcp
option remote-host 10.8.101.53
option remote-subvolume posix-stripe
end-volume

volume distfs04-stripe
type protocol/client
option transport-type tcp
option remote-host 10.8.101.54
option remote-subvolume posix-stripe
end-volume

volume stripe0
type cluster/stripe
option block-size *.jar,*.pin:1MB,*:2MB
subvolumes distfs01-stripe distfs02-stripe distfs03-stripe distfs04-stripe
end-volume

volume dht0
type cluster/dht
# option lookup-unhashed yes
subvolumes distfs01-unify distfs02-unify distfs03-unify distfs04-unify
end-volume

volume unify
type cluster/unify
option namespace unify-switch-ns
option self-heal off
option scheduler switch
# send *.phr/psq/pnd etc to stripe0, send the rest to hash
# extensions have to be *.foo* and not simply *.foo or rsync's tmp file naming will prevent files from being matched
option scheduler.switch.case *.phr*:stripe0;*.psq*:stripe0;*.pnd*:stripe0;*.psd*:stripe0;*.pin*:stripe0;*.nsi*:stripe0;*.nin*:stripe0;*.nsd*:stripe0;*.nhr*:stripe0;*.nsq*:stripe0;*.tar*:stripe0;*.tar.gz*:stripe0;*.jar*:stripe0;*.img*:stripe0;*.perf*:stripe0;*.tgz*:stripe0;*.fasta*:stripe0;*.huge*:stripe0
subvolumes stripe0 dht0
end-volume

volume ioc
type performance/io-cache
subvolumes unify
option cache-size 3000MB
option cache-timeout 3600
end-volume

volume filter
type features/filter
option fixed-uid 0
option fixed-gid 900
subvolumes ioc
end-volume

SERVER CONFIG:

volume posix-unify-brick
type storage/posix
option directory /distfs-storage-space/glusterfs/unify
# the below line is here to make the output of 'df' accurate, as both volumes are served from the same local drive
option export-statfs-size off
end-volume

volume posix-stripe-brick
type storage/posix
option directory /distfs-storage-space/glusterfs/stripe
end-volume

volume posix-unify-switch-ns-brick
type storage/posix
option directory /distfs-storage-space/glusterfs/unify-switch-ns
end-volume

volume posix-unify
type performance/io-threads
option thread-count 4
subvolumes posix-unify-brick
end-volume

volume posix-stripe
type performance/io-threads
option thread-count 4
subvolumes posix-stripe-brick
end-volume

volume posix-unify-switch-ns
type performance/io-threads
option thread-count 2
subvolumes posix-unify-switch-ns-brick
end-volume

volume server
type protocol/server
option transport-type tcp
option auth.addr.posix-unify.allow 10.8.101.*,10.8.15.50
option auth.addr.posix-stripe.allow 10.8.101.*,10.8.15.50
option auth.addr.posix-unify-switch-ns.allow 10.8.101.*,10.8.15.50
subvolumes posix-unify posix-stripe posix-unify-switch-ns
end-volume
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
--
Amar Tumballi