missing files

Jeremy Enos wrote:
> Krzysztof Strasburger wrote:
>> On Mon, Nov 23, 2009 at 07:39:09PM -0600, Jeremy Enos wrote:
>>   
>>> I have another clue to report:
>>> So I have my export directory as:
>>> /export
>>> Mounted as:
>>> /scratch
>>>
>>> If I do "ls -lR /scratch", it's supposed to synchronize all files and
>>> metadata, right?  Well, it doesn't seem to be doing that.
>>>
>>> I have approx. 100 files in one problematic folder.  Only 50 show up
>>> in ls output - that is, until I list a missing file specifically by
>>> name.  The missing files also don't show up in the export directory
>>> until ls'd by name in /scratch.
>>>
>>> ls /scratch/file*       # results in files 1-49 being listed
>>> ls /export/file*        # same result as above
>>> ls /export/file50.dat   # no such file or directory
>>> ls /scratch/file50.dat  # lists the file as if nothing was ever wrong
>>> ls /export/file50.dat   # shows up now, after the specific ls call in /scratch
>>> ls /scratch/file*       # results in files 1-50 being listed now (magic?)
>>> ls /export/file*        # also results in files 1-50 being listed now
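>>>
>>> (Since a by-name ls is what makes a file reappear, a brute-force sync
>>> sketch - assuming the fileN.dat naming above and ~100 files - would be
>>> to stat every expected name instead of relying on readdir:
>>>
>>> for i in $(seq 1 100); do stat /scratch/file$i.dat > /dev/null; done
>>>
>>> since "ls file*" only expands names that readdir already returns.)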
>>>     
>> OK, this seems to be the same problem (as mine) in a different configuration.
>> Only the first subvolume of replicated data is checked.
>>   
>>>>>> volume stripe1
>>>>>>   type cluster/stripe
>>>>>>   option block-size 1MB
>>>>>>   subvolumes remote1 remote2 remote3 remote4 remote5
>>>>>> end-volume
>>>>>>
>>>>>> volume stripe2
>>>>>>   type cluster/stripe
>>>>>>   option block-size 1MB
>>>>>>   subvolumes remote6 remote7 remote8 remote9 remote10
>>>>>> end-volume
>>>>>>
>>>>>> volume replicate
>>>>>>   type cluster/replicate
>>>>>>   option metadata-self-heal on
>>>>>>   subvolumes stripe1 stripe2
>>>>>> end-volume
>>>>>>
>>>>>>           
>> Glusterfs developers claim that it is unsafe to shuffle subvolumes, as the
>> first one is used as the lock server.
>> But it should be safe (IMHO) to work around it in the following manner:
>> 1. umount the replicated volume on all clients,
>> 2. modify the config file (everywhere!):
>>    subvolumes stripe2 stripe1
>> 3. mount it again.
>> Now stripe2 appears as the first subvolume and ls -R should do the
>> synchronization, as expected.
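>>
>> A minimal sketch of the edited section (same volfile as quoted above;
>> only the subvolume order changes):
>>
>> volume replicate
>>   type cluster/replicate
>>   option metadata-self-heal on
>>   subvolumes stripe2 stripe1  # stripe2 first: now the checked copy / lock server
>> end-volume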
>> Krzysztof
>>
>>   
> Thank you for the suggestion!  I can't wait to try it out.  One 
> question though: if I had it unmounted everywhere, used just one 
> client to mount the fs with shuffled subvolumes, ran ls -lR, 
> unmounted, restored the unshuffled config, and then remounted 
> everywhere - would that be expected to have the same effect?  i.e. 
> just using a single client in a temporarily shuffled config to force 
> the sync (see the sketch below)?
> thx-
>
>     Jeremy
>
Hi Krzysztof-
I tried your suggestion (on all clients), but the symptom I described
above mostly still exists.  It used to be that once a missing (hidden)
file was ls'd by name, it would appear.  That's not the case anymore.
The file still shows up for an explicit "ls file51.dat", but doesn't
show up for "ls file*".
Another clue though -
On one of 33 clients, a missing file shows a nonzero file size.  On the
other 32, it is listed as a zero-byte file.
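For example (file name hypothetical, reconstructed from the above):

ls -l /scratch/file51.dat  # 1 of 33 clients shows a real size; the other 32 show 0 bytes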
Thanks for your suggestions.

    Jeremy


