Client and server file "view", different results?! Client can't see the right file.

No, all of these files belong to running VMs. No one alters them manually (which would kill
the VM...)

So, all of this was done by the replicate mechanism and the sync. We have to reboot
servers from time to time for upgrades, but we do bring them back up with the
Gluster running before tackling a second server.
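
For what it's worth, one rough way to force replication to catch up before touching
the next server is the stat walk from the Gluster self-heal docs (a sketch only, run
from a client mount, not on the bricks):

  # walk the client mount so every file gets a lookup and any pending self-heal is triggered
  find /opt/profitbricks/storage -noleaf -print0 | xargs --null stat >/dev/null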

Best, Martin

-----Original Message-----
From: Mohit Anchlia [mailto:mohitanchlia at gmail.com] 
Sent: Thursday, May 19, 2011 7:05 PM
To: Pranith Kumar. Karampuri; Martin Schenker
Cc: gluster-users at gluster.org
Subject: Re: Client and server file "view", different results?! Client can't see the right file.

What's more interesting is that pserver3 shows "0" bytes while the other three show
the same size, and that pserver12 & 13 have
trusted.glusterfs.dht.linkto="storage0-replicate-0" set.

Was there ever any manual operation done with these files?

On Thu, May 19, 2011 at 5:16 AM, Pranith Kumar. Karampuri
<pranithk at gluster.com> wrote:
> Need the logs from May 13th to 17th.
>
> Pranith.
> ----- Original Message -----
> From: "Martin Schenker" <martin.schenker at profitbricks.com>
> To: "Pranith Kumar. Karampuri" <pranithk at gluster.com>
> Cc: gluster-users at gluster.org
> Sent: Thursday, May 19, 2011 5:28:06 PM
> Subject: RE: Client and server file "view", different results?! Client can't see the right file.
>
> Hi Pranith!
>
> That's what I would have expected as well! The files should be on one brick, but they appear on both.
> I'm quite stumped as to WHY the files show up on the other brick; this isn't what I understood from the manual/setup. The vol-file doesn't seem to be wrong, so any ideas?
>
> Best, Martin
>
>
>
> -----Original Message-----
> From: Pranith Kumar. Karampuri [mailto:pranithk at gluster.com]
> Sent: Thursday, May 19, 2011 1:52 PM
> To: Martin Schenker
> Cc: gluster-users at gluster.org
> Subject: Re: Client and server file "view", different results?! Client can't see the right file.
>
> Martin,
>     The output suggests that there are 2 replicas per volume, so the file should be present on only 2 bricks. Why is the file present on 4 bricks? It should be present either on pserver12 & 13 or on pserver3 & 5. I am not sure why you are expecting it to be there on 4 bricks.
> Am I missing any info here?
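>
> (Assuming the gluster CLI is handy on one of the servers, the replica pairing can be read straight off the volume definition; a sketch:
>
>   # bricks are listed in order; with replica 2, each consecutive pair forms one replica set
>   gluster volume info storage0
>
> The "Type: Distributed-Replicate" line plus the brick order should confirm which bricks pair up as replicate-0 and which as replicate-3.)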
>
> Pranith
>
> ----- Original Message -----
> From: "Martin Schenker" <martin.schenker at profitbricks.com>
> To: gluster-users at gluster.org
> Sent: Wednesday, May 18, 2011 2:23:09 PM
> Subject: Re: Client and server file "view", different results?! Client can't see the right file.
>
> Here is another occurrence:
>
> The file 20819 is shown twice, with different timestamps and attributes: 0 filesize on pserver3, outdated on pserver5; only pserver12 & 13 seem to be in sync.
> So what's going on?
>
>
> 0 root at de-dc1-c1-pserver13:~ # ls -al /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/2081*
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:44 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:44 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
>
> 0 root at de-dc1-c1-pserver3:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu vcb 0 May 14 17:00 /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root at de-dc1-c1-pserver3:~ # getfattr -dm - /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
>
> 0 root at pserver5:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu vcb 53687091200 May 14 17:00 /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root at pserver5:~ # getfattr -dm - /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-0=0sAAAAAgAAAAIAAAAA
> trusted.afr.storage0-client-1=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
>
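>
> (For reference, a sketch of decoding that non-zero value, assuming the usual AFR changelog layout of three 32-bit big-endian counters (data, metadata, entry); the 0s prefix in getfattr output means base64:
>
>   echo AAAAAgAAAAIAAAAA | base64 -d | od -An -tx1
>   # prints: 00 00 00 02 00 00 00 02 00 00 00 00
>   # i.e. 2 pending data ops and 2 pending metadata ops charged against client-0
>
> If that reading is right, this copy is accusing the client-0 brick, presumably the pserver3 one given the 0-byte file there, of missing writes.)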
> 0 root at pserver12:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:41 /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root at pserver12:~ # getfattr -dm - /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-6=0sAAAAAAAAAAAAAAAA
> trusted.afr.storage0-client-7=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
> trusted.glusterfs.dht.linkto="storage0-replicate-0
>
> 0 root at de-dc1-c1-pserver13:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:39 /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root at de-dc1-c1-pserver13:~ # getfattr -dm - /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-6=0sAAAAAAAAAAAAAAAA
> trusted.afr.storage0-client-7=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
> trusted.glusterfs.dht.linkto="storage0-replicate-0
>
> The only log entry is on pserver5; there are no references in the other three
> servers' logs:
>
> 0 root at pserver5:~ # grep 20819 /var/log/glusterfs/opt-profitbricks-storage.log
> [2011-05-17 20:37:30.52535] I [client-handshake.c:407:client3_1_reopen_cbk] 0-storage0-client-7: reopen on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819 succeeded (remote-fd = 6)
> [2011-05-17 20:37:34.824934] I [afr-open.c:435:afr_openfd_sh] 0-storage0-replicate-3: data self-heal triggered. path: /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819, reason: Replicate up down flush, data lock is held
> [2011-05-17 20:37:34.825557] E [afr-self-heal-common.c:1214:sh_missing_entries_create] 0-storage0-replicate-3: no missing files - /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819. proceeding to metadata check
> [2011-05-17 21:08:59.241203] I [afr-self-heal-algorithm.c:526:sh_diff_loop_driver_done] 0-storage0-replicate-3: diff self-heal on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819: 6 blocks of 409600 were different (0.00%)
> [2011-05-17 21:08:59.275873] I [afr-self-heal-common.c:1527:afr_self_heal_completion_cbk] 0-storage0-replicate-3: background data self-heal completed on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
>
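>
> (One more thing that might be worth trying: as far as I know, a plain lookup from a client mount re-triggers the self-heal check on a single file, e.g.:
>
>   stat /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
>
> and the client log should then show another afr self-heal attempt for that path.)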
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>


