Fwd: AFR healing problem after returning one node.


 



sending to gluster-users.

---------- Forwarded message ----------
From: Raghavendra G <raghavendra.hg at gmail.com>
Date: 2008/12/11
Subject: Re: AFR healing problem after returning one node.
To: Rash <rash at konto.pl>


Hi Rash,

Self-heal is triggered when a lookup from pathname to inode happens. Neither
ls nor cat file results in a lookup on the directory test, hence self-heal is
not performed. Can you try the following sequence of commands on the client?

/home/raghu/mnt # ls
file

/home/raghu/mnt # rm file
/home/raghu/mnt # cd ..
/home/raghu # ls mnt/

When you cd to the parent directory, a lookup happens on mnt and self-heal
should work.
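More generally, any operation that forces a fresh pathname-to-inode lookup should have the same effect. A minimal sketch (the /mnt/glusterfs mount point and the a/file path are assumptions; substitute your own client mount and a file that needs healing):

```shell
#!/bin/sh
# Sketch: force fresh pathname-to-inode lookups so AFR self-heal can run.
# MNT and the path a/file are assumptions -- substitute your own GlusterFS
# client mount point and the file in question.
MNT="${MNT:-/mnt/glusterfs}"

stat "$MNT/a" > /dev/null    # lookup on the parent directory
ls "$MNT/a" > /dev/null      # readdir; entries are looked up on access
stat "$MNT/a/file"           # explicit lookup on the file itself
```

The rm/cd/ls sequence above achieves the same thing interactively: leaving and re-entering the directory invalidates the cached dentry, so the next access goes through a real lookup.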
regards,

2008/12/11 Rash <rash at konto.pl>

Thu, 11 Dec 2008, 08:31:16 +0100, Rash wrote:
> > Wed, 10 Dec 2008, 17:16:33 +0100, Rash wrote:
> > > I've got a configuration which, put simply, combines AFRs and
> > > unify - the servers export n[1-3]-brick[12] and n[1-3]-ns.
> > I forgot to mention that my version of glusterfs is 1.4.0pre13.
>
> Here is the debug log from the moment when the file had been deleted, one
> node was back, and cat file was performed:
>
> 2008-12-11 11:26:15 D [inode.c:280:__inode_activate] fuse/inode: activating
> inode(11157510), lru=0/0 active=2 purge=0
> 2008-12-11 11:26:15 D [fuse-bridge.c:610:fuse_getattr] glusterfs-fuse: 170:
> GETATTR 11157510 (/a)
> 2008-12-11 11:26:15 D [fuse-bridge.c:530:fuse_attr_cbk] glusterfs-fuse:
> 170: STAT() /a => 11157510
> 2008-12-11 11:26:15 D [inode.c:299:__inode_passivate] fuse/inode:
> passivating inode(11157510) lru=1/0 active=1 purge=0
> 2008-12-11 11:26:15 D [inode.c:280:__inode_activate] fuse/inode: activating
> inode(11157510), lru=0/0 active=2 purge=0
> 2008-12-11 11:26:15 D [fuse-bridge.c:463:fuse_lookup] glusterfs-fuse: 171:
> LOOKUP /a/file
> 2008-12-11 11:26:15 D [inode.c:455:__inode_create] fuse/inode: create
> inode(0)
> 2008-12-11 11:26:15 D [inode.c:280:__inode_activate] fuse/inode: activating
> inode(0), lru=0/0 active=3 purge=0
> 2008-12-11 11:26:15 W [afr-self-heal-common.c:955:afr_self_heal] afr1:
> performing self heal on /a/file (metadata=1 data=1 entry=1)
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:910:afr_self_heal_missing_entries] afr1: attempting
> to recreate missing entries for path=/a/file
> 2008-12-11 11:26:15 W [afr-self-heal-common.c:955:afr_self_heal] afr-ns:
> performing self heal on /a/file (metadata=1 data=1 entry=1)
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:910:afr_self_heal_missing_entries] afr-ns:
> attempting to recreate missing entries for path=/a/file
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:851:sh_missing_entries_lookup] afr1: looking up
> /a/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:851:sh_missing_entries_lookup] afr1: looking up
> /a/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:851:sh_missing_entries_lookup] afr-ns: looking up
> /a/file on subvolume n1-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:851:sh_missing_entries_lookup] afr-ns: looking up
> /a/file on subvolume n2-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:851:sh_missing_entries_lookup] afr-ns: looking up
> /a/file on subvolume n3-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:808:sh_missing_entries_lookup_cbk] afr1: path
> /a/file on subvolume n2-brick1 is of mode 0100644
> 2008-12-11 11:26:15 W
> [afr-self-heal-common.c:816:sh_missing_entries_lookup_cbk] afr1: path
> /a/file on subvolume n1-brick2 => -1 (No such file or directory)
> 2008-12-11 11:26:15 D [afr-self-heal-common.c:551:sh_missing_entries_mknod]
> afr1: mknod /a/file mode 0100644 on 1 subvolumes
> 2008-12-11 11:26:15 W
> [afr-self-heal-common.c:816:sh_missing_entries_lookup_cbk] afr-ns: path
> /a/file on subvolume n3-ns => -1 (No such file or directory)
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:808:sh_missing_entries_lookup_cbk] afr-ns: path
> /a/file on subvolume n2-ns is of mode 0100644
> 2008-12-11 11:26:15 W
> [afr-self-heal-common.c:816:sh_missing_entries_lookup_cbk] afr-ns: path
> /a/file on subvolume n1-ns => -1 (No such file or directory)
> 2008-12-11 11:26:15 D [afr-self-heal-common.c:551:sh_missing_entries_mknod]
> afr-ns: mknod /a/file mode 0100644 on 2 subvolumes
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:499:sh_missing_entries_newentry_cbk] afr-ns: chown
> /a/file to 0 0 on subvolume n3-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:499:sh_missing_entries_newentry_cbk] afr1: chown
> /a/file to 0 0 on subvolume n1-brick2
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:446:sh_missing_entries_finish] afr1: unlocking
> 11157510/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:446:sh_missing_entries_finish] afr1: unlocking
> 11157510/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:499:sh_missing_entries_newentry_cbk] afr-ns: chown
> /a/file to 0 0 on subvolume n1-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:446:sh_missing_entries_finish] afr-ns: unlocking
> 11157510/file on subvolume n1-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:446:sh_missing_entries_finish] afr-ns: unlocking
> 11157510/file on subvolume n2-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:446:sh_missing_entries_finish] afr-ns: unlocking
> 11157510/file on subvolume n3-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:386:afr_sh_missing_entries_done] afr1: proceeding to
> metadata check on /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:709:afr_sh_metadata_lock]
> afr1: locking /a/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:709:afr_sh_metadata_lock]
> afr1: locking /a/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:386:afr_sh_missing_entries_done] afr-ns: proceeding
> to metadata check on /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:709:afr_sh_metadata_lock]
> afr-ns: locking /a/file on subvolume n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:709:afr_sh_metadata_lock]
> afr-ns: locking /a/file on subvolume n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:709:afr_sh_metadata_lock]
> afr-ns: locking /a/file on subvolume n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:643:afr_sh_metadata_lookup]
> afr1: looking up /a/file on n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:643:afr_sh_metadata_lookup]
> afr1: looking up /a/file on n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:643:afr_sh_metadata_lookup]
> afr-ns: looking up /a/file on n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:643:afr_sh_metadata_lookup]
> afr-ns: looking up /a/file on n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:643:afr_sh_metadata_lookup]
> afr-ns: looking up /a/file on n3-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:598:afr_sh_metadata_lookup_cbk] afr1: path /a/file
> on subvolume n2-brick1 is of mode 0100644
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:598:afr_sh_metadata_lookup_cbk] afr1: path /a/file
> on subvolume n1-brick2 is of mode 0100644
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr1:
> pending_matrix: [ 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr1:
> pending_matrix: [ 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:486:afr_sh_metadata_sync_prepare] afr1: syncing
> metadata of /a/file from subvolume n2-brick1 to 1 active sinks
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:598:afr_sh_metadata_lookup_cbk] afr-ns: path
> /a/file on subvolume n3-ns is of mode 0100644
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:598:afr_sh_metadata_lookup_cbk] afr-ns: path
> /a/file on subvolume n2-ns is of mode 0100644
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:598:afr_sh_metadata_lookup_cbk] afr-ns: path
> /a/file on subvolume n1-ns is of mode 0100644
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:486:afr_sh_metadata_sync_prepare] afr-ns: syncing
> metadata of /a/file from subvolume n2-ns to 2 active sinks
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:378:afr_sh_metadata_sync]
> afr1: syncing metadata of /a/file from n2-brick1 to n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:378:afr_sh_metadata_sync]
> afr-ns: syncing metadata of /a/file from n2-ns to n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:378:afr_sh_metadata_sync]
> afr-ns: syncing metadata of /a/file from n2-ns to n3-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:247:afr_sh_metadata_erase_pending] afr1: erasing
> pending flags from /a/file on n1-brick2
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:247:afr_sh_metadata_erase_pending] afr1: erasing
> pending flags from /a/file on n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:155:afr_sh_metadata_finish]
> afr1: unlocking /a/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:155:afr_sh_metadata_finish]
> afr1: unlocking /a/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:83:afr_sh_metadata_done]
> afr1: proceeding to data check on /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:936:afr_sh_data_lock] afr1:
> locking /a/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:936:afr_sh_data_lock] afr1:
> locking /a/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:893:afr_sh_data_lock_cbk] afr1:
> inode of /a/file on child 1 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:893:afr_sh_data_lock_cbk] afr1:
> inode of /a/file on child 0 locked
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr1:
> pending_matrix: [ 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr1:
> pending_matrix: [ 0 0 ]
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:717:afr_sh_data_sync_prepare]
> afr1: syncing data of /a/file from subvolume n2-brick1 to 1 active sinks
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:615:afr_sh_data_open_cbk] afr1:
> fd for /a/file opened, commencing sync
> 2008-12-11 11:26:15 W [afr-self-heal-data.c:559:afr_sh_data_read_write]
> afr1: sourcing file /a/file from n2-brick1 to other sinks
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:514:afr_sh_data_read_cbk] afr1:
> read 29 bytes of data from /a/file on child 1, offset 0
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:451:afr_sh_data_write_cbk]
> afr1: wrote 29 bytes of data from /a/file to child 0, offset 0
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:479:afr_sh_data_write_cbk]
> afr1: closing fd's of /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:381:afr_sh_data_trim_cbk] afr1:
> ftruncate of /a/file on subvolume n1-brick2 completed
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:329:afr_sh_data_erase_pending]
> afr1: erasing pending flags from /a/file on n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:329:afr_sh_data_erase_pending]
> afr1: erasing pending flags from /a/file on n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:253:afr_sh_data_finish] afr1:
> finishing data selfheal of /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:229:afr_sh_data_unlock] afr1:
> unlocking /a/file on subvolume n1-brick2
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:229:afr_sh_data_unlock] afr1:
> unlocking /a/file on subvolume n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:186:afr_sh_data_unlck_cbk]
> afr1: inode of /a/file on child 1 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:186:afr_sh_data_unlck_cbk]
> afr1: inode of /a/file on child 0 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:134:afr_sh_data_close] afr1:
> closing fd of /a/file on n2-brick1
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:149:afr_sh_data_close] afr1:
> closing fd of /a/file on n1-brick2
> 2008-12-11 11:26:15 W [afr-self-heal-data.c:70:afr_sh_data_done] afr1: self
> heal of /a/file completed
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:247:afr_sh_metadata_erase_pending] afr-ns: erasing
> pending flags from /a/file on n1-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:247:afr_sh_metadata_erase_pending] afr-ns: erasing
> pending flags from /a/file on n2-ns
> 2008-12-11 11:26:15 D
> [afr-self-heal-metadata.c:247:afr_sh_metadata_erase_pending] afr-ns: erasing
> pending flags from /a/file on n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:155:afr_sh_metadata_finish]
> afr-ns: unlocking /a/file on subvolume n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:155:afr_sh_metadata_finish]
> afr-ns: unlocking /a/file on subvolume n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:155:afr_sh_metadata_finish]
> afr-ns: unlocking /a/file on subvolume n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-metadata.c:83:afr_sh_metadata_done]
> afr-ns: proceeding to data check on /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:936:afr_sh_data_lock] afr-ns:
> locking /a/file on subvolume n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:936:afr_sh_data_lock] afr-ns:
> locking /a/file on subvolume n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:936:afr_sh_data_lock] afr-ns:
> locking /a/file on subvolume n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:893:afr_sh_data_lock_cbk]
> afr-ns: inode of /a/file on child 2 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:893:afr_sh_data_lock_cbk]
> afr-ns: inode of /a/file on child 1 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:893:afr_sh_data_lock_cbk]
> afr-ns: inode of /a/file on child 0 locked
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D
> [afr-self-heal-common.c:112:afr_sh_print_pending_matrix] afr-ns:
> pending_matrix: [ 0 0 0 ]
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:717:afr_sh_data_sync_prepare]
> afr-ns: syncing data of /a/file from subvolume n2-ns to 2 active sinks
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:615:afr_sh_data_open_cbk]
> afr-ns: fd for /a/file opened, commencing sync
> 2008-12-11 11:26:15 W [afr-self-heal-data.c:559:afr_sh_data_read_write]
> afr-ns: sourcing file /a/file from n2-ns to other sinks
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:514:afr_sh_data_read_cbk]
> afr-ns: read 0 bytes of data from /a/file on child 1, offset 0
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:381:afr_sh_data_trim_cbk]
> afr-ns: ftruncate of /a/file on subvolume n3-ns completed
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:381:afr_sh_data_trim_cbk]
> afr-ns: ftruncate of /a/file on subvolume n1-ns completed
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:329:afr_sh_data_erase_pending]
> afr-ns: erasing pending flags from /a/file on n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:329:afr_sh_data_erase_pending]
> afr-ns: erasing pending flags from /a/file on n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:329:afr_sh_data_erase_pending]
> afr-ns: erasing pending flags from /a/file on n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:253:afr_sh_data_finish] afr-ns:
> finishing data selfheal of /a/file
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:229:afr_sh_data_unlock] afr-ns:
> unlocking /a/file on subvolume n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:229:afr_sh_data_unlock] afr-ns:
> unlocking /a/file on subvolume n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:229:afr_sh_data_unlock] afr-ns:
> unlocking /a/file on subvolume n3-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:186:afr_sh_data_unlck_cbk]
> afr-ns: inode of /a/file on child 2 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:186:afr_sh_data_unlck_cbk]
> afr-ns: inode of /a/file on child 1 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:186:afr_sh_data_unlck_cbk]
> afr-ns: inode of /a/file on child 0 locked
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:134:afr_sh_data_close] afr-ns:
> closing fd of /a/file on n2-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:149:afr_sh_data_close] afr-ns:
> closing fd of /a/file on n1-ns
> 2008-12-11 11:26:15 D [afr-self-heal-data.c:149:afr_sh_data_close] afr-ns:
> closing fd of /a/file on n3-ns
> 2008-12-11 11:26:15 W [afr-self-heal-data.c:70:afr_sh_data_done] afr-ns:
> self heal of /a/file completed
> 2008-12-11 11:26:15 D [fuse-bridge.c:384:fuse_entry_cbk] glusterfs-fuse:
> 171: LOOKUP() /a/file => 3833866(loc->ino:0)
> 2008-12-11 11:26:15 D [inode.c:94:__dentry_hash] fuse/inode: dentry hashed
> file (3833866)
> 2008-12-11 11:26:15 D [inode.c:299:__inode_passivate] fuse/inode:
> passivating inode(3833866) lru=1/0 active=2 purge=0
> 2008-12-11 11:26:15 D [inode.c:280:__inode_activate] fuse/inode: activating
> inode(3833866), lru=0/0 active=3 purge=0
> 2008-12-11 11:26:15 D [fuse-bridge.c:1505:fuse_open] glusterfs-fuse: 172:
> OPEN /a/file
> 2008-12-11 11:26:15 D [ioc-inode.c:142:ioc_inode_update] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [ioc-inode.c:150:ioc_inode_update] brick: adding to
> inode_lru[0]
> 2008-12-11 11:26:15 D [ioc-inode.c:152:ioc_inode_update] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [fuse-bridge.c:654:fuse_fd_cbk] glusterfs-fuse: 172:
> OPEN() /a/file => 0xb6b3a878
> 2008-12-11 11:26:15 D [fuse-bridge.c:1565:fuse_readv] glusterfs-fuse: 173:
> READ (0xb6b3a878, size=4096, offset=0)
> 2008-12-11 11:26:15 D [io-cache.c:63:ioc_get_inode] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:67:ioc_get_inode] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:988:ioc_readv] brick: NEW REQ
> (0xb6b39458) offset = 0 && size = 4096
> 2008-12-11 11:26:15 D [io-cache.c:992:ioc_readv] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:996:ioc_readv] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:866:dispatch_requests] brick: locked
> inode(0x83d5bf8)
> 2008-12-11 11:26:15 D [page.c:204:ioc_page_create] io-cache: returning new
> page 0xb6b39608
> 2008-12-11 11:26:15 D [page.c:230:ioc_wait_on_page] brick:
> frame(0xb6b39458) waiting on page = 0xb6b39608, offset=0, size=4096
> 2008-12-11 11:26:15 D [page.c:239:ioc_wait_on_page] brick: locked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [page.c:241:ioc_wait_on_page] brick: unlocked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [io-cache.c:899:dispatch_requests] brick: unlocked
> inode(0x83d5bf8)
> 2008-12-11 11:26:15 D [page.c:429:ioc_page_fault] brick: stack winding page
> fault for offset = 0 with frame 0xb6b39688
> 2008-12-11 11:26:15 D [page.c:623:ioc_frame_return] brick: locked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [page.c:625:ioc_frame_return] brick: unlocked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [io-cache.c:809:ioc_need_prune] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:813:ioc_need_prune] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [page.c:302:ioc_fault_cbk] brick: locked
> inode(0x83d5bf8)
> 2008-12-11 11:26:15 D [page.c:324:ioc_fault_cbk] brick: op_ret = 29
> 2008-12-11 11:26:15 D [page.c:653:ioc_page_wakeup] brick: page is
> 0xb6b39608 && waitq = 0xb6b3a898
> 2008-12-11 11:26:15 D [page.c:453:ioc_frame_fill] brick: frame (0xb6b39458)
> offset = 0 && size = 4096 && page->size = 29 && wait_count = 1
> 2008-12-11 11:26:15 D [page.c:482:ioc_frame_fill] brick: copy_size = 29 &&
> src_offset = 0 && dst_offset = 0
> 2008-12-11 11:26:15 D [page.c:623:ioc_frame_return] brick: locked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [page.c:625:ioc_frame_return] brick: unlocked
> local(0xb6b37ac8)
> 2008-12-11 11:26:15 D [page.c:590:ioc_frame_unwind] brick:
> frame(0xb6b39458) unwinding with op_ret=29
> 2008-12-11 11:26:15 D [fuse-bridge.c:1530:fuse_readv_cbk] glusterfs-fuse:
> 173: READ => 29/4096,0/0
> 2008-12-11 11:26:15 D [page.c:366:ioc_fault_cbk] brick: unlocked
> inode(0x83d5bf8)
> 2008-12-11 11:26:15 D [page.c:370:ioc_fault_cbk] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [page.c:372:ioc_fault_cbk] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:809:ioc_need_prune] brick: locked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [io-cache.c:813:ioc_need_prune] brick: unlocked
> table(0x83ccf08)
> 2008-12-11 11:26:15 D [page.c:386:ioc_fault_cbk] brick: fault frame
> 0xb6b39688 returned
> 2008-12-11 11:26:15 D [fuse-bridge.c:618:fuse_getattr] glusterfs-fuse: 174:
> FGETATTR 3833866 (/a/file/0xb6b3a878)
> 2008-12-11 11:26:15 D [fuse-bridge.c:530:fuse_attr_cbk] glusterfs-fuse:
> 174: FSTAT() /a/file => 3833866
> 2008-12-11 11:26:15 D [fuse-bridge.c:1648:fuse_flush] glusterfs-fuse: 175:
> FLUSH 0xb6b3a878
> 2008-12-11 11:26:15 D [fuse-bridge.c:902:fuse_err_cbk] glusterfs-fuse: 175:
> FLUSH() ERR => 0
> 2008-12-11 11:26:15 D [fuse-bridge.c:1668:fuse_release] glusterfs-fuse:
> 176: RELEASE 0xb6b3a878
> 2008-12-11 11:26:15 D [inode.c:299:__inode_passivate] fuse/inode:
> passivating inode(3833866) lru=1/0 active=2 purge=0
> 2008-12-11 11:26:15 D [fuse-bridge.c:2067:fuse_statfs] glusterfs-fuse: 177:
> STATFS
>
> Why is the file brought back? Can anyone help?
>
> --
> rash at konto pl
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Raghavendra G





