Re: self-heal is not triggered and data inconsistency?

The file is:
/mnt/search-prod/index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index

One client's log
============================================================
[2014-02-16 04:02:02.089758] I [glusterfsd-mgmt.c:1565:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2014-02-18 11:06:04.621735] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-3: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.data (808a5a65-36b8-4fdc-814a-5b832b36c394)
[2014-02-18 11:06:04.621875] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-4: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.data (808a5a65-36b8-4fdc-814a-5b832b36c394)
[2014-02-18 11:06:04.621913] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-5: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.data (808a5a65-36b8-4fdc-814a-5b832b36c394)
[2014-02-18 11:18:04.070026] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-5: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index (58a65eb7-8fb7-432b-a9f1-f27f99439f18)
[2014-02-18 11:18:04.070090] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-4: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index (58a65eb7-8fb7-432b-a9f1-f27f99439f18)
[2014-02-18 11:18:04.070130] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-search-prod-client-3: remote operation failed: Stale NFS file handle. Path: /index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index (58a65eb7-8fb7-432b-a9f1-f27f99439f18)

The other client's log
============================================================
[2014-02-18 10:00:18.464929] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  entry self-heal completed on /index_pipeline_searchengine/root/data/2014-02-11
[2014-02-18 10:00:19.409340] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  entry self-heal triggered. path: /index_pipeline_searchengine/root/data, reason: lookup detected pending operations
[2014-02-18 10:00:19.503364] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  entry self-heal completed on /index_pipeline_searchengine/root/data
[2014-02-18 10:31:39.598236] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done, reason: lookup detected pending operations
[2014-02-18 10:31:39.696935] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done
[2014-02-18 10:31:42.312547] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done, reason: lookup detected pending operations
[2014-02-18 10:31:42.406079] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done
[2014-02-18 10:32:40.114804] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done, reason: lookup detected pending operations
[2014-02-18 10:32:40.208238] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done
[2014-02-18 10:33:40.162889] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done, reason: lookup detected pending operations
[2014-02-18 10:33:40.260236] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done
[2014-02-18 10:34:40.205376] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done, reason: lookup detected pending operations
[2014-02-18 10:34:40.298777] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/channel_leaf/done/2014-02-18.done
[2014-02-18 10:43:52.959485] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data data self-heal triggered. path: /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info, reason: lookup detected pending operations
[2014-02-18 10:43:53.177817] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-search-prod-replicate-4: no active sinks for performing self-heal on file /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info
[2014-02-18 10:43:53.240084] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data data self-heal completed on /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info
[2014-02-18 10:58:07.853097] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0, reason: lookup detected pending operations
[2014-02-18 10:58:07.948421] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0
[2014-02-18 10:58:09.136128] I [afr-common.c:1340:afr_launch_self_heal] 0-search-prod-replicate-4: background  meta-data self-heal triggered. path: /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0, reason: lookup detected pending operations
[2014-02-18 10:58:09.232103] I [afr-self-heal-common.c:2159:afr_self_heal_completion_cbk] 0-search-prod-replicate-4: background  meta-data self-heal completed on /index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0
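Given the pending-operation messages above, it may help to check whether the replicas still disagree on these files. A sketch of the query, run on any of the servers; the volume name "search-prod" is inferred from the "search-prod-client-N" / "search-prod-replicate-N" translator names in the logs:

```shell
# List entries that AFR still considers in need of heal on the volume.
gluster volume heal search-prod info

# Entries the self-heal daemon could not reconcile (split-brain), and
# entries where heal attempts failed:
gluster volume heal search-prod info split-brain
gluster volume heal search-prod info heal-failed
```

If index_data.index shows up here, the self-heal never completed rather than never being triggered.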



On Tue, Feb 18, 2014 at 2:10 PM, Mingfan Lu <mingfan.lu@xxxxxxxxx> wrote:
Hi,
  We saw the following issue.
   One client (FUSE mount) updated a file; then another client (also a FUSE mount) copied the same file, but the reader found that the copied file was out of date.
   If the reader first ran the ls command to list the entries of the directory containing the target file, it could then copy the latest version.
   Both clients' version:
          glusterfs-3.3.0-1
   The server's version is glusterfs-3.3.0.5rhs

   I remember that 3.3 supports automatic self-heal on the first lookup, so calling "cp" should trigger the self-heal and fetch the latest file. Why doesn't it?

   Any comments? I can provide whatever additional information you need.
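The ls workaround described above presumably works because a readdir/lookup on the parent directory forces the client to re-resolve the file, giving AFR a chance to launch self-heal; a stat on the file itself should have the same effect. A minimal sketch of that workaround, using the path from the report (this papers over the problem, it is not a fix for the underlying bug):

```shell
# Force fresh lookups before copying, so the FUSE client drops any stale
# handle and AFR can pick (and heal toward) the up-to-date replica.
FILE=/mnt/search-prod/index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index
ls -l "$(dirname "$FILE")" > /dev/null   # lookup on the parent directory
stat "$FILE" > /dev/null                 # lookup on the file itself
cp "$FILE" /tmp/
```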

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
