Hello list,
I am seeing continuously repeated log entries in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

-------------- begin snip ------------
[2014-06-20 08:26:32.273383] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume applications
[2014-06-20 08:26:32.400642] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:32.400691] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:48.550989] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:48.551041] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:49.271236] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:49.271300] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:55.311658] I [glusterd-volume-ops.c:478:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume home
[2014-06-20 08:26:55.386682] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume home
[2014-06-20 08:26:55.515313] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:55.515364] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:07.476962] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:07.477017] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:18.321956] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:18.322011] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:28.366934] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:28.366995] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:57.158702] I [glusterd-volume-ops.c:478:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume backup
[2014-06-20 08:27:57.231446] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume backup
[2014-06-20 08:27:57.347860] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:57.347957] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:58.404337] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:58.404485] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:32.520949] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:32.521023] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:48.230856] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:48.230911] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:48.505597] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:48.505646] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
-------------- end snip ------------
In my research I came across a source code commit (https://forge.gluster.org/glusterfs-core/glusterfs/commit/3f81c44a03e9ab78be2b4a69e3e36d41a4de324a/diffs?diffmode=sidebyside&fragment=1) where the inode size for different filesystem types is retrieved ("glusterd: Used runner's RUN_PIPE to get inode size in xfs/ext3/ext4").
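Out of curiosity, here is a minimal sketch of what that code path appears to boil down to for ext3/ext4 bricks: run "tune2fs -l" against the brick's backing device and parse the "Inode size:" line. This is my own approximation, not the actual glusterd code, and /dev/sdb1 is just a placeholder for a brick device:

/* Minimal sketch (my approximation, not the actual glusterd code):
 * run tune2fs -l on the brick's backing device and parse the
 * "Inode size:" line, roughly what glusterd_add_inode_size_to_dict()
 * does for ext3/ext4 bricks. /dev/sdb1 is a placeholder. */
#include <stdio.h>

int main(void)
{
    const char *cmd = "tune2fs -l /dev/sdb1 2>&1"; /* placeholder device */
    FILE *fp = popen(cmd, "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }

    char line[256];
    int inode_size = -1;
    while (fgets(line, sizeof(line), fp) != NULL) {
        /* tune2fs prints a line like "Inode size:          256" */
        if (sscanf(line, "Inode size: %d", &inode_size) == 1)
            break;
    }

    int status = pclose(fp);
    if (status != 0) {
        /* the case glusterd logs as
         * "tune2fs exited with non-zero exit status" */
        fprintf(stderr, "tune2fs exited with non-zero exit status\n");
    } else if (inode_size > 0) {
        printf("inode size: %d\n", inode_size);
    } else {
        /* the case glusterd logs as "failed to get inode size" */
        fprintf(stderr, "failed to get inode size\n");
    }

    return 0;
}

Running "tune2fs -l" by hand against the actual brick devices should show why the call exits non-zero here, for example if a brick does not sit on ext3/ext4 at all.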
Is the gluster management daemon doing this in response to the heal/status requests it received (e.g. "Received status volume req for volume applications")?
Since these are logged as errors, I am worried about the consistency of the GlusterFS volumes. Has anyone had a similar issue?
Thanks,
Claudio