Hey Kaleb,
Thanks for your response. The e2fsprogs package is installed and the tune2fs command is available.
We're running the glusterfs daemon within an LXC container. The LV is directly mounted into the container (through the fstab config file in /var/lib/lxc/lxcname/fstab).
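For reference, the entry in that fstab looks roughly like the line below (the in-container mount point "glusterfs/brick" is just an illustration, not our real path; LXC resolves relative mount targets against the container's rootfs):

/dev/lxc1/storage03-brick glusterfs/brick ext4 defaults 0 0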
Inside the LXC container, tune2fs fails and exits with status 1:
# tune2fs -l /dev/lxc1/storage03-brick
tune2fs 1.42.9 (4-Feb-2014)
tune2fs: No such file or directory while trying to open /dev/lxc1/storage03-brick
Couldn't find valid filesystem superblock.
# echo $?
1
When I run the same command on the physical server, tune2fs works:
# tune2fs -l /dev/lxc1/storage03-brick
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name: <none>
Last mounted on: /usr/lib/x86_64-linux-gnu/lxc
Filesystem UUID: 18f84853-4070-4c6a-af95-efb27fe3eace
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
[...]
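So the problem seems to be that the LV's device node simply isn't visible inside the container, rather than a broken superblock. A possible workaround I haven't tried yet (the major/minor numbers below are examples only) would be to expose the node to the container:

On the host, look up the device-mapper major/minor numbers of the LV:
# ls -lL /dev/lxc1/storage03-brick

In /var/lib/lxc/lxcname/config, allow that block device (example numbers):
lxc.cgroup.devices.allow = b 252:3 rwm

Inside the container, recreate the node so tune2fs can open it:
# mkdir -p /dev/lxc1
# mknod /dev/lxc1/storage03-brick b 252 3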
Thanks for pointing me in the right direction :).
cheers
On Fri, Jun 20, 2014 at 1:16 PM, Kaleb S. KEITHLEY <kkeithle@xxxxxxxxxx> wrote:
You probably need to add xfsprogs (rpm) or the dpkg equivalent, assuming your brick is an xfs volume.
Or if the brick is ext4, then you need to install e2fsprogs.
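If you're not sure which one applies, something like this should tell you the brick's filesystem type and pull in the matching tools (Debian/Ubuntu package names shown, /path/to/brick is a placeholder; use the yum/rpm equivalents otherwise):

# df -T /path/to/brick
# apt-get install e2fsprogs    (for ext3/ext4 bricks)
# apt-get install xfsprogs     (for xfs bricks)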
On 06/20/2014 04:33 AM, Claudio Kuenzler wrote:
Hello list,
I am seeing continuously repeated log entries in
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
-------------- snipp ------------
[2014-06-20 08:26:32.273383] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume applications
[2014-06-20 08:26:32.400642] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:32.400691] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:48.550989] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:48.551041] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:49.271236] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:49.271300] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:55.311658] I [glusterd-volume-ops.c:478:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume home
[2014-06-20 08:26:55.386682] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume home
[2014-06-20 08:26:55.515313] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:55.515364] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:07.476962] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:07.477017] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:18.321956] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:18.322011] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:28.366934] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:28.366995] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:57.158702] I [glusterd-volume-ops.c:478:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume backup
[2014-06-20 08:27:57.231446] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume backup
[2014-06-20 08:27:57.347860] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:57.347957] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:27:58.404337] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:27:58.404485] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:32.520949] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:32.521023] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:48.230856] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:48.230911] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:28:48.505597] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:28:48.505646] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
-------------- end snipp ------------
During my research I came across a source code commit
(https://forge.gluster.org/glusterfs-core/glusterfs/commit/3f81c44a03e9ab78be2b4a69e3e36d41a4de324a/diffs?diffmode=sidebyside&fragment=1)
where the inode size is retrieved for the different filesystem types ("glusterd:
Used runner's RUN_PIPE to get inode size in xfs/ext3/ext4").
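If I read the commit right, for an ext brick glusterd effectively runs something equivalent to the command below and parses the "Inode size" line (for xfs it would parse xfs_info output instead). This is my paraphrase of the commit, not the exact code, but it matches the failure inside the container shown above:

# tune2fs -l /dev/lxc1/storage03-brick | grep -i "inode size"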
Is the gluster management daemon doing this because of the received heal/status
requests ("Received status volume req for volume applications")?
As these are errors, I worry about the consistency of the GlusterFS
volumes. Has anyone had a similar issue?
Thanks,
Claudio
--
Kaleb
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users