tune2fs exited with non-zero exit status


 



Hi,

 

I am just looking through my logs and am seeing a lot of entries of the form:

 

[2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki
[2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size

 

Having had a rummage, I *suspect* this is because gluster tries to get the volume status by querying the filesystem superblock for each brick. However, that fails here because the volume was created in this form:

 

root@gfsi-rh-01:/mnt# gluster volume create gfs1 replica 2 transport tcp \
                          gfsi-rh-01:/srv/hod/wiki \
                          gfsi-isr-01:/srv/hod/wiki force

 

where those brick paths are not raw device paths but the mount points on each local server.

 

Volume status returns:

gluster volume status wiki
Status of volume: wiki
Gluster process                                            Port   Online  Pid
------------------------------------------------------------------------------
Brick gfsi-rh-01.core.canterbury.ac.uk:/srv/hod/wiki       49157  Y       3077
Brick gfsi-isr-01.core.canterbury.ac.uk:/srv/hod/wiki      49156  Y       3092
Brick gfsi-cant-01.core.canterbury.ac.uk:/srv/hod/wiki     49152  Y       2908
NFS Server on localhost                                    2049   Y       35065
Self-heal Daemon on localhost                              N/A    Y       35073
NFS Server on gfsi-cant-01.core.canterbury.ac.uk           2049   Y       2920
Self-heal Daemon on gfsi-cant-01.core.canterbury.ac.uk     N/A    Y       2927
NFS Server on gfsi-isr-01.core.canterbury.ac.uk            2049   Y       32680
Self-heal Daemon on gfsi-isr-01.core.canterbury.ac.uk      N/A    Y       32687

Task Status of Volume wiki
------------------------------------------------------------------------------
There are no active volume tasks

 

Which is what I would expect.

 

Interestingly, to test my theory:

 

# tune2fs -l /srv/hod/wiki/
tune2fs 1.42.5 (29-Jul-2012)
tune2fs: Attempt to read block from filesystem resulted in short read while trying to open /srv/hod/wiki/
Couldn't find valid filesystem superblock.

 

This fails as I expected, since tune2fs is being pointed at a mount point, and it looks like that is exactly what gluster is doing.

 

But:

 

# tune2fs -l /dev/mapper/bricks-wiki
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   wiki
Last mounted on:          /srv/hod/wiki
Filesystem UUID:          a75306ac-31fa-447d-9da7-23ef66d9756b
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
<snipped>
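Incidentally, for scripting the same check without hard-coding the /dev/mapper path, the device behind a mount point can be resolved with findmnt (from util-linux); a small sketch, with the variable names my own:

```shell
# Resolve the block device that backs a brick mount point, so tune2fs can
# be pointed at the device instead of the directory. /srv/hod/wiki is the
# brick mount point from my volume; substitute any mounted path.
brick_mount=/srv/hod/wiki
device=$(findmnt -n -o SOURCE "$brick_mount")
echo "brick $brick_mount is backed by: $device"

# tune2fs -l "$device"   # succeeds, unlike tune2fs -l on the directory itself
```

(`df --output=source /srv/hod/wiki` would give the same answer on a reasonably recent coreutils.)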

 

This leaves me with a couple of questions:

 

Is there any way to sort this in the gluster configuration so that it checks the raw device rather than the local mount point for that volume?

 

Should the volume have been created using the raw path /dev/mapper/…..   rather than the mount point? 

 

Or should I have created the volume (as I *now* see in the Red Hat Storage Administration Guide) under a subdirectory below the mounted filesystem (i.e. /srv/hod/wiki/brick)?

 

If I need to move the data and recreate the bricks, that is not a problem, as this is still a proof of concept; what I need to know is whether doing so will stop the continual log churn.
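For the record, the subdirectory layout the Red Hat guide describes would look something like this; only a sketch, since it has to be run on the actual servers (host, volume, and path names follow my original example, and "brick" is an arbitrary directory name):

```shell
# Hypothetical rebuild with bricks one level below the mount point, per the
# Red Hat Storage Administration Guide. Creating the brick as a subdirectory
# means an unmounted filesystem leaves a missing path, not a brick full of
# data written to the root disk.

# On every brick server: create the subdirectory inside the mounted filesystem.
mkdir -p /srv/hod/wiki/brick

# Then, from one server, create the volume against the subdirectories
# rather than the mount points themselves:
gluster volume create gfs1 replica 2 transport tcp \
    gfsi-rh-01:/srv/hod/wiki/brick \
    gfsi-isr-01:/srv/hod/wiki/brick force
```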

 

Many thanks

 

Paul

 

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
