Re: heal info output

Hi Emmanuel,

On Thu, Jul 2, 2020 at 3:05 AM Emmanuel Dreyfus <manu@xxxxxxxxxx> wrote:
Hello

gluster volume heal info shows me questionable entries. I wonder if these
are bugs, or if I should handle them, and how.

bidon# gluster volume heal gfs info
Brick bidon:/export/wd0e_tmp
Status: Connected
Number of entries: 0

Brick baril:/export/wd0e
/.attribute/system
<gfid:7f3a4aa5-7a49-4f50-a166-b345cdf0616c>
Status: Connected
Number of entries: 2

(...)
Brick bidon:/export/wd2e
<gfid:d616f804-0579-4649-8d8e-51ec4cf0e131>
<gfid:f4eb6db3-8341-454b-9700-81ad4ebca61e>
/owncloud/data
<gfid:43b80fd9-a577-4568-b400-2d80bb4d25ad>
<gfid:7253faad-6843-4321-a63f-17671237d607>
<gfid:da055475-43c0-4157-b4f1-30b3647bc0b6>
<gfid:02f4f38e-f351-4bb8-bd43-cad64ba5a4f5>

There are three cases:
1) The /.attribute directory is special on NetBSD: it is where extended
attributes are stored for the filesystem. The posix xlator takes care of
screening it, but there must be some other software component that
should learn to disregard it. Hints are welcome about where I
should look.

Is the '.attribute' directory only present in the root directory of a filesystem? If so, I strongly recommend never placing bricks at the root of a filesystem. Always place the brick in a subdirectory.
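
For example, something like this on each server ('brick' is just an arbitrary subdirectory name I picked here; the mount point is taken from your output):

    # after mounting the filesystem on /export/wd0e:
    mkdir /export/wd0e/brick
    # then use bidon:/export/wd0e/brick (and the equivalent path on the
    # other servers) as the brick path when creating the volume

That way, filesystem-root-only entries like /.attribute never appear inside the brick at all.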


2) /owncloud/data is a directory. Mode, owner, and group are the same
on all bricks. Why is it listed here?

If files or subdirectories have been created in or removed from that directory and the operation failed on some brick (or the brick was down), the directory itself is also marked as bad. You should also check its contents.
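
One quick way to check the pending-operation counters (a sketch; the brick path comes from your output, and trusted.afr.gfs-client-* is the standard AFR xattr naming for a volume called 'gfs'):

    # on each brick, dump all extended attributes of the directory:
    getfattr -d -m . -e hex /export/wd2e/owncloud/data
    # non-zero trusted.afr.gfs-client-* values indicate pending data,
    # metadata or entry operations against the corresponding brick

Comparing an 'ls' of the directory on each brick will also show any entries that are missing on one side.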


3) <gfid:...> What should I do with this?

These are files or directories whose real path is not known. If the gfid2path feature is enabled, you can check the trusted.gfid2path.xxxxxx xattr on the gfid entry. Its value contains the gfid of the parent directory and the file name. The full path can be retrieved by following the parent directory symlinks, or by using the gfid-to-dirname.sh script in the extras directory.
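
A sketch of how to read it (gfid and brick path taken from your output; inside .glusterfs, entries are filed under two directory levels named after the first two bytes of the gfid):

    # regular files have a hard link (directories a symlink) under .glusterfs:
    getfattr -d -m . -e text \
        /export/wd2e/.glusterfs/d6/16/d616f804-0579-4649-8d8e-51ec4cf0e131
    # with gfid2path enabled, this prints a trusted.gfid2path.<hash> xattr
    # whose value is "<parent gfid>/<file name>"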

If gfid2path is not enabled, I fear that finding them will have to be done by brute force (see the sketch after this list):

1. Get the inode number of one of the gfid entries on one brick.
2. Run 'find <brick root> -inum <inode number>'
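
Concretely, something like this (gfid and brick path from your output; 123456 is a placeholder for the inode number printed by the first command):

    # the .glusterfs entry is a hard link for regular files, so it shares
    # the file's inode number:
    ls -i /export/wd2e/.glusterfs/f4/eb/f4eb6db3-8341-454b-9700-81ad4ebca61e
    # search the brick for the real path with that inode, skipping .glusterfs:
    find /export/wd2e -inum 123456 ! -path '*/.glusterfs/*'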

Once you find the entries, if you do a 'stat' on them through the mount point of the volume, the next "gluster volume heal info" should show the real path of the files instead of their gfids.
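
For example (the mount point and recovered file name here are hypothetical):

    # stat the entry through a client mount so the path gets resolved:
    stat /mnt/gfs/owncloud/data/somefile
    gluster volume heal gfs info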

Regards,

Xavi

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
manu@xxxxxxxxxx
_______________________________________________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel
