On Tue, Feb 14, 2017 at 1:01 PM, jayakrishnan mm <jayakrishnan.mm@xxxxxxxxx> wrote:
On Mon, Feb 13, 2017 at 7:07 PM, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:

Hi JK,

On Mon, Feb 13, 2017 at 1:06 PM, jayakrishnan mm <jayakrishnan.mm@xxxxxxxxx> wrote:

Hi Krutika,

Could you please tell me what the base file name means? I mean the part after xattrop-*; what does this number signify?
Ex: xattrop-a321b856-05b3-48d3-a393-805be83c6b73. Is this the gfid of some file?

No. a321b856-05b3-48d3-a393-805be83c6b73 is a randomly generated UUID.
So, in order not to consume one on-disk inode per index, the index translator creates this one base file called xattrop-xxxxxxxx...xxx of size 0 bytes, where xxxxxxxx...xxx is a randomly generated UUID, and all indices that need to be created to signify that certain gfids need heal are hard-linked to this base file. So the number of inodes consumed remains 1, irrespective of whether 10 files need heal, or 100, or 1000, or even a million.

You can do an `ls -li` under the xattrop directory while a couple of files need heal to see it for yourself.

Understood. So all files have gfid entries inside .glusterfs/xx/yy, where xx/yy is the initial part of the gfid. Only those that need healing are kept under the xattrop directory, and, in order to save inodes, they are hard-linked to a base file. The self-heal daemon can then traverse this directory later for the purpose of healing.
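For anyone following along, here is a minimal stand-alone illustration of why the hard-link scheme costs only one inode. It uses plain files in /tmp and is not gluster-specific; the names are made up for the demo:

mkdir /tmp/xattrop-demo && cd /tmp/xattrop-demo
touch xattrop-base                        # 0-byte base file: the only inode consumed
for g in gfid-1 gfid-2 gfid-3; do         # pretend these are gfids that need heal
    ln xattrop-base "$g"                  # hard-link each "index" to the base file
done
ls -li                                    # every entry shows the same inode number
stat -c '%h %i %n' xattrop-base           # link count is now 4, but still just 1 inode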
The hard links under .glusterfs/xx/yy are not to be confused with the indices under indices/xattrop. Every file, including one that *needs* heal, has a hard link under .glusterfs/xx/yy pointing to the original file containing the user-written data, which resides under its normal parent directory.
GFID being a unique property of every inode, the indices under xattrop are named after the gfid of individual files.
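A small check that makes the distinction visible, using the rep-brick2 brick and the wish.txt gfid from the example further down (paths are assumptions taken from that example; adjust for your own brick):

BRICK=/home/user/gluster/rep-brick2                   # brick path from the example below
GFID=a3e23755-4ec6-42d2-ac2c-ad4bd682cdbd             # gfid of wish.txt in that example
# The .glusterfs gfid entry shares its inode with the data file under its normal parent dir:
stat -c '%i %h %n' "$BRICK/wish.txt" "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
# The index under indices/xattrop, by contrast, shares its inode with the xattrop-* base file:
stat -c '%i %h %n' "$BRICK/.glusterfs/indices/xattrop/"*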
I have encountered some extra gfids.

Volume Name: rep-vol
Type: Replicate
Volume ID: 8667044d-b75e-4fc0-9ae8-2d2b39b8558f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.36.220:/home/user/gluster/rep-brick1
Brick2: 192.168.36.220:/home/user/gluster/rep-brick2
Options Reconfigured:
performance.readdir-ahead: on

I kill the brick1 process, so that the status is as below:

Status of volume: rep-vol
Gluster process                                      TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------------
Brick 192.168.36.220:/home/user/gluster/rep-brick1   N/A       N/A        N       N/A
Brick 192.168.36.220:/home/user/gluster/rep-brick2   49221     0          Y       24925
NFS Server on localhost                              N/A       N/A        N       N/A
Self-heal Daemon on localhost                        N/A       N/A        Y       24954

Task Status of Volume rep-vol
--------------------------------------------------------------------------------------
There are no active volume tasks

I copy wish.txt to /mnt/gluster/rep (the mount point):

root@dhcp-192-168-36-220:/home/user/gluster/rep-brick2/.glusterfs/indices/xattrop# ls -li
total 0
3670235 ---------- 3 root root 0 Feb  9 07:04 00000000-0000-0000-0000-000000000001
3670235 ---------- 3 root root 0 Feb  9 07:04 a3e23755-4ec6-42d2-ac2c-ad4bd682cdbd
3670235 ---------- 3 root root 0 Feb  9 07:04 xattrop-8263faed-cba8-4738-9197-93e4e7e103ff

As expected. Now I create another file, test.txt, on the mount point:

root@dhcp-192-168-36-220:/home/user/gluster/rep-brick2/.glusterfs/indices/xattrop# ls -li
total 0
3670235 ---------- 7 root root 0 Feb  9 07:04 00000000-0000-0000-0000-000000000001
3670235 ---------- 7 root root 0 Feb  9 07:04 571ca3f1-5c1b-426d-990e-191aa62ea9c4
3670235 ---------- 7 root root 0 Feb  9 07:04 6b7f7823-b864-4f48-8a07-7c073f8d2ef5
3670235 ---------- 7 root root 0 Feb  9 07:04 7b4e97fc-8734-4dba-a72a-a750a22abd2d
3670235 ---------- 7 root root 0 Feb  9 07:04 96587bb0-2ff8-4d97-8470-69bb48be9fd2
3670235 ---------- 7 root root 0 Feb  9 07:04 a3e23755-4ec6-42d2-ac2c-ad4bd682cdbd
3670235 ---------- 7 root root 0 Feb  9 07:04 xattrop-8263faed-cba8-4738-9197-93e4e7e103ff

root@dhcp-192-168-36-220:/home/user/gluster/rep-brick2/.glusterfs/indices/xattrop# getfattr -d -e hex -m . ../../../wish.txt | grep gfid
trusted.gfid=0xa3e237554ec642d2ac2cad4bd682cdbd
root@dhcp-192-168-36-220:/home/user/gluster/rep-brick2/.glusterfs/indices/xattrop# getfattr -d -e hex -m . ../../../test.txt | grep gfid
trusted.gfid=0x571ca3f15c1b426d990e191aa62ea9c4

Why are there some extra gfids?
Did you try converting those extra gfids to path and figure out what entries those are?
If you haven't, could you please do that and get back?
-Krutika
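One way to do that conversion, as a sketch only (the brick path is taken from the example above, and the gfid is simply one of the extra entries in the listing; neither has been verified on this setup):

BRICK=/home/user/gluster/rep-brick2                   # brick path from the example above
GFID=6b7f7823-b864-4f48-8a07-7c073f8d2ef5             # one of the extra gfids listed above
# For regular files, .glusterfs/xx/yy/<gfid> on the brick is a hard link to the data file,
# so searching the brick for the same inode reveals the real path:
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path '*/.glusterfs/*'
# Alternatively, with an aux-gfid mount of the volume:
#   mount -t glusterfs -o aux-gfid-mount 192.168.36.220:/rep-vol /mnt/gfid
#   getfattr -n trusted.glusterfs.pathinfo -e text "/mnt/gfid/.gfid/$GFID"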
Regards
JK

On Tue, Apr 19, 2016 at 12:31 PM, jayakrishnan mm <jayakrishnan.mm@xxxxxxxxx> wrote:

OK.
On Apr 19, 2016 11:25 AM, "Krutika Dhananjay" <kdhananj@xxxxxxxxxx> wrote:

The hardlink will be removed, yes, but the base file will stay.

-Krutika

On Tue, Apr 19, 2016 at 8:31 AM, jayakrishnan mm <jayakrishnan.mm@xxxxxxxxx> wrote:

Hi,
Is the hardlink not removed after healing is done?

--
JK

On Mon, Apr 18, 2016 at 6:22 PM, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:

That's just a base file that all the gfid files are hard-linked to. Since it is pointless to consume one inode for each gfid that needs a heal, we use a base file with an identifiable name (xattrop-*) and then hard-link the actual gfid files representing pointers for heal to this file.

-Krutika

On Mon, Apr 18, 2016 at 11:00 AM, jayakrishnan mm <jayakrishnan.mm@xxxxxxxxx> wrote:

Hi,

The self-heal daemon refers to the .glusterfs/indices/xattrop directory to see which files are to be healed, and this directory should contain the gfids of those files. I also see some other ids that are prefixed with xattrop-, for example:

root@ad3:/data/ssd/dsi-ec8-brick/.glusterfs/indices/xattrop# ll
total 8
drw------- 2 root root 4096 Apr  8 10:53 ./
drw------- 3 root root 4096 Apr  8 10:53 ../
---------- 1 root root    0 Apr  8 10:53 xattrop-a321b856-05b3-48d3-a393-805be83c6b73

What is the meaning?

Best Regards
JK
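Note that the pending-heal entries the self-heal daemon picks up from indices/xattrop can also be listed by name from the CLI, without reading the brick directly; a minimal example, using the rep-vol volume from the listing earlier in the thread as an assumed volume name:

gluster volume heal rep-vol info        # lists files/gfids pending heal, per brick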