.glusterfs grown larger than volume content


 



Hello,

 

We’ve noticed that the .glusterfs directory is larger than the contents of the volume. Our application only accesses the volume through the client, so I don’t suspect anything was deleted directly on the brick.

 

# du -sh .glusterfs

31G     .glusterfs/

# du -sh *

13G     dir1

31M     dir2

 

How could we have come into this state? Is there a way to find what is orphaned?
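For what it’s worth, du counts each inode once per invocation, so running "du -sh .glusterfs" and "du -sh *" as separate commands double-counts data shared via the gfid hardlinks; even allowing for that, 31G vs 13G leaves a large unexplained gap. A minimal local demonstration of the accounting (a hypothetical temp tree, not your brick; the "somegfid" name is made up):

```shell
# du counts each inode once per invocation, so a hardlinked file shows
# up in full in BOTH a "du .glusterfs" run and a "du dir1" run, but
# only once in a single "du ." over the whole tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/.glusterfs/ab" "$tmp/dir1"
dd if=/dev/zero of="$tmp/dir1/big" bs=1M count=4 2>/dev/null
ln "$tmp/dir1/big" "$tmp/.glusterfs/ab/somegfid"  # gfid-style hardlink
du -sh "$tmp/.glusterfs"   # ~4M
du -sh "$tmp/dir1"         # ~4M again (double-counted across runs)
du -sh "$tmp"              # ~4M total (counted once within one run)
rm -rf "$tmp"
```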

 

We tried looking for references to deleted files, but it didn’t seem to yield much:

# find .glusterfs -links 1 -ls

221163    0 lrwxrwxrwx   1 root     root           51 Mar  4 14:34 .glusterfs/91/ff/91ffa-f20f-4933-a8d6-abx93074 -> ../../00/00/00000000-0000-0000-0000-000000000001/dir2

383515    0 lrwxrwxrwx   1 root     root           59 Mar  4 15:08 .glusterfs/b1/2d/bd5b5-e00c-4bd1-95c6-312a25 -> ../../7e/85/7cxxxxxxxxxxx90-88e9-4cdd-95fd-dd48/recyclebin

449405    0 lrwxrwxrwx   1 root     root           51 Mar  4 15:08 .glusterfs/21/28/2102-101e-4177-b775-74379ba -> ../../00/00/00000000-0000-0000-0000-000000000001/dir2

394150    0 lrwxrwxrwx   1 root     root           59 Apr  4 13:24 .glusterfs/c7/2b/c728-877-49a-b7d-3bxxxx3149c -> ../../e1/09/e10xx94e-c5xcd-4c1f-95f-48xxxx24106e/recyclebin

229934    0 lrwxrwxrwx   1 root     root           60 Mar  4 15:08 .glusterfs/00/00/00000000-0000-0000-0000-000000000006 -> ../../00/00/00000000-0000-0000-0000-000000000005/internal_op

212931    0 lrwxrwxrwx   1 root     root             8 Mar  4 15:08 .glusterfs/00/00/00000000-0000-0000-0000-000000000001 -> ../../..

477541    0 lrwxrwxrwx   1 root     root           58 Mar  4 15:08 .glusterfs/00/00/00000000-0000-0000-0000-000000000005 -> ../../00/00/00000000-0000-0000-0000-000000000001/.trashcan

385048    0 lrwxrwxrwx   1 root     root           55 Mar 23 12:02 .glusterfs/b3/21/b3xxb20-4b23-4e93-8db4-3dxx8x6e -> ../../e1/09/e10084e-c5cd-4c1f-95f-482106e/videos

219936    4 -rw-r--r--   1 root     root                19 Apr 27 10:54 .glusterfs/health_check

264027    0 ----------   1 root     root                    0 Apr 26 13:01 .glusterfs/indices/xattrop/xattrop-2198-d683-431-bxx2-103474

212941    0 lrwxrwxrwx   1 root     root           51 Mar  4 14:24 .glusterfs/e1/09/e1xxx4e-c5d-4c1f-95f-482xe -> ../../00/00/00000000-0000-0000-0000-000000000001/dir1

397665    0 lrwxrwxrwx   1 root     root           51 Mar  4 15:08 .glusterfs/7e/85/757c90-8e9-4cdd-95fd-dd48 -> ../../00/00/00000000-0000-0000-0000-000000000001/dir1

270337   20 -rw-r--r--   1 root     root        20480 Dec 14 23:03 .glusterfs/data.db
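Symlinks always have a link count of 1, which is why the listing above is mostly the per-directory gfid symlinks rather than orphans. Restricting the search to regular files and skipping Gluster’s housekeeping entries may surface actual orphaned gfid files; a sketch (the find_orphans helper and the /bricks/brick1 path are our own illustration, not part of Gluster):

```shell
# find_orphans BRICK_ROOT: list regular files under .glusterfs whose
# link count is 1 -- i.e. gfid files whose user-visible hardlink is
# gone -- largest first, with housekeeping files excluded.
# (A sketch; run it on the brick itself, not through the client mount.)
find_orphans() {
    find "$1/.glusterfs" -type f -links 1 \
        ! -path '*/indices/*' \
        ! -path '*/changelogs/*' \
        ! -path '*/landfill/*' \
        ! -name health_check \
        -printf '%s\t%p\n' | sort -rn
}

# Example (hypothetical brick path):
# find_orphans /bricks/brick1 | head -20
```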

 

We are running on a single node, but when I added a second node and performed a full heal, the new brick’s .glusterfs directory came out the same size as the volume content, which is what we expected.

 

Version: glusterfs 3.7.3

OS: CentOS 5

 

Any advice would be much appreciated!

 

Thanks!

Vincent-

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel

