Re: [Gluster-devel] Glusterfs meta data space consumption issue

On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi All,

Here are the steps to reproduce the issue.

Reproduction steps:

root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force  ----- create the gluster volume
volume create: brick: success: please start the volume to access data
root@128:~# gluster volume set brick nfs.disable true
volume set: success
root@128:~# gluster volume start brick
volume start: brick: success

root@128:~# gluster volume info

Volume Name: brick
Type: Distribute
Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 128.224.95.140:/tmp/brick
Options Reconfigured:
nfs.disable: true
performance.readdir-ahead: on

root@128:~# gluster volume status

Status of volume: brick
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 128.224.95.140:/tmp/brick            49155     0         Y       768

Task Status of Volume brick
------------------------------------------------------------------------------
There are no active volume tasks

root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
root@128:~# cd gluster/
root@128:~/gluster# du -sh
0       .
root@128:~/gluster# mkdir -p test/
root@128:~/gluster# cp ~/tmp.file .
root@128:~/gluster# cp tmp.file test
root@128:~/gluster# cd /tmp/brick
root@128:/tmp/brick# du -sh *
768K    test
768K    tmp.file
root@128:/tmp/brick# rm -rf test  --------- delete the test directory and its data directly on the server side (not a reasonable operation)
root@128:/tmp/brick# ls
tmp.file
root@128:/tmp/brick# du -sh   (brick dir)
1.6M    .

root@128:/tmp/brick# cd .glusterfs/
root@128:/tmp/brick/.glusterfs# du -sh *
0       00
0       2a
0       bb
768K    c8
0       c9
0       changelogs
768K    d0
4.0K    health_check
0       indices
0       landfill
root@128:/tmp/brick/.glusterfs# du -sh   (.glusterfs dir)
1.6M    .
root@128:/tmp/brick# cd ~/gluster
root@128:~/gluster# ls
tmp.file
root@128:~/gluster# du -sh *   (mount dir)
768K    tmp.file
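Each of the 768K entries above is a gfid file: glusterfs keeps every file's metadata entry as .glusterfs/<aa>/<bb>/<gfid>, where <aa> and <bb> are the first two and the next two hex digits of the file's GFID, and that gfid file is a hard link to the data file on the brick. To see which entry still belongs to tmp.file (assuming GNU find; the path shown is only illustrative and could equally be the c8 entry):

root@128:/tmp/brick# find .glusterfs -samefile tmp.file
.glusterfs/d0/xx/<gfid-of-tmp.file>

The other 768K entry is the orphaned link of the deleted test/tmp.file, which is why the brick still shows 1.6M.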

 

In the reproduction steps, we delete the test directory on the server side, not on the client side. I think this delete operation is not reasonable. Please ask the customer to check whether they are performing this operation.
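The supported way is to remove the data through the client mount instead, so that gluster can also drop the matching gfid hard links under .glusterfs, e.g.:

root@128:~/gluster# rm -rf test  ----- delete via the mount point; the .glusterfs entries are cleaned up too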


What is the need to delete data from the backend (i.e. the bricks) directly?


It seems that when data is deleted directly from the brick, the metadata is not deleted from the .glusterfs directory.
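That matches the hard-link layout described above: deleting the data file directly on the brick leaves its gfid file behind with a link count of 1. A rough way to list such orphaned entries (GNU find; health_check and changelog files are excluded because they legitimately have a single link):

root@128:/tmp/brick# find .glusterfs -type f -links 1 ! -name health_check ! -path '*/changelogs/*'
.glusterfs/c8/xx/<gfid-of-the-deleted-file>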


I don't know whether this is a bug or a limitation; please let us know.


Regards,

Abhishek



On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:


On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Yes, it is ext4. But what is the impact of this?

Did you have a lot of data before, and did you delete all of it? If I remember correctly, ext4 doesn't shrink a directory once it has expanded it. So in ext4, if you create lots and lots of files inside a directory and then delete them all, the directory size grows at creation time but doesn't shrink after deletion. I don't have any system with ext4 at the moment to test this. This is something we faced 5-6 years ago, but I am not sure whether it has been fixed in the latest ext4 releases.
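A quick way to check this behavior on a scratch ext4 directory (the path and file count are only illustrative):

mkdir /mnt/ext4/dirtest && cd /mnt/ext4/dirtest
seq -f 'f%.0f' 1 50000 | xargs touch
ls -ld .     # the directory inode itself has grown to a few MB
find . -type f -delete
ls -ld .     # the size typically stays at the expanded value

As far as I remember, running e2fsck -fD on the unmounted device re-optimizes directories and can shrink them again.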
 

On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:
Yes

On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:

You mean the filesystem on which this brick has been created?

On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx> wrote:
Is your backend filesystem ext4?

On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:

No, we are not using sharding.

On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <ab1@xxxxxxxxxxx> wrote:
On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
I have done more investigation and found that the brick directory size matches the gluster mount point, but .glusterfs accounts for a large difference.
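One way to compare the three numbers directly (GNU du; the --exclude flag leaves the metadata area out of the brick total):

root@128:~# du -sh ~/gluster                            (size seen through the mount)
root@128:~# du -sh --exclude=.glusterfs /tmp/brick      (data on the brick only)
root@128:~# du -sh /tmp/brick/.glusterfs                (gfid hard links and housekeeping files)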


Are you perhaps using sharding?
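If you are not sure, sharding would show up in the volume options, e.g. (volume name taken from your output above):

root@128:~# gluster volume info brick | grep -i shard

No output means the shard translator (features.shard) is not enabled.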


Have a good day.
Alessandro Briosi
 
METAL.it Nord S.r.l.
Via Maioliche 57/C - 38068 Rovereto (TN)
Tel.+39.0464.430130 - Fax +39.0464.437393
www.metalit.com

 




--
Pranith



--
Pranith



--
Regards
Abhishek Paliwal



--
Pranith



--
Regards
Abhishek Paliwal
--
- Atin (atinm)
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
