Re: gluster0:group1 not matching up with mounted directory

Niels,

Thanks for your answer. Can you look at the du examples below? Right now I am concerned with gluster0:group0 and group1.

They are not replicating properly. They are supposed to replicate across 3 of my 5 nodes. Nodes 2 and 3 are not shown here.

Thanks!


root@node0:/data/brick1# du -h -d 2
382G    ./group0/.glusterfs
8.0K    ./group0/images
0       ./group0/template
0       ./group0/dump
382G    ./group0
0       ./group1/.glusterfs
0       ./group1/images
0       ./group1/template
0       ./group1/dump
0       ./group1
382G    .

root@node1:/data/brick1# du -h -d 2
148G    ./gluster/.glusterfs
4.0K    ./gluster/images
0       ./gluster/template
0       ./gluster/dump
0       ./gluster/private
148G    ./gluster
0       ./safe/images
0       ./safe/template
0       ./safe/dump
0       ./safe
314G    ./group0/.glusterfs
4.0K    ./group0/images
0       ./group0/template
0       ./group0/dump
314G    ./group0
182G    ./group1/.glusterfs
0       ./group1/images
0       ./group1/template
0       ./group1/dump
182G    ./group1
643G    .

root@node4:/data/brick1# du -h -d 2
3.2T    ./machines0/.glusterfs
0       ./machines0/images
0       ./machines0/template
76K     ./machines0/dump
0       ./machines0/private
3.2T    ./machines0
196G    ./group1/.glusterfs
0       ./group1/images
0       ./group1/template
0       ./group1/dump
196G    ./group1
255G    ./group0/.glusterfs
4.0K    ./group0/images
0       ./group0/template
0       ./group0/dump
255G    ./group0
1.5T    ./backups/.glusterfs
0       ./backups/images
0       ./backups/template
28K     ./backups/dump
1.5T    ./backups
5.1T    .
root@node4:/data/brick1#

-----Original Message-----
From: Niels de Vos [mailto:ndevos@xxxxxxxxxx] 
Sent: Tuesday, October 18, 2016 1:28 AM
To: Cory Sanders <cory@xxxxxxxxxxxxxxxxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re:  gluster0:group1 not matching up with mounted directory

On Tue, Oct 18, 2016 at 04:57:29AM +0000, Cory Sanders wrote:
> I have volumes set up like this:
> gluster> volume info
> 
> Volume Name: machines0
> Type: Distribute
> Volume ID: f602dd45-ddab-4474-8308-d278768f1e00
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster4:/data/brick1/machines0
> 
> Volume Name: group1
> Type: Distribute
> Volume ID: cb64c8de-1f76-46c8-8136-8917b1618939
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/data/brick1/group1
> 
> Volume Name: backups
> Type: Replicate
> Volume ID: d7cb93c4-4626-46fd-b638-65fd244775ae
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster3:/data/brick1/backups
> Brick2: gluster4:/data/brick1/backups
> 
> Volume Name: group0
> Type: Distribute
> Volume ID: 0c52b522-5b04-480c-a058-d863df9ee949
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/data/brick1/group0
> 
> My problem is that when I do a disk free, group1 is filled up:
> 
> root@node0:~# df -h
> Filesystem              Size  Used Avail Use% Mounted on
> udev                     10M     0   10M   0% /dev
> tmpfs                   3.2G  492K  3.2G   1% /run
> /dev/mapper/pve-root     24G   12G   11G  52% /
> tmpfs                   5.0M     0  5.0M   0% /run/lock
> tmpfs                   6.3G   56M  6.3G   1% /run/shm
> /dev/mapper/pve-data     48G  913M   48G   2% /var/lib/vz
> /dev/sda1               495M  223M  248M  48% /boot
> /dev/sdb1               740G  382G  359G  52% /data/brick1
> /dev/fuse                30M   64K   30M   1% /etc/pve
> gluster0:group0         740G  382G  359G  52% /mnt/pve/group0
> 16.xx.xx.137:backups  1.9T  1.6T  233G  88% /mnt/pve/backups
> node4:machines0         7.3T  5.1T  2.3T  70% /mnt/pve/machines0
> gluster0:group1         740G  643G   98G  87% /mnt/pve/group1
> gluster2:/var/lib/vz    1.7T  182G  1.5T  11% /mnt/pve/node2local
> 
> When I do a du -h in the respective directories, this is what I get.
> They don't match up with what a df -h shows.  Gluster0:group0 shows 
> the right amount of disk free, but gluster0:group1 is too fat and does 
> not correspond to what is in /mnt/pve/group1

du and df work a little differently:
 - du: crawls the directory structure and adds up the sizes
 - df: calls the statfs() function, which returns information directly
       from the (superblock of the) filesystem
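
You can see the difference directly on a brick host. A quick sketch, using paths from your output (run it on whichever node actually hosts the group1 brick):

  # du walks the brick directory and adds up the file sizes:
  du -sh /data/brick1/group1

  # df and 'stat -f' only query the filesystem backing that path,
  # so they report on all of /data/brick1, not just the group1 brick:
  df -h /data/brick1/group1
  stat -f /data/brick1/group1

If the two disagree, the difference is whatever else lives on that filesystem.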

This means that all 'df' calls are routed to the bricks that are used for the Gluster volume. Those bricks then call statfs() on behalf of the Gluster client (the FUSE mountpoint), and the Gluster client uses the values returned by the bricks to calculate the 'fake' output for 'df'.

Now, in your environment you seem to have the RAID1 filesystem mounted on /data/brick1 (/dev/sdb1 in the above 'df' output). All of the bricks are also located under /data/brick1/<volume>. This means that all 'df' commands will execute statfs() on the same filesystem that hosts all of the bricks. Because statfs() returns the statistics for the whole filesystem (/data/brick1), the used and available size of /data/brick1 will be used in the calculations by the Gluster client to return the statistics to 'df'.
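
You can check this against the quoted 'df' output above: gluster0:group1 shows 643G used, which is the total used space on the /data/brick1 filesystem of the node that hosts the group1 brick (gluster1), not the size of the group1 brick directory alone. If you run 'du' per directory on that node's /data/brick1, the entries should add up to roughly that 643G, with only part of it belonging to group1 itself.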

With this understanding, you should be able to verify the size of the filesystems used for the bricks, and combine them per Gluster volume. Any /data/brick1 filesystem that hosts bricks for multiple volumes will likely show an 'unexpected' difference in available/used size.
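
Depending on your Gluster version, 'gluster volume status' can also show the statfs() values per brick as Gluster sees them, which makes the overlap easy to spot. A sketch, using the volume names from your 'volume info' output:

  gluster volume status group0 detail
  gluster volume status group1 detail

Look at the disk-space fields (along the lines of 'Total Disk Space' / 'Disk Space Free') reported for each brick; bricks that share a filesystem will report the same totals.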

HTH,
Niels
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


