gluster0:group1 not matching up with mounted directory

I have volumes set up like this:

gluster> volume info

Volume Name: machines0
Type: Distribute
Volume ID: f602dd45-ddab-4474-8308-d278768f1e00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster4:/data/brick1/machines0

Volume Name: group1
Type: Distribute
Volume ID: cb64c8de-1f76-46c8-8136-8917b1618939
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/group1

Volume Name: backups
Type: Replicate
Volume ID: d7cb93c4-4626-46fd-b638-65fd244775ae
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster3:/data/brick1/backups
Brick2: gluster4:/data/brick1/backups

Volume Name: group0
Type: Distribute
Volume ID: 0c52b522-5b04-480c-a058-d863df9ee949
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster0:/data/brick1/group0

My problem is that when I check disk free, group1 shows up as nearly full:

 

root@node0:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                     10M     0   10M   0% /dev
tmpfs                   3.2G  492K  3.2G   1% /run
/dev/mapper/pve-root     24G   12G   11G  52% /
tmpfs                   5.0M     0  5.0M   0% /run/lock
tmpfs                   6.3G   56M  6.3G   1% /run/shm
/dev/mapper/pve-data     48G  913M   48G   2% /var/lib/vz
/dev/sda1               495M  223M  248M  48% /boot
/dev/sdb1               740G  382G  359G  52% /data/brick1
/dev/fuse                30M   64K   30M   1% /etc/pve
gluster0:group0         740G  382G  359G  52% /mnt/pve/group0
16.xx.xx.137:backups    1.9T  1.6T  233G  88% /mnt/pve/backups
node4:machines0         7.3T  5.1T  2.3T  70% /mnt/pve/machines0
gluster0:group1         740G  643G   98G  87% /mnt/pve/group1
gluster2:/var/lib/vz    1.7T  182G  1.5T  11% /mnt/pve/node2local
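
A note on the mounts, in case it matters: the /mnt/pve entries above are created by Proxmox from its storage definitions, and as far as I understand the server name in front of each volume is only the peer the client fetches the volume file from, not necessarily the host that holds the brick. So I have been comparing the storage config against the live mounts with something like this (/etc/pve/storage.cfg is the standard Proxmox VE storage config as far as I know):

# what Proxmox thinks each glusterfs storage should mount
grep -A4 '^glusterfs:' /etc/pve/storage.cfg

# what is actually mounted right now
mount | grep fuse.glusterfs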

 

When I run du -h in the respective directories, this is what I get. The numbers don't match what df -h shows: gluster0:group0 shows the right amount of disk used, but gluster0:group1 reports far more used space than the contents of /mnt/pve/group1 account for.

 

root@node0:/mnt/pve/group0# du -h -d 2
0       ./images/2134
0       ./images/8889
6.3G    ./images/134
56G     ./images/140
31G     ./images/153
9.9G    ./images/144
0       ./images/166
29G     ./images/141
9.9G    ./images/152
22G     ./images/142
0       ./images/155
0       ./images/145
18G     ./images/146
25G     ./images/148
24G     ./images/151
0       ./images/156
11G     ./images/143
0       ./images/157
0       ./images/158
0       ./images/159
0       ./images/160
0       ./images/161
0       ./images/162
0       ./images/164
0       ./images/9149
0       ./images/7186
0       ./images/9150
9.7G    ./images/149
29G     ./images/150
0       ./images/9100
0       ./images/9145
17G     ./images/147
51G     ./images/187
12G     ./images/9142
0       ./images/186
0       ./images/184
0       ./images/9167
0       ./images/102
0       ./images/99102
30G     ./images/9153
382G    ./images
0       ./template/iso
0       ./template
0       ./dump
382G    .

 

root@node0:/mnt/pve/group1/images# du -h -d 2
2.7G    ./9153
9.7G    ./162
9.9G    ./164
11G     ./166
9.6G    ./161
0       ./146
9.8G    ./155
9.8G    ./156
9.9G    ./157
9.7G    ./159
9.9G    ./160
9.9G    ./158
21G     ./185
11G     ./165
0       ./153
11G     ./154
0       ./9167
11G     ./168
11G     ./169
11G     ./167
0       ./9165
11G     ./171
0       ./9171
182G    .

 

root@node0:/data/brick1# du -h -d2
382G    ./group0/.glusterfs
8.0K    ./group0/images
0       ./group0/template
0       ./group0/dump
382G    ./group0
0       ./group1/.glusterfs
0       ./group1/images
0       ./group1/template
0       ./group1/dump
0       ./group1
382G    .
root@node0:/data/brick1#
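
Since the group1 brick on this node (/data/brick1/group1) is empty while the group1 mount reports 643G used, my next step was going to be checking which volume and bricks each mount is really talking to, roughly like this (trusted.glusterfs.volume-id is the brick xattr name I found in the docs, so apologies if I have that wrong):

# volume ID stamped on each local brick, to compare against "gluster volume info"
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1/group0
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1/group1

# brick status and connected clients for each volume
gluster volume status group0
gluster volume status group1
gluster volume status group1 clients

Is that a sensible way to go about it?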

 

gluster> peer status
Number of Peers: 3

Hostname: 10.0.0.137
Uuid: 92071298-6809-49ff-9d6c-3761c01039ea
State: Peer in Cluster (Connected)

Hostname: 10.0.0.138
Uuid: 040a3b67-c516-4c9b-834b-f7f7470e8dfd
State: Peer in Cluster (Connected)

Hostname: gluster1
Uuid: 71cbcefb-0aea-4414-b88f-11f8954a8be2
State: Peer in Cluster (Connected)
gluster>

gluster> pool list
UUID                                    Hostname        State
92071298-6809-49ff-9d6c-3761c01039ea    10.0.0.137      Connected
040a3b67-c516-4c9b-834b-f7f7470e8dfd    10.0.0.138      Connected
71cbcefb-0aea-4414-b88f-11f8954a8be2    gluster1        Connected
398228da-2300-4bc9-8e66-f4ae06a7c98e    localhost       Connected
gluster>

There are 5 nodes in a Proxmox cluster.

Node0 has a 900GB RAID1 and is primarily responsible for running VMs from gluster0:group0 (/mnt/pve/group0).
Node1 has a 900GB RAID1 and is primarily responsible for running VMs from gluster0:group1 (/mnt/pve/group1).
Node2 is a development machine: gluster2:/var/lib/vz (/mnt/pve/node2local).
Node3 holds backups: /mnt/pve/backups.
Node4 holds backups and is also supposed to mirror gluster0:group0 and group1.

I think something is off in the configs.
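
Related to that: volume info shows group0 and group1 as single-brick Distribute volumes, so as far as I can tell nothing is actually mirroring them to node4 yet. Once the group1 question is sorted out, am I right that the way to get the node4 mirror is to raise the replica count by adding a brick on gluster4, roughly like this (the brick paths are just my guess)?

gluster volume add-brick group0 replica 2 gluster4:/data/brick1/group0
gluster volume add-brick group1 replica 2 gluster4:/data/brick1/group1

# then trigger a full self-heal so existing data is copied to the new bricks
gluster volume heal group0 full
gluster volume heal group1 full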

 

 

Thanks; I'm a bit of a newbie at Gluster and want to learn.

 

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
