Inconsistency of disk free on replica volume.

I have a simple 2-way replica volume, but its capacity utilization is
really inconsistent. I realize du and df aren't the same thing, but I'm
confused as to how the brick and the NFS mount are not showing the same
amount of available capacity. The underlying filesystem is XFS, and the
gluster volume is mounted via the gluster NFS daemon. Note that the
volume is mounted on the same systems that run the bricks.

[root@rhesproddns01 ~]# gluster volume info openfire

Volume Name: openfire
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhesproddns01:/gluster/openfire
Brick2: rhesproddns02:/gluster/openfire
Options Reconfigured:
nfs.rpc-auth-allow: 127.0.0.1
performance.write-behind-window-size: 128Mb
performance.cache-size: 256Mb
auth.allow: 10.250.53.*,10.252.248.*,169.254.*,127.0.0.1
network.ping-timeout: 5
performance.stat-prefetch: on
nfs.register-with-portmap: on
nfs.disable: off
performance.client-io-threads: 1
performance.io-cache: on
performance.io-thread-count: 64
performance.quick-read: on
[root@rhesproddns01 ~]# df -h /opt/openfire
Filesystem            Size  Used Avail Use% Mounted on
localhost:/openfire   960M  881M   80M  92% /opt/openfire

[root@rhesproddns01 ~]# du -sh /opt/openfire
174M    /opt/openfire

[root@rhesproddns01 ~]# df -h /gluster/openfire
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-gluster_openfire
                       960M  756M  205M  79% /gluster/openfire

[root@rhesproddns01 ~]# du -sh /gluster/openfire
256M    /gluster/openfire
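For what it's worth, one common reason df reports much more used space than du is a file that has been deleted but is still held open by a process: its blocks stay allocated (df counts them) even though the file no longer appears in the directory tree (du misses it). A minimal self-contained sketch of that effect, using only a temp directory (no gluster paths are touched; `exec 3<` just holds a descriptor open):

```shell
#!/bin/sh
# Demonstrate df-vs-du skew from a deleted-but-open file.
tmpdir=$(mktemp -d)

# Create a 10 MB file.
dd if=/dev/zero of="$tmpdir/big" bs=1M count=10 status=none

# Hold it open on fd 3, then unlink it.
exec 3<"$tmpdir/big"
rm "$tmpdir/big"

# du no longer sees the 10 MB (the name is gone), but the blocks
# are still allocated on the filesystem until fd 3 is closed.
du -sk "$tmpdir"

# Closing the descriptor releases the space.
exec 3<&-
rm -r "$tmpdir"
```

On the brick servers, something like `lsof +L1 /gluster/openfire` should list any such zero-link open files. Note also that the brick-side du includes gluster's internal `.glusterfs` metadata tree (hard links plus bookkeeping), which is one reason the brick shows 256M while the client mount shows 174M.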


