Volume usage mismatch problem

Hi all,

We are hitting a quota problem with Gluster 3.7.6 in the following test case (it does not occur when the volume quota is not enabled): the volume usage reported to the client does not match the actual usage when glusterfs-3.7.6 runs on ZFS.

Can you help with the following problem?


1. zfs disk pool information

root@server-1:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

NAME                             STATE     READ WRITE CKSUM
pool                             ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:1:0  ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:2:0  ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:3:0  ONLINE       0     0     0

errors: No known data errors

root@server-2:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

NAME                             STATE     READ WRITE CKSUM
pool                             ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:1:0  ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:2:0  ONLINE       0     0     0
 pci-0000:00:10.0-scsi-0:0:3:0  ONLINE       0     0     0

errors: No known data errors

2. zfs volume list information

root@server-1:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool         179K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

root@server-2:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool         179K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

3. gluster volume information

root@server-1:~# gluster volume info
 
Volume Name: tvol1
Type: Distribute
Volume ID: 02d4c9de-e05f-4177-9e86-3b9b2195d7ab
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 38.38.38.101:/pool/tvol1
Brick2: 38.38.38.102:/pool/tvol1
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

4. gluster volume quota list

root@server-1:~# gluster volume quota tvol1 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                        100.0GB     80%(80.0GB)   0Bytes 100.0GB              No                   No

5. brick disk usage

root@server-1:~# df -k
Filesystem                 1K-blocks    Used   Available Use% Mounted on
pool                     12092178176       0 12092178176   0% /pool
pool/tvol1               12092178304     128 12092178176   1% /pool/tvol1
localhost:tvol1            104857600       0   104857600   0% /run/gluster/tvol1

root@server-2:~# df -k
Filesystem                 1K-blocks    Used   Available Use% Mounted on
pool                     12092178176       0 12092178176   0% /pool
pool/tvol1               12092178304     128 12092178176   1% /pool/tvol1

6. client mount / disk usage

root@client:~# mount -t glusterfs 38.38.38.101:/tvol1 /mnt
root@client:~# df -k
Filesystem               1K-blocks    Used Available Use% Mounted on
38.38.38.101:/tvol1      104857600       0 104857600   0% /mnt
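With `features.quota-deem-statfs: on`, the size the client sees in `df` is derived from the quota limit rather than the aggregate brick capacity, which is why the mount reports exactly the 100 GB hard limit instead of the ~11.3 TB of the bricks. A quick sanity check of the numbers (plain arithmetic, no Gluster required):

```python
# quota-deem-statfs makes df on the mount report the quota limit,
# not the brick capacity. The 100 GB hard limit expressed in the
# 1K-blocks unit used by `df -k`:
hard_limit_kb = 100 * 1024 * 1024  # 100 GiB in KiB
print(hard_limit_kb)               # 104857600, matching the df output above
```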

7. Write a file using the dd command

root@client:~# dd if=/dev/urandom of=/mnt/10m bs=1048576 count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.751261 s, 14.0 MB/s

8. client disk usage

root@client:~# df -k
Filesystem               1K-blocks    Used Available Use% Mounted on
38.38.38.101:/tvol1      104857600       0 104857600   0% /mnt

9. brick disk usage

root@server-1:~# df -k
Filesystem                 1K-blocks    Used   Available Use% Mounted on
pool                     12092167936       0 12092167936   0% /pool
pool/tvol1               12092178304   10368 12092167936   1% /pool/tvol1
localhost:tvol1            104857600       0   104857600   0% /run/gluster/tvol1

root@server-2:~# df -k
Filesystem                 1K-blocks    Used   Available Use% Mounted on
pool                     12092178176       0 12092178176   0% /pool
pool/tvol1               12092178304     128 12092178176   1% /pool/tvol1
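Note that only server-1's brick grew: on a pure Distribute volume, DHT hashes each file name to exactly one brick, and `/mnt/10m` landed on Brick1 while server-2's brick is unchanged. The delta in Brick1's `Used` column matches the file size exactly:

```python
# Brick1 usage before and after the 10 MB write, in the 1K-blocks
# reported by `df -k` on server-1 (steps 5 and 9 above).
used_before = 128
used_after = 10368
delta_kb = used_after - used_before
print(delta_kb)          # 10240 KiB
print(delta_kb * 1024)   # 10485760 bytes, the exact size written by dd
```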

10. zfs volume list information

root@server-1:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool        10.2M  11.3T    19K  /pool
pool/tvol1  10.1M  11.3T  10.1M  /pool/tvol1

root@server-2:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool         188K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

11. gluster volume quota list

root@server-1:~# gluster volume quota tvol1 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                        100.0GB     80%(80.0GB) 512Bytes 100.0GB              No                   No
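The 512-byte figure is suspicious: Gluster's quota accounting is based on the block count the brick filesystem reports (`st_blocks * 512`), and ZFS is known to update `st_blocks` lazily, only after the transaction group syncs, so a freshly written file can briefly report a single 512-byte block. That is one plausible explanation for the mismatch, not a confirmed diagnosis. The general effect of accounting by blocks rather than by logical size can be reproduced on any filesystem with a sparse file:

```python
import os
import tempfile

# Quota-style accounting uses st_blocks * 512, not st_size. A sparse
# file shows how the two can disagree, much like a file whose data
# blocks have not yet been accounted by the underlying filesystem.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(10 * 1024 * 1024)  # 10 MiB logical size, no blocks written
    path = f.name

st = os.stat(path)
print("st_size:  ", st.st_size)          # 10485760
print("st_blocks:", st.st_blocks * 512)  # far smaller: no data blocks allocated
os.unlink(path)
```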

12. File listing from the client

root@client:~# ls -al /mnt
total 10246
drwxr-xr-x  4 root root        9 Jan 30 02:23 .
drwxr-xr-x 22 root root     4096 Jan 28 07:48 ..
-rw-r--r--  1 root root 10485760 Jan 30 02:23 10m
drwxr-xr-x  3 root root        6 Jan 30 02:14 .trashcan

root@client:~# df -k
Filesystem               1K-blocks    Used Available Use% Mounted on
38.38.38.101:/tvol1      104857600   10240 104847360   1% /mnt
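Once the accounting has caught up, the client-side numbers are internally consistent again: the drop in `Available` between step 6 and here equals the 10 MB written by dd:

```python
# Client-side df, in 1K-blocks: before the write (step 6) and after
# the accounting caught up (step 12).
avail_before = 104857600
avail_after = 104847360
delta_kb = avail_before - avail_after
print(delta_kb)          # 10240 KiB
print(delta_kb * 1024)   # 10485760 bytes, the dd file size
```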

root@server-1:~# gluster volume quota tvol1 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                        100.0GB     80%(80.0GB)   10.0MB 100.0GB              No                   No
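For debugging, it may help to read the accounted usage directly off the brick (e.g. `getfattr -d -m . -e hex /pool/tvol1` on each server) and compare the `trusted.glusterfs.quota.size` xattr against the `quota list` output at each step, to see whether the lag is in the quota marker or in the values ZFS reports. Assuming the xattr stores the byte count as a big-endian 64-bit integer (true of older quota on-disk versions; treat this as an assumption, not a spec, and the hex value below is a hypothetical example), the dump can be decoded like this:

```python
import struct

# Hypothetical hex value as printed by `getfattr -e hex` for the
# trusted.glusterfs.quota.size xattr; 0xA00000 bytes == 10 MiB.
raw = "0x0000000000a00000"
(size_bytes,) = struct.unpack(">Q", bytes.fromhex(raw[2:]))
print(size_bytes)  # 10485760
```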

-- 

Sungsik, Park/corazy [박성식, 朴成植]

Software Development Engineer

Email: mulgo79@xxxxxxxxx



_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
