On 4 November 2015 at 20:45, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
The block count in the xattr doesn't amount to 16GB of used space. Is this consistently reproducible? If it is, then could you share the steps? That would help me recreate this in-house and debug it.
100% of the time for me, and all I have to do is copy or create a file on the gluster mount.
My bricks are all sitting on ZFS filesystems with compression enabled, maybe that is confusing things? I'll try a test with compression off.
In the meantime, here are the steps and results for a from-scratch volume I created (datastore3) with just one file.
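Condensed, the repro is just this (a sketch pieced together from the session below, minus the tuning options; the hostnames and brick paths are the ones from my setup):

gluster volume create datastore3 replica 3 \
    vna.proxmox.softlog:/zfs_vm/datastore3 \
    vnb.proxmox.softlog:/glusterdata/datastore3 \
    vng.proxmox.softlog:/glusterdata/datastore3
gluster volume set datastore3 features.shard on
gluster volume set datastore3 features.shard-block-size 512MB
gluster volume start datastore3
mount -t glusterfs vnb.proxmox.softlog:/datastore3 /mnt/pve/gluster3
dd if=/dev/sda of=/mnt/pve/gluster3/test.bin bs=1MB count=8192
ls -l /mnt/pve/gluster3    # apparent size is already way past the 8GB written

Full session: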
root@vnb:/mnt/pve/gluster3# gluster volume info
Volume Name: datastore3
Type: Replicate
Volume ID: 96acb55b-b3c2-4940-b642-221dd1b88617
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vna.proxmox.softlog:/zfs_vm/datastore3
Brick2: vnb.proxmox.softlog:/glusterdata/datastore3
Brick3: vng.proxmox.softlog:/glusterdata/datastore3
Options Reconfigured:
performance.io-thread-count: 32
performance.write-behind-window-size: 128MB
performance.cache-size: 1GB
performance.cache-refresh-timeout: 4
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: on
performance.write-behind: on
cluster.self-heal-window-size: 256
server.event-threads: 4
client.event-threads: 4
cluster.quorum-type: auto
features.shard-block-size: 512MB
features.shard: on
performance.readdir-ahead: on
cluster.server-quorum-ratio: 51%
root@vnb:/mnt/pve/gluster3# dd if=/dev/sda of=test.bin bs=1MB count=8192
8192+0 records in
8192+0 records out
8192000000 bytes (8.2 GB) copied, 79.5335 s, 103 MB/s
ls -l
total 289925
drwxr-xr-x 2 root root 2 Nov 4 22:24 images
-rw-r--r-- 1 root root 72357920896 Nov 4 22:26 test.bin
ls -lh
total 284M
drwxr-xr-x 2 root root 2 Nov 4 22:24 images
-rw-r--r-- 1 root root 68G Nov 4 22:26 test.bin
du test.bin
289924 test.bin
du /glusterdata/datastore3/.shard/
2231508 /glusterdata/datastore3/.shard/
getfattr -d -m . -e hex /glusterdata/datastore3/test.bin
getfattr: Removing leading '/' from absolute path names
# file: glusterdata/datastore3/test.bin
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x02000000000000005639f915000f2b76
trusted.gfid=0xa1ecf4c8ab0a4ecc8bd8d4f3affe0bfb
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x00000010d8de40800000000000000000000000000008d9080000000000000000
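For what it's worth, here's how I read those hex values (my own decoding, assuming shard.file-size is a sequence of 64-bit big-endian fields with the logical size first and a 512-byte block count third; treat it as a sketch, not gospel):

printf '%d\n' 0x20000000            # 536870912 -> 512MB, matches shard-block-size
printf '%d\n' 0x00000010d8de4080    # 72357920896 -> the bogus 68G size ls reports
printf '%d\n' 0x000000000008d908    # 579848 512-byte blocks = 289924 KiB, matches du

So the block count in the xattr agrees with du on the brick (the base file only holds the first shard; the rest sit under .shard), but the size field is almost 9x the 8192000000 bytes dd actually wrote.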
--
Lindsay