OK. I am not sure what it is that we're doing differently. I tried the steps you shared and here's what I got:
[root@dhcp35-215 bricks]# gluster volume info
Volume Name: rep
Type: Replicate
Volume ID: 3fd45a4b-0d02-4a44-b74a-41592d48e102
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: kdhananjay:/bricks/1
Brick2: kdhananjay:/bricks/2
Brick3: kdhananjay:/bricks/3
Options Reconfigured:
performance.strict-write-ordering: on
features.shard: on
features.shard-block-size: 512MB
cluster.quorum-type: auto
client.event-threads: 4
server.event-threads: 4
cluster.self-heal-window-size: 256
performance.write-behind: on
nfs.enable-ino32: on
nfs.addr-namelookup: off
nfs.disable: on
performance.cache-refresh-timeout: 4
performance.cache-size: 1GB
performance.write-behind-window-size: 128MB
performance.io-thread-count: 32
performance.readdir-ahead: on
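
(A note on the options above: with features.shard on and features.shard-block-size set to 512MB, only the first 512MB of a file is kept in the base file on the brick; each further 512MB chunk is stored as a separate file under the brick's .shard directory. If you want to pull just the shard-related settings on your end, something like this should work, assuming your volume is also named "rep":)

gluster volume info rep | grep -E 'shard|strict-write-ordering'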
[root@dhcp35-215 mnt]# gluster volume set rep strict-write-ordering on
volume set: success
[root@dhcp35-215 mnt]# dd if=/dev/sda of=test.bin bs=1MB count=8192
8192+0 records in
8192+0 records out
8192000000 bytes (8.2 GB) copied, 133.754 s, 61.2 MB/s
[root@dhcp35-215 mnt]# ls -l
total 8000000
-rw-r--r--. 1 root root 8192000000 Nov 5 16:40 test.bin
[root@dhcp35-215 mnt]# ls -lh
total 7.7G
-rw-r--r--. 1 root root 7.7G Nov 5 16:40 test.bin
[root@dhcp35-215 mnt]# du test.bin
8000000 test.bin
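
(For reference, those numbers are consistent: du reports 1-KiB blocks, so an 8192000000-byte file with no holes should show 8192000000 / 1024 = 8000000 KiB, which is exactly what du printed. You can verify the arithmetic in any shell, nothing gluster-specific:)

echo $((8192000000 / 1024))    # prints 8000000

So apparent size and disk usage agree on my setup.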
[root@dhcp35-215 bricks]# du /bricks/1/.shard/
7475780 /bricks/1/.shard/
[root@dhcp35-215 bricks]# du /bricks/1/
.glusterfs/ .shard/ test.bin .trashcan/
[root@dhcp35-215 bricks]# du /bricks/1/test.bin
524292 /bricks/1/test.bin
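
(The per-brick numbers also add up for a 512MB shard size: the base file holds the first 512MiB block and the remaining 15 chunks sit under .shard. Rough arithmetic you can check in any shell, ignoring a few KiB of filesystem overhead:)

echo $((512 * 1024))                             # expected base-file usage: 524288 KiB (du shows 524292)
echo $(( (8192000000 - 512*1024*1024) / 1024 ))  # expected .shard usage: 7475712 KiB (du shows 7475780)

524288 + 7475712 = 8000000 KiB, which matches the du output on the mount.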
Just to be sure, did you rerun the test on the already broken file (test.bin) which was written to when strict-write-ordering had been off?
Or did you try the new test with strict-write-ordering on a brand new file?
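
(If it was the old file, could you try once more against a brand new file, with the option already on? A rough sequence, with "fresh.bin" just a placeholder name and the paths adjusted to your mount:)

gluster volume set rep strict-write-ordering on
dd if=/dev/sda of=/mnt/fresh.bin bs=1MB count=8192
ls -l /mnt/fresh.bin
du /mnt/fresh.bin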
-Krutika
From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Thursday, November 5, 2015 3:04:51 AM
Subject: Re: Shard file size (gluster 3.7.5)

On 5 November 2015 at 01:09, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
> Ah! It's the same issue. Just saw your volume info output. Enabling
> strict-write-ordering should ensure both size and disk usage are accurate.

Tested it - nope :( Size is accurate (27746172928 bytes), but disk usage is
wildly inaccurate (698787).

I have compression disabled on the underlying storage now.
--
Lindsay