Re: Shard file size (gluster 3.7.5)


 



Could you try this again with performance.strict-write-ordering set to 'off'?

# gluster volume set <VOL> performance.strict-write-ordering off

-Krutika


From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>, "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Tuesday, November 3, 2015 7:26:41 AM
Subject: Re: Shard file size (gluster 3.7.5)

I can reproduce this 100% reliably, just by copying files onto a gluster volume. The reported file size is always larger than the original, sometimes radically so, and if I copy the file again the reported size is different each time.

Using cmp, I found that the file contents match up to the size of the original file.

The MD5 sums probably differ only because of the differing file sizes.
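As an aside, that comparison can be scripted with plain coreutils. A minimal sketch with throwaway files (not the original VM images), showing that a length-limited cmp and md5sum agree even when trailing padding makes the whole-file sums differ:

```shell
# Sketch: trailing padding inflates the reported size but leaves the
# leading bytes identical, which is the symptom described above.
orig=$(mktemp); copy=$(mktemp)
printf 'some payload' > "$orig"
cat "$orig" > "$copy"
printf '\0\0\0\0\0\0\0\0' >> "$copy"     # simulate the extra reported bytes

size=$(stat -c %s "$orig")               # original length in bytes
cmp -n "$size" "$orig" "$copy"           # exits 0: contents match up to $size

# MD5 over only the first $size bytes agrees...
head -c "$size" "$copy" | md5sum
# ...while the whole-file sums differ because of the padding.
md5sum "$orig" "$copy"
```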

On 2 November 2015 at 18:49, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
Could you share
(1) the output of 'getfattr -d -m . -e hex <path>' where <path> represents the path to the original file from the brick where it resides
(2)  the size of the file as seen from the mount point around the time when (1) is taken
(3) output of 'gluster volume info'

-Krutika


From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Sunday, November 1, 2015 6:29:44 AM
Subject: Shard file size (gluster 3.7.5)


I have upgraded my cluster to Debian Jessie, so I am able to natively test 3.7.5.

 

I've noticed some peculiarities with the file sizes reported on the gluster mount, but I seem to recall this is a known issue with shards?

 

The source file is sparse: nominal size 64 GB, real size 25 GB. However, the underlying storage is ZFS with lz4 compression, which reduces it to 16 GB on disk.
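For context, the gap between the nominal and real size of a sparse file is visible with plain ls and du on any Linux box. A minimal sketch using a throwaway 1 GiB file (not the VM image above):

```shell
# Sketch: a sparse file has a large apparent size but few allocated blocks.
f=$(mktemp)
truncate -s 1G "$f"            # 1 GiB apparent size, no data written
ls -lh "$f"                    # reports the nominal 1 GiB
du -h "$f"                     # reports allocated blocks: effectively 0
du -h --apparent-size "$f"     # agrees with ls again
rm -f "$f"
```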

 

No shard:

ls -lh     : 64 GB

du -h      : 25 GB

 

4 MB shard:

ls -lh     : 144 GB

du -h      : 21 MB

 

512 MB shard:

ls -lh     : 72 GB

du -h      : 765 MB

 

 

A du -sh of the .shard directory shows 16 GB for all datastores.
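The measurement above can be taken per file as well. A sketch (the paths and names below are assumptions, not taken from this thread): on a brick, a sharded file's space is its base file plus the numbered pieces stored under the hidden .shard directory, which are named after the file's gfid:

```shell
# Sketch with hypothetical paths: sum the base file and its shard pieces.
BRICK=/path/to/brick                     # hypothetical brick root
FILE=vmimage.qcow2                       # hypothetical file name
GFID=be318638-...                        # trusted.gfid from getfattr, elided

du -ch "$BRICK/$FILE" "$BRICK/.shard/$GFID".* | tail -n 1   # grand total
```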

 

Is this a known bug with sharding? Will it be fixed eventually?

 

Sent from Mail for Windows 10


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users




--
Lindsay


