In April of this year I reported the problem using sharding on gluster 7.4:
====
We're using GlusterFS in a replicated brick setup with 2 bricks with sharding turned on (shard size 128MB). There is something funny going on: if we copy large VM files to the volume, we can end up with files that are a bit larger than the source files, depending on the speed with which we copied the files - e.g.:

dd if=SOURCE bs=1M | pv -L NNm | ssh gluster_server "dd of=/gluster/VOL_NAME/TARGET bs=1M"

It seems that if NN is <= 25 (i.e. 25 MB/s) the sizes of SOURCE and TARGET will be the same. If we crank NN up to, say, 50, we sometimes risk that a 25G file ends up having a slightly larger size, e.g. 26844413952 or 26844233728 - larger than the expected 26843545600.
Unfortunately this is not an illusion! If we dd the files out of Gluster, we will receive the amount of data that 'ls' showed us.
In the brick directory (incl. the .shard directory) we have the expected number of shards for a 25G file (200), each with size precisely equal to 128MB - but there is an additional 0-size shard file created.
Has anyone else seen a phenomenon like this?
====
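For reference, the shard arithmetic in the report above can be checked quickly. This is only an illustrative sketch; the byte counts are the ones quoted in the report:

```shell
# Sanity-check the shard arithmetic from the report above.
SHARD=$((128 * 1024 * 1024))        # 128 MiB shard size -> 134217728 bytes
SIZE=$((25 * 1024 * 1024 * 1024))   # 25 GiB source file -> 26843545600 bytes
echo "shards:      $((SIZE / SHARD))"        # 200 full shards, no remainder
echo "remainder:   $((SIZE % SHARD))"        # 0
echo "extra bytes: $((26844413952 - SIZE))"  # surplus in one observed bad copy
```

So a 25G file divides into exactly 200 shards of 128MB, which makes the extra trailing bytes (and the extra 0-size shard) all the more puzzling.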
After upgrading to 7.6 we are still seeing this problem. The extra bytes that appear can now be removed using truncate on the mounted gluster volume, and md5sum confirms that after the truncate the content is identical to the source - however, this may point to an underlying issue.
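A minimal sketch of that truncate-and-verify repair, demonstrated on throwaway files (the /tmp paths and the 4 MiB size are invented for illustration; on a real volume the target would live under the gluster FUSE mount):

```shell
# Simulate the repair on throwaway files (hypothetical /tmp paths).
dd if=/dev/urandom of=/tmp/src bs=1M count=4 2>/dev/null   # stand-in source file
cp /tmp/src /tmp/copy
printf 'extra junk' >> /tmp/copy                # simulate the surplus trailing bytes
truncate -s "$(stat -c %s /tmp/src)" /tmp/copy  # cut the copy back to the source size
md5sum /tmp/src /tmp/copy                       # checksums match again after truncate
```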
I hope someone can reproduce this behaviour,
Thanx,
Claus.
--
Claus Jeppesen | Manager, Network Services | Datto, Inc. | p +45 6170 5901 | Copenhagen Office | www.datto.com
________
Community Meeting Calendar:
Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users