> On Apr 23, 2018, at 10:49 AM, WK <wkmail@xxxxxxxxx> wrote:
>
> From an old May 2017 email, I asked the following:
>
> "From the docs, I see you can identify the shards by the GFID
>
> # getfattr -d -m. -e hex path_to_file
> # ls /bricks/*/.shard -lh | grep GFID
>
> Is there a gluster tool/script that will recreate the file?
>
> Or can you just sort them properly and then simply cat/copy them back together?
>
> cat shardGFID.1 .. shardGFID.X > thefile"
>
> The response from RedHat was:
>
> "Yes, this should work, but you would need to include the base file (the 0th shard, if you will) first in the list of files that you're stitching up. In the happy case, you can test it by comparing the md5sum of the file from the mount to that of your stitched file."
>
> We tested it with some VM files and it indeed worked fine. That was probably on 3.10.1 at the time.

Thanks for that, WK. Do you know if those images were sparse files? My understanding is that this will not work for files with holes. Quoting from http://lists.gluster.org/pipermail/gluster-devel/2017-March/052212.html :

- - snip

1. A non-existent/missing shard anywhere between offset $SHARD_BLOCK_SIZE through ceiling ($FILE_SIZE/$SHARD_BLOCK_SIZE) indicates a hole. When you reconstruct data from a sharded file of this nature, you need to take care to retain this property.

2. The above is also true for partially filled shards between offset $SHARD_BLOCK_SIZE through ceiling ($FILE_SIZE/$SHARD_BLOCK_SIZE). What do I mean by partially filled shards? Shards whose sizes are not equal to $SHARD_BLOCK_SIZE.

In the above, $FILE_SIZE can be gotten from the 'trusted.glusterfs.shard.file-size' extended attribute on the base file (the 0th block).

- - snip

So it sounds like (although I am not sure, which is why I was writing in the first place) one would need to use `dd` or similar to read out ( ${trusted.glusterfs.shard.file-size} - ($SHARD_BLOCK_SIZE * count) ) bytes from the partial shard. I also just realized that the above quote fails to explain how, if a file has a hole smaller than $SHARD_BLOCK_SIZE, one would know which shard(s) are holey; so I'm back to thinking reconstruction is undocumented and unsupported, short of reading the files off on a client, blowing away the volume and reconstructing it. Which is a problem.

-j
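P.S. To make the question concrete, below is the sort of reconstruction I have in mind. It is an untested sketch, not anything blessed by the shard developers: the GFID, brick path and output path are made-up placeholders, SHARD_BLOCK_SIZE is assumed to be the 64MB default (check features.shard-block-size on the volume), and I am assuming -- from reading the shard translator, so please correct me -- that the first 8 bytes of trusted.glusterfs.shard.file-size, in network byte order, are the logical file size, and that the base file on the brick holds only block 0's data. Missing shards are skipped so they remain holes, and the output is pre-truncated to the full length so trailing holes survive as well.

#!/bin/bash
# Untested sketch; paths, GFID and block size below are placeholders.
# On a distributed/replicated volume the shards may be spread across
# bricks, so they may need to be gathered into one place first.
GFID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee      # hypothetical; from getfattr on the base file
BRICK=/bricks/brick1/vol
BASE_FILE="$BRICK/path/to/the/file"            # shard 0 is the base file itself
SHARD_DIR="$BRICK/.shard"
SHARD_BLOCK_SIZE=$((64 * 1024 * 1024))         # must match features.shard-block-size
OUT=/tmp/reassembled.img

# Logical size comes from the shard xattr on the base file; as far as I can
# tell, the first 8 bytes (16 hex digits, network byte order) are the size.
HEX=$(getfattr -d -m. -e hex "$BASE_FILE" \
      | awk -F= '/trusted.glusterfs.shard.file-size/ {print $2}')
if [ -z "$HEX" ]; then
    echo "no trusted.glusterfs.shard.file-size xattr on $BASE_FILE" >&2
    exit 1
fi
FILE_SIZE=$((16#${HEX:2:16}))

# Start from a fully sparse file of the right length, so missing shards and
# trailing holes stay holes.
rm -f "$OUT"
truncate -s "$FILE_SIZE" "$OUT"

last=$(( (FILE_SIZE - 1) / SHARD_BLOCK_SIZE ))
for i in $(seq 0 "$last"); do
    if [ "$i" -eq 0 ]; then
        shard="$BASE_FILE"
    else
        shard="$SHARD_DIR/$GFID.$i"
    fi
    # A missing shard means a hole: skip it and leave that region sparse.
    [ -e "$shard" ] || continue
    # Write the shard's data at its offset; a partial shard simply copies
    # fewer bytes. seek= is counted in 1MiB blocks (bs=1M).
    dd if="$shard" of="$OUT" bs=1M conv=notrunc,sparse status=none \
       seek=$(( i * SHARD_BLOCK_SIZE / (1024 * 1024) ))
done

If the result looks sane, comparing md5sums against the same file read through a FUSE mount (as in the RedHat reply above) seems like the obvious check. And if anyone who actually knows the shard translator can confirm or correct the xattr layout and the hole handling, that would answer most of my original question.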
> -wk
>
>
> On 4/20/2018 12:44 PM, Jamie Lawrence wrote:
>> Hello,
>>
>> So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (I don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs, and it shouldn't have.
>>
>> I'd like to turn sharding off and revert the files back to normal. Some of these are sparse files, so I need to account for holes. There are more than enough of them that I need to write a tool to do it.
>>
>> I saw notes ca. 3.7 saying the only way to do it was to read the files off on the client side, blow away the volume and start over. That would be extremely disruptive for us, and the language I've seen reading tickets and old messages to this list makes me think that isn't needed anymore, but confirmation of that would be good.
>>
>> The only discussion I can find is in these videos[1]:
>> http://opensource-storage.blogspot.com/2016/07/de-mystifying-gluster-shards.html
>> and in some hints[2] that are old enough that I don't trust them without confirmation that nothing's changed. The videos don't acknowledge the existence of file holes.
>>
>> Also, the hint in [2] mentions using trusted.glusterfs.shard.file-size to get the size of a partly filled shard; that value looks like base64, but when I attempt to decode it, base64 complains about invalid input.
>>
>> In short, I can't find sufficient information to reconstruct these. Has anyone written a current, step-by-step guide to reconstructing sharded files? Or has someone written a tool so I don't have to?
>>
>> Thanks,
>>
>> -j
>>
>>
>> [1] Why one would choose to annoy the crap out of their fellow gluster users by using video to convey about 80 bytes of ASCII-encoded information, I have no idea.
>> [2] http://lists.gluster.org/pipermail/gluster-devel/2017-March/052212.html
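Following up on the base64 complaint in my quoted mail above: I suspect the problem is just that getfattr marks base64-encoded values with a leading "0s", which `base64 -d` then rejects as invalid input. A hedged, untested example -- $BASE_FILE here is a placeholder for the base file on one of the bricks:

# getfattr's default output prefixes binary values with "0s"; strip it first
getfattr -d -m. "$BASE_FILE" | awk -F= '/shard.file-size/ {print $2}' \
    | sed 's/^0s//' | base64 -d | od -A d -t x1

Asking for hex directly avoids the issue (getfattr -d -m. -e hex, as in WK's commands above), and as far as I can tell from the shard translator, the first 8 bytes of that value, in network byte order, are the logical file size -- which is what the sketch earlier in this mail relies on.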