Hi Krutika,

Thank you so much for your reply. Let me answer all of your questions.

No, I use only one volume. When I tested the sharded and striped configurations, I manually stopped the volume, deleted it, purged the data inside the bricks (disks), and re-created it with this command:

    sudo gluster volume create testvol replica 2 \
        sr-09-loc-50-14-18:/bricks/brick1  sr-10-loc-50-14-18:/bricks/brick1 \
        sr-09-loc-50-14-18:/bricks/brick2  sr-10-loc-50-14-18:/bricks/brick2 \
        sr-09-loc-50-14-18:/bricks/brick3  sr-10-loc-50-14-18:/bricks/brick3 \
        sr-09-loc-50-14-18:/bricks/brick4  sr-10-loc-50-14-18:/bricks/brick4 \
        sr-09-loc-50-14-18:/bricks/brick5  sr-10-loc-50-14-18:/bricks/brick5 \
        sr-09-loc-50-14-18:/bricks/brick6  sr-10-loc-50-14-18:/bricks/brick6 \
        sr-09-loc-50-14-18:/bricks/brick7  sr-10-loc-50-14-18:/bricks/brick7 \
        sr-09-loc-50-14-18:/bricks/brick8  sr-10-loc-50-14-18:/bricks/brick8 \
        sr-09-loc-50-14-18:/bricks/brick9  sr-10-loc-50-14-18:/bricks/brick9 \
        sr-09-loc-50-14-18:/bricks/brick10 sr-10-loc-50-14-18:/bricks/brick10 \
        force

and of course I run "volume start" after that. When sharding is used, I enable the shard feature BEFORE starting the volume, and only then mount it (the sequence I use is sketched just below). I also tried converting one volume type to the other, but the documentation says a clean volume is better, so I switched to the clean method. Performance is still the same either way.
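For clarity, the steps after the "volume create" above look roughly like this; the 32MB shard block size is simply the value reflected in the attached log's name, and the mount is shown from one of the two nodes as an example:

    # enable sharding before the re-created volume is started
    sudo gluster volume set testvol features.shard on
    sudo gluster volume set testvol features.shard-block-size 32MB

    # start the volume and mount it on the client
    sudo gluster volume start testvol
    sudo mount -t glusterfs sr-09-loc-50-14-18:/testvol /mnt

    # profiling around the dd test (the dd command itself is shown further below)
    sudo gluster volume profile testvol start
    # ... run the dd test ...
    sudo gluster volume profile testvol info > dd-5gb-shard_32mb.log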
The test file grows from 1GB to 5GB, and the tests are dd runs. See this example:

    dd if=/dev/zero of=/mnt/testfile bs=1G count=5
    5+0 records in
    5+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 66.7978 s, 80.4 MB/s

    dd if=/dev/zero of=/mnt/testfile bs=5G count=1

The second command (bs and count reversed) gives the same result. This run is also what generated the profile attached to this e-mail.

Is there anything else that I can try? I am open to all kinds of suggestions.

Thanks,
Gencer.

From: Krutika Dhananjay [mailto:kdhananj@xxxxxxxxxx]

Hi Gencer,

I just checked the volume-profile attachments. Things that seem really odd to me as far as the sharded volume is concerned:

1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10 seems to have witnessed all the IO. No other bricks witnessed any write operations. This is unacceptable for a volume that has 8 other replica sets. Why didn't the shards get distributed across all of these sets?

2. For the replica set consisting of bricks 5 and 6 of node 09, I see that brick 5 is spending 99% of its time in the FINODELK fop, when the fop that should have dominated its profile is in fact WRITE.

* And if there are indeed two volumes, could you share both their `volume info` outputs to eliminate any confusion?
* If there's just one volume, are you taking care to remove all data from the mount point of this volume before converting it?
* What size did the test file grow to?
* Are these attached profiles against dd runs, or against the file download test?

-Krutika

On Mon, Jul 3, 2017 at 8:42 PM, <gencer@xxxxxxxxxxxxx> wrote:
Attachment:
dd-5gb-shard_32mb.log
Description: Binary data
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users