Glad the fixes worked for you. Thanks for that update!
-Krutika

On Tue, Aug 2, 2016 at 7:31 PM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:
So far both dd commands that failed previously work fine on 3.7.14. Once I deleted the old content from the test volume, it mounted to oVirt via storage add, where previously it would error out. I am now creating a test VM with default disk caching settings (pretty sure oVirt defaults to none rather than writeback/writethrough). So far all shards are being created properly.

Load is skyrocketing, but I have all 3 gluster bricks running off 1 hard drive on the test box, so I would expect horrible I/O and load issues with that.

Very promising so far. Thank you, developers, for your help in working through this. Once I have the VM installed and running, I will test for a few days to make sure it doesn't have any freeze or locking issues, then roll this out to the working cluster.

David Gossage
Carousel Checks Inc. | System Administrator
Office 708.613.2284

On Wed, Jul 27, 2016 at 8:37 AM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:

On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:

Yes please, could you file a bug against glusterfs for this issue?

-Krutika

On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:

Has a bug report been filed for this issue, or should I create one with the logs and results provided so far?

David Gossage
Carousel Checks Inc. | System Administrator
Office 708.613.2284

On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:

On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:

On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <samppah@xxxxxxxxxxxxx> wrote:
> Here is a quick way to test this:
> GlusterFS 3.7.13 volume with default settings, with a brick on a ZFS dataset. gluster-test1 is the server and gluster-test2 is the client mounting with FUSE.
>
> Writing file with oflag=direct is not ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> dd: failed to open ‘file’: Invalid argument
>
> Enable network.remote-dio on Gluster Volume:
> [root@gluster-test1 gluster]# gluster volume set gluster network.remote-dio enable
> volume set: success
>
> Writing small file with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> 1+0 records in
> 1+0 records out
> 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>
> Writing bigger file with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>
> Enable Sharding on Gluster Volume:
> [root@gluster-test1 gluster]# gluster volume set gluster features.shard enable
> volume set: success
>
> Writing small file with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=1 bs=1M
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>
> Writing bigger file with oflag=direct is not ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> dd: error writing ‘file3’: Operation not permitted
> dd: closing output file ‘file3’: Operation not permitted
>
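A minimal sketch of the same sequence as a single script, assuming the volume is named "gluster" as above and that the client's FUSE mount point is /mnt/gluster (the mount path is an assumption; adjust to your setup):

#!/bin/bash
# Replay of the test above. Assumes volume "gluster" served by gluster-test1
# and FUSE-mounted at /mnt/gluster on gluster-test2 (mount path is assumed).

# Server side: allow O_DIRECT opens from clients.
gluster volume set gluster network.remote-dio enable

# Client side: both small and large direct writes should now succeed.
cd /mnt/gluster
dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M

# Server side: enable sharding.
gluster volume set gluster features.shard enable

# Client side: the small direct write still works, but the large one that
# spills into shards fails with "Operation not permitted".
dd if=/dev/zero of=file3 oflag=direct count=1 bs=1M
dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M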
Thank you for these tests! Would it be possible to share the brick and
client logs?

Not sure if his tests are the same as my setup, but here is what I end up with:

Volume Name: glustershard
Type: Replicate
Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.71.10:/gluster1/shard1/1
Brick2: 192.168.71.11:/gluster1/shard2/1
Brick3: 192.168.71.12:/gluster1/shard3/1
Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
server.allow-insecure: on
storage.owner-uid: 36
storage.owner-gid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.quick-read: off
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
performance.read-ahead: off
performance.readdir-ahead: on

dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/ oflag=direct count=100 bs=1M
81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/  __DIRECT_IO_TEST__  .trashcan/

[root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test oflag=direct count=100 bs=1M
dd: error writing ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’: Operation not permitted

It creates the 64M file in the expected location, but the shard is 0 bytes:

# file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x0200000000000000579231f3000e16e7
trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x0000000004000000000000000000000000000000000000010000000000000000

# file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
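The hex dumps above look like getfattr output in hex mode; in case it helps with the bug report, here is a sketch of how the same information can be collected again from a brick (brick paths taken from the volume info above; the exact getfattr invocation is an assumption about how the dump was produced):

# Run on the brick host; -d dumps all xattrs, -m . matches every name,
# -e hex prints values in hex as shown above.
getfattr -d -m . -e hex /gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
getfattr -d -m . -e hex /gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1

# The brick and client logs Vijay asked for normally live under
# /var/log/glusterfs/ (brick logs in /var/log/glusterfs/bricks/).
ls /var/log/glusterfs/ /var/log/glusterfs/bricks/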
Regards,
Vijay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel