Re: [Gluster-users] 3.7.13 & proxmox/qemu

On Thu, Jul 21, 2016 at 12:48 PM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:
On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <dgossage@xxxxxxxxxxxxxxxxxx> wrote:
On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>
>
> However I do have to enable write-back or write-through caching in qemu
> before the VMs will start; I believe this is to do with AIO support. Not a
> problem for me.
>
> I see there are settings for storage.linux-aio and storage.bd-aio - not sure
> as to whether they are relevant or which ones to play with.
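For reference, enabling write-back caching on a libgfapi-backed disk in qemu looks roughly like this (host, volume, and image names below are hypothetical examples, not from the original report):

```shell
# Attach a disk image from a Gluster volume via libgfapi with
# write-back caching enabled (hypothetical host/volume/image names).
# URI form: gluster[+transport]://server[:port]/volname/image
qemu-system-x86_64 \
    -m 2048 \
    -drive file=gluster://gluster-host/datastore/vm1.qcow2,format=qcow2,if=virtio,cache=writeback
```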

Both storage.*-aio options are used by the brick processes. Depending on
what type of brick you have (linux = filesystem, bd = LVM Volume Group)
you can enable one or the other.
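As a sketch, either brick-side AIO option can be toggled per volume with the standard CLI (the volume name "myvol" is a placeholder):

```shell
# Enable kernel AIO on filesystem-backed bricks (volume name is hypothetical)
gluster volume set myvol storage.linux-aio on

# Or, for block-device (LVM) backed bricks:
gluster volume set myvol storage.bd-aio on
```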

We do have a strong suggestion to set these "gluster volume group .."
options:
  https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example

From those options, network.remote-dio seems most related to your aio
theory. It was introduced with http://review.gluster.org/4460 that
contains some more details.
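The recommended settings from that file can be applied in one step with the group keyword rather than set individually (volume name below is a placeholder):

```shell
# Apply the whole "virt" option group, which includes
# network.remote-dio=enable, quorum settings, and the
# performance.* toggles recommended for VM image storage.
gluster volume set myvol group virt

# Verify which options were applied:
gluster volume info myvol
```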


Wondering if this may be related at all:

* #1347553: O_DIRECT support for sharding

Is it possible to downgrade from 3.8 back to 3.7.x?

Building a test box right now anyway, but wondering.

It may be anecdotal given the small sample size, but the few people who have had issues all seemed to have ZFS-backed Gluster volumes.

Now that I recall back to the day I updated: the Gluster volume on XFS that I use for my hosted engine never had issues.
Thanks. With the exception of stat-prefetch, I have those enabled.
I could try turning that back off, though at the time of the update to 3.7.13 it was off. I didn't turn it back on until later the next week, after downgrading back to 3.7.11.

Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
Options Reconfigured:
diagnostics.brick-log-level: WARNING
features.shard-block-size: 64MB
features.shard: on
performance.readdir-ahead: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
performance.strict-write-ordering: off
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off
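If stat-prefetch is the suspect, it can be flipped off on the live volume to match the state at the time of the 3.7.13 update (volume name "myvol" is a placeholder):

```shell
# Turn stat-prefetch back off without downtime
gluster volume set myvol performance.stat-prefetch off

# Confirm the current value of the option:
gluster volume get myvol performance.stat-prefetch
```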


HTH,
Niels

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
