This is a replica 2 volume (two bricks only), with the following settings:

Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable

If I create the "ids" file manually on both bricks, e.g.

  sanlock direct init -s 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0

then vdsm writes to only half of them (the one with 2 hard links = correct). The "ids" file has the correct permissions, owner and size on both bricks:

brick 1 (not updated):
-rw-rw---- 1 vdsm kvm 1048576 Mar  2 18:56 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

brick 2 (continually updated):
-rw-rw---- 2 vdsm kvm 1048576 Mar  3 10:16 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
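For reference, this is how I would check whether the replica itself considers the file in sync (a rough sketch only; "VOLNAME" is a placeholder for the actual volume name, and the getfattr/stat commands are run directly on each brick):

  # list files the self-heal daemon still considers pending
  gluster volume heal VOLNAME info

  # dump the AFR changelog xattrs of the ids file on each brick;
  # non-zero trusted.afr.* values indicate pending operations against the other brick
  getfattr -d -m . -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

  # show the hard-link count on each brick; a count of 1 suggests the
  # .glusterfs gfid hardlink is missing, i.e. the file was created directly
  # on the brick and gluster has not (yet) tracked it
  stat -c '%h %n' /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids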
What happens when I restart vdsm? Will the oVirt storage domains go to a "disabled" state, i.e. will the VMs' storage be disconnected?

regards,
Pa.

On 3.3.2016 02:02, Ravishankar N wrote: