Re: [ovirt-users] open error -13 = sanlock

On 03/03/2016 02:53 PM, paf1@xxxxxxxx wrote:
This is replica 2, only , with following settings

Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
Not sure why you have set this option.
Ideally, replica 3 or arbiter volumes are recommended for gluster+ovirt use; (client) quorum does not make sense for a 2-node setup. I have a detailed write-up which explains things here: http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/ - a rough conversion sketch follows after the option list below.

cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable
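For reference, something along these lines would move such a setup to arbiter (the volume name "VOL" and the new brick host/path are placeholders, and this assumes a gluster release that supports adding an arbiter brick to a replica 2 volume):

    # add a third, arbiter brick to the existing replica 2 volume
    gluster volume add-brick VOL replica 3 arbiter 1 host3:/bricks/arb/VOL
    # switch client quorum back to the recommended automatic mode
    gluster volume set VOL cluster.quorum-type auto
    gluster volume reset VOL cluster.quorum-count
    # wait for self-heal to populate the arbiter brick, then verify
    gluster volume heal VOL info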

If I create the "ids" file manually ( e.g. " sanlock direct init -s 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0 " ) on both bricks,
vdsm writes to only one of them ( the one with link count 2 = correct ).
The "ids" file has the correct permissions, owner and size on both bricks.
brick 1:  -rw-rw---- 1 vdsm kvm 1048576  Mar  2 18:56 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - not updated

Okay, so this one has link count = 1, which means the .glusterfs hard link is missing. Can you try deleting this file from the brick and then performing a stat on the file from the mount? That should heal it (i.e. recreate it) on this brick from the other brick, with the appropriate .glusterfs hard link.
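Something like the following (the client mount point /mnt/VOL is just an example, adjust to your setup):

    # on the brick with link count 1: remove the stale copy
    rm /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
    # from a client mount, a lookup triggers the heal from the good brick
    stat /mnt/VOL/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
    # back on the brick: the link count should now be 2
    stat -c %h /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids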


brick 2:  -rw-rw---- 2 vdsm kvm 1048576  Mar  3 10:16 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - continually updated

What happens when I restart vdsm? Will the oVirt storage domains go to a "disabled" state, i.e. disconnect the VMs' storage?

No idea on this one...
-Ravi

regs.Pa.

On 3.3.2016 02:02, Ravishankar N wrote:
On 03/03/2016 12:43 AM, Nir Soffer wrote:
PS:  # find /STORAGES -samefile /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= the "shadow file" (hard link) in the ".glusterfs" dir is missing.
How can I fix it ?? - online !

Ravi?
Is this the case in all 3 bricks of the replica?
BTW, you can just stat the file on the brick and check the link count (it must be 2) instead of running the more expensive find command.
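For instance, run on the brick itself (the getfattr step is optional; it just shows where the .glusterfs hard link should live):

    # %h prints the hard link count - it should be 2 for a healthy file
    stat -c '%h %U:%G %s %n' /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
    # the file's gfid tells you the expected .glusterfs path:
    getfattr -n trusted.gfid -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
    # a gfid 0xaabbccdd... maps to <brick>/.glusterfs/aa/bb/aabbccdd-...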




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
