Re: VM fs becomes read only when one gluster node goes down

I'd see what your qemu logs put out, if you have them around from a crash. You can also check client connections across your cluster by hopping onto your hypervisor and grepping the output of netstat -np for the PID of one of your gluster-backed VMs, like so:

netstat -np | grep 11607
tcp        0      0 10.9.1.1:60414          10.9.1.1:24007          ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:60409          10.9.1.1:24007          ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:45998          10.9.1.1:50152          ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:42606          10.9.1.2:50152          ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:45993          10.9.1.1:50152          ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:42601          10.9.1.2:50152          ESTABLISHED 11607/qemu-system-x
unix  3      [ ]         STREAM     CONNECTED     32860    11607/qemu-system-x /var/lib/libvirt/qemu/HFMWEB19.monitor

I mounted two disks for the machine, so I have two control connections (to glusterd on port 24007) and two brick connections per disk for my replicated setup. Someone else might be able to provide more info as to what your output should look like.
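If you want to spot a lost brick connection at a glance rather than eyeballing the raw rows, you can count established connections per remote endpoint. A small awk pipeline would do it; here it is fed a trimmed copy of the sample output above, since the live command needs a running VM (on a real host you'd pipe in netstat -np | grep <qemu-pid> instead):

```shell
# Count ESTABLISHED qemu connections per remote endpoint.
# 24007 is glusterd (management); the high ports are brick connections.
sample='tcp 0 0 10.9.1.1:60414 10.9.1.1:24007 ESTABLISHED 11607/qemu-system-x
tcp 0 0 10.9.1.1:45998 10.9.1.1:50152 ESTABLISHED 11607/qemu-system-x
tcp 0 0 10.9.1.1:42606 10.9.1.2:50152 ESTABLISHED 11607/qemu-system-x'

# Field 5 is the remote address:port, field 6 the TCP state.
printf '%s\n' "$sample" |
  awk '$6 == "ESTABLISHED" { count[$5]++ }
       END { for (ep in count) print ep, count[ep] }' |
  sort
```

A brick endpoint whose count drops to zero (or disappears from the list) is the node you lost.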

----- Original Message -----
From: "André Bauer" <abauer@xxxxxxxxx>
To: "Josh Boon" <gluster@xxxxxxxxxxxx>
Cc: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>, "gluster-users" <gluster-users@xxxxxxxxxxx>, gluster-devel@xxxxxxxxxxx
Sent: Monday, October 26, 2015 7:47:07 PM
Subject: Re:  VM fs becomes read only when one gluster node goes down

Just some. But I think the reason is that some VM images are replicated
on nodes 1 & 2 and some on nodes 3 & 4, because I use a
distributed-replicated volume.

You're right. I think I have to try it on a test setup.

At the moment I'm also not completely sure whether it's a GlusterFS
problem (not connecting to the node with the replicated file immediately
when a read/write fails) or a problem of the filesystem (the ext4 fs
goes read-only on error too early).


Regards
André

On 26.10.2015 at 20:23, Josh Boon wrote:
> Hmm, even five should be OK. Do you lose all VMs or just some?
> 
> Also, we had issues with
> 
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> 
> and had to instead go with
> 
> cluster.server-quorum-type: none
> cluster.quorum-type: none
> 
> though we only replicate instead of distribute and replicate, so I'd be wary of changing those without advice from folks more familiar with the impact on your config.
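> 
> If you do want to experiment with those options, they're set per volume with the gluster CLI; for a volume named vmimages (as in your volume info below) it would look something like
> 
> gluster volume set vmimages cluster.quorum-type none
> gluster volume set vmimages cluster.server-quorum-type none
> 
> keeping in mind that disabling quorum makes split-brain more likely, so I'd only try this in a lab first.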
> 
> gfapi fetches the volume file upon connect and is aware of the configuration and of changes to it, so it should be OK when a node is lost, since it knows where the other nodes are.
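> 
> For reference, with libgfapi qemu addresses the image by volume rather than by a specific brick, so any server in the volume can hand out the volume file at connect time; a sketch, with hostname and image name as placeholders:
> 
> qemu-system-x86_64 ... \
>     -drive file=gluster://storage1.domain.local/vmimages/vm1.img,format=raw,if=virtio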
> 
> If you have a lab with your gluster config set up and you lose all of your VMs there too, I'd suggest trying my config to see what happens. The gluster logs and qemu clients could also have some tips on what happens when a node disappears.
> ----- Original Message -----
> From: "André Bauer" <abauer@xxxxxxxxx>
> To: "Josh Boon" <gluster@xxxxxxxxxxxx>
> Cc: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>, "gluster-users" <gluster-users@xxxxxxxxxxx>, gluster-devel@xxxxxxxxxxx
> Sent: Monday, October 26, 2015 7:08:15 PM
> Subject: Re:  VM fs becomes read only when one gluster node goes down
> 
> Thanks guys!
> My volume info is attached at the bottom of this mail...
> 
> @ Josh
> As you can see, I already have a 5-second ping timeout set. I will try
> it with 3 seconds.
> 
> Not sure if I want to have errors=continue on the fs level, but I will
> give it a try if it's the only possibility to get automatic failover working.
> 
> 
> @ Roman
> I use qemu with libgfapi to access the images, so there are no glusterfs
> entries in fstab on my VM hosts. It also seems this option is kind of deprecated:
> 
> http://blog.gluster.org/category/mount-glusterfs/
> 
> "`backupvolfile-server` - This option did not really do much rather than
> provide a 'shell' script based failover which was highly racy and
> wouldn't work during many occasions.  It was necessary to remove this to
> make room for better options (while it is still provided for backward
> compatibility in the code)"
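> 
> (For FUSE mounts the replacement is the backup-volfile-servers mount option, which takes a colon-separated server list; with your hostnames that would be something like
> 
> mount -t glusterfs -o backup-volfile-servers=storage2.domain.local:storage3.domain.local \
>     storage1.domain.local:/vmimages /mnt/vmimages
> 
> though that only matters for FUSE clients, not for the libgfapi path.)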
> 
> 
> @ all
> Can anybody tell me how GlusterFS handles this internally?
> Is the libgfapi client already aware of the server that replicates the
> image?
> Is there a way I can configure it manually for a volume?
> 
> 
> 
> 
> Volume Name: vmimages
> Type: Distributed-Replicate
> Volume ID: 029285b2-dfad-4569-8060-3827c0f1d856
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: storage1.domain.local:/glusterfs/vmimages
> Brick2: storage2.domain.local:/glusterfs/vmimages
> Brick3: storage3.domain.local:/glusterfs/vmimages
> Brick4: storage4.domain.local:/glusterfs/vmimages
> Options Reconfigured:
> network.ping-timeout: 5
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> auth.allow:
> 192.168.0.21,192.168.0.22,192.168.0.23,192.168.0.24,192.168.0.25,192.168.0.26
> server.allow-insecure: on
> storage.owner-uid: 2000
> storage.owner-gid: 2000
> 
> 
> 
> Regards
> André
> 
> 
> On 26.10.2015 at 17:41, Josh Boon wrote:
>> Andre,
>>
>> I've not explored using a DNS solution to publish the gluster cluster
>> addressing space, but two things you'll want to check out
>> are network.ping-timeout and whether or not your VM goes read-only on
>> filesystem error. If your network is consistent and robust, tuning
>> network.ping-timeout to a very low value such as three seconds will
>> make the client drop the connection to the failed node quickly. The
>> default value is 42 seconds, which will cause your VM to go read-only,
>> as you've seen. You could also choose to have your VMs mount their
>> partitions with errors=continue, depending on the filesystem they run.
>> Our setup has the timeout at seven seconds plus errors=continue and has
>> survived both testing and storage-node segfaults. No data integrity
>> issues have presented yet, but our data is mostly temporal, so integrity
>> hasn't been tested thoroughly. Also, we're on qemu 2.0 running gluster 3.6
>> on Ubuntu 14.04, for those curious.
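>>
>> A sketch of both knobs, with the volume name and the guest's root device
>> as placeholders: on the gluster side,
>>
>>     gluster volume set VOLNAME network.ping-timeout 7
>>
>> and in each guest's /etc/fstab,
>>
>>     /dev/vda1  /  ext4  defaults,errors=continue  0  1
>>
>> (tune2fs -e continue /dev/vda1 sets the same error behavior persistently
>> in the ext4 superblock.)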
>>
>> Best,
>> Josh 
>>
>> ------------------------------------------------------------------------
>> *From: *"Roman" <romeo.r@xxxxxxxxx>
>> *To: *"Krutika Dhananjay" <kdhananj@xxxxxxxxxx>
>> *Cc: *"gluster-users" <gluster-users@xxxxxxxxxxx>, gluster-devel@xxxxxxxxxxx
>> *Sent: *Monday, October 26, 2015 1:33:57 PM
>> *Subject: *Re:  VM fs becomes read only when one gluster
>> node goes down
>>
>> Hi,
>> Got backupvolfile-server=NODE2NAMEHERE in fstab? :)
>>
>> 2015-10-23 5:24 GMT+03:00 Krutika Dhananjay <kdhananj@xxxxxxxxxx>:
>>
>>     Could you share the output of 'gluster volume info', and also
>>     information as to which node went down on reboot?
>>
>>     -Krutika
>>     ------------------------------------------------------------------------
>>
>>         *From: *"André Bauer" <abauer@xxxxxxxxx>
>>         *To: *"gluster-users" <gluster-users@xxxxxxxxxxx>
>>         *Cc: *gluster-devel@xxxxxxxxxxx
>>         *Sent: *Friday, October 23, 2015 12:15:04 AM
>>         *Subject: * VM fs becomes read only when one
>>         gluster node goes down
>>
>>         Hi,
>>
>>         I have a 4-node GlusterFS 3.5.6 cluster.
>>
>>         My VM images are in a distributed-replicated volume, which is
>>         accessed from kvm/qemu via libgfapi.
>>
>>         Mounts go against storage.domain.local, which has the IPs of
>>         all 4 Gluster nodes set in DNS.
>>
>>         When one of the Gluster nodes goes down (accidental reboot), a
>>         lot of the VMs get a read-only filesystem, even after the node
>>         comes back up.
>>
>>         How can I prevent this?
>>         I expect the VM to just use the replicated file on the other
>>         node, without the fs going read-only.
>>
>>         Any hints?
>>
>>         Thanks in advance.
>>
>>         -- 
>>         Regards
>>         André Bauer
>>
>>         _______________________________________________
>>         Gluster-users mailing list
>>         Gluster-users@xxxxxxxxxxx
>>         http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>>
>>
>>
>> -- 
>> Best regards,
>> Roman.
>>
>>
> 
> 


-- 
Kind regards
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: abauer@xxxxxxxxx
www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

----------------------------------------------------------------------
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
----------------------------------------------------------------------



