> Just to check I have this straight:
> - Proxmox cluster using GlusterFS for storage
> - bricks on Proxmox nodes
> - Linux VMs running on Proxmox nodes
> - InnoDB running on the Linux VMs

Yes, that's exactly it.

> When one of the Proxmox nodes crashes (power outage?) the InnoDB
> database is hosed?

Not always, but often. Most of the time enabling force recovery fixes it, but not always; a few times I had to export, wipe, and re-import everything to fix InnoDB.

> We run multiple MS SQL servers in the same setup and a few MySQL, never
> had that problem with them after server outages.
>
> - Replica 3?
> - Could you post your gluster info?
> - What's the underlying filesystem for the bricks? ZFS? What sync mode
>   does it have set?
> - What's the KVM cache mode?

The KVM cache mode is directsync; I was using none before and switched to directsync hoping to fix the problem. The bricks are on XFS, but I also tried ext4. I didn't try ZFS because I have no idea how that even works.

Here is the config of one of the clusters:

Volume Name: VMs
Type: Replicate
Volume ID: c5272382-d0c8-4aa4-aced-dd25a064e45c
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ips4adm:/mnt/storage/VMs
Brick2: ips5adm:/mnt/storage/VMs
Brick3: ips6adm:/mnt/storage/VMs
Options Reconfigured:
performance.readdir-ahead: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
network.remote-dio: enable
cluster.eager-lock: enable
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
features.shard: on
features.shard-block-size: 64MB
cluster.data-self-heal-algorithm: full
network.ping-timeout: 15

The data-self-heal-algorithm setting was advised by Krutika, I believe, back when we were having huge heal problems.

Apart from that, everything works fine. fsck when the VM starts after a crash usually finds a few things, but nothing big; everything boots and nothing is missing. Only MySQL breaks, from time to time.
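For anyone following along, the force-recovery step mentioned above is set in my.cnf. A minimal sketch of the relevant fragment (the path and the durability options are illustrative additions, not something from this thread):

```ini
# /etc/mysql/my.cnf -- illustrative fragment only
[mysqld]
# Start at 1 and only raise it if mysqld still won't start.
# Levels 4-6 can permanently lose data, so take a dump
# (mysqldump --all-databases) before going higher, and
# remove this line again after the export/re-import.
innodb_force_recovery = 1

# Durability settings worth double-checking in a crash-prone setup:
# flush the InnoDB redo log on every commit, and fsync the binlog.
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
```

With innodb_force_recovery set, InnoDB treats the tablespace as suspect (higher levels make it read-only), which is why the usual sequence is export, stop mysqld, wipe, restart without the option, then re-import.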
I imagine you might run into the problem someday too; we're just having a lot more crashes than the average user, for reasons I mentioned in another mail.

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users