Re: File Corruption with shards - 100% reproducible

You should be able to find a file named group-virt.example under /etc/glusterfs/.
Copy it to /var/lib/glusterd/groups/virt.

Then execute `gluster volume set datastore1 group virt`.
Now with this configuration, could you try your test case and let me know whether the file corruption still exists?
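For reference, the mechanism can be sketched like this: a group file is a plain list of option=value lines, and `gluster volume set <VOL> group <name>` applies each line as an individual volume option. The file content below is a made-up sample, not the real group-virt.example, and the real command is only printed, not run:

```shell
# Simulated group file (made-up sample options; the real group-virt.example
# ships with GlusterFS under /etc/glusterfs/).
cat > /tmp/virt.example <<'EOF'
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
EOF

# 'gluster volume set <VOL> group <name>' applies each option=value line;
# shown here by printing the equivalent individual 'volume set' commands.
while IFS='=' read -r opt val; do
    echo "gluster volume set datastore1 $opt $val"
done < /tmp/virt.example
```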

-Krutika


From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Saturday, November 14, 2015 10:51:26 AM
Subject: RE: File Corruption with shards - 100% reproducible

gluster volume set datastore1 group virt

Unable to open file '/var/lib/glusterd/groups/virt'. Error: No such file or directory

 

Not sure I understand this one – couldn’t find any docs for it.

 

Sent from Mail for Windows 10

From: Krutika Dhananjay
Sent: Saturday, 14 November 2015 1:45 PM
To: Lindsay Mathieson
Cc: gluster-users
Subject: Re: File Corruption with shards - 100% reproducible

The logs are at /var/log/glusterfs/<hyphenated-path-to-the-mountpoint>.log
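To illustrate the name mapping, here is a small shell sketch: slashes in the mount path become hyphens in the log file name. The mount point below is a hypothetical Proxmox path; substitute your own:

```shell
# Hypothetical mount point; slashes in the path (minus the leading one)
# become hyphens in the client log file name.
mnt="/mnt/pve/datastore1"
log="/var/log/glusterfs/$(echo "${mnt#/}" | tr / -).log"
echo "$log"    # /var/log/glusterfs/mnt-pve-datastore1.log
```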

 

OK. So what do you observe when you set group virt to on?

 

# gluster volume set <VOL> group virt

 

-Krutika

 

From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Friday, November 13, 2015 11:57:15 AM
Subject: Re: File Corruption with shards - 100% reproducible

On 12 November 2015 at 15:46, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:

OK. What do the client logs say?

 

Dumb question - Which logs are those? 

Could you share the exact steps to recreate this, and I will try it locally on my setup?

 

I'm running this on a 3-node Proxmox cluster, which makes VM creation & migration easy to test.

 

Steps:

- Create a 3-node Gluster datastore using the Proxmox VM host nodes

- Add the Gluster datastore as a storage device to Proxmox
  * QEMU VMs use gfapi to access the datastore
  * Proxmox also adds a FUSE mount for easy access

- Create a VM on the Gluster storage in QCOW2 format. I just created a simple Debian MATE VM

- Start the VM and open a console to it

- Live-migrate the VM to another node

- It will rapidly barf itself with disk errors

- Stop the VM

- qemu-img will show file corruption (many, many errors):
  * qemu-img check <vm disk image>
  * qemu-img info <vm disk image>
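The check in the last step can be sketched as below. The image path is a hypothetical example of where Proxmox places VM disks (under the storage's images/<vmid>/ directory), and the commands are guarded so the snippet is safe to run on any machine:

```shell
# Hypothetical image path; adjust to your storage and VM id.
img="/mnt/pve/datastore1/images/100/vm-100-disk-1.qcow2"

if command -v qemu-img >/dev/null 2>&1 && [ -f "$img" ]; then
    qemu-img info "$img"     # format, virtual/actual size, backing file
    qemu-img check "$img"    # walks qcow2 metadata; reports corrupt/leaked clusters
else
    echo "qemu-img or $img not available; commands shown for reference"
fi
```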

Repeating the process with sharding off produces no errors.


Also, want to see the output of 'gluster volume info'.

I've trimmed settings down to a bare minimum. This is a test gluster cluster so I can do with it as I wish.

gluster volume info
 
Volume Name: datastore1
Type: Replicate
Volume ID: 238fddd0-a88c-4edb-8ac5-ef87c58682bf
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/mnt/ext4
Brick2: vng.proxmox.softlog:/mnt/ext4
Brick3: vna.proxmox.softlog:/mnt/ext4
Options Reconfigured:
performance.strict-write-ordering: on
performance.readdir-ahead: off
cluster.quorum-type: auto
features.shard: on

--

Lindsay

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
