Re: Disastrous performance with rsync to mounted Gluster volume.

On 04/24/2015 11:03 AM, Ben Turner wrote:
----- Original Message -----
From: "Ernie Dunbar" <maillist@xxxxxxxxxxxxx>
To: "Gluster Users" <gluster-users@xxxxxxxxxxx>
Sent: Friday, April 24, 2015 1:15:32 PM
Subject: Re:  Disastrous performance with rsync to mounted Gluster volume.

On 2015-04-23 18:10, Joe Julian wrote:
On 04/23/2015 04:41 PM, Ernie Dunbar wrote:
On 2015-04-23 12:58, Ben Turner wrote:

+1, let's nuke everything and start from a known good state.  Those error
messages make me think something is really wrong with how we are
copying the data.  Gluster serves NFS by default, so you shouldn't
have to reconfigure anything after you recreate the volume.

Okay... this is a silly question. How do I do that? Deleting the
volume doesn't affect the files in the underlying filesystem, and I
get the impression that trying to delete the files in the underlying
filesystem without shutting down or deleting the volume would result
in Gluster trying to write the files back where they "belong".

Should I stop the volume, delete it, then delete the files and start
from scratch, re-creating the volume?
That's what I would do.

Well, apparently removing the .glusterfs directory from the brick is an
exceptionally bad thing, and breaks gluster completely, rendering it
inoperable. I'm going to have to post another thread about how to fix
this mess now.
You are correct, and I would just start from scratch, Ernie.  Creating a gluster cluster is only about 3-4 commands and should only take a minute or two.  Also, with all the problems you are having, I am not confident in your data integrity.  All you need to do to clear EVERYTHING out is:

service glusterd stop
killall glusterfsd
killall glusterfs
sleep 1
for file in /var/lib/glusterd/*; do if ! echo "$file" | grep -q 'hooks'; then rm -rf "$file"; fi; done
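A sketch of that same cleanup, run against a scratch directory so the keep-the-hooks behavior is easy to check first (the /tmp path here is illustrative only; on a real node you would point this at /var/lib/glusterd, and only after glusterd is stopped):

```shell
# Scratch directory standing in for /var/lib/glusterd (illustrative only)
STATE=/tmp/glusterd-demo
mkdir -p "$STATE/vols" "$STATE/peers" "$STATE/hooks"

# Remove everything except the hooks directory, same idea as the loop above
for file in "$STATE"/*; do
    case "$file" in
        */hooks) ;;            # keep hook scripts
        *) rm -rf "$file" ;;   # wipe volume/peer/cluster state
    esac
done
```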

From there, restart the gluster service and recreate everything:

service glusterd restart
<make a new filesystem on your bricks, mount>
gluster peer probe <my peer>
gluster v create <my vol>
gluster v start <my vol>
gluster v info
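Filled in with made-up hostnames and brick paths (a sketch only; substitute your own peers, volume name, replica count, and bricks):

```
gluster peer probe gluster2.example.com
gluster volume create gv0 replica 2 \
    gluster1.example.com:/export/sdb1/brick \
    gluster2.example.com:/export/sdb1/brick
gluster volume start gv0
gluster volume info gv0
```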

From there, mount the new volume on the system with the data you want to migrate:

mount -t nfs -o vers=3 <server>:/<my vol> <my mount>
rsync <your rsync command>

And your rsync command should include "--inplace".
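The reason --inplace matters here: by default rsync writes each incoming file to a temporary dot-name and then renames it into place, and Gluster's distribute layer places files by hashing the filename, so the temporary name and the final name usually hash to different bricks, leaving link files behind or forcing data movement on every rename.  --inplace writes straight to the final name instead.  A hedged example invocation (the source path is a placeholder):

```
rsync -av --inplace /path/to/source/ <my mount>/
```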

This should get you where you need to be.  Before you start migrating the data, maybe do a couple of dd runs and send me the output so we can get an idea of how your cluster performs:

time ( dd if=/dev/zero of=<gluster-mount>/myfile bs=1024k count=1000; sync )
echo 3 > /proc/sys/vm/drop_caches
dd if=<gluster-mount>/myfile of=/dev/null bs=1024k count=1000
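A self-contained version of those tests that can be tried against any mount point (here /tmp and a 100 MB file stand in for the Gluster mount and 1 GB file; the drop_caches line needs root, so it is commented out):

```shell
MNT=/tmp   # stand-in for your Gluster mount point

# Write test: time the writes plus the final sync so buffered data is counted
time ( dd if=/dev/zero of="$MNT/myfile" bs=1024k count=100 && sync )

# echo 3 > /proc/sys/vm/drop_caches   # as root, between tests, to defeat the page cache

# Read test: read the file back, not the mount point itself
dd if="$MNT/myfile" of=/dev/null bs=1024k
```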

If you are using gigabit Ethernet and glusterfs mounts with replica 2, you should get ~55 MB/sec writes and ~110 MB/sec reads.  With NFS you will take a bit of a hit, since NFS doesn't know where files live the way the glusterfs client does.

-b

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users




