Re: Unnecessary healing in 3-node replication setup on reboot

On 17 October 2015 at 00:26, Udo Giacomozzi <udo.giacomozzi@xxxxxxxxxx> wrote:
> To me this sounds like Gluster is not really suited for big files, such as the main storage for VMs, since they are being modified constantly.

Depends :)

Any replicated storage will have to heal its copies if they are written to while a node is down. As long as the files can still be read and written while being healed, and the resource usage (CPU/network) is not too high, it should be transparent - that's the whole point of a replicated filesystem.
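
If you want to see how long a heal actually takes and whether it stays transparent, you can simply watch the self-heal backlog shrink after a rebooted node comes back. Here is a minimal sketch of that, assuming the gluster CLI is installed on the host and using a made-up volume name ("vmstore"); the exact text printed by "gluster volume heal <vol> info" can differ between Gluster versions, so treat the parsing as illustrative.

#!/usr/bin/env python3
# Minimal sketch: poll the self-heal backlog of a replicated Gluster volume
# until it drains. Assumes the gluster CLI is available; the volume name is
# a placeholder and the output format may vary between Gluster releases.
import re
import subprocess
import time

VOLUME = "vmstore"  # hypothetical volume name - replace with your own

def pending_heal_entries(volume: str) -> int:
    """Sum the 'Number of entries' reported for every brick."""
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(n) for n in re.findall(r"Number of entries:\s*(\d+)", out))

if __name__ == "__main__":
    while True:
        remaining = pending_heal_entries(VOLUME)
        print(f"{time.strftime('%H:%M:%S')}  entries still healing: {remaining}")
        if remaining == 0:
            break
        time.sleep(30)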

I'm guessing that, like me, you are running your Gluster storage on your VM hosts, and that, also like me, you are a chronic tweaker, so you tend to reboot the hosts more than you should. In that case you might want to consider moving your Gluster storage to separate dedicated nodes that you can leave up.
 
> Or am I missing something? Perhaps Gluster can be configured to heal only modified parts of the files?


Not that I know of.

Ceph is pretty good at tracking changes and transferring only those - heals after a reboot generally take only a few minutes on my three-node setup. But it is a huge headache to set up and administer, and its I/O performance is pretty poor on small setups (< 6 nodes, < 24 disks). It scales really well and really shines once you get into hundreds of nodes and disks, but I would not recommend it for small IT setups.
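
If you do go the Ceph route, one practical habit is to wait for the cluster to report healthy again before rebooting the next node. A rough sketch of that, assuming the ceph CLI and an admin keyring on the host; the exact health strings can vary a little between releases:

#!/usr/bin/env python3
# Minimal sketch: wait for a small Ceph cluster to finish recovering after a
# node reboot. Assumes the ceph CLI and an admin keyring on this host; the
# health strings ("HEALTH_OK", "HEALTH_WARN ...") may differ across releases.
import subprocess
import time

def cluster_health() -> str:
    """Return the one-line health summary, e.g. 'HEALTH_OK' or 'HEALTH_WARN ...'."""
    return subprocess.run(
        ["ceph", "health"], capture_output=True, text=True, check=True
    ).stdout.strip()

if __name__ == "__main__":
    while True:
        health = cluster_health()
        print(f"{time.strftime('%H:%M:%S')}  {health}")
        if health.startswith("HEALTH_OK"):
            break
        time.sleep(30)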


--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
