Re: High I/O And Processor Utilization

Hey,

Healing across VMs is really not normal; I would look for the cause on the network side. I run 3.6.5 for my Proxmox setup (with the libgfapi backend), which serves around 30 VMs at the moment (and I hope to run even more), and I have no issues even when it starts its backups (copying large VM snapshots from the GlusterFS volume to the Proxmox local disk). The disks in the GlusterFS servers are all 10K RPM SAS, running RAID 5 for distributed volumes and no RAID for replicated volumes.
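For context, a Proxmox node in a setup like this usually references the Gluster volume through a glusterfs-type entry in /etc/pve/storage.cfg, which lets QEMU access disk images over libgfapi instead of a FUSE mount. A minimal sketch, with placeholder storage ID, server addresses and volume name:

glusterfs: gluster-vmstore
        server 192.168.1.11
        server2 192.168.1.12
        volume vmvol
        content images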

My options are:

Options Reconfigured:
network.ping-timeout: 15
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on
performance.write-behind: off
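In case it helps, each of these is applied per volume with "gluster volume set"; the volume name below is only an example:

gluster volume set vmvol network.ping-timeout 15
gluster volume set vmvol performance.quick-read off
gluster volume set vmvol performance.read-ahead off
gluster volume set vmvol performance.io-cache off
gluster volume set vmvol performance.stat-prefetch off
gluster volume set vmvol cluster.eager-lock enable
gluster volume set vmvol network.remote-dio enable
gluster volume set vmvol cluster.quorum-type auto
gluster volume set vmvol cluster.server-quorum-type server
gluster volume set vmvol server.allow-insecure on
gluster volume set vmvol performance.write-behind off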

2016-01-10 1:14 GMT+02:00 Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>:
On 9/01/2016 12:34 PM, Ravishankar N wrote:
If you're trying arbiter, it would be good if you can compile the 3.7 branch and use it since it has an important fix (http://review.gluster.org/#/c/12479/) that will only make it to glusterfs-3.7.7. That way you'd get this fix and the sharding ones too right away.


Is 3.7.7 far off?

--
Lindsay Mathieson



--
Best regards,
Roman.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
