Re: [Users] Creation of preallocated disk with Gluster replication

Hello Darrell,

On 2014-01-08 18:47, Darrell Budic wrote:
Grégoire-

I think this is expected behavior. Well, at least the high glusterfsd
CPU use during disk creation, anyway. I tried creating a 10 G disk on
my test environment and observed similar high CPU usage by glusterfsd.
I did the creation on the i5 system; it showed 95-105% CPU for
glusterfsd during creation, with the Core 2 system running ~35-65%
glusterfsd utilization during the creation. Minor disk wait was
observed on both systems, < 10% peak and generally < 5%. I imagine my
ZFS-cached backends helped a lot here. It took about 3 minutes, roughly
what I’d expect from the i5’s disk system.
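
A rough way to watch this while the disk is being created (a sketch; it
assumes the sysstat package is installed on the hosts):

    # Per-process CPU use of all glusterfsd brick processes, every 5 seconds.
    pidstat -u -p $(pidof glusterfsd | tr ' ' ',') 5

    # Extended device stats, to see disk wait (%util, await) during the write.
    iostat -x 5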

Does that mean that GlusterFS + oVirt absolutely need to be separated so that changes on GlusterFS have no negative impact on VMs in production? Here I ran into the problem with the creation of a preallocated disk, but if tomorrow I want to change the way I replicate GlusterFS bricks, I guess I'll have the same issue.

Network usage was about 45%
of the 1G link. No errors or messages logged to /var/log/messages.

I checked with iftop to be more accurate, and I can see it uses more than 95% of the link with my setup.
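
For reference, this is roughly the invocation I used, assuming the bonded interface is bond0 (adjust the interface name to your setup):

    # Live per-flow bandwidth on the storage interface, no DNS lookups,
    # rates displayed in bytes/s.
    iftop -i bond0 -nB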

Depending on what your test setup looks like, I’d check my network for
packet loss or errors first.

I did; I have 0 network errors and 0% packet loss (for the latter, I just used ping on the ovirtmgmt interface, which showed 0% packet loss even while my server was considered down by oVirt).
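
Concretely, I checked with something along these lines (the interface name is from my setup, and <peer-ip> stands for the other host's address):

    # RX/TX error and drop counters on the bond.
    ip -s link show bond0

    # Sustained ping against the replication peer.
    ping -c 500 -i 0.2 <peer-ip>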

Then I’d look at the storage setup and test
pure throughput on the disks to see what you’ve got, and maybe see what
else is running. Did you use an NFS cluster or a PosixFS cluster for
this?

I use a PosixFS cluster for this.
My detailed setup is:

Host 1: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (x12), 65GB RAM
Disk (both boot and storage): PERC H710, 2TB, hardware RAID 1
2 x 1G Ethernet in bonding (failover)

Host 2: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz (x8), 64GB RAM
Disk (boot + storage): PERC H310
1G Ethernet

I'm pretty sure there isn't any problem with the switch between them.
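
To test pure throughput as you suggest, I can run something like this directly against a brick directory (the path is just an example for my layout):

    # Sequential write test, bypassing the page cache; remove the file afterwards.
    dd if=/dev/zero of=/gluster/brick1/ddtest bs=1M count=4096 oflag=direct
    rm /gluster/brick1/ddtest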

To conclude:

1) About the network issue, I think it could be possible to use iptables with QoS rules on specific ports to limit GlusterFS throughput (see the sketch below).
2) However, the CPU issue seems more difficult to avoid. I guess I just have to review my architecture...
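
For point 1, here is a rough sketch of what I have in mind. It assumes GlusterFS 3.4+ brick ports 49152-49251 (older releases use 24009 and up; "gluster volume status" shows the actual ports), management on 24007, and eth0 as the storage NIC; the 400/600 Mbit split is just an example:

    # Mark outgoing GlusterFS traffic in the mangle table.
    iptables -t mangle -A OUTPUT -p tcp --dport 24007 -j MARK --set-mark 10
    iptables -t mangle -A OUTPUT -p tcp --dport 49152:49251 -j MARK --set-mark 10

    # HTB: cap marked (GlusterFS) traffic at 400mbit, let everything else
    # borrow up to the full gigabit.
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1000mbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 400mbit ceil 400mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 600mbit ceil 1000mbit
    tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10

That would not help the CPU side, of course, only the bandwidth pressure on the 1G links.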

Thank you,
Regards,
Grégoire Leroy
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users




