Re: Problems with qemu and disperse volumes (live merge)

Hi Strahil,

>WARNING: As you enabled sharding - NEVER DISABLE SHARDING, EVER!

Thanks -- good to be reminded :)
 
>When you say they will not be optimal are you referring mainly to
>performance considerations? We did plenty of testing, and in terms of
>performance didn't have issues even with I/O intensive workloads (using
>SSDs, I had issues with spinning disks).

>Yes, the client side has to connect to 6 bricks (4+2) at a time and calculate the data in order to obtain the necessary information. The same is valid for writing.
>If you need to conserve space, you can test VDO without compression (or even with it).
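For readers following along, a 4+2 disperse layout like the one being discussed would be created roughly as below; the server names and brick paths are invented for illustration, not taken from this thread:

```shell
# Hypothetical 4+2 disperse volume: any 4 of the 6 bricks are enough to
# reconstruct the data, so up to 2 brick failures are tolerated.
gluster volume create vmstore disperse-data 4 redundancy 2 \
    server{1..6}:/bricks/vmstore/brick

# Each client read or write contacts all 6 bricks and runs the
# erasure-coding arithmetic, which is the overhead described above.
```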

Understood -- will explore VDO. Storage usage efficiency is less important to us than fault tolerance or performance -- disperse volumes seemed to tick all the boxes, so we looked at them first.
But I had clearly missed that they are not used as mainstream VM storage for oVirt (I did know they weren't supported, but, as explained, I thought that was more on the management side).
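In case it helps anyone else following the thread, a VDO-backed brick along the lines Strahil suggests would be set up roughly as follows; the device and volume names are placeholders, and the flags are the ones documented for the RHEL/CentOS vdo manager:

```shell
# Sketch only: create a VDO volume with deduplication enabled but
# compression disabled, as suggested above. /dev/sdb and vdo_gluster
# are placeholder names.
vdo create --name=vdo_gluster --device=/dev/sdb \
    --compression=disabled --deduplication=enabled

# Format (-K skips the initial discard, recommended on VDO) and mount
# it as the brick backing store.
mkfs.xfs -K /dev/mapper/vdo_gluster
mount /dev/mapper/vdo_gluster /bricks/vdo_gluster
```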
 

>Also, with replica volumes you can use 'choose-local' (in case you have storage faster than the network, like NVMe) to increase the read speed. Of course, this feature is useful for a hyperconverged setup (Gluster + oVirt on the same node).

Will explore this option as well, thanks for the suggestion.
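For reference, the option Strahil mentions is a regular Gluster volume option; 'myvol' below is a placeholder volume name:

```shell
# Prefer the local brick for reads when the client runs on a server that
# also hosts a replica (useful on hyperconverged Gluster + oVirt nodes).
gluster volume set myvol cluster.choose-local on
```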


>If you were using oVirt 4.3, I would recommend you to focus on Gluster. Yet you use oVirt 4.4, which is quite newer and needs some polishing.

oVirt 4.3.9 (using the older CentOS 7 qemu/libvirt) unfortunately had similar issues with the disperse volumes. I'm not sure if they were exactly the same, as I never looked deeper into it, but the results were similar.
oVirt 4.4.0 has some issues with snapshot deletion that are independent of Gluster (I have raised the issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1840414; it should be sorted with 4.4.2, I guess), so at the moment it only works with the "testing" AV repo.
 
>Check the oVirt engine logs (on the HostedEngine VM or your standalone engine), the vdsm logs on the host that was running the VM, and then check the brick logs.

Will do.
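For anyone tracing a similar failure, these are the usual default log locations (assuming a stock install; paths are the packaged defaults, not confirmed for this particular setup):

```shell
# oVirt engine log, on the HostedEngine VM or the standalone engine host:
less /var/log/ovirt-engine/engine.log

# vdsm log, on the hypervisor that was running the VM:
less /var/log/vdsm/vdsm.log

# Gluster brick logs, on each storage node:
ls /var/log/glusterfs/bricks/
```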

Thanks,
Marco

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
