Hi,
I'll run a few more tests today with copies of some production VMs, to be sure they will also fail over during heavy writing and reading.
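Roughly the kind of test I have in mind (the volume name and paths below are only examples, and the brick PID is a placeholder):

    # inside a guest: keep a sustained write load running
    dd if=/dev/zero of=/root/failover-test.img bs=1M count=4096 oflag=direct

    # on one storage node: find the brick process and kill it mid-write
    gluster volume status datastore
    kill -9 <brick-pid>

If the guest keeps writing without I/O errors and the file heals once the brick is back, I'll consider the test passed.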
We are just getting ready to move all of our production VMs to gluster, so it will be around 30 VMs on one site and 15 on the other. These are not VMs "for sale"; we use them for our own infrastructure, both MS and Linux. Sizes vary a lot, 10-150 GB per VM. If gluster handles them well, we will try to run one VM with a 2 TB disk for ownCloud.
I'm not sure about OpenVZ containers; I will test those too. At least I'm sure that containers do not run well from a glusterfs volume mounted anywhere outside the /var/lib/vz directory. But AFAIK there has always been a problem with containers running from a directory other than /var/lib/vz, so this one is not tested yet.
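If plain mounting doesn't work, one workaround I might try (untested, and the mount points are just examples) is a bind mount over /var/lib/vz:

    mount -t glusterfs stor1:/datastore /mnt/gluster-vz
    mount --bind /mnt/gluster-vz /var/lib/vz

so that OpenVZ still sees its expected path while the data actually lives on gluster.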
I'll try to run some simple benchmarks during the self-heal process.
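Probably something as simple as triggering a full heal and timing sequential I/O inside a guest at the same time (volume name is an example):

    # on a storage node: kick off a full heal
    gluster volume heal datastore full

    # inside a guest: time a sequential write, then an uncached read
    dd if=/dev/zero of=/root/bench.img bs=1M count=1024 conv=fdatasync
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/root/bench.img of=/dev/null bs=1M

Nothing fancy, but it should show how much the heal slows the guests down.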
2014-08-29 10:02 GMT+03:00 Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>:
Wow, this is great news! Thanks a lot for sharing the results :-). Did you get a chance to test the performance of the applications in the VM during self-heal?
May I know more about your use case? I.e., how many VMs, what is the average size of each VM, etc.?
Pranith
On 08/28/2014 11:27 PM, Roman wrote:
Here are the results.
1. I still have a problem with log rotation: logs are being written to the .log.1 file, not the .log file. Any hints on how to fix this?
2. The healing logs are much better now; I can see the success message.
3. Both volumes, with HD (the self-heal daemon) off and on, synced successfully. The volume with HD on synced much faster.
4. Both VMs on the volumes survived the outage, even when new files were added and deleted during the outage.
So replication works well for VM volumes with HD both on and off, and even faster with HD on. We just need to solve the logging issue.
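For the logging issue, one thing I plan to try (just a guess, not verified yet) is making logrotate use copytruncate, so the daemon keeps writing to the same open file instead of following the renamed one:

    /var/log/glusterfs/*.log {
        daily
        rotate 7
        copytruncate
        compress
        missingok
        notifempty
    }

or alternatively rotating through gluster itself with "gluster volume log rotate <volname>".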
It seems we could start using it for production storage from this moment :) The whole company will use it, some volumes distributed and some replicated. Thanks for a great product.
2014-08-27 16:03 GMT+03:00 Roman <romeo.r@xxxxxxxxx>:
Installed the new packages. Will run some tests tomorrow. Thanks.
2014-08-27 14:10 GMT+03:00 Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>:
That is great, Kaleb. Please notify semiosis as well, in case he has not fixed it yet.
On 08/27/2014 04:38 PM, Kaleb KEITHLEY wrote:
On 08/27/2014 03:09 AM, Humble Chirammal wrote:
...
----- Original Message -----
| From: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
| To: "Humble Chirammal" <hchiramm@xxxxxxxxxx>
| Cc: "Roman" <romeo.r@xxxxxxxxx>, gluster-users@xxxxxxxxxxx, "Niels de Vos" <ndevos@xxxxxxxxxx>
| Sent: Wednesday, August 27, 2014 12:34:22 PM
| Subject: Re: libgfapi failover problem on replica bricks
|
|
| On 08/27/2014 12:24 PM, Roman wrote:
| > root@stor1:~# ls -l /usr/sbin/glfsheal
| > ls: cannot access /usr/sbin/glfsheal: No such file or directory
| > Seems like not.
| Humble,
| Seems like the binary is still not packaged?
Checking with Kaleb on this.
| >>> |
| >>> | Humble/Niels,
| >>> | Do we have debs available for 3.5.2? In 3.5.1
| >>> | there was a packaging
| >>> | issue where /usr/bin/glfsheal was not packaged along
| >>> | with the deb. I
| >>> | think that should be fixed now as well?
| >>> |
| >>> Pranith,
| >>>
| >>> The 3.5.2 packages for Debian are not available yet. We
| >>> are coordinating internally to get them processed.
| >>> I will update the list once they are available.
| >>>
| >>> --Humble
glfsheal isn't in our 3.5.2-1 DPKGs either. We (meaning I) started with the 3.5.1 packaging bits from Semiosis. Perhaps he fixed 3.5.1 after giving me his bits.
I'll fix it and spin 3.5.2-2 DPKGs.
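Once the new packages are built, the fix can be sanity-checked with something like (the package filename below is just an example):

    dpkg -c glusterfs-server_3.5.2-2_amd64.deb | grep glfsheal

which should list the binary if it made it into the package.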
Pranith
--
Kaleb
--
Best regards,
Roman.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users