Re: Performance Translators' Stability and Usefulness


 



Geoff Kassel wrote:

>> Sounds like a lot of effort and micro-downtime compared to a migration
>> to something else. Have you explored other options like PeerFS, GFS and
>> SeznamFS? Or NFS exports with failover rather than Gluster clients, with
>> Gluster only server-to-server?

> These options are not production ready (as I believe has been pointed out already to the list) for what I need;

What makes PeerFS or SeznamFS less production-ready than Gluster?

> or in the case of NFS, defeating the point of redundancy in the first place.

You can fail over NFS servers. If the servers themselves are mirrored (e.g. with DRBD) and/or share a file system, NFS should handle the service IP being migrated between them. I've found this tends to work better with NFS over UDP, provided your network doesn't normally suffer packet loss.
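To illustrate the sort of setup I mean, here is a minimal sketch; the IP, interface, path and subnet are made up, and you'd normally let Heartbeat/Pacemaker move the IP rather than doing it by hand:

```shell
# /etc/exports, identical on both NFS servers (storage mirrored via DRBD
# or a shared FS). A matching fsid= on both servers lets clients keep
# their file handles across a failover instead of getting ESTALE.
#
#   /srv/export  192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)

# On failover, move the floating service IP to the surviving server:
ip addr add 192.168.0.10/24 dev eth0
exportfs -ra

# Clients mount via the floating IP, over UDP:
mount -t nfs -o udp,hard,intr 192.168.0.10:/srv/export /mnt/data
```

With UDP there is no TCP connection state to re-establish, so the client just retransmits its requests at the new server once the IP moves, which is why it tends to fail over more smoothly.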

> (Also, GFS is not compatible with the kernel patchset I need to use.)

How do you mean? GFS1 has been in the vanilla kernel for a while.
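It's easy enough to check whether a given kernel has the relevant support built; a quick sketch (the config path is distro-dependent, and some kernels expose /proc/config.gz instead):

```shell
# Look for GFS/GFS2 and DLM (its lock manager) in the running kernel's config
grep -E 'CONFIG_GFS2?_FS|CONFIG_DLM' /boot/config-"$(uname -r)"
```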

> I have tried AFR on the server side and the client side. Both display similar issues.
>
> An older version of GlusterFS - as buggy as it is for me - is unfortunately still the best option.

Out of interest, what was the last version of Gluster you deemed completely stable?

> (That doesn't mean I can't complain about the lack of progress towards stability and reliability, though :)

Heh - and would you believe I just rebooted one of my root-on-glusterfs nodes and it came up OK, without the manual-intervention bail-out caused by the bug where the first access after mounting fails before things have settled.

>> One of the problems is that some tests in this case are impossible to
>> carry out without having multiple nodes up and running, as a number of
>> bugs have been arising in cases where nodes join/leave or cause race
>> conditions. It would require a distributed test harness, which would be
>> difficult to implement so that it runs on any client that builds the
>> binaries. Just because the test harness doesn't ship with the sources
>> doesn't mean it doesn't exist on a test rig the developers use.

> Okay, so what about the volume of test cases that can be tested without a distributed test harness? I don't see any sign of testing mechanisms for that.

That point is hard to argue against. :)

> And wouldn't it be prudent anyway - given how often the GlusterFS devs do not have access to the platform with the reported problem - to provide this harness so that people can generate the test results the devs need for themselves? (Giving a complete stranger from overseas root access is a legal minefield for those who have to work with data held in confidence.)

Indeed. And shifting test-case VM images tends to be impractical (even though I have provided both to the gluster developers in the past for specific error-case analysis).

> It's been my impression, though, that the relevant bugs are not heisenbugs or race conditions.

I don't agree on that particular point: the last outstanding bug I see with any significant frequency in my use case is having to wait a few seconds for the FS to settle after mounting before doing anything, or the operation fails. And to top it off, I've just had it succeed without the wait. That seems quite heisenbuggy/racey to me. :)

> (I'm judging that on the speed of the follow-up patch, by the way - race conditions notoriously can take a long time to track down.)

That doesn't help - the first-access-settle-time bug has been around for a very long time. ;)
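For what it's worth, my workaround for the settle-time bug is just a retry loop on the first access after mounting. A hypothetical sketch (the mountpoint and retry count are made up):

```shell
# retry N CMD...: run CMD until it succeeds, up to N attempts,
# sleeping 1s between tries; returns non-zero if all attempts fail.
retry() {
    tries=$1
    shift
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}

# After mounting, wait for the FS to settle before touching it:
#   mount /mnt/gluster && retry 10 ls /mnt/gluster
```

Crude, but it papers over the "first access fails before things have settled" behaviour until it is properly fixed.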

Gordan



