Re: Performance Translators' Stability and Usefulness


 



Geoff Kassel wrote:

(If it weren't for the fact that migrating to another solution would cause considerable, business-destroying downtime for my client base, I would have done so quite some time ago.)

There is an argument somewhere in there about deploying things that weren't production-ready at the time of deployment. But that's a different story.

All I see instead is this constant drive towards new features, with little to no signs that functionality that should be complete by now is actually so.

I can understand your point of view, but at the same time I'm assuming that the feature expansion is being done at the request of the paying customers they have, whose priorities and use cases may well be sufficiently different that the issues we are running into aren't as critical for them.

AFR is *the* key feature of GlusterFS in my mind - and the only point (I feel) for using it. Yet it's still this unstable after two plus years of development?

It is the only feature of it that I am looking into using, too, but it is plausible that somebody with a large distributed server farm focused on performance rather than redundancy may see it differently.

I have been using GlusterFS since the v1.3.x days, and I have yet to see
a version since then that doesn't crash at least once a day from just
load on even the simplest configurations.
I wouldn't say daily, but I have occasionally seen lock-ups recently during multiple glusterfs resyncs (separate volumes) on the new/target machine. I have only seen it once, however; forcefully killing the processes fixed it and it didn't recur. I have a suspicion that this was related to the mounting order. I have seen weirdness happen when changing the server order cluster-wide, and when servers rejoin the cluster.

Well, I see one to two crashes nightly, when I rotate logs or perform backups that are stored on the GlusterFS exported drive. (It's hit and miss which processes run to completion on the first go before the crash, which should never be an issue with a reliable storage medium.)

There's a strong argument there for implementing syslog-based logging.
How do you do log rotation, BTW? Do you have to issue a HUP, or restart the glusterfsd process? As I said, I have seen issues with restarting server processes in different orders. Sometimes things will lock up and the glusterfsd process has to be killed and restarted. It seems to work when servers come up in priority order, but other orderings can be hit and miss.
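For what it's worth, a copy-and-truncate rotation would sidestep the HUP/restart question entirely. A minimal sketch in Python, assuming only that glusterfsd keeps a single log file open (the path below is a hypothetical placeholder), might look something like this:

    # Copy-and-truncate log rotation: keep a dated copy, then truncate the
    # live log in place so the daemon keeps writing to its open descriptor.
    # Note: as with logrotate's copytruncate, anything written between the
    # copy and the truncate is lost.
    import os, shutil, time

    LOG = "/var/log/glusterfs/glusterfsd.log"   # hypothetical log location

    def rotate(log_path):
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(log_path, "%s.%s" % (log_path, stamp))
        with open(log_path, "r+") as f:
            f.truncate(0)

    if os.path.exists(LOG):
        rotate(LOG)

That still leaves open whether glusterfsd reopens its log on HUP, which only the developers can answer.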

The only common factor identifiable is higher-than-average I/O load.

I don't run any performance translators, because they make the situation much worse. It's just a straight AFR/posix-locks/dataspace/namespace setup, as I've posted quite a few times before.

Why do you use namespaces for straight AFR?

I've had to institute server scripting to restart GlusterFS and any process that touches replicated files (i.e. nearly everything running on my servers) because of these crashes, to try to minimise the downtime to my clients.
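For illustration, a watchdog of that sort usually boils down to something like the sketch below (Python; the mount point, fstab assumption and service names are hypothetical placeholders, not the actual script being described):

    # Minimal watchdog sketch: if the GlusterFS mount stops responding, remount
    # it and restart the services that depend on it. A hung FUSE mount tends to
    # block forever, so the liveness check is run under the coreutils 'timeout'
    # command rather than directly.
    import subprocess

    MOUNT = "/mnt/glusterfs"                   # hypothetical mount point (fstab entry assumed)
    DEPENDENT_SERVICES = ["apache2", "mysql"]  # hypothetical examples

    def mount_alive(path, timeout=10):
        try:
            subprocess.check_call(["timeout", str(timeout), "ls", path],
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
            return True
        except subprocess.CalledProcessError:
            return False

    if not mount_alive(MOUNT):
        subprocess.call(["umount", "-l", MOUNT])   # lazily detach the dead mount
        subprocess.call(["mount", MOUNT])          # remount from fstab
        for svc in DEPENDENT_SERVICES:
            subprocess.call(["/etc/init.d/%s" % svc, "restart"])

Run from cron every minute or so, that is roughly the shape of it.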

Sounds like a lot of effort and micro-downtime compared to a migration to something else. Have you explored other options like PeerFS, GFS and SeznamFS? Or NFS exports with failover rather than Gluster clients, with Gluster only server-to-server?

Yes, that was bad; 2.0.2 is pretty good. Sure, there is still that annoying settle-time bug where the first attempt to access the file system immediately after mounting consistently fails (the time window is pretty tight, but if you script it, it is 100% reproducible). Other than that, though, I'm finding that the issues I had with it have been resolved.
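The kind of script that trips it is along these lines (a sketch in Python; the mount point is a hypothetical placeholder with an fstab entry assumed, and the delay is arbitrary):

    # Mount, then touch the file system immediately: the first access right
    # after mounting is what reportedly fails, and a short retry succeeds.
    import os, subprocess, time

    MOUNT = "/mnt/glusterfs"

    subprocess.check_call(["mount", MOUNT])
    try:
        os.listdir(MOUNT)          # immediate access
        print("first access OK")
    except OSError as e:
        print("first access failed: %s" % e)
        time.sleep(2)              # arbitrary settle delay
        os.listdir(MOUNT)
        print("retry OK")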

After two major data integrity bugs in two major releases in a row, I'm taking very much a wait-and-see attitude with any and all GlusterFS releases.

My use case is somewhat unusual because I'm working on shared-rootfs clusters, and I need WAN functionality, which cripples solutions like DRBD+GFS. But for data-only storage, there are probably alternatives out there. I'm intending to implement SeznamFS for bulk data, for example, because its MySQL-like round-robin file replication distributes the bandwidth usage much more effectively (at the expense of having no locking capability, and of the replication ring being cut off if any one node fails). I'll probably stick with Gluster for /home for now, because X and/or KDE seemed to fail to start when /home was on SeznamFS.

What exactly do you mean by "regression test"? Regression testing means putting in test cases to check for the bugs that were previously discovered and fixed, to make sure a further change doesn't re-introduce them. I haven't seen the test suite, so I have no reason to doubt that there is regression testing being carried out for each release. Perhaps the developers can clarify the situation on the testing?

I meant it in the same sense that you do. I have not seen any framework - automated or otherwise - in the repository or release files to run through tests for previous and/or foreseeable bugs and corner cases.

OK, I haven't actually checked. A "make test" feature listing all bugs by bugzilla ID as it goes through the testing process would go a long way toward providing some quality reassurance.
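To illustrate the idea, a runner like that could be as simple as the following sketch (Python; the bug numbers and test bodies are invented placeholders):

    # "make test" style runner: each regression test is named after the
    # Bugzilla entry it guards against, and results are reported per bug ID.
    import traceback

    def test_bug_1234():
        assert True   # placeholder: would reproduce the conditions of bug 1234

    def test_bug_5678():
        assert True   # placeholder: would reproduce the conditions of bug 5678

    def main():
        failures = 0
        for name in sorted(n for n in globals() if n.startswith("test_bug_")):
            bug_id = name.rsplit("_", 1)[-1]
            try:
                globals()[name]()
                print("bug #%s: PASS" % bug_id)
            except Exception:
                failures += 1
                print("bug #%s: FAIL" % bug_id)
                traceback.print_exc()
        raise SystemExit(1 if failures else 0)

    if __name__ == "__main__":
        main()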

A test to compare cryptographic hashes of files before, after, and during storage/transfer between GlusterFS clients and backends should surely exist if there's any half-serious attempt at regression testing going on.
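Something along these lines would do it for the basic write/read-back case (a sketch in Python; both paths are hypothetical and obviously depend on the volume layout, and checking the backend directly only makes sense for a plain posix storage brick):

    # Write a file through the client mount, then compare checksums of what the
    # client reads back and of what landed in the backend export directory.
    import hashlib, os

    CLIENT_MOUNT = "/mnt/glusterfs"          # hypothetical client-side mount
    BACKEND_DIR = "/data/gluster/export"     # hypothetical server-side export

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    payload = os.urandom(8 << 20)            # 8 MB of random data
    expected = hashlib.sha256(payload).hexdigest()

    with open(os.path.join(CLIENT_MOUNT, "integrity.bin"), "wb") as f:
        f.write(payload)

    assert sha256(os.path.join(CLIENT_MOUNT, "integrity.bin")) == expected
    assert sha256(os.path.join(BACKEND_DIR, "integrity.bin")) == expected
    print("checksums match on client and backend")

The "during transfer" part is harder, but even this much would catch the sort of corruption being discussed.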

One of the problems is that some tests in this case are impossible to carry out without having multiple nodes up and running, as a number of bugs have been arising when nodes join or leave, or from race conditions. That would require a distributed test harness, which would be difficult to implement in a way that runs on any client that builds the binaries. Just because the test harness doesn't ship with the sources doesn't mean it doesn't exist on a test rig the developers use.

Surely, though, if tests like these existed and were being used, after the debacle with 2.0.0, they would have picked up at least the issue reported in 2.0.1 before release?

That depends. There are always going to be borderline or unusual use cases that wouldn't have been foreseen. For example, I tripped several issues with my use of it for the root file system that would have been unlikely to arise for most people. The oddest one was that glusterfsd wouldn't start without /tmp existing and being writable, even though it doesn't seem to keep anything in there after startup. I only twigged what was happening when I was debugging it with Harha: for him the mounting worked when he mounted under /tmp, while I was mounting under /mnt. He thought /mnt had some kind of weird permissions issue, but then I realised that I didn't actually have /tmp on the initrd bootstrap where this was being done on my setup. To this day I haven't seen an explanation of why /tmp is required, or whether it is a fuse requirement, a gluster requirement, or something else entirely.

That leads me to ask - where are the unit tests that are meant to exist, according to http://www.gluster.org/docs/index.php/GlusterFS_QA? If they exist, why are tests like these (apparently) still not part of them?

As I explained before, you can't sensibly come up with QA tests for timing-based issues and race conditions, because those will always be heisenbuggy to some extent. I'm not saying such tests shouldn't exist; they could at least hammer the system for extended periods in ways known to trigger the known issues. But that only counts statistically - it won't provide conclusive evidence of the absence of the bug.
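That statistical hammering is itself easy enough to sketch (Python; the mount point, worker count and duration are arbitrary placeholders):

    # Several worker processes create, rewrite, rename and delete files on the
    # mount for a fixed period; any I/O error or daemon crash during the run
    # counts as a failure. Longer runs only raise the odds of hitting a race,
    # they never prove its absence.
    import multiprocessing, os, random, time

    MOUNT = "/mnt/glusterfs"   # hypothetical mount point
    DURATION = 600             # seconds
    WORKERS = 8

    def hammer(worker_id):
        deadline = time.time() + DURATION
        n = 0
        while time.time() < deadline:
            path = os.path.join(MOUNT, "hammer-%d-%d" % (worker_id, n % 50))
            with open(path, "wb") as f:
                f.write(os.urandom(random.randint(1, 1 << 16)))
            if random.random() < 0.5:
                os.rename(path, path + ".r")
                os.unlink(path + ".r")
            n += 1

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=hammer, args=(i,)) for i in range(WORKERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("worker exit codes:", [p.exitcode for p in procs])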

Gordan



