On 10/18/2012 09:39 AM, Corey Kovacs wrote:
> My experiences so far were sort of disappointing until I found out a few key
> items about GlusterFS which I'd taken for granted.
>
> 1. Stripes are not what you might think. The I/O for a stripe does _not_ fan
> out as in a raid card. It's an unfortunate use of the term only describing and
> allowing you to store files larger than the max size of a single brick.

I'm not sure what you mean by "don't fan out" because stripe *will* issue multiple requests in parallel. It's just not that beneficial most of the time, because the overhead of splitting and recombining writes tends to overwhelm the advantage of parallelism. Some people might have different results, especially on faster networks, but we don't push it as a general-purpose performance enhancer because it doesn't work that way for most people.

> 2. I/O is done in sync mode so cache coherency isn't an issue and to ensure the
> integrity of the data written.

Generally true only for metadata - not for data. We do honor O_SYNC and such when we see them, of course, but otherwise we're quite happy to buffer writes in write-behind, cache reads in io-cache, etc.

> 3. The performance of a distributed volume far exceeds that of a stripe for my
> use. Again, depends on the size of the bricks.

...and the size of the I/O requests, and a bunch of other things.
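
To make the O_SYNC point concrete, here's a minimal sketch of what it looks like from the application side. Nothing in it is GlusterFS-specific, and the mount-point paths are made up for illustration; it's just ordinary POSIX open/write showing the difference between letting writes get buffered (write-behind on the client, the kernel page cache, etc.) and asking for synchronous behavior with O_SYNC or an explicit fsync().

/*
 * Sketch only: hypothetical paths under an assumed /mnt/gluster mount.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char buf[] = "some data\n";

    /* Ordinary open: writes may sit in client-side and kernel buffers
     * until fsync() or close-time flushing. */
    int fd = open("/mnt/gluster/buffered.txt",
                  O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open buffered"); return 1; }
    if (write(fd, buf, strlen(buf)) < 0) perror("write buffered");
    if (fsync(fd) < 0) perror("fsync");   /* explicit flush when you need it */
    close(fd);

    /* O_SYNC open: each write() returns only after the data has been
     * written out synchronously, so the caller pays the latency up front. */
    int sfd = open("/mnt/gluster/synced.txt",
                   O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (sfd < 0) { perror("open O_SYNC"); return 1; }
    if (write(sfd, buf, strlen(buf)) < 0) perror("write O_SYNC");
    close(sfd);

    return 0;
}

The first file sees the buffered path described above; the second forces the synchronous behavior that Corey assumed was the default everywhere.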