Cool; Amar was, of course, right about protocol/server needing to be the
last volume in the chain. That's why I wasn't having any problems with
performance translators on the server; they just weren't being used.

Using stat-prefetch on the server seems easily fatal; just ls or du a
subdirectory and the glusterfsd processes die instantly. I'm not sure
stat-prefetch really makes sense on the server anyway, though; it seems
to work fine on the client.

Read-ahead does, of course, have the memory leak on the server as well as
on the client.

I don't see any issues with write-behind, and happily, io-threads seems to
work just fine on the server side.
Thanks,
Brent
On Thu, 8 Mar 2007, Anand Babu wrote:
,----[ Amar S. Tumballi writes: ]
| On Thu, Mar 08, 2007 at 12:52:35PM -0500, Brent A Nelson wrote:
| > Do I chain the performance translators for the server the same way
| > as for the client? E.g.:
| >
| > volume server
| > type protocol/server
| > subvolumes share0 share1 share2 share3 share4 share5 share6
| > share7 share8 share9 share10 share11 share12 share13 share14
| > share15
| > ...
| > end-volume
|
| you can't have more than one subvolumes in protocol/server
| xlator. The clustering ability is there only within
| cluster/{unify,stripe,afr} xlators.
`----
protocol/server accepts multiple subvolumes. That is how you export
multiple volumes.
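
For instance, a spec along these lines (names and paths made up) exports
two bricks from one server process:

volume share0
  type storage/posix
  option directory /export/share0
end-volume

volume share1
  type storage/posix
  option directory /export/share1
end-volume

# a single protocol/server exports both bricks
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes share0 share1
  # 1.3-style auth; allow everyone for testing
  option auth.ip.share0.allow *
  option auth.ip.share1.allow *
end-volume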
--
Anand Babu
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]