Here are some more memory-usage data points (the process lines are top output; the columns are PID, USER, PR, NI, VIRT, RES, SHR, S, %CPU, %MEM, TIME+ and COMMAND):
client -> unify (alu) -> io-threads -> read-ahead -> write-behind:
6697 root 15 0 369m 295m 876 S 45 14.6 3:10.13 [glusterfs]
client -> unify (alu) -> read-ahead -> write-behind:
7218 root 17 0 476m 464m 844 R 85 23.0 41:15.20 [glusterfs]
client -> unify (alu) -> write-behind:
8950 root 16 0 410m 398m 824 S 44 19.7 4:01.94 [glusterfs]
client -> unify (alu):
9232 root 16 0 401m 389m 800 S 35 19.3 3:54.52 [glusterfs]
client -> unify (rr):
9446 root 15 0 397m 385m 788 R 30 19.1 4:01.87 [glusterfs]
client (1):
9744 root 16 0 66212 52m 696 S 3 2.6 3:32.37 [glusterfs]
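For reference, written out as a client spec file, the fullest of the stacks above (client -> unify (alu) -> io-threads -> read-ahead -> write-behind) looks roughly like the sketch below, with the protocol/client volumes at the bottom and write-behind ending up as the topmost volume. Hostnames, volume names, the namespace export and the alu options are placeholders here, and the performance xlators are left at their defaults:

  volume brick1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1            # placeholder host
    option remote-subvolume brick         # placeholder export name
  end-volume

  # brick2, brick3 and brick4 are defined the same way,
  # pointing at the other three exports.

  volume brick-ns
    type protocol/client
    option transport-type tcp/client
    option remote-host server1            # placeholder host
    option remote-subvolume ns            # placeholder namespace export
  end-volume

  volume unify0
    type cluster/unify
    option scheduler alu                  # "rr" for the round-robin run
    option alu.order disk-usage:write-usage   # illustrative; the real config sets its own order/limits
    option namespace brick-ns
    subvolumes brick1 brick2 brick3 brick4
  end-volume

  volume iot
    type performance/io-threads
    subvolumes unify0
  end-volume

  volume ra
    type performance/read-ahead
    subvolumes iot
  end-volume

  volume wb
    type performance/write-behind
    subvolumes ra
  end-volume

The shorter stacks in the list are the same spec with the corresponding top volumes removed.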
Note that even the single-client glusterfs starts out very modestly
(with about 14m virt and 1.2m res).
We can safely rule out the performance xlators, as well as the
schedulers.
As we see, even a lone client seems to demonstrate a (small) memory leak.
Since I normally unify 4 volumes plus a namespace, that leak might be
multiplied by 5. I don't know for sure if that's enough to be
responsible on its own, or if there's also a leak in unify itself.
I hope that helps to narrow it down somewhat.
Rhesa
(1) The glusterfs client spec file for this run consisted of a single
protocol/client volume, without any other xlators.
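For completeness, that minimal spec is roughly just the following (host and export name are placeholders):

  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host server1        # placeholder host
    option remote-subvolume brick     # placeholder export name
  end-volume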