Hi, all.

I set up a DHT system and sent a HUP signal to the client to trigger a reconfiguration, but I found that the number of established TCP connections increased by the number of bricks (i.e. the number of glusterfsd processes).

$ ps -ef | grep glusterfs
root 8579 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server.vol -l /home/huz/dht/server.log -L TRACE
root 8583 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server2.vol -l /home/huz/dht/server2.log -L TRACE
root 8587 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server3.vol -l /home/huz/dht/server3.log -L TRACE
root 8595 1 1 11:28 ? 00:00:00 glusterfs -f /home/huz/dht/client.vol -l /home/huz/dht/client.log -L TRACE /home/huz/mnt

$ sudo netstat -ntp | grep glusterfs
tcp 0 0 127.0.0.1:6998 127.0.0.1:1023 ESTABLISHED 8579/glusterfsd
tcp 0 0 127.0.0.1:1021 127.0.0.1:7000 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:7000 127.0.0.1:1021 ESTABLISHED 8587/glusterfsd
tcp 0 0 127.0.0.1:1023 127.0.0.1:6998 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:6999 127.0.0.1:1022 ESTABLISHED 8583/glusterfsd
tcp 0 0 127.0.0.1:1022 127.0.0.1:6999 ESTABLISHED 8595/glusterfs

huz@furutuki:~/dht$ sudo kill -s HUP 8595
huz@furutuki:~/dht$ sudo netstat -ntp | grep glusterfs
tcp 0 0 127.0.0.1:6998 127.0.0.1:1023 ESTABLISHED 8579/glusterfsd
tcp 0 0 127.0.0.1:1021 127.0.0.1:7000 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:6999 127.0.0.1:1019 ESTABLISHED 8583/glusterfsd
tcp 0 0 127.0.0.1:7000 127.0.0.1:1021 ESTABLISHED 8587/glusterfsd
tcp 0 0 127.0.0.1:1018 127.0.0.1:7000 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:6998 127.0.0.1:1020 ESTABLISHED 8579/glusterfsd
tcp 0 0 127.0.0.1:1023 127.0.0.1:6998 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:7000 127.0.0.1:1018 ESTABLISHED 8587/glusterfsd
tcp 0 0 127.0.0.1:1019 127.0.0.1:6999 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:6999 127.0.0.1:1022 ESTABLISHED 8583/glusterfsd
tcp 0 0 127.0.0.1:1022 127.0.0.1:6999 ESTABLISHED 8595/glusterfs
tcp 0 0 127.0.0.1:1020 127.0.0.1:6998 ESTABLISHED 8595/glusterfs

Looking at the example above, the number of TCP connections held by the client process increased by 3 (one per brick). I wonder whether this is normal.

Furthermore, I checked the client log and found entries like this:

[2011-04-07 11:28:06.92451] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client0: breaking reconnect chain
[2011-04-07 11:28:06.92596] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client1: breaking reconnect chain
[2011-04-07 11:28:06.92648] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client2: breaking reconnect chain
[2011-04-07 11:29:05.101120] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client0: breaking reconnect chain
[2011-04-07 11:29:05.101254] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client1: breaking reconnect chain
[2011-04-07 11:29:05.101307] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client2: breaking reconnect chain

Apparently there are two graphs in the system (GlusterFS 3.1.3 prints the graph id in the gf_log output). My understanding was that there should be only one graph, consisting of the xlators configured in the .vol file. It looks like, in response to the HUP signal, GlusterFS set up a new graph and initialized new TCP connections, which explains the result above.

My question is: why do there need to be two (or more) graphs?
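
In case it helps anyone reproduce this, below is a rough shell sketch of how I watch the client's per-process connection count around the HUP. The count helper is just for illustration; 8595 is the client PID from the listing above, so substitute your own.

# count ESTABLISHED TCP sockets owned by the given glusterfs PID
count() { sudo netstat -ntp | grep ESTABLISHED | grep -c "$1/glusterfs"; }

count 8595             # 3 before the reload, one per brick
sudo kill -s HUP 8595
sleep 2                # give the new graph time to connect
count 8595             # 6 afterwards, while both graphs are alive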