As of now GlusterD maintains its own portmap table and is responsible for allocating ports for services such as brick processes and snapd. The flow with the portmap goes like this: when a volume start is triggered, GlusterD checks whether the brick was already assigned a port earlier; if so, the same port is passed to the brick process, otherwise GlusterD picks a free port from the portmap table.

Now say the node reboots. GlusterD first starts the daemons, followed by the brick processes. Since each brick process tries to bind to its persisted port, there is no guarantee that the port hasn't already been consumed by some other application (gluster or not), and this is exactly what we noticed in one of the BZs [1]. We hit this very frequently when the number of brick processes goes high. I think bringing up a process that binds to a persisted port is not a good idea, since it is prone to failure when other processes (clients) contend for the same ports.

I've sent a patch [2] which follows the same approach snapd currently uses for its port: ports will continue to be persisted, but on every brick restart a fresh port will be allocated for the brick. The only reason for persisting the brick's port is so that it can be removed from the portmap in case the brick wasn't shut down gracefully and pmap_registry_remove () was never invoked. We'd also need another patch [3] to make this work, as currently we don't mark the port as free in pmap_registry_remove.

Please note that [2] doesn't fully eliminate the possibility of another process stealing the port allocated by GlusterD, as there is still a small time window between GlusterD allocating the port and the brick process binding to it. As a complete/long-term solution, we think GlusterD has to give up managing port allocation altogether; the port should be picked by the brick/daemon process itself, and GlusterD should only do the book-keeping of those ports.

Your comments/suggestions are more than welcome here :)

~Atin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1322805
[2] http://review.gluster.org/#/c/13865/
[3] http://review.gluster.org/#/c/10785/
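P.S. To make the long-term suggestion a bit more concrete, below is a minimal standalone sketch in plain C. This is not the actual GlusterD/pmap code and the "report back to glusterd" part is only an assumption about how it could be wired up; it just shows the basic idea of the daemon binding to an ephemeral port (port 0) chosen by the kernel and then discovering it with getsockname(), so that GlusterD never has to hand out a port that another process might grab first.

/* Sketch only: the brick/daemon picks its own port; GlusterD would merely
 * book-keep the value the daemon reports back. Not GlusterD source code. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int
main (void)
{
        int                fd   = -1;
        struct sockaddr_in addr = {0};
        socklen_t          len  = sizeof (addr);

        fd = socket (AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror ("socket");
                return 1;
        }

        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl (INADDR_ANY);
        addr.sin_port        = htons (0);   /* let the kernel pick a free port */

        if (bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0) {
                perror ("bind");
                close (fd);
                return 1;
        }

        if (getsockname (fd, (struct sockaddr *) &addr, &len) < 0) {
                perror ("getsockname");
                close (fd);
                return 1;
        }

        /* In the real design the daemon would now register this port with
         * GlusterD (e.g. over the existing RPC), instead of GlusterD
         * pre-allocating it from its portmap table. */
        printf ("bound to port %d, ready to report back to glusterd\n",
                ntohs (addr.sin_port));

        close (fd);
        return 0;
}

With something along these lines there is no window at all between allocation and bind, which is the race that [2] only narrows but cannot close.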