So I have an answer to this problem now. I was not aware of this before, but
only a limited set of (read-only) commands is supported with the --remote-host
option in the CLI. Here is the list:

    [GLUSTER_CLI_LIST_FRIENDS]  = { "LIST_FRIENDS",  GLUSTER_CLI_LIST_FRIENDS,  glusterd_handle_cli_list_friends, NULL, 0, DRC_NA},
    [GLUSTER_CLI_UUID_GET]      = { "UUID_GET",      GLUSTER_CLI_UUID_GET,      glusterd_handle_cli_uuid_get,     NULL, 0, DRC_NA},
    [GLUSTER_CLI_DEPROBE]       = { "FRIEND_REMOVE", GLUSTER_CLI_DEPROBE,       glusterd_handle_cli_deprobe,      NULL, 0, DRC_NA},
    [GLUSTER_CLI_GET_VOLUME]    = { "GET_VOLUME",    GLUSTER_CLI_GET_VOLUME,    glusterd_handle_cli_get_volume,   NULL, 0, DRC_NA},
    [GLUSTER_CLI_GETWD]         = { "GETWD",         GLUSTER_CLI_GETWD,         glusterd_handle_getwd,            NULL, 1, DRC_NA},
    [GLUSTER_CLI_STATUS_VOLUME] = { "STATUS_VOLUME", GLUSTER_CLI_STATUS_VOLUME, glusterd_handle_status_volume,    NULL, 0, DRC_NA},
    [GLUSTER_CLI_LIST_VOLUME]   = { "LIST_VOLUME",   GLUSTER_CLI_LIST_VOLUME,   glusterd_handle_cli_list_volume,  NULL, 0, DRC_NA},
    [GLUSTER_CLI_MOUNT]         = { "MOUNT",         GLUSTER_CLI_MOUNT,         glusterd_handle_mount,            NULL, 1, DRC_NA},
    [GLUSTER_CLI_UMOUNT]        = { "UMOUNT",        GLUSTER_CLI_UMOUNT,        glusterd_handle_umount,           NULL, 1, DRC_NA},
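
To illustrate with the hostnames from your mails below: a command that maps to
one of the actors above works through --remote-host, while peer probe has no
entry in this table, which is why your probe never took effect on the remote
glusterd:

    # pool list maps to LIST_FRIENDS, which is in the table, so this works:
    gluster --remote-host=6b4bae6cd3da.vtas.local pool list

    # peer probe has no actor in this table, so the remote glusterd never
    # executes it:
    gluster peer probe ca8404991844.vtas.local --remote-host=6b4bae6cd3da.vtas.local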
HTH,
Atin

On 06/06/2016 02:51 PM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
> Hi,
>
> Ok, thank you very much~~
>
> Thanks,
> Br,
> Jet
>
> -----Original Message-----
> From: Atin Mukherjee [mailto:amukherj@xxxxxxxxxx]
> Sent: Monday, June 06, 2016 5:07 PM
> To: Jiang, Jet (Nokia - CN/Hangzhou) <jet.jiang@xxxxxxxxx>; Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
> Cc: pranithk@xxxxxxxxxxx; Madappa, Kaushal <kmadappa@xxxxxxxxxx>
> Subject: Re: Some error appear when create glusterfs volume
>
> One more thing to add here: using the --remote-host option at the CLI is
> not always safe when you run a heterogeneous cluster, since our CLI code
> is not backward compatible. However, I'll look into this issue and update
> you.
>
> ~Atin
>
> On 06/06/2016 02:19 PM, Atin Mukherjee wrote:
>> I am looking into it. This does look like a bug as it stands. Will update.
>>
>> ~Atin
>>
>> On 06/06/2016 01:49 PM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
>>> Hi,
>>> Sorry for the late response. The root cause of the issue was a wrong
>>> Kubernetes configuration.
>>>
>>> One more question, about the --remote-host option.
>>> When I use it to query glusterfs info, it works fine, like the following:
>>>
>>> [root@482cde6d9191 deploy]# gluster --remote-host=6b4bae6cd3da.vtas.local pool list
>>> UUID                                    Hostname        State
>>> b3d06f4e-6c70-4ce0-aeaa-5fd73824755f    localhost       Connected
>>> [root@482cde6d9191 deploy]#
>>>
>>> But when I use it to probe a peer, it does not seem to take effect:
>>>
>>> [root@482cde6d9191 deploy]# gluster peer probe ca8404991844.vtas.local --remote-host=6b4bae6cd3da.vtas.local
>>> [root@482cde6d9191 deploy]#
>>>
>>> On the host ca8404991844.vtas.local, there is no gluster cluster.
>>>
>>> Does --remote-host only support query commands?
>>> My gluster version is 3.7.11.
>>>
>>> Thanks,
>>> Br,
>>> Jet
>>>
>>> -----Original Message-----
>>> From: Atin Mukherjee [mailto:amukherj@xxxxxxxxxx]
>>> Sent: Tuesday, May 24, 2016 3:03 PM
>>> To: Jiang, Jet (Nokia - CN/Hangzhou) <jet.jiang@xxxxxxxxx>; Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
>>> Cc: pranithk@xxxxxxxxxxx; Madappa, Kaushal <kmadappa@xxxxxxxxxx>
>>> Subject: Re: Some error appear when create glusterfs volume
>>>
>>> On 05/24/2016 11:42 AM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
>>>> Hi,
>>>> Thanks for your quick response.
>>>> I tried as you suggested but it failed again.
>>>> Another question: what causes the related RPC errors?
>>> Does gluster peer status still show the other node as connected? If so,
>>> then something is weird. Along with genuine RPC failures (in case a node
>>> loses its connection to the other peers), you may also see this error
>>> message if the CLI times out.
>>>
>>> Could you install the glusterfs-debuginfo package, attach gdb to the
>>> running glusterd process after issuing the volume create command, and
>>> share the backtrace? I just want to see why the command is taking so
>>> much time.
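>>>
>>> Something like this should do (a rough sketch; run it on the node where
>>> you issue the command, while the command is still hanging):
>>>
>>>     # attach to the running glusterd; glusterfs-debuginfo makes the
>>>     # symbols in the backtrace readable
>>>     gdb -p $(pidof glusterd)
>>>
>>>     # inside gdb: dump the backtraces of all threads, then detach
>>>     (gdb) thread apply all bt
>>>     (gdb) detach
>>>     (gdb) quit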
>>>>
>>>> Thanks,
>>>> Br,
>>>> Jet
>>>>
>>>> -----Original Message-----
>>>> From: Atin Mukherjee [mailto:amukherj@xxxxxxxxxx]
>>>> Sent: Tuesday, May 24, 2016 1:51 PM
>>>> To: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>; Jiang, Jet (Nokia - CN/Hangzhou) <jet.jiang@xxxxxxxxx>
>>>> Cc: pranithk@xxxxxxxxxxx; Madappa, Kaushal <kmadappa@xxxxxxxxxx>
>>>> Subject: Re: Some error appear when create glusterfs volume
>>>>
>>>> I actually replied to this email, which was sent to me earlier this
>>>> morning:
>>>>
>>>> "This indicates that when the volume create was issued, the other node
>>>> went down in between, as there are some RPC failures in the log. Since
>>>> the RPC timeout is 600 seconds and the CLI timeout is 120 seconds, the
>>>> command timed out. Use 'gluster volume create ... force' to recreate the
>>>> volume once the other node is up."
>>>>
>>>> On 05/24/2016 11:16 AM, Pranith Kumar Karampuri wrote:
>>>>> Hi Jiang,
>>>>> I am happy to see people from Nokia sending some good patches (Olia
>>>>> found two good leaks :-)). I have added both glusterd maintainers to
>>>>> the thread so that they know about this issue.
>>>>>
>>>>> Hope to see more collaboration from Nokia :-)
>>>>>
>>>>> Pranith
>>>>>
>>>>> On Tue, May 24, 2016 at 7:05 AM, Jiang, Jet (Nokia - CN/Hangzhou)
>>>>> <jet.jiang@xxxxxxxxx> wrote:
>>>>>
>>>>> Hi,
>>>>> Sorry to interrupt you. I hit an issue when creating a GlusterFS volume.
>>>>>
>>>>> I have two containers and peered them successfully:
>>>>>
>>>>> [root@test33 brick]# gluster pool list
>>>>> UUID                                    Hostname                State
>>>>> 4fefb316-b51c-4089-be3a-e160ab409b7e    test44.vtas.local       Connected
>>>>> 23ed0e26-89aa-440d-9ff3-0f12d5940410    localhost               Connected
>>>>> [root@test33 brick]# gluster peer status
>>>>> Number of Peers: 1
>>>>>
>>>>> Hostname: test44.vtas.local
>>>>> Uuid: 4fefb316-b51c-4089-be3a-e160ab409b7e
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> But when I create a GlusterFS volume, an error appears, as follows:
>>>>>
>>>>> [root@test33 brick]# gluster volume create rep-volume replica 2 test33.vtas.local:/mnt/services/brick/ test44.vtas.local:/mnt/services/brick/
>>>>> Error : Request timed out
>>>>>
>>>>> In my env, the Docker and GlusterFS versions are as follows:
>>>>>
>>>>> [root@kube2-node3 ~]# docker version
>>>>> Client:
>>>>>  Version:      1.10.3
>>>>>  API version:  1.22
>>>>>  Go version:   go1.5.3
>>>>>  Git commit:   20f81dd
>>>>>  Built:        Thu Mar 10 15:39:25 2016
>>>>>  OS/Arch:      linux/amd64
>>>>>
>>>>> Server:
>>>>>  Version:      1.10.3
>>>>>  API version:  1.22
>>>>>  Go version:   go1.5.3
>>>>>  Git commit:   20f81dd
>>>>>  Built:        Thu Mar 10 15:39:25 2016
>>>>>  OS/Arch:      linux/amd64
>>>>> [root@kube2-node3 ~]#
>>>>>
>>>>> [root@test33 /]# gluster --version
>>>>> glusterfs 3.7.11 built on Apr 18 2016 13:20:48
>>>>> Repository revision: git://git.gluster.com/glusterfs.git
>>>>> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
>>>>> GlusterFS comes with ABSOLUTELY NO WARRANTY.
>>>>> You may redistribute copies of GlusterFS under the terms of the
>>>>> GNU General Public License.
>>>>> [root@test33 /]#
>>>>>
>>>>> In addition, I found some GlusterFS log entries, as follows:
>>>>>
>>>>> [2016-05-23 10:38:19.122434] I [socket.c:3378:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1)
>>>>> [2016-05-23 10:38:19.122472] E [rpcsvc.c:1314:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x1, Program: GlusterD svc cli, ProgVers: 2, Proc: 4) to rpc-transport (socket.management)
>>>>> [2016-05-23 10:38:19.122510] E [MSGID: 106430] [glusterd-utils.c:474:glusterd_submit_reply] 0-glusterd: Reply submission failed
>>>>>
>>>>> I have no idea about this issue. Could you please help when you are
>>>>> free?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jet
>>>>>
>>>>> Best Regards
>>>>>
>>>>> --
>>>>> Pranith

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel