Re: gluster volume info output and some questions/advice

On Sat, Mar 11, 2017 at 3:27 PM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:

Hi all,

let's assume this volume info output:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Server1:/home/gfs/r2_0
Brick2: Server2:/home/gfs/r2_1
Brick3: Server1:/home/gfs/r2_2
Brick4: Server2:/home/gfs/r2_3

Can someone explain to me how to read "Number of Bricks"?

Is the first number the number of "replicated bricks" and the second the replica count?

In this case, are 2 bricks replicated 2 times?

So "Number of Bricks: 2 x 3 = 6" would mean that 2 bricks are replicated 3 times, right?


That's correct. In A x B, A is the number of distribute legs and B is the replica count.
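
To make that concrete with the 2 x 2 = 4 volume above: consecutive bricks in the list form the replica sets, so reading the output top to bottom gives

Replica set 1 (distribute leg 1): Brick1 + Brick2
Replica set 2 (distribute leg 2): Brick3 + Brick4

Each file hashes to exactly one replica set and is then stored on every brick in that set.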
 

Would it be possible to add some indentation to this output? Something like this would be much easier to read and understand:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
  Brick1: Server1:/home/gfs/r2_0
  Brick2: Server2:/home/gfs/r2_1
 
  Brick3: Server1:/home/gfs/r2_2
  Brick4: Server2:/home/gfs/r2_3


We could do that, but we'd have to be 100% sure it doesn't break existing tools/scripts that parse this output.
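
For anything scripted, the safer route is the XML interface the CLI already has, which is meant to stay machine-parseable regardless of cosmetic changes. A minimal sketch (the element names are from memory and may differ across versions):

# Fetch volume info as XML and pull the volume name out with libxml2's xmllint
gluster volume info r2 --xml \
    | xmllint --xpath 'string(//volume/name)' -

If scripts moved to --xml, indentation tweaks to the human-readable output would be much less risky.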

Or, if you don't want blank lines:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
  Brick1: Server1:/home/gfs/r2_0
   -> Brick2: Server2:/home/gfs/r2_1
   -> Brick3: Server3:/home/gfs/r2_2
  Brick4: Server1:/home/gfs/r2_3
   -> Brick5: Server2:/home/gfs/r2_4
   -> Brick6: Server3:/home/gfs/r2_5


This might give the impression that Brick1 is the master and is always responsible for replicating the data to Bricks 2 & 3, which is not true in the current replication model (clients write to all bricks of a replica set directly).
 

Now some questions:
Is SNMP integration planned? An SMUX peer integrated into Gluster would be awesome
for monitoring, and monitoring a storage cluster is mandatory :)
Just a single line to add to snmpd.conf and we'd be ready to go.

Currently, what monitoring options can we use? We have some Zabbix servers here
that use SNMP for monitoring. Is there any workaround with Gluster?

Probably an easier workaround to implement in Gluster would be sending SNMP traps.
When certain events occur, Gluster could automatically send a trap.
I think that would be easier to develop than a whole SNMP SMUX peer, and in that case
a single configuration set on one node would apply cluster-wide.
If you have 50 nodes, you only need to configure a single node
to enable traps on all 50 automatically.

The CLI could accept an SNMP target host to be set cluster-wide, and then all nodes would be able to send traps.

For example:

gluster volume set test-volume snmp-trap-community 'public'
gluster volume set test-volume snmp-trap-server '1.2.3.4'
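
Until something native exists, a rough approximation is possible today with glusterd hook scripts plus net-snmp's snmptrap utility. A sketch, assuming the usual hook location and --volname argument (both may vary by version; the OIDs and addresses below are placeholders, not a real Gluster MIB):

#!/bin/sh
# Hypothetical hook: send an SNMP trap when a volume is started.
# Install as /var/lib/glusterd/hooks/1/start/post/S99snmp-trap.sh
TRAP_SERVER=1.2.3.4   # placeholder trap receiver
COMMUNITY=public      # placeholder community
for arg in "$@"; do
    case "$arg" in
        --volname=*) VOLNAME="${arg#--volname=}" ;;
    esac
done
# Placeholder enterprise OID; a real integration would define a proper MIB
snmptrap -v 2c -c "$COMMUNITY" "$TRAP_SERVER" '' \
    1.3.6.1.4.1.99999.0.1 \
    1.3.6.1.4.1.99999.1.1 s "volume $VOLNAME started"

The downside is that every node needs the script installed, which is exactly the per-node configuration a native cluster-wide option would avoid.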




--

~ Atin (atinm)
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
