Extra work in gluster volume rebalance and odd reporting

Folks,

With gluster 1.4.0 on Fedora 19:

I have a four-node gluster peer group (ir0, ir1, ir2, ir3) with two
distributed filesystems on the cluster.

One (work) is distributed, with bricks on ir0, ir1, and ir2. The other
(home) is distributed and replicated, with replication across the
pairs (ir0, ir3) and (ir1, ir2).
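For reference, the layout above would have been created with something
like the following (the /bricks/* paths are my guesses, not taken from
the actual setup):

```shell
# Distributed volume "work": bricks on ir0, ir1, ir2 only
gluster volume create work \
    ir0:/bricks/work ir1:/bricks/work ir2:/bricks/work

# Distributed-replicated volume "home": with replica 2, bricks are
# paired in the order listed, so (ir0, ir3) and (ir1, ir2) become
# the replica sets
gluster volume create home replica 2 \
    ir0:/bricks/home ir3:/bricks/home \
    ir1:/bricks/home ir2:/bricks/home
```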

When I run gluster volume rebalance home start and gluster volume
rebalance work start, rebalance operations run on every node in the
peer group. For work, a rebalance ran on ir3 even though there is no
work brick on ir3. For home, rebalances ran on ir1 and ir3 and did no
work on those nodes.
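To be explicit, these are the commands in question (a minimal
reproduction sketch; it assumes the two volumes above already exist):

```shell
# Start a rebalance on each volume, then poll status on both
gluster volume rebalance home start
gluster volume rebalance work start
gluster volume rebalance home status
gluster volume rebalance work status
```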

[root@ir0]# gluster volume rebalance home status; gluster volume rebalance work status
          Node  Rebalanced-files      size    scanned  failures       status  run time in secs
     ---------  ----------------  --------  ---------  --------  -----------  ----------------
     localhost             33441     2.3GB     120090         0  in progress          67154.00
           ir2             12878    32.7GB     234395         0    completed          29569.00
           ir3                 0    0Bytes     234367         0    completed           1581.00
           ir1                 0    0Bytes     234367         0    completed           1569.00
volume rebalance: home: success:
          Node  Rebalanced-files      size    scanned  failures       status  run time in secs
     ---------  ----------------  --------  ---------  --------  -----------  ----------------
     localhost                 0    0Bytes    1862936         0    completed           4444.00
           ir2               417    10.4GB    1862936       417    completed           4466.00
           ir3                 0    0Bytes    1862936         0    completed           4454.00
           ir1                 4   282.8MB    1862936         4    completed           4438.00


Sometimes I would get:

volume rebalance: work: success:
[root@ir0 ghenders]# gluster volume rebalance home status; gluster volume rebalance work status
          Node  Rebalanced-files      size    scanned  failures       status  run time in secs
     ---------  ----------------  --------  ---------  --------  -----------  ----------------
     localhost             31466     2.3GB     114290         0  in progress          63194.00
     localhost             31466     2.3GB     114290         0  in progress          63194.00
     localhost             31466     2.3GB     114290         0  in progress          63194.00
     localhost             31466     2.3GB     114290         0  in progress          63194.00
           ir3                 0    0Bytes     234367         0    completed           1581.00
volume rebalance: home: success:
          Node  Rebalanced-files      size    scanned  failures       status  run time in secs
     ---------  ----------------  --------  ---------  --------  -----------  ----------------
     localhost                 0    0Bytes    1862936         0    completed           4444.00
     localhost                 0    0Bytes    1862936         0    completed           4444.00
     localhost                 0    0Bytes    1862936         0    completed           4444.00
     localhost                 0    0Bytes    1862936         0    completed           4444.00
           ir1                 4   282.8MB    1862936         4    completed           4438.00


Here the localhost row is repeated four times, and progress is
reported for only one of the remote nodes.

Should I file bugs on these?

Joel

