(Changed subject) File write errors during a rebalance

Hi,

Can you please send the rebalance logs and client logs? How many times was this file moved by the rebalance process?
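
For reference, and assuming the default log locations (paths may differ on your install), the logs are usually here:

  ls /var/log/glusterfs/*-rebalance.log   # rebalance log on each server node
  ls /var/log/glusterfs/mnt-*.log         # FUSE client log, named after the mount path (e.g. mnt-testvol.log)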


Regards,
Nithya

----- Original Message -----
> From: "余长财" <yu.changcai@xxxxxxxxxxx>
> To: gluster-devel@xxxxxxxxxxx
> Sent: Friday, 15 May, 2015 6:58:23 AM
> Subject: Re:  Gluster-devel Digest, Vol 14, Issue 56
> 
> Dear all,
> Recently, while testing rebalance, I seem to have hit a bug.
> I was using the GlusterFS v3.6.2 tag.
> The test case I ran is below.
> 
> 1. create a distributed (DHT) volume with 2 bricks
> 2. write a few files into the volume
> 3. add a brick to the volume
> 4. run rebalance fix-layout
> 5. find a file that would be rebalanced to the new brick
> 6. keep writing to that file
> 7. run the rebalance to migrate files to where they belong (a rough command sketch follows)
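> 
> Roughly, the commands look like the sketch below (host names, brick paths and file names are placeholders, not my exact setup):
> 
> gluster volume create testvol host1:/bricks/b1 host2:/bricks/b2
> gluster volume start testvol
> mount -t glusterfs host1:/testvol /mnt/testvol
> for i in $(seq 1 20); do dd if=/dev/urandom of=/mnt/testvol/file$i bs=1M count=10; done
> gluster volume add-brick testvol host3:/bricks/b3
> gluster volume rebalance testvol fix-layout start
> # pick a file whose hash now maps to the new brick and keep appending to it
> while true; do echo more-data >> /mnt/testvol/file7; sleep 1; done &
> gluster volume rebalance testvol start
> gluster volume rebalance testvol status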
> 
> After the rebalance, we saw that data written to the file during the
> rebalance had been lost.
> 
> Could anyone help me with it?
> 
> 
> 
> On May 14, 2015, at 8:00 PM, gluster-devel-request@xxxxxxxxxxx wrote:
> 
> 
> 
> 
> Today's Topics:
> 
> 1. Re: Rebalance failure wrt trashcan (Nithya Balachandran)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Thu, 14 May 2015 07:08:46 -0400 (EDT)
> From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
> To: SATHEESARAN <sasundar@xxxxxxxxxx>
> Cc: Gluster Devel <gluster-devel@xxxxxxxxxxx>
> Subject: Re:  Rebalance failure wrt trashcan
> Message-ID:
> <786595040.14130930.1431601726039.JavaMail.zimbra@xxxxxxxxxx>
> Content-Type: text/plain; charset=utf-8
> 
> The rebalance failure is due to the interaction of the lookup-unhashed
> changes and rebalance local crawl changes. I will send a patch shortly.
> 
> Regards,
> Nithya
> 
> 
> 
> 
> ----- SATHEESARAN <sasundar@xxxxxxxxxx> wrote:
> 
> 
> On 05/14/2015 12:55 PM, Vijay Bellur wrote:
> 
> 
> On 05/14/2015 09:00 AM, SATHEESARAN wrote:
> 
> 
> Hi All,
> 
> I was using glusterfs-3.7 beta2 build (
> glusterfs-3.7.0beta2-0.0.el6.x86_64 )
> I have seen a rebalance failure on one of the nodes.
> 
> [2015-05-14 12:17:03.695156] E
> [dht-rebalance.c:2368:gf_defrag_settle_hash] 0-vmstore-dht: fix layout
> on /.trashcan/internal_op failed
> [2015-05-14 12:17:03.695636] E [MSGID: 109016]
> [dht-rebalance.c:2528:gf_defrag_fix_layout] 0-vmstore-dht: Fix layout
> failed for /.trashcan
> 
> Does it have any impact?
> 
> 
> I don't think there should be any impact due to this. Rebalance should
> continue fine without any problems. Do let us know if you observe the
> behaviour to be otherwise.
> 
> -Vijay
> I tested the same functionality and I don't find any impact as such, but
> 'gluster volume status <vol-name>' reports the rebalance as a FAILURE.
> Any tool (for example oVirt) consuming the output of 'gluster volume
> status <vol> --xml' would therefore report the rebalance operation as a
> FAILURE (see the parsing sketch after the XML snippet below).
> [root@ ~]# gluster volume rebalance vmstore start
> volume rebalance: vmstore: success: Rebalance on vmstore has been
> started successfully. Use rebalance status command to check status of
> the rebalance process.
> ID: 68a12fc9-acd5-4f24-ba2d-bfc070ad5668
> 
> [root@~]# gluster volume rebalance vmstore status
> Node           Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
> -----------    ----------------    ------    -------    --------    -------    ---------    ----------------
> localhost                     0    0Bytes          2           0          0    completed                0.00
> 10.70.37.58                   0    0Bytes          0           3          0    failed                   0.00
> volume rebalance: vmstore: success:
> 
> [root@~]# gluster volume status vmstore
> Status of volume: vmstore
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> ......
> 
> Task Status of Volume vmstore
> ------------------------------------------------------------------------------
> Task : Rebalance
> ID : 68a12fc9-acd5-4f24-ba2d-bfc070ad5668
> Status : failed
> 
> Snip from --xml tasks :
> <tasks>
>   <task>
>     <type>Rebalance</type>
>     <id>68a12fc9-acd5-4f24-ba2d-bfc070ad5668</id>
>     <status>4</status>
>     <statusStr>failed</statusStr>
>   </task>
> </tasks>
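> 
> As an illustration (not what oVirt actually does; assuming xmllint from
> libxml2 is installed), a consumer could pull the task status out of that
> XML roughly like this:
> 
> gluster volume status vmstore --xml | xmllint --xpath 'string(//task/statusStr)' -
> # prints: failed
> gluster volume status vmstore --xml | xmllint --xpath 'string(//task/id)' -
> # prints: 68a12fc9-acd5-4f24-ba2d-bfc070ad5668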
> 
> This is also the case with remove-brick with data migration.
> 
> -- sas
> 
> 
> 
> 
> 
> 
> 
> —————————
> Yu Changcai (余长财), Technical Department
> Zhejiang 99Cloud Information Technology Co., Ltd. (浙江九州云信息科技有限公司)
> Address: Room 206, Building 1, No. 427 Jumen Road, Luwan District, Shanghai
> Website: http://www.99cloud.net
> 
> 
> 
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel




