Re: Spurious failure report for master branch - 2015-03-03

Hi,

I had a look at tests/bugs/distribute/bug-1117851.t.

The test fails at:

EXPECT_WITHIN 75 "done" cat $M0/status_0


The test uses a status file to check whether the rename operation (which renames 1000 files in the background) has finished. The status file $M0/status_0 is created before the renames begin, with the string "running" written to it. Once the renames are done, the string "done" is written to the file.
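
For reference, the pattern looks roughly like this (an illustrative sketch; the actual loop and file names in the test may differ):

    # Illustrative sketch of the status-file pattern; names are made up.
    echo "running" > $M0/status_0

    (
        for i in $(seq 1 1000); do
            mv $M0/dir/file_$i $M0/dir/newfile_$i
        done
        echo "done" > $M0/status_0
    ) &

    EXPECT_WITHIN 75 "done" cat $M0/status_0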

So it turns out the renames actually finish well within the time limit, in roughly 40 seconds. However, the status_0 file is not present, so cat fails on it. The logs for two failed regression runs confirm this (http://build.gluster.org/job/rackspace-regression-2GB/951/console and http://build.gluster.org/job/rackspace-regression-2GB/983/console).

cat: /mnt/glusterfs/0/status_0: No such file or directory
[14:53:50] ./tests/bugs/distribute/bug-1117851.t ................................................. 
not ok 15 Got "" instead of "done"
Failed 1/24 subtests
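
For context, EXPECT_WITHIN keeps re-running the given command until its output matches the expected string or the timeout expires; the real helper is defined in tests/include.rc, but its behaviour is roughly:

    # Simplified sketch of EXPECT_WITHIN behaviour; this is an
    # approximation, not the actual code from tests/include.rc.
    expect_within_sketch () {
        local timeout=$1 expected=$2
        shift 2
        local end=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -lt "$end" ]; do
            # If the status file is missing, cat prints nothing and the
            # comparison sees "", hence: Got "" instead of "done"
            [ "$("$@" 2>/dev/null)" = "$expected" ] && return 0
            sleep 1
        done
        return 1
    }

Since the failure reports Got "" rather than Got "running", the status file was still missing when the timeout expired, which matches the cat error above.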

The test runs successfully on my local setup and has failed only twice on the VM Justin provided (out of about 50 runs), so I am still looking into why it cannot find the file.


Regards,
Nithya

----- Original Message -----
From: "Justin Clift" <justin@xxxxxxxxxxx>
To: "Nithya Balachandran" <nbalacha@xxxxxxxxxx>
Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
Sent: Wednesday, 4 March, 2015 10:12:17 AM
Subject: Re:  Spurious failure report for master branch - 2015-03-03

Thanks. :)

If you need a VM set up in Rackspace to investigate on, it's easy
to do.  Let me know if so. :)

+ Justin


On 4 Mar 2015, at 04:37, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:
> I'll take a look at tests/bugs/distribute/bug-1117851.t
> 
> Regards,
> Nithya
> 
> ----- Original Message -----
> From: "Justin Clift" <justin@xxxxxxxxxxx>
> To: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> Sent: Wednesday, 4 March, 2015 9:57:00 AM
> Subject: Spurious failure report for master branch - 2015-03-03
> 
> Ran 20 x regression tests on our GlusterFS master branch code
> as of a few hours ago, commit 95d5e60afb29aedc29909340e7564d54a6a247c2.
> 
> 5 of them were successful (25%), 15 of them failed in various ways
> (75%).
> 
> We need to get this down to about 5% or less (preferably 0%), as it's
> killing our development iteration speed.  We're wasting huge amounts
> of time working around this. :(
> 
> 
> Spurious failures
> *****************
> 
>  * 5 x tests/bugs/distribute/bug-1117851.t                                               (Wstat: 0 Tests: 24 Failed: 1)
>    Failed test:  15
> 
>    This one is causing a 25% failure rate all by itself. :(
> 
>    This needs fixing soon. :)
> 
> 
>  * 3 x tests/bugs/geo-replication/bug-877293.t                                           (Wstat: 0 Tests: 15 Failed: 1)
>    Failed test:  11
> 
>  * 2 x tests/basic/afr/entry-self-heal.t                                                 (Wstat: 0 Tests: 180 Failed: 2)
>    Failed tests:  127-128
> 
>  * 1 x tests/basic/ec/ec-12-4.t                                                          (Wstat: 0 Tests: 541 Failed: 2)
>    Failed tests:  409, 441
> 
>  * 1 x tests/basic/fops-sanity.t                                                         (Wstat: 0 Tests: 11 Failed: 1)
>    Failed test:  10
> 
>  * 1 x tests/basic/uss.t                                                                 (Wstat: 0 Tests: 160 Failed: 1)
>    Failed test:  26
> 
>  * 1 x tests/performance/open-behind.t                                                   (Wstat: 0 Tests: 17 Failed: 1)
>    Failed test:  17
> 
>  * 1 x tests/bugs/distribute/bug-884455.t                                                (Wstat: 0 Tests: 22 Failed: 1)
>    Failed test:  11
> 
>  * 1 x tests/bugs/fuse/bug-1126048.t                                                     (Wstat: 0 Tests: 12 Failed: 1)
>    Failed test:  10
> 
>  * 1 x tests/bugs/quota/bug-1038598.t                                                    (Wstat: 0 Tests: 28 Failed: 1)
>    Failed test:  28
> 
> 
> 2 x Coredumps
> *************
> 
>  * http://mirror.salasaga.org/gluster/master/2015-03-03/bulk5/
> 
>    IP - 104.130.74.142
> 
>    This coredump run also failed on:
> 
>      * tests/basic/fops-sanity.t                                                         (Wstat: 0 Tests: 11 Failed: 1)
>        Failed test:  10
> 
>      * tests/bugs/glusterfs-server/bug-861542.t                                          (Wstat: 0 Tests: 13 Failed: 1)
>        Failed test:  10
> 
>      * tests/performance/open-behind.t                                                   (Wstat: 0 Tests: 17 Failed: 1)
>        Failed test:  17
> 
>  * http://mirror.salasaga.org/gluster/master/2015-03-03/bulk8/
> 
>    IP - 104.130.74.143
> 
>    This coredump run also failed on:
> 
>      * tests/basic/afr/entry-self-heal.t                                                 (Wstat: 0 Tests: 180 Failed: 2)
>        Failed tests:  127-128
> 
>      * tests/bugs/glusterfs-server/bug-861542.t                                          (Wstat: 0 Tests: 13 Failed: 1)
>        Failed test:  10
> 
> Both VMs are also online, in case they're useful to log into
> for investigation (root / the jenkins slave pw).
> 
> If they're not, please let me know so I can blow them away. :)
> 
> 
> 1 x hung host
> *************
> 
> Hung on tests/bugs/posix/bug-1113960.t
> 
> root  12497  1290  0 Mar03 ?  S  0:00  \_ /bin/bash /opt/qa/regression.sh
> root  12504 12497  0 Mar03 ?  S  0:00      \_ /bin/bash ./run-tests.sh
> root  12519 12504  0 Mar03 ?  S  0:03          \_ /usr/bin/perl /usr/bin/prove -rf --timer ./tests
> root  22018 12519  0 00:17 ?  S  0:00              \_ /bin/bash ./tests/bugs/posix/bug-1113960.t
> root  30002 22018  0 01:57 ?  S  0:00                  \_ mv /mnt/glusterfs/0/longernamedir1/longernamedir2/longernamedir3/
> 
> This VM (23.253.53.111) is still online + untouched (still hung),
> if someone wants to log in to investigate.  (root / the jenkins
> slave pw)
> 
> Hope that's helpful. :)
> 
> Regards and best wishes,
> 
> Justin Clift
> 
> --
> GlusterFS - http://www.gluster.org
> 
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
> 
> My personal twitter: twitter.com/realjustinclift
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-devel

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



