Re: Help needed: NFS Debugging for Glusto tests and Glusto help in general

On Mon, Nov 7, 2016 at 11:55 AM, Nigel Babu <nigelb@xxxxxxxxxx> wrote:
> Hello,
>
> I've been working on getting the Glusto tests to work, and it appears we're
> stuck in a situation that Shewtha and Jonathan haven't been able to fully
> resolve. Here are the two problems:
>
> 1. Originally, we ran into an NFS issue, with an error that looked like this:
>
>         if 'nfs' in cls.mount_type:
>             cmd = "showmount -e localhost"
>             _, _, _ = g.run(cls.mnode, cmd)
>
>             cmd = "showmount -e localhost | grep %s" % cls.volname
>             ret, _, _ = g.run(cls.mnode, cmd)
>>           assert (ret == 0), "Volume %s not exported" % cls.volname
> E           AssertionError: Volume testvol_replicated not exported
> E           assert 1 == 0
>

We stopped exporting volumes over the internal NFS server by default: for new
volumes in 3.8, and for all volumes in 3.9 and beyond. You will need to enable
NFS on a volume by setting the 'nfs.disable' option to 'off'.
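
As a quick sketch (using the same g.run() helper and cls attributes the test
already uses, so treat it as an illustration rather than the exact fix), the
test could re-enable Gluster NFS on the volume before the showmount assertion:

    # Sketch: turn the internal NFS server back on for the test volume
    # ('gluster volume set <vol> nfs.disable off'), then repeat the
    # existing export check.
    if 'nfs' in cls.mount_type:
        cmd = "gluster volume set %s nfs.disable off" % cls.volname
        ret, _, _ = g.run(cls.mnode, cmd)
        assert (ret == 0), "Failed to enable nfs on %s" % cls.volname

        cmd = "showmount -e localhost | grep %s" % cls.volname
        ret, _, _ = g.run(cls.mnode, cmd)
        assert (ret == 0), "Volume %s not exported" % cls.volname

Note that the NFS server may take a moment to start exporting after the option
is set, so the showmount check might need a short wait or retry.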

> bvt/test_bvt_lite_and_plus.py:88: AssertionError
>
> Entries in glustomain.log:
>
> 2016-11-07 06:17:06,007 INFO (run) root@172.19.2.69 (cp): gluster volume info | egrep "^Brick[0-9]+" | grep -v "ss_brick"
> 2016-11-07 06:17:06,058 ERROR (get_servers_used_bricks_dict) error in getting bricklist using gluster v info
> 2016-11-07 06:17:06,059 INFO (run) root@172.19.2.69 (cp): gluster volume info testvol_replicated --xml
> 2016-11-07 06:17:06,111 INFO (run) root@172.19.2.69 (cp): gluster volume create testvol_replicated replica 3       172.19.2.69:/mnt/testvol_replicated_brick0 172.19.2.15:/mnt/testvol_replicated_brick1 172.19.2.38:/mnt/testvol_replicated_brick2 --mode=script force
> 2016-11-07 06:17:08,272 INFO (run) root@172.19.2.69 (cp): gluster volume start testvol_replicated --mode=script
> 2016-11-07 06:17:19,066 INFO (run) root@172.19.2.69 (cp): gluster volume info testvol_replicated
> 2016-11-07 06:17:19,125 INFO (run) root@172.19.2.69 (cp): gluster vol status testvol_replicated
> 2016-11-07 06:17:19,189 INFO (run) root@172.19.2.69 (cp): showmount -e localhost
> 2016-11-07 06:17:19,231 INFO (run) root@172.19.2.69 (cp): showmount -e localhost | grep testvol_replicated
> 2016-11-07 06:17:19,615 INFO (main) Ending glusto via main()
> 2016-11-07 06:20:23,713 INFO (main) Starting glusto via main()
>
> Today I tried to comment out the NFS bits and run the test again. Here's what
> that got me:
>
>         # Setup Volume
>         ret = setup_volume(mnode=cls.mnode,
>                            all_servers_info=cls.all_servers_info,
>                            volume_config=cls.volume, force=True)
>>       assert (ret == True), "Setup volume %s failed" % cls.volname
> E       AssertionError: Setup volume testvol_distributed-replicated failed
> E       assert False == True
>
> bvt/test_bvt_lite_and_plus.py:73: AssertionError
>
> Entries in glustomain.log:
> 2016-11-07 06:20:34,994 INFO (run) root@172.19.2.69 (cp): gluster volume info | egrep "^Brick[0-9]+" | grep -v "ss_brick"
> 2016-11-07 06:20:35,048 ERROR (form_bricks_list) Not enough bricks available for creating the bricks
> 2016-11-07 06:20:35,049 ERROR (setup_volume) Number_of_bricks is greater than the unused bricks on servers
>

This looks like a provisioning or Glusto issue: Glusto is expecting more
bricks than have been provisioned on the servers.
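
A quick way to confirm that (again only a sketch, reusing the same g.run()
helper and the egrep from the log; the expected count of six bricks for a
2 x 3 distributed-replicate is an assumption for illustration) is to count
how many bricks are already in use across the pool:

    # Sketch: count bricks already consumed by existing volumes.  If the
    # number of free brick mounts left over is smaller than what the
    # distributed-replicated config needs (assumed 6 here), setup_volume
    # will fail exactly as in the log above.
    cmd = ('gluster volume info | egrep "^Brick[0-9]+" | '
           'grep -v "ss_brick" | wc -l')
    ret, out, _ = g.run(cls.mnode, cmd)
    used_bricks = int(out.strip()) if ret == 0 else 0
    print("bricks already in use across the pool: %d" % used_bricks)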

> Does anyone have a sense of whether this is an error on the Glusto end, or an
> error in Gluster that's being caught?
>
> --
> nigelb
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


