Re: [ceph-commit] teuthology value error

Hi all,

I am new to git and am playing around with the QA suite, trying to test rbd on a
local machine.
My local test-bed setup is:

OS: Ubuntu 12.04 LTS
teuthology version: 0.0.1
ceph: 0.48.1 argonaut

The server and the client both run locally on a single machine.
The yaml file is:
roles:
- [mon.a, osd.0, osd.1]

targets:
   lokesh@lokesh: ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDmSb/VpYpdSQLp4unsKUNV/DKV2f55M1QSXHu10Qvco33rYopOF/l5+eYlINCHF1v/SA+bLOMT/4OHGkZR67TajAoXFpyolVSxZKRoJHLV2iJ+MrRiaxShctMYWpLOjc4iRA4BG3FTG5TTPKJweDj8dDdUIqZ9PuSpP5VuOriCpQkPWECL2hJqAYwnknK1Uhg3rYV0XxL14Iep9KCZf2PcJNw3Eur6XKDAczu/sAUlOiLrBsQpFmOaPY5jFDaM3U7KpJvqiI4Drq331iMh9n3GuA+JvcTMKuT7CN36GswlvGuakTzaDoR66JaYYKkGzDl977K94XrAdQwa2NKLu1XH


tasks:
- ceph:
- ceph-fuse:
- workunit:
    clients:
      client.0:
        - rbd/test_cls_rbd.sh
#-interactive-on-error: true

and my .teuthology.yaml file is:

lock_server: http://lokesh/lock
queue_host: lokesh
queue_port: 4000

When I try to execute the test suite as:
lokesh@lokesh:~/Downloads/ceph-teuthology-78b7b02$
./virtualenv/bin/teuthology rbd_cls_tests1.yaml

I get this error:
INFO:teuthology.run_tasks:Running task internal.save_config...
INFO:teuthology.task.internal:Saving configuration
INFO:teuthology.run_tasks:Running task internal.check_lock...
INFO:teuthology.task.internal:Checking locks...
INFO:teuthology.lock:GET request to 'http://lokesh/lock/lokesh@lokesh'
with body 'None' failed with response code 404
ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
  File "/home/lokesh/Downloads/ceph-teuthology-78b7b02/teuthology/run_tasks.py",
line 25, in run_tasks
    manager = _run_one_task(taskname, ctx=ctx, config=config)
  File "/home/lokesh/Downloads/ceph-teuthology-78b7b02/teuthology/run_tasks.py",
line 14, in _run_one_task
    return fn(**kwargs)
  File "/home/lokesh/Downloads/ceph-teuthology-78b7b02/teuthology/task/internal.py",
line 110, in check_lock
    'could not read lock status for {name}'.format(name=machine)
AssertionError: could not read lock status for lokesh@lokesh

I don't understand what's going wrong; the lock server itself seems to be running
fine. Any ideas? Please reply.
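To double-check, I reproduced the GET from the log by hand. This is just my own
sketch (the helper name is mine, not teuthology's), but it shows the same 404:

```python
from urllib.error import URLError
from urllib.request import urlopen

# Build the URL teuthology shows in the log: <lock_server>/<machine>.
# lock_status_url is a hypothetical helper name, not a teuthology function.
def lock_status_url(lock_server, machine):
    return '{base}/{name}'.format(base=lock_server.rstrip('/'), name=machine)

url = lock_status_url('http://lokesh/lock', 'lokesh@lokesh')
try:
    urlopen(url)
    print('lock server knows this machine')
except URLError as e:
    # A 404 here means the server is reachable but has no entry for
    # lokesh@lokesh, i.e. the machine was never added to its database.
    print(e)
```

If I understand the check_lock task right, the machine has to exist in the lock
server's database before the lock-status check can succeed.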



With Regards,

Lokesh K



On Tue, Oct 2, 2012 at 10:36 PM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
> On Tue, Oct 2, 2012 at 9:56 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>> File "/home/lokesh/Downloads/ceph-teuthology-78b7b02/teuthology/misc.py",
>>> line 501, in read_config
>>> ctx.teuthology_config.update(new)
>>> ValueError: dictionary update sequence element #0 has length 1; 2 is
>>> required
>> I haven't looked at teuthology much in a while, but the error is
>> complaining that you have a list of length 1 that requires length 2.
>> I'm pretty sure your problem is that you only have one machine
>> available, but the default Ceph task wants 2.
>
> Actually, that error seems to be bubbling up from where it reads
> ~/.teuthology.yaml -- it expects a file like this:
>
> lock_server: http://foo.example.com/locker/lock
> queue_host: bar.example.com
> queue_port: 1234
>
> Teuthology doesn't have much in the way of data format enforcement --
> it kind of assumes the target audience is programmers, so they can
> just dig in.
>
>> Rather more importantly than that, teuthology is a piece of our
>> internal infrastructure. We make it available to others, but if you
>> plan to use it in your own testing you will need infrastructure of
>> your own to run it on — we have a generic server farm, but also
>> "special" machines like a lock server, in addition to the client
>> actually running the test. You should not play with it casually.
>
> Hey, I resent that remark!
>
> We *want* outside users for teuthology. That would be awesome.
>
> That would probably also mean needing to split the current "tasks" out
> of the teuthology core, so that the core becomes a general-purpose
> multi-machine test runner.
>
> But you are correct in that there are a lot of assumptions about the
> environment in there. Tread carefully.




