Re: Running a single teuthology job locally using containers

Hi,

On 03/09/2015 11:15, Ivo Jimenez wrote:
> This describes how to run teuthology jobs using docker in three
> "easy" steps, with the goal of shortening the develop/build/test
> cycle for integration-tested code.
> 
>  1. Write a file containing an entire teuthology job (tasks, targets
>     and roles in a YAML file):
> 
>     ```yaml
>     sshkeys: ignore
>     roles:
>     - [mon.0, osd.0, osd.1, osd.2, client.0]
>     tasks:
>     - install.ship_utilities:
>     - ceph:
>         conf:
>           mon:
>             debug mon: 20
>             debug ms: 1
>             debug paxos: 20
>           osd:
>             debug filestore: 20
>             debug journal: 20
>             debug ms: 1
>             debug osd: 20
>     - radosbench:
>         clients: [client.0]
>     targets:
>       'root@localhost:2222': ssh-dss ignored
>     ```
> 
>     The `sshkeys` option is required, and `install.ship_utilities`
>     must be the first task to execute. Also, `~/.teuthology.yaml`
>     should look like this:
> 
>     ```yaml
>     lab_domain: ''
>     lock_server: ''
>     ```
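
For convenience, that two-line `~/.teuthology.yaml` can be generated in one step (written to `/tmp/teuthology.yaml` in this sketch so a real config in `$HOME` is not clobbered):

```shell
# Write the minimal teuthology config that disables the lab domain and
# lock server (using /tmp/teuthology.yaml rather than ~/.teuthology.yaml
# so this example does not overwrite an existing config)
cat > /tmp/teuthology.yaml <<'EOF'
lab_domain: ''
lock_server: ''
EOF
cat /tmp/teuthology.yaml
```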
> 
>  2. Initialize a `cephdev` container (the following assumes `$PWD` is 
>     the folder containing the ceph code in your machine):
> 
>     ```bash
>     docker run \
>       --name remote0 \
>       -p 2222:22 \
>       -d -e AUTHORIZED_KEYS="`cat ~/.ssh/id_rsa.pub`" \
>       -v `pwd`:/ceph \
>       -v /dev:/dev \
>       -v /tmp/ceph_data/$RANDOM:/var/lib/ceph \
>       --cap-add=SYS_ADMIN --privileged \
>       --device /dev/fuse \
>       ivotron/cephdev
>     ```
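
One detail worth noting in the `docker run` above: `$RANDOM` is expanded by the shell before docker sees it, so the `-v /tmp/ceph_data/$RANDOM:/var/lib/ceph` mount gives each container a fresh OSD data directory. A standalone sketch of that expansion:

```shell
# $RANDOM expands to a fresh number in the invoking shell, so each
# container run mounts a different host directory as /var/lib/ceph
data_dir=/tmp/ceph_data/$RANDOM
mkdir -p "$data_dir"
echo "$data_dir"
```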

Is $PWD ceph built from sources? Could you share the Dockerfile you used to create ivotron/cephdev?

> 
>  3. Execute teuthology using the `wip-11892-docker` branch:
> 
>     ```bash
>     teuthology \
>       -a ~/archive/`date +%s` \
>       --suite-path /path/to/ceph-qa-suite/ \
>       ~/test.yml
>     ```
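
The `-a ~/archive/`date +%s`` flag simply timestamps the archive directory with the current Unix epoch, so repeated runs never collide; for example:

```shell
# Each run gets its own archive directory, named by the current Unix
# timestamp (seconds since the epoch), e.g. ~/archive/1441300000
archive="$HOME/archive/$(date +%s)"
mkdir -p "$archive"
echo "$archive"
```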
> 
> Caveats:
> 
>   * only a single job can be executed, and it has to be manually
>     assembled. I plan to work on supporting suites, which, in short,
>     implies stripping the `install` task out of existing suites and
>     leaving only the `install.ship_utilities` subtask (the
>     container image already has all the dependencies in it).

Maybe there could be a script to transform config files such as http://qa-proxy.ceph.com/teuthology/loic-2015-09-02_15:41:18-rbd-master---basic-multi/1042448/config.yaml into a config file suitable for this use case? Together with `git clone -b $sha1` + `make` in the container, it would be a nice way to replay / debug a failed job using a single VM, without going through packages.
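
Something like the following could be a starting point for such a script (a rough sed-based sketch over a toy excerpt of a config.yaml; real job configs are much richer, and a YAML-aware rewrite would be safer):

```shell
# Toy excerpt of a job's config.yaml (real ones carry overrides,
# kernel/install settings, targets, etc.)
cat > /tmp/config.yaml <<'EOF'
tasks:
- install:
- ceph:
- rbd:
EOF
# Replace the package-based `install` task with the container-friendly
# `install.ship_utilities` subtask, as described above
sed -i 's/^- install:$/- install.ship_utilities:/' /tmp/config.yaml
cat /tmp/config.yaml
```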

>   * I have only tried the above with the `radosbench` and `ceph-fuse`
>     tasks. Using the `--cap-add=ALL` and `-v /lib/modules:/lib/modules`
>     flags allows a container to load kernel modules, so, in principle,
>     it should work for the `rbd` and `kclient` tasks, but I haven't
>     tried it yet.
>   * For jobs specifying multiple remotes, multiple containers can be 
>     launched (one per remote). While it is possible to run these 
>     on the same docker host, the way ceph daemons dynamically
>     bind to ports in the 6800-7300 range makes it difficult to
>     determine which ports to expose from each container (exposing the
>     same port from multiple containers in the same host is not
>     allowed, for obvious reasons). So either each remote runs on a
>     distinct docker host machine, or a deterministic port assignment
>     is implemented such that, for example, 6800 is always assigned to
>     osd.0, regardless of where it runs.
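
To illustrate the deterministic assignment idea: ceph's `ms bind port min` / `ms bind port max` options can pin a daemon to a fixed port, so per-OSD conf sections could be generated along these lines (the option names are real; the one-port-per-OSD pinning scheme is just an illustration):

```shell
# Generate per-OSD conf sections pinning osd.N to port 6800+N, so each
# container knows exactly which port to expose (hypothetical scheme)
for id in 0 1 2; do
  printf '[osd.%d]\nms bind port min = %d\nms bind port max = %d\n' \
    "$id" $((6800 + id)) $((6800 + id))
done > /tmp/osd_ports.conf
cat /tmp/osd_ports.conf
```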

Would `docker run --publish-all=true` help?

Clever hack, congrats :-)

Cheers

> 
> Cheers,
> ivo
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
