RE: Ceph code tests / teuthology

On Fri, 24 Apr 2015, Zhou, Yuan wrote:
> Hi Loic/Zack,
> 
> So I've made some progress here: I was able to run a single job with 
> teuthology xxx.yaml targets.yaml. From the code, teuthology-suite needs 
> to query the lock-server for some machine info, like os_type and 
> platform. Is there any documentation for the lock-server?

You can skip these checks with 

 check-locks: false

in the job yaml.
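
For example, near the top of the job yaml (a sketch; the roles and tasks here are only illustrative):

 check-locks: false
 roles:
 - [mon.a, osd.0, osd.1, osd.2, client.0]
 tasks:
 - install:
 - ceph:

With that in place, teuthology job.yaml targets.yaml should skip the lock-server queries and use the machines listed in targets.yaml as-is.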

sage


> 
> Thanks, -yuan
> 
> -----Original Message-----
> From: Loic Dachary [mailto:loic@xxxxxxxxxxx] 
> Sent: Monday, April 13, 2015 5:19 PM
> To: Zhou, Yuan
> Cc: Ceph Development; Zack Cerza
> Subject: Re: Ceph code tests / teuthology
> 
> Hi,
> 
> On 13/04/2015 04:39, Zhou, Yuan wrote:
> > Hi Loic,
> > 
> >  
> > 
> > I'm trying to set up an internal Teuthology cluster here and have a 3-node cluster running now. However, there's not much documentation and I'm confused about a few things:
> > 
> >  
> > 
> > 1)      How does Ceph upstream run tests? Currently I see there's a) Jenkins (make check on each PR) 
> 
> Yes.
> 
> > b) Teuthology integration tests (on important PRs only).
> >
> 
> The teuthology tests are run either by cron jobs or by people (see http://pulpito.ceph.com/). They are not run on pull requests.
> 
> > 2)      Teuthology currently fetches the binaries from gitbuilder.ceph.com automatically. However, the binaries will not be built for each pull request? 
> 
> Right. Teuthology can be pointed to an alternate repository, but there is a catch: it needs to follow the same naming conventions as gitbuilder.ceph.com. These naming conventions are not documented (as far as I know) and you would need to read the code to figure them out. When I tried to customize the repository, I replaced the code locating the repository with something configurable instead (read from the yaml file). But I did it in a hackish way and did not take the time to figure out how to contribute it back properly.
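> 
> For what it's worth, a sketch of one way in: teuthology reads a
> gitbuilder_host setting from ~/.teuthology.yaml, which may be enough if
> your mirror replicates the gitbuilder.ceph.com layout (treat the option
> name as an assumption and verify it against teuthology/config.py in
> your checkout):
> 
>  # ~/.teuthology.yaml -- point teuthology at a local gitbuilder mirror
>  gitbuilder_host: gitbuilder.example.com
> 
> The package URLs are then assembled from pieces along the lines of
> ceph-{pkgtype}-{distro}-{arch}-{flavor}/ref/{branch}/, so the mirror
> has to reproduce that directory structure.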
> 
> > 3)      Can Teuthology work on VMs? I got some info from your blog; it looks like you're running Teuthology on OpenStack/Docker.
> 
> The easiest way is to prepare three VMs and make sure you can ssh to them without a password. You then create a targets.yaml file listing these three machines, and you can run a single job that uses them. This saves you the trouble of setting up a full teuthology cluster (I think http://dachary.org/?p=2204 is still mostly valid). The downside is that it only allows you to run a single job at a time; it will not allow you to run teuthology-suite to schedule a number of jobs and have them wait in the queue.
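> 
> As an illustration, a targets.yaml for three such VMs could look like
> this (hostnames are placeholders; the value for each entry is the
> machine's SSH host key, e.g. taken from ssh-keyscan):
> 
>  targets:
>    ubuntu@vm-1.example.com: ssh-rsa AAAA...
>    ubuntu@vm-2.example.com: ssh-rsa AAAA...
>    ubuntu@vm-3.example.com: ssh-rsa AAAA...
> 
> You then run a single job with: teuthology job.yaml targets.yaml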
> 
> I'm not actually using the Docker backend I hacked together, so I don't recommend you try this route unless you have a week or two to devote to it.
> 
> > 4)      If I have a working Teuthology cluster now, how do I start a full run? Or is running only workunits/* good enough?
> 
> For instance:
> 
> ./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email abhishek.lekshmanan@xxxxxxxxx --owner abhishek.lekshmanan@xxxxxxxxx --ceph giant-backports
> 
> http://tracker.ceph.com/issues/11153 contains many examples of how teuthology is run to test stable releases.
> 
> The easiest way to create a single job is probably to run ./virtualenv/bin/teuthology-suite: it will output calls to teuthology that you can copy/paste to run a single job. I've not tried that myself and went a more difficult route instead (manually assembling yaml files to create a job).
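> 
> If you do assemble the yaml by hand, a minimal job file is little more
> than a roles list plus a tasks list. A sketch (the roles and the
> workunit script are illustrative, not from a real run):
> 
>  roles:
>  - [mon.a, mon.b, mon.c, osd.0, osd.1, osd.2, client.0]
>  tasks:
>  - install:
>  - ceph:
>  - workunit:
>      clients:
>        client.0:
>          - rados/test.sh
> 
> Given that plus a targets.yaml, teuthology installs the packages,
> brings up a cluster, and runs the workunit on client.0.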
> 
> Zack will probably have more hints and advice on how to run your own teuthology suite.
> 
> Cheers
> 
> >  
> > 
> > Thanks for any hints!
> > 
> > -yuan
> > 
> >  
> > 
> 
> -- 
> Loïc Dachary, Artisan Logiciel Libre
> 
