My $.02: I think we should focus our efforts on the orchestrator API, and the ssh orchestrator implementation in particular (alongside rook, of course!), and then aim to replace teuthology's ceph.py with a new implementation that bootstraps the cluster with ceph-daemon and uses the orchestrator commands to provision everything, start/stop daemons, etc.

This would kill several birds with one (large!) stone: end-to-end testing of the orchestrator (via the ssh implementation), testing against container images, and finally shedding all of the baggage from the very weird way that ceph.py currently provisions and runs ceph.

There is a pretty large range of things that ceph.py needs to do in terms of how the cluster is provisioned, daemons are started/stopped, etc., but we can start with the basics, and then use the tests as a forcing function to ensure the "new way" of bootstrapping and provisioning supports all of the things it needs.

Meanwhile, the rook CI effort should focus on testing all of the interesting kube vs rook/ceph interactions in a way that is natural for the kubernetes ecosystem, and not concern itself with testing all of the odd ceph cases (which teuthology covers) or any of the legacy ceph testing baggage at all.

Both would/could consume these same cephci images.

sage

On Thu, 12 Sep 2019, Dan Mick wrote:

> I'm not aware of any current plan to integrate into Rook's CI or
> teuthology; the latter would be more suited, IMO, since it's more about
> "testing Ceph inside a container" rather than "testing changes to Rook".
> Although of course it would be useful to *enable* both, I think the Rook
> developers would be doing this more as a one-off than as a regular CI run.
>
> I don't know what plans the teuthology testing team might have for
> entering the container realm; I would think this is one gating factor.
> I'm also pretty uneducated on what's going on for testing with
> OpenShift, and that might play into the design of container-based
> testing upstream.
>
> But the time is ripe for discussion, I agree.
>
> On 9/11/19 8:49 AM, Sebastian Wagner wrote:
> > Thanks Dan!
> >
> > How can we continue from here?
> >
> > From the orchestrator's pov, I'd need some kind of framework to write
> > tests. Is there a plan to include them in the existing Rook CI?
> >
> > I did investigate some custom script to bring up a k8s cluster using the
> > latest ceph images on my own, but it turned out to be too brittle to be
> > usable.
> >
> > - Sebastian
> >
> > On 11.09.19 at 06:26, Dan Mick wrote:
> >> I've added code to ceph-container.git and ceph-build.git [1] to
> >> automatically build a 'daemon-base' container for each branch with a
> >> name beginning with 'wip-', for the CentOS 7 'default' flavor build.
> >> This is the container that is used with Rook.
> >>
> >> Each container image is pushed to quay.io/cephci, an organization
> >> created by me but intended for public consumption. The images are
> >> tagged with the name of the ceph wip branch, the 7-digit SHA1 of the
> >> head commit, and the suffix "centos-7-x86_64-devel", so for instance
> >> one of the tags built today was
> >>
> >> wip-sage3-testing-2019-09-10-1000-7295ce6-centos-7-x86_64-devel
> >>
> >> so a pull of
> >> "quay.io/cephci/daemon-base:wip-sage3-testing-2019-09-10-1000-7295ce6-centos-7-x86_64-devel"
> >> would fetch that image.
> >>
> >> You can see the list of tags currently available at
> >> https://quay.io/repository/cephci/daemon-base?tab=tags, or just
> >> remember to browse to "quay.io/cephci" and dig down.
> >>
> >> So far no image reaping mechanism is in place; I expect we'll need one
> >> eventually and we'll define a policy then.
> >>
> >> Note, again, that the branch name must begin with "wip-" to allow this
> >> building; this is a side-effect of existing code in ceph-container.
> >> If that becomes too much of a limitation, we can address that with a
> >> later change.
> >>
> >> I'll add a description of this mechanism to the Ceph developer docs
> >> soon.
> >>
> >> Please let me know of any issues with either the build or the
> >> resultant images and I'll help figure out what's up.
> >>
> >> -----
> >> [1] https://github.com/ceph/ceph-container/pull/1457 and
> >> https://github.com/ceph/ceph-build/pull/1378
> >>
> >
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx
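[For reference, Dan's tag scheme above can be sketched as a small shell
snippet. The branch name and SHA1 here are the example values from this
thread; substitute your own wip- branch and the 7-digit SHA1 of its head
commit.]

```shell
# Compose a cephci image reference following the scheme described above:
#   <wip-branch>-<7-digit head SHA1>-<flavor suffix>
BRANCH="wip-sage3-testing-2019-09-10-1000"  # example branch from this thread
SHA1="7295ce6"                              # 7-digit SHA1 of the head commit
FLAVOR="centos-7-x86_64-devel"              # CentOS 7 'default' flavor suffix

IMAGE="quay.io/cephci/daemon-base:${BRANCH}-${SHA1}-${FLAVOR}"
echo "$IMAGE"

# Then fetch it with, e.g.:
#   docker pull "$IMAGE"    (or: podman pull "$IMAGE")
```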