Why not just keep it bare metal? Especially with future Ceph upgrading/testing in mind. I am running CentOS 7 with Luminous, with libvirt on the nodes as well. If you configure the hosts with a TLS/SSL connection, you can even nicely live-migrate a VM from one host/Ceph node to another.

The next thing I am testing is Mesos, to use the Ceph nodes to run containers. I am still testing this on some VMs, but it looks like you only have to install a few RPMs (maybe around 300 MB) and two extra services on the nodes to get this up and running as well. (But keep in mind that the help on their mailing list is not as good as here ;))

I put some rough sketches at the bottom of this mail: the TLS migration command, a kubeadm bootstrap, and a minimal Rook cluster manifest.

-----Original Message-----
From: David Turner [mailto:drakonstein@xxxxxxxxx]
Sent: 18 February 2019 17:31
To: ceph-users
Subject: Migrating a baremetal Ceph cluster into K8s + Rook

I'm getting some "new" (to me) hardware that I'm going to upgrade my home Ceph cluster with. Currently it's running a Proxmox cluster (Debian), which precludes me from upgrading to Mimic. I am thinking about taking the opportunity to convert most of my VMs into containers and migrate my cluster into a K8s + Rook configuration, now that Ceph is stable on Rook [1]. I haven't ever configured a K8s cluster and am planning to test this out on VMs before moving my live data. Has anyone done a migration from a bare-metal Ceph cluster into K8s + Rook?

Additionally, what is a good way for a K8s beginner to get into managing a K8s cluster? I see various places recommend either CoreOS or kubeadm for starting up a new K8s cluster, but I don't know the pros/cons of either.

As far as migrating the Ceph services into Rook goes, I would assume the process would be fairly simple: add/create new mons, MDS, etc. in Rook with the bare-metal cluster details, and once those are active and working, start decommissioning the services on bare metal. For me, the OSD migration should be similarly simple, since I don't have any multi-device OSDs; I only need to worry about migrating individual disks between nodes.

[1] https://blog.rook.io/rook-v0-9-new-storage-backends-in-town-ab952523ec53
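PS: the sketches mentioned above. First, the live migration. This is a minimal sketch assuming the certificates libvirt expects are already deployed on both hosts (CA cert plus server/client certs, by default under /etc/pki/CA and /etc/pki/libvirt) and libvirtd is listening with TLS enabled; the hostname and VM name are just examples:

  # on the source host: live-migrate "vm1" to node2, tunnelling the
  # migration stream through the TLS-secured libvirt connection
  virsh migrate --live --p2p --tunnelled --persistent --undefinesource \
      vm1 qemu+tls://node2.example.com/system

Without --p2p/--tunnelled the migration data itself goes over a separate QEMU channel, so only the libvirt control connection would be encrypted.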
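On the kubeadm question: I can't compare it to CoreOS, but bootstrapping a small test cluster with kubeadm is only a handful of commands. A minimal sketch (the pod CIDR is flannel's default, and the flannel manifest is just one common choice of pod network):

  # on the designated master node
  kubeadm init --pod-network-cidr=10.244.0.0/16

  # make kubectl usable for your user
  mkdir -p $HOME/.kube
  sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # install a pod network add-on (flannel shown here)
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  # on each worker node, using the token and hash printed by 'kubeadm init'
  kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>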
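And on the Rook side, for orientation: Rook v0.9 drives Ceph through a CephCluster custom resource. A minimal sketch of such a manifest is below (the image tag, data path and device filter are assumptions for illustration). Note that this describes a cluster that Rook creates and owns; I am not aware of a supported path for pointing new Rook mons at an existing bare-metal cluster, so I would definitely rehearse that part on VMs first:

  # cluster.yaml (sketch) -- apply with: kubectl apply -f cluster.yaml
  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    cephVersion:
      image: ceph/ceph:v13.2.4     # Mimic; pick the release you want
    dataDirHostPath: /var/lib/rook
    mon:
      count: 3
      allowMultiplePerNode: false
    storage:
      useAllNodes: true
      useAllDevices: false
      deviceFilter: "^sd[b-d]"     # example: only these devices become OSDs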