Re: How to leverage ceph udev rules for containerized ceph


Hi,

Assuming the disks were prepared with ceph-disk from within a container that has access to /dev, activating via udev rules could probably be done by prefixing each ceph-disk trigger invocation in https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules with docker run ceph-osd, and then simply installing that udev rule file on the host.
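A host-side rule along those lines could look like the sketch below. The partition type GUID is the one the stock rules match for OSD data partitions; the "ceph-osd" image name and the --privileged / -v /dev:/dev flags are assumptions about how the container is set up:

```
# Sketch of a host-installed 95-ceph-osd.rules fragment: same match as
# the stock rule, but the action is delegated to the ceph-osd container.
# Image name and docker flags are assumptions, not a tested setup.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", RUN+="/usr/bin/docker run --privileged -v /dev:/dev ceph-osd ceph-disk trigger /dev/%k"
```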

The problem is that ceph-disk trigger relies on the init system (upstart or systemd) to run the ceph-disk activation asynchronously. It does that so the udev action lasts no longer than strictly necessary. If systemd can run within the container, even without access to the host's services, that would be enough for the purposes of ceph-disk trigger.

Another approach would be to rely on "docker run" to run ceph-disk activate in the background, keeping the udev action short lived. That would be defeated if the ceph-osd docker image did not already exist on the machine, since docker run would first have to download it, which can take a long time. However, it is probably safe to assume that a machine whose disks were prepared with ceph-disk via the ceph-osd container already has that image; otherwise it would have no disks to activate anyway. The udev actions could then run ceph-disk trigger --sync (which does *not* delegate to the init system).
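As a udev rule, that short-lived action could be sketched as follows, again assuming the ceph-osd image name and docker flags. Wrapping the call in sh -c '... &' returns control to udev immediately, and --rm discards the container when activation fails, while a successful activation keeps its container (and the ceph-osd process) running:

```
# Sketch: run the synchronous activation in the background so the udev
# event completes quickly. --rm cleans up containers whose activation
# failed; a container whose activation succeeded keeps running.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", RUN+="/bin/sh -c '/usr/bin/docker run --rm --privileged -v /dev:/dev ceph-osd ceph-disk trigger --sync /dev/%k &'"
```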

Note that udev rules may fire actions that do not lead to a running ceph-osd. For instance, /dev/sdb may come up carrying the journal while /dev/sdc, which holds the data, has not appeared yet. The activation triggered by the journal will fail, and the container that attempted it must be removed. When /dev/sdc comes up, its activation will succeed because /dev/sdb is already there and the journal can be found. The container that activated successfully must keep running.

Cheers

On 14/03/2016 19:34, Jim Curtis wrote:
> Greeting ceph-devel,
> 
> We are working on a ceph containerization project within Red Hat.  We
> have recently released our RHEL-based ceph container docker image and
> now we are moving on to handling a feature limitation with that image.
> 
> Specifically, the issue is that on our Atomic host, there is no ceph
> installed, so there are no ceph udev rules to trigger dynamic
> configuration of OSDs when a disk is plugged into the host.
> 
> What we would like to do is install our own set of ceph udev rules
> that would trigger the startup of our ceph docker container.  We would
> like to leverage the current implementation of the ceph udev rules to
> do this.
> 
> Also, since ceph-disk and Ceph's udev rules are tightly coupled and
> ceph-disk creates systemd or upstart rules for OSD daemons, does it
> make sense to add hooks in ceph-disk to start up containerized OSD
> daemons in either systemd or upstart?
> 
> Can somebody in this community help us with this?
> 
> Thanks,
> 
> Jim C.

-- 
Loïc Dachary, Artisan Logiciel Libre
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


