Re: How to leverage ceph udev rules for containerized ceph

Hi Loic,

The image is on the registry:
https://access.redhat.com/search/#/container-images?q=ceph&p=1&sort=relevant&rows=12&srch=any&documentKind=ImageRepository

It can be pulled from:

registry.access.redhat.com/rhceph/rhceph-1.3-rhel7
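
For example (assuming docker on the host is already set up to reach
that registry):

  docker pull registry.access.redhat.com/rhceph/rhceph-1.3-rhel7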

Huamin might also have some input on your comments.  I forgot to cc
him on my original message and I'm not sure if he is subscribed to the
list.

Jim C.

On Mon, Mar 14, 2016 at 1:04 PM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
> Hi Jim,
>
> On 14/03/2016 19:34, Jim Curtis wrote:
>> Greetings ceph-devel,
>>
>> We are working on a ceph containerization project within Red Hat.  We
>> have recently released our RHEL-based ceph container docker image and
>> are now moving on to address a feature limitation of that image.
>>
>> Specifically, the issue is that on our Atomic host, there is no ceph
>> installed, so there are no ceph udev rules to trigger dynamic
>> configuration of OSDs when a disk is plugged into the host.
>
> It would be convenient to have a standalone ceph-disk package (native or pypi) that includes udev rules and init scripts.
>
>> What we would like to do is install our own set of ceph udev rules
>> that would trigger the startup of our ceph docker container.  We would
>> like to leverage the current implementation of the ceph udev rules to
>> do this.
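>>
>> Very roughly, we are picturing a rule along these lines, where the
>> partition type GUID is the OSD data GUID the upstream rules key on (if
>> I have it right) and the RUN helper is only a placeholder for a small
>> script that would docker run our container for that device:
>>
>>   # /etc/udev/rules.d/95-ceph-osd-container.rules (sketch only)
>>   # ceph-osd-container-activate is a hypothetical helper, not a shipped tool
>>   ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", RUN+="/usr/sbin/ceph-osd-container-activate /dev/$name"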
>
> Let's say we allocate a new partition type for each existing partition type[1] so that ceph-disk knows it should docker run ceph-osd instead of running the ceph-osd daemon itself. If docker run ceph-osd takes the same set of arguments as the ceph-osd daemon, the only thing to adapt is how the init system handles a container with a name instead of a daemon with a pid. Alternatively, ceph-disk could be instructed to delegate running the ceph-osd daemon to docker instead of going through an init system. The latter would make more sense to me, because the semantics of an init system are not perfectly aligned with the docker run / stop semantics.
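>
> As a sketch of what I mean by delegating (using the image you point to above, assuming it lets you run ceph-osd directly with its usual arguments; the paths, names and $ID placeholder are guesses rather than a tested command):
>
>   # $ID is the OSD id ceph-disk just prepared; -f keeps ceph-osd in the
>   # foreground so docker tracks it as the container's main process
>   docker run -d --name ceph-osd-$ID \
>     -v /etc/ceph:/etc/ceph \
>     -v /var/lib/ceph/osd/ceph-$ID:/var/lib/ceph/osd/ceph-$ID \
>     registry.access.redhat.com/rhceph/rhceph-1.3-rhel7 \
>     ceph-osd -i $ID -f
>
> ceph-disk would then docker stop / docker start the container by name instead of asking an init system to track a pid. I have left device access out of the sketch on purpose, which brings me to the next question.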
>
> What I'm not sure about is how a ceph-osd running within a container can be instructed to use a given device. The only way I know of is to expose /dev with --privileged, which is probably too much. Is --device=/dev/sdb:/dev/sdb:rwm enough? And if we only need to grant access to a partition used for journaling, is it possible to pass just --device=/dev/sdc1:/dev/sdc1?
>
>> Also, since ceph-disk and Ceph's udev rules are tightly coupled and
>> ceph-disk creates systemd units or upstart jobs for OSD daemons, does it
>> make sense to add hooks in ceph-disk to start up containerized OSD
>> daemons under either systemd or upstart?
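>>
>> For instance, instead of enabling an instance of ceph-osd@.service for
>> the new OSD, the hook could enable an instance of a hypothetical
>> template unit along these lines (unit name, paths and invocation are a
>> sketch of the idea, not a tested unit; device access is omitted):
>>
>>   # /etc/systemd/system/ceph-osd-container@.service (sketch only)
>>   [Unit]
>>   Description=Containerized Ceph OSD %i
>>   Requires=docker.service
>>   After=docker.service
>>
>>   [Service]
>>   # remove any stale container left over from a previous run
>>   ExecStartPre=-/usr/bin/docker rm -f ceph-osd-%i
>>   # run the OSD in the foreground so systemd tracks the docker client
>>   ExecStart=/usr/bin/docker run --rm --name ceph-osd-%i -v /etc/ceph:/etc/ceph -v /var/lib/ceph/osd/ceph-%i:/var/lib/ceph/osd/ceph-%i registry.access.redhat.com/rhceph/rhceph-1.3-rhel7 ceph-osd -i %i -f
>>   ExecStop=/usr/bin/docker stop ceph-osd-%i
>>   Restart=on-failure
>>
>>   [Install]
>>   WantedBy=multi-user.target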
>
> Yes, but that would probably not be my first choice.
>
> Could you please provide the URL to the RHEL-based ceph container docker image you released recently?
>
> Cheers
>
>>
>> Can somebody in this community help us with this?
>>
>> Thanks,
>>
>> Jim C.
>
> [1] https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules#L4 etc.
>
>
> --
> Loïc Dachary, Artisan Logiciel Libre


