Re: Running ceph in docker

Hi,

You could actually manage every OSD, mon and MDS through Docker Swarm. Since it is all just software, it makes sense to deploy it through Docker, where you attach the disk each daemon needs.
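As a rough sketch of what that looks like with the ceph/daemon image (the IP addresses, network, and device path below are illustrative assumptions, not a tested recipe for your cluster):

```shell
# Hypothetical sketch: one mon and one OSD via the ceph/daemon image.
# Adjust MON_IP, CEPH_PUBLIC_NETWORK and OSD_DEVICE for your environment.
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

docker run -d --net=host --privileged \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
```

The bind mounts keep the cluster config and daemon state on the host, and the OSD container needs --privileged plus /dev so it can touch the raw block device.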

The mons do not need permanent storage either. It is not that restarting the Docker instance would remove the data, but rather that removing the container would.

Updates are easy as well: download the latest Docker image and rebuild the OSD/mon.
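An update would then look something like the following sketch (the container name and device path are placeholders, and this assumes the ceph/daemon image with state bind-mounted from the host):

```shell
# Pull the newer image, then recreate the daemon container from it.
docker pull ceph/daemon:latest
docker stop ceph-osd-sdb
docker rm ceph-osd-sdb
docker run -d --name ceph-osd-sdb --net=host --privileged \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
# The OSD's data lives on /dev/sdb and in the bind-mounted host
# directories, so it survives the container being replaced.
```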

All in all, I believe this gives the sysop exactly one control plane to manage the whole environment.

Regards,
Josef


On Thu, 30 Jun 2016, 15:16 xiaoxi chen, <superdebugger@xxxxxxxxxxx> wrote:
It makes sense to me to run the MDS inside Docker or k8s, as the MDS is stateless.
But the mon and OSD do keep data locally, so what is the motivation to run them in Docker?

> To: ceph-users@xxxxxxxxxxxxxx
> From: dang@xxxxxxxxxx
> Date: Thu, 30 Jun 2016 08:36:45 -0400
> Subject: Re: Running ceph in docker

>
> On 06/30/2016 02:05 AM, F21 wrote:
> > Hey all,
> >
> > I am interested in running ceph in docker containers. This is extremely
> > attractive given the recent integration of swarm into the docker engine,
> > making it really easy to set up a docker cluster.
> >
> > When running ceph in docker, should monitors, radosgw and OSDs all be on
> > separate physical nodes? I watched Sebastian's video on setting up ceph
> > in docker here: https://www.youtube.com/watch?v=FUSTjTBA8f8. In the
> > video, there were 6 OSDs, with 2 OSDs running on each node.
> >
> > Is running multiple OSDs on the same node a good idea in production? Has
> > anyone operated ceph in docker containers in production? Are there any
> > things I should watch out for?
> >
> > Cheers,
> > Francis
>
> It's actually quite common to run multiple OSDs on the same physical
> node, since an OSD currently maps to a single block device. Depending
> on your load and traffic, it's usually a good idea to run monitors and
> RGWs on separate nodes.
>
> Daniel
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
