Re: OSD inside LXC

Thanks for all your answers,

Today people dedicate servers to act as Ceph OSD nodes, which serve the data they store to other dedicated servers that run applications or VMs. Can we think about squashing the two into one?


On 14 Jul 2016 at 18:15, "Daniel Gryniewicz" <dang@xxxxxxxxxx> wrote:
This is fairly standard for container deployment: one app per container instance. This is how we're deploying Docker in our upstream ceph-docker / ceph-ansible as well.
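
For reference, an OSD container in that setup is started along these
lines (a sketch based on the ceph-docker README of the time; the
ceph/daemon image, the OSD_DEVICE variable, and /dev/sdb are examples,
so check the README for your version):

    # one OSD daemon per container, managing one whole disk
    docker run -d --net=host --privileged=true \
      -v /etc/ceph:/etc/ceph \
      -v /var/lib/ceph:/var/lib/ceph \
      -v /dev:/dev \
      -e OSD_DEVICE=/dev/sdb \
      ceph/daemon osd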

Daniel

On 07/13/2016 08:41 PM, Łukasz Jagiełło wrote:
Hi,

Just wondering why you want each OSD inside a separate LXC container.
Just to pin them to specific CPUs?
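
If pinning is the goal, the cpuset knob in the container config already
does that on its own; a minimal sketch, assuming the classic LXC config
format and an example container named osd-0:

    # /var/lib/lxc/osd-0/config
    # pin this OSD container to CPUs 0 and 1
    lxc.cgroup.cpuset.cpus = 0,1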

On Tue, Jul 12, 2016 at 6:33 AM, Guillaume Comte
<guillaume.comte@xxxxxxxxxxxxxxx> wrote:

    Hi,

    I am currently defining a storage architecture based on Ceph, and I
    wish to check that I haven't misunderstood some things.

    So, I plan to deploy one OSD per free hard drive on each server,
    with each OSD inside its own LXC container.
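
    Concretely, I would hand each HDD to its container with something
    like this (a sketch, assuming the classic LXC config syntax;
    /dev/sdb, its major:minor pair 8:16, and the container name osd-0
    are just examples):

        # /var/lib/lxc/osd-0/config -- one container per HDD
        # allow block device 8:16 (/dev/sdb) inside the container
        lxc.cgroup.devices.allow = b 8:16 rwm
        # bind-mount the device node into the container's /dev
        lxc.mount.entry = /dev/sdb dev/sdb none bind,optional,create=file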

    Then, I wish to use the server itself as an RBD client for objects
    created in the pools. I also wish to have an SSD to enable caching
    (and to store the OSD logs as well).
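
    Concretely, for the SSD I am thinking of a cache tier in front of
    the RBD pool, along these lines (a sketch; the pool names and PG
    count are placeholders):

        ceph osd pool create cache 128
        ceph osd tier add rbd cache
        ceph osd tier cache-mode cache writeback
        ceph osd tier set-overlay rbd cache

    and probably the OSD journals on the same SSD too, e.g. via
    ceph-disk prepare /dev/sdb /dev/sdc (data on the HDD, journal on
    the SSD; the device names are examples).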

    The idea behind this is to create CRUSH rules which keep a set of
    objects within a couple of servers connected to the same pair of
    switches, in order to have the best proximity between where I store
    the objects and where I use them (I don't need a very strong
    guarantee against losing data if my whole rack powers down).
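
    What I have in mind is a CRUSH rule of roughly this shape (a
    sketch, assuming a rack bucket named rack1 already exists in my
    CRUSH map):

        rule rack_local {
                ruleset 1
                type replicated
                min_size 1
                max_size 10
                step take rack1                      # all replicas stay under this rack
                step chooseleaf firstn 0 type host   # spread over distinct hosts in it
                step emit
        }

    applied with ceph osd pool set <pool> crush_ruleset 1, so replicas
    of an object never leave one rack (or one pair of switches, with a
    custom bucket type).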

    Am I already on the wrong track? Is there a way to guarantee data
    proximity with Ceph without resorting to the twisted configuration
    I am ready to build?

    Thks in advance,

    Regards
    --
    *Guillaume Comte*
    06 25 85 02 02 | guillaume.comte@xxxxxxxxxxxxxxx
    90 avenue des Ternes, 75 017 Paris






--
Łukasz Jagiełło
lukasz<at>jagiello<dot>org




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
