Re: Using Ceph and CloudStack? Let us know!

On 05/16/2013 05:57 PM, Dewan Shamsul Alam wrote:
I don't think 4.2 is coming out anytime soon. Right now all 4.1 builds
are stable except for the systemvm. Have a look at their Jenkins server,
http://jenkins.cloudstack.org/. 4.2 is not even listed there. :(


4.2 is scheduled for July. It currently lives in the master branch. The feature freeze is May 31st and then there will be a cut-off.

Wido

Best Regards,
Dewan Shamsul Alam



On Thu, May 16, 2013 at 9:04 PM, Wido den Hollander <wido@xxxxxxxx> wrote:

    On 05/16/2013 04:34 PM, Dewan Shamsul Alam wrote:

        Hi,

        I will be deploying CloudStack and will use Ceph. Too bad CloudStack
        requires NFS as the primary storage for SystemVMs, so I have to use
        DRBD+NFS for that. The setup is as follows:


    Wait for CloudStack 4.2 :) As we speak I'm working on the new
    features for CloudStack 4.2, which will bring:
    - Cloning (Layering) of templates (see the rbd example below the list)
    - Snapshotting
    - Running SystemVMs off RBD
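
    The first two map onto RBD's standard snapshot and layering commands;
    a minimal illustration (pool and image names are just placeholders):

        # snapshot a template image and create a copy-on-write clone of it
        # (the parent has to be a format 2 image for layering to work)
        rbd snap create rbd/template-centos@base
        rbd snap protect rbd/template-centos@base
        rbd clone rbd/template-centos@base rbd/vm-12-root

        # a plain snapshot of an existing volume
        rbd snap create rbd/vm-12-root@before-upgrade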

    The last item was made possible by removing the so-called "patch disk"
    in CloudStack.

    When a SystemVM boots up it requires metadata, like what its IP address
    should be, where the management server can be found, and so on. We used
    to generate a file (yes, a file) on the Primary Storage which was
    attached as an extra disk. As you can guess, RBD images are not files,
    and the Bash script which did this didn't understand RBD.
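
    Roughly, a patch-disk approach looks like this. Purely illustrative (not
    the actual CloudStack script); the image path, domain name and metadata
    keys are made up:

        # write the metadata to a file and wrap it in a tiny disk image
        printf 'eth0ip=10.1.1.10\nmgmt_host=10.1.1.2\n' > cmdline
        dd if=/dev/zero of=/mnt/primary/s-1-VM-patch.img bs=1M count=4
        mkfs.ext2 -F /mnt/primary/s-1-VM-patch.img
        mkdir -p /tmp/patch
        mount -o loop /mnt/primary/s-1-VM-patch.img /tmp/patch
        cp cmdline /tmp/patch/ && umount /tmp/patch

        # attach it to the SystemVM as an extra disk; this only works when
        # the primary storage is a filesystem (NFS), since an RBD image is
        # not a file
        virsh attach-disk s-1-VM /mnt/primary/s-1-VM-patch.img vdb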

    The new way is that we open a VirtIO serial console to the SystemVM on
    the hypervisor where it is running, and over that serial console the
    SystemVM gets its metadata.
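
    In libvirt terms this is a virtio-serial channel. A rough sketch of the
    idea; the channel name, socket path and metadata keys below are made up
    and not what CloudStack actually uses:

        # hypervisor side: the guest's libvirt XML carries a virtio-serial
        # channel backed by a UNIX socket, e.g.
        #   <channel type='unix'>
        #     <source mode='bind' path='/var/lib/libvirt/qemu/s-1-VM.agent'/>
        #     <target type='virtio' name='metadata.port'/>
        #   </channel>
        # the agent then writes the key=value metadata into that socket:
        printf 'eth0ip=10.1.1.10\nmgmt_host=10.1.1.2\n' | \
            socat - UNIX-CONNECT:/var/lib/libvirt/qemu/s-1-VM.agent

        # guest side: the channel shows up as a character device, so the
        # SystemVM can read its metadata without any extra disk
        cat /dev/virtio-ports/metadata.port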

    This way we can deploy System VMs from Ceph without the need for NFS
    or anything else.

    --
    Wido den Hollander
    42on B.V.

    Phone: +31 (0)20 700 9902
    Skype: contact42on


        3-node Ceph cluster [Bobtail] - planning to upgrade to Cuttlefish
        after trying this.
        2 nodes for DRBD+NFS (a minimal sketch follows this list)
        1 VM for the management system - I'm not worried about high
        availability of this node at this point.
        3 compute nodes
        All commodity hardware backed by a gigabit switch
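
        The DRBD+NFS piece is roughly this (resource name r0, /dev/drbd0 and
        /export/primary are just example names; a brand-new resource also
        has to be initialised and forced primary once):

            # on the currently active node
            drbdadm up r0
            drbdadm primary r0
            mount /dev/drbd0 /export/primary

            # export the share so CloudStack can use it as primary storage
            echo '/export/primary 10.0.0.0/24(rw,async,no_root_squash)' >> /etc/exports
            exportfs -a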

        I will have this setup running after next week. I will let you guys
        know how it goes.

        Best Regards,
        Dewan Shamsul Alam




        On Thu, May 16, 2013 at 8:02 PM, Patrick McGarry
        <patrick@xxxxxxxxxxx> wrote:

             Greetings ceph-ers,

             As you may have noticed lately, there has been a lot of talk
             about Ceph and OpenStack. While we love all of the excitement
             that this has generated, we want to make sure that other cloud
             setups aren't getting neglected or ignored. CloudStack, for
             instance, also has a great Ceph integration thanks to some
             enterprising work from Wido at 42on.com.


             So, are you using CloudStack and Ceph? If so, we'd love to hear
             from you. Whether it's just a quiet note for our eyes only, or
             whether you have a story to share with the world, we'd love to
             know. Of course, we'd love to hear about anything you're working
             on. So, if you have notes to share about Ceph with other cloud
             flavors, massive storage clusters, or custom work, we'd treasure
             them appropriately.

             Feel free to just reply to this email, send a message to
             community@xxxxxxxxxxx, message 'scuttlemonkey' on irc.oftc.net,
             or tie a note to our ip-over-carrier-pigeon network. Thanks, and
             happy Ceph-ing.


             Best Regards,

             Patrick McGarry
             Director, Community || Inktank

             http://ceph.com || http://inktank.com
             @scuttlemonkey || @ceph || @inktank




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



