Re: Tips for faster openstack instance boot

If your Glance configuration includes the following, RBD images will be cached to disk on the API server:

[paste_deploy]
flavor = keystone+cachemanagement
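
That "cachemanagement" flavor keeps a local copy of images under
/var/lib/glance/image-cache/ on the API node, which [1] recommends disabling
for an RBD-backed store. As a rough sketch, the non-caching glance-api.conf
from [1] looks like the following (the pool and user names are the
documentation defaults -- substitute your own, e.g. your "glance" pool -- and
section names vary slightly between releases, so follow [1] for Juno):

[DEFAULT]
# needed so Nova/Cinder can do copy-on-write clones of Glance images
show_image_direct_url = True

[paste_deploy]
# drop "cachemanagement" so images are served straight from RBD
flavor = keystone

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8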

See [1] for the configuration steps for Glance.

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance

-- 

Jason Dillaman 

----- Original Message ----- 

> From: "Vickey Singh" <vickey.singh22693@xxxxxxxxx>
> To: "Jason Dillaman" <dillaman@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx, "ceph-users" <ceph-users@xxxxxxxx>
> Sent: Tuesday, February 9, 2016 11:11:31 AM
> Subject: Re:  Tips for faster openstack instance boot

> Guys, thanks a lot for your responses.

> We are running OpenStack Juno + Ceph 0.94.5 (Hammer).

> @Jason Dillaman: Can you please explain what you mean by "Glance is
> configured to cache your RBD image"? This might give me some clue.

> Many Thanks.

> On Mon, Feb 8, 2016 at 10:33 PM, Jason Dillaman <dillaman@xxxxxxxxxx>
> wrote:

> > If Nova and Glance are properly configured, it should only require a quick
> > clone of the Glance image to create your Nova ephemeral image. Have you
> > double-checked your configuration against the documentation [1]? What
> > version of OpenStack are you using?
> 
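> > A quick way to verify is to check whether the instance disk actually has
> > the Glance image as its RBD parent (the IDs below are placeholders):
> 
> >   rbd -p nova ls -l                        # PARENT column should show glance/<image-id>@snap
> >   rbd info nova/<instance-id>_disk | grep parent
> 
> > If the instance disk has no parent, Nova downloaded and re-imported the
> > image instead of cloning it, which would explain a slow launch.
> 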

> > To answer your questions:
> 

> > > - From the Ceph point of view, does COW work across pools, i.e. image from
> > > glance pool ---> (cow) --> instance disk on nova pool?
> 
> > Yes, cloning copy-on-write images works across pools
> 
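> > For example, under the hood it is just an RBD clone of the protected
> > snapshot that the Glance RBD store keeps on each image -- roughly the
> > equivalent of (pool names as in your setup; the IDs are placeholders):
> 
> >   # Glance's RBD store keeps a protected snapshot, normally named "snap"
> >   rbd snap ls glance/<image-id>
> >   # Nova's RBD backend clones that snapshot cross-pool into its own pool
> >   rbd clone glance/<image-id>@snap nova/<instance-id>_disk
> >   # the child only records its parent; no image data is copied up front
> >   rbd info nova/<instance-id>_disk
> 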

> > > - Will a single pool for glance and nova, instead of separate pools, help
> > > here?
> 
> > Should be no change -- the creation of the clone is extremely lightweight
> > (add the image to a directory, create a couple metadata objects)
> 

> > > - Is there any tunable parameter on the Ceph or OpenStack side that should
> > > be set?
> 
> > I'd double-check your OpenStack configuration. Perhaps Glance isn't
> > configured with "show_image_direct_url = True", or Glance is configured to
> > cache your RBD images, or you have an older OpenStack release that requires
> > patches to fully support Nova+RBD.
> 
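> > For reference, the Nova side from [1] boils down to something like the
> > following in nova.conf (option names per the Juno-era guide; the pool,
> > user, and secret UUID are examples for your deployment):
> 
> >   [libvirt]
> >   images_type = rbd
> >   images_rbd_pool = vms
> >   images_rbd_ceph_conf = /etc/ceph/ceph.conf
> >   rbd_user = cinder
> >   rbd_secret_uuid = <your-libvirt-secret-uuid>
> >   disk_cachemodes = "network=writeback"
> 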

> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> 

> > --
> 

> > Jason Dillaman
> 

> > ----- Original Message -----
> 

> > > From: "Vickey Singh" <vickey.singh22693@xxxxxxxxx>
> 
> > > To: ceph-users@xxxxxxxxxxxxxx, "ceph-users" <ceph-users@xxxxxxxx>
> 
> > > Sent: Monday, February 8, 2016 9:10:59 AM
> 
> > > Subject:  Tips for faster openstack instance boot
> 

> > > Hello Community
> 

> > > I need some guidance on how I can reduce OpenStack instance boot time
> > > using Ceph.
> 

> > > We are using Ceph storage with OpenStack (Cinder, Glance, and Nova). All
> > > OpenStack images and instances are stored on Ceph in different pools,
> > > the glance and nova pools respectively.
> 

> > > I assume that Ceph by default uses COW RBD, so for example if an instance
> > > is launched using a Glance image (which is stored on Ceph), Ceph should
> > > take a COW snapshot of the Glance image and map it as the RBD disk for the
> > > instance. And this whole process should be very quick.
> 

> > > In our case, the instance launch is taking 90 seconds. Is this normal?
> > > (I know this really depends on one's infra, but still.)
> 

> > > Is there any way I can utilize Ceph's power and launch instances even
> > > faster?
> 

> > > - From the Ceph point of view, does COW work across pools, i.e. image from
> > > glance pool ---> (cow) --> instance disk on nova pool?
> 
> > > - Will a single pool for glance and nova, instead of separate pools, help
> > > here?
> 
> > > - Is there any tunable parameter on the Ceph or OpenStack side that should
> > > be set?
> 

> > > Regards
> 
> > > Vickey
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


