Re: Thick provisioning

I concur - at the moment we have to manually sum the sizes of our RBD images to see how much we have "provisioned" versus what ceph df shows.  In our case we had a rapid run of provisioning new LUNs, but it took a while before usage caught up with what was provisioned as data was migrated in.  ceph df would show only, say, 20% of a pool used, while the actual RBD allocation was nearer 80%+.

I am not sure if it's workable, but a pool-level metric tracking the total allocation of RBD images would be useful.  I imagine it gets tricky with snapshots/clones, though.
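
For anyone who wants to do the same summing themselves, below is a rough, untested sketch of the kind of script we run, using the python-rados / python-rbd bindings; the pool name and config path are placeholders for your own environment. It adds up the provisioned (virtual) size of every image in a pool and compares that against the raw cluster capacity - bear in mind the raw figure is before replication, so you would normally divide it by your pool's replica count before comparing.

import rados
import rbd

POOL = 'rbd'                    # placeholder pool name
CONF = '/etc/ceph/ceph.conf'    # placeholder config path

cluster = rados.Rados(conffile=CONF)
cluster.connect()
try:
    stats = cluster.get_cluster_stats()      # raw kb / kb_used / kb_avail
    raw_bytes = stats['kb'] * 1024

    ioctx = cluster.open_ioctx(POOL)
    try:
        provisioned = 0
        for name in rbd.RBD().list(ioctx):
            image = rbd.Image(ioctx, name, read_only=True)
            try:
                provisioned += image.size()  # provisioned (virtual) size in bytes
            finally:
                image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

# raw_bytes is before replication; divide by the pool's replica count
# (e.g. 3) to get a usable figure before comparing.
print("provisioned in pool %s : %.1f GiB" % (POOL, provisioned / 2.0 ** 30))
print("raw cluster capacity   : %.1f GiB" % (raw_bytes / 2.0 ** 30))

Run something like that per pool that holds RBD images and alert on the ratio.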


> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> sinan@xxxxxxxx
> Sent: Thursday, 19 October 2017 6:41 AM
> To: Samuel Soulard <samuel.soulard@xxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Thick provisioning
>
> Hi all,
>
> Thanks for the replies.
>
> The main reason I was looking for a thin/thick provisioning setting is
> that I want to be sure that the provisioned space cannot exceed the
> cluster capacity.
>
> With thin provisioning there is a risk that more space is provisioned than
> the cluster capacity. If you monitor the real usage closely this should not
> be a problem, but in my experience, when there is no hard limit,
> overprovisioning will happen at some point.
>
> Sinan
>
> > I can only speak for some environments, but sometimes, you would want
> > to make sure that a cluster cannot fill up until you can add more capacity.
> >
> > Some organizations are unable to purchase new capacity rapidly; if you
> > make sure you cannot exceed your current capacity, you can't run into
> > problems.
> >
> > It may also come from an understanding that thick provisioning will
> > provide better performance initially, as in virtual machine environments.
> >
> > Having said all of this, isn't there a way to make sure the cluster can
> > accommodate the size of all RBD images that are created, and ensure they
> > have the space available? Some services' availability might depend on
> > making sure the storage can provide the necessary capacity.
> >
> > I'm assuming this all comes with the understanding that it is more costly
> > to run this type of environment; however, you can also guarantee that you
> > will never unexpectedly fill up your cluster.
> >
> > Sam
> >
> > On Oct 18, 2017 02:20, "Wido den Hollander" <wido@xxxxxxxx> wrote:
> >
> >
> >> On 17 October 2017 at 19:38, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> >>
> >>
> >> There is no existing option to thick provision images within RBD.
> >> When an image is created or cloned, the only actions that occur are
> >> some small metadata updates to describe the image. This allows image
> >> creation to be a quick, constant time operation regardless of the
> >> image size. To thick provision the entire image would require writing
> >> data to the entire image and ensuring discard support is disabled to
> >> prevent the OS from releasing space back (and thus re-sparsifying the
> >> image).
> >>
> >
> > Indeed. It makes me wonder why anybody would want it. It will:
> >
> > - Impact recovery performance
> > - Impact scrubbing performance
> > - Utilize more space than needed
> >
> > Why would you want to do this Sinan?
> >
> > Wido
> >
> >> On Mon, Oct 16, 2017 at 10:49 AM,  <sinan@xxxxxxxx> wrote:
> >> > Hi,
> >> >
> >> > I have deployed a Ceph cluster (Jewel). By default, all block devices
> >> > that are created are thin provisioned.
> >> >
> >> > Is it possible to change this setting? I would like all created block
> >> > devices to be thick provisioned.
> >> >
> >> > In front of the Ceph cluster, I am running Openstack.
> >> >
> >> > Thanks!
> >> >
> >> > Sinan
> >> >
> >>
> >>
> >>
> >> --
> >> Jason
> >
>
>
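
On Jason's point above: if someone really did need to emulate thick provisioning today, the only route I can see is to pre-write the whole image after creating it, and leave discard disabled on the client so the guest cannot re-sparsify it. A rough, untested sketch with the python-rbd bindings might look like the below - the pool and image names are made up, and in practice you may want to write non-zero data in case anything along the path special-cases zero writes.

import rados
import rbd

POOL, IMAGE = 'rbd', 'thick-vol'   # placeholder pool/image names
CHUNK = 4 * 1024 * 1024            # write in 4 MiB chunks

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        image = rbd.Image(ioctx, IMAGE)
        try:
            size = image.size()
            buf = b'\x00' * CHUNK
            offset = 0
            while offset < size:
                length = min(CHUNK, size - offset)
                # writing across the whole image forces the backing RADOS
                # objects to be created, so the image takes its full size
                image.write(buf[:length], offset)
                offset += length
            image.flush()
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

Obviously that turns image creation back into an operation proportional to the image size and carries the recovery/scrubbing overhead Wido mentions, so we have not done it ourselves.
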
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


