Re: Snapshot Costs

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Simon Leinen
> Sent: 19 March 2017 17:23
> To: Gregory Farnum <gfarnum@xxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Snapshot Costs
> 
> Gregory Farnum writes:
> > On Tue, Mar 7, 2017 at 12:43 PM, Kent Borg <kentborg@xxxxxxxx> wrote:
> >> I would love it if someone could toss out some examples of the sorts
> >> of things snapshots are good for and the sorts of things they are
> >> terrible for.  (And some hints as to why, please.)
> 
> > They're good for CephFS snapshots. They're good at RBD snapshots as
> > long as you don't take them too frequently.
> 
> We take snapshots of about thirty 2-TB RBD images (Ceph Cinder volumes)
> every night.  We keep about 60 of each around.  Does that still fall under
> "reasonable"?

Yeah, snap removal sucks. If you are running Hammer, you can use the "snap
trim sleep" OSD option to throttle removal. Don't use it in Jewel: there it
makes the main IO thread sleep, which slows everything down. This has been
fixed in Luminous, but I'm not sure whether it will be backported to Jewel.

The best thing to try in Jewel is the Linux 4.10 kernel on your OSD nodes;
it has new block-layer writeback throttling which seems to help. Alternatively,
set nr_requests on your OSD disks to somewhere between 4 and 8, although this
will reduce write throughput under heavy IO.
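For reference, a rough sketch of how those two knobs could be applied. The
0.5 second value and the device name /dev/sdb are just illustrative, not
recommendations for your cluster:

```shell
# Hammer only: inject a pause (in seconds) between snap trim operations
# on all running OSDs. 0.5 is an illustrative value; tune to taste.
ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.5'

# To persist it across OSD restarts, add it to ceph.conf:
# [osd]
#     osd snap trim sleep = 0.5

# Jewel with kernel 4.10: shrink the block queue depth instead.
# sdb is an assumed OSD data disk; repeat for each OSD device.
echo 8 > /sys/block/sdb/queue/nr_requests
```

Note the injectargs change is lost on OSD restart unless it also goes into
ceph.conf.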

> 
> One round of snapshots is deleted every night; that causes significant
> load on our cluster - currently Hammer, will be upgraded to Jewel soon.
> our cluster - currently Hammer, will be upgraded to Jewel soon.
> 
> Most of the volumes (and thus snapshots) don't have the "object-map"
> feature enabled yet; maybe after the Jewel upgrade we can add object-maps
> to them to reduce the cost of deleting the snapshots.
> 
> Do object-maps help with snap trimming, or am I overly optimistic?
> --
> Simon.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
