Re: cephfs manila snapshots best practices

On 22/03/19 16:16 +0100, Paul Emmerich wrote:
On Fri, Mar 22, 2019 at 4:05 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

Perfect, thanks for that input.
So, in your experience, the async snaptrim mechanism isn't as transparent as scrubbing etc.?
On the rbd side, the impact of snaptrim seems invisible to us.

Snaptrim works great (with the snap trim sleep option set to a small
non-zero value instead of the default of 0).
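
For example, something along these lines (the 0.1s value is only an
illustrative starting point, and the exact syntax depends on your release):

  # persist a small trim sleep via the config database (Mimic+)
  ceph config set osd osd_snap_trim_sleep 0.1
  # or apply it to the running OSDs immediately
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'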

The problem is just that users tend to do crazy things that lead to
slowness for everyone: you can end up with far more snapshots of far more
small objects than you initially expected or sized the cluster for, and
the overhead and system load grow accordingly....

(But I'm biased towards seeing clusters that have a problem, as we
offer "fix up your cluster" consulting services...)


Paul


Not arguing with any of this, but I'll observe that if you are already using Manila to manage CephFS shares, then Manila enforces its own quota system on both shares and snapshots, and snapshots are only taken at the granularity of a whole share. Manila's notion of a share's size is nominal, in that it is enforced by a CephFS quota, so if there is very little data in a share the snapshot will indeed also be small. But at least you can limit the number of snapshots a user produces, and users cannot target anything smaller than their whole share.
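
For example, something like this should cap a project's snapshot usage
(flag names as I remember them from the manila CLI, so double-check
against your release):

  # allow at most 20 snapshots / 500 GiB of snapshot data for a project
  manila quota-update <project_id> --snapshots 20 --snapshot-gigabytes 500
  manila quota-show --tenant <project_id>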

-- Tom


-- dan


On Fri, Mar 22, 2019 at 3:42 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:

I wouldn't give users the ability to perform snapshots directly on
Ceph unless you have full control over your users or fully trust them.
Too easy to ruin your day by creating lots of small files and lots of
snapshots that will wreck your performance...
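
(For reference, the difference is just the 's' flag in the client's MDS
caps, e.g. something like the following, with the client name and path
purely illustrative:

  # read/write only, no snapshot creation
  ceph fs authorize cephfs client.share1 /volumes/_nogroup/share1 rw
  # same, but also allowed to create/delete snapshots ('s' flag)
  ceph fs authorize cephfs client.share1 /volumes/_nogroup/share1 rws
)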

Also, snapshots aren't really counted against their quota by CephFS :/


Paul
On Wed, Mar 20, 2019 at 4:34 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> Hi all,
>
> We're currently upgrading our cephfs (managed by OpenStack Manila)
> clusters to Mimic, and want to start enabling snapshots of the file
> shares.
> There are different ways to approach this, and I hope someone can
> share their experiences with:
>
> 1. Do you give users the 's' flag in their cap, so that they can
> create snapshots themselves? We're currently planning *not* to do this
> -- we'll create snapshots for the users.
> 2. We want to create periodic snaps for all cephfs volumes. I can see
> pros/cons to creating the snapshots in /volumes/.snap or in
> /volumes/_nogroup/<uuid>/.snap. Any experience there? Or maybe even
> just an fs-wide snap in /.snap is the best approach ?
> 3. I found this simple cephfs-snap script which should do the job:
> http://images.45drives.com/ceph/cephfs/cephfs-snap  Does anyone have a
> different recommendation?
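> (As far as I can tell it essentially just does a timestamped mkdir in the
> share's .snap directory and prunes old ones, i.e. roughly the following,
> with the mount point and paths purely illustrative:
>
>   # create a dated snapshot for one share
>   mkdir /mnt/cephfs/volumes/_nogroup/<uuid>/.snap/$(date +%Y-%m-%d)
>   # deleting a snapshot is just an rmdir of that entry
>   rmdir /mnt/cephfs/volumes/_nogroup/<uuid>/.snap/<old-snapshot>
> )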
>
> Thanks!
>
> Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


