Best method to limit snapshot/clone space overhead

Hi all,
I have been looking for a way to reduce the space overhead of RBD snapshots/clones for some time.

In our scenario there are a few “master” volumes that contain production data, and are frequently snapshotted and cloned for dev/qa use. Those snapshots/clones live for a few days to a few weeks before they get dropped, and they sometimes grow very fast (databases, etc.).

With the default 4 MB object size there seems to be a huge amount of space overhead involved. Could someone give me some hints on how to reduce it?

I have some hope in the following two things:

1) FIEMAP
I’ve calculated that the files on my OSDs are roughly 30% NULL bytes, so I suppose that is the best-case saving, and it should also make COW operations much faster.
But there seem to be lots of FIEMAP bugs in both kernels (I saw references to the CentOS 6.5 kernel being buggy, which is what we use) and filesystems (like XFS). I have no idea about ext4, which we’d like to use in the future.

Is enabling FIEMAP a good idea at all? I saw some mention of it being replaced with SEEK_DATA and SEEK_HOLE.
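
If I read the config reference correctly, this comes down to a FileStore OSD option in ceph.conf, something like the lines below (option names as I understand them from the docs, so please correct me if they are wrong; both are off by default):

    [osd]
    # enable FIEMAP-based hole detection in FileStore (disabled by default)
    filestore fiemap = true
    # newer releases apparently offer a SEEK_DATA/SEEK_HOLE based alternative:
    # filestore seek data hole = true

I would of course only enable either one after testing on the exact kernel and filesystem combination we run.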

2) object size < 4MB for clones
I did some quick performance testing, and setting this lower for production is probably not a good idea. My sweet spot is an 8 MB object size; however, that would make the clone overhead even worse than it already is.
But according to the docs I could create the cloned images with a different object size than the snapshot's. Does anyone use it like that? Any caveats? That way I could keep the production data at an 8 MB object size but create the development clones with, for example, 64 KiB granularity, probably at the expense of some performance; most of the data would remain in the (faster) master snapshot anyway. This should cut the overhead tremendously, maybe even more than enabling FIEMAP. (Even better when the two work in tandem, I suppose?)
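
To make it concrete, here is roughly what I have in mind, sketched with the python-rbd bindings (pool and image names are just placeholders, and the sizes are examples; "order" is log2 of the object size, so order 23 = 8 MiB and order 16 = 64 KiB):

    import rados
    import rbd

    # Connect to the cluster; conffile and pool name are placeholders.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    rbd_inst = rbd.RBD()

    # Master volume with 8 MiB objects (order 23), format 2 so it can be cloned.
    rbd_inst.create(ioctx, 'master', 500 * 1024 ** 3, order=23,
                    old_format=False, features=rbd.RBD_FEATURE_LAYERING)

    # Snapshot the master and protect the snapshot so clones can hang off it.
    image = rbd.Image(ioctx, 'master')
    image.create_snap('qa-snap')
    image.protect_snap('qa-snap')
    image.close()

    # Dev/qa clone with 64 KiB objects (order 16): a small write then only
    # copies up 64 KiB from the parent instead of a whole 8 MiB object.
    rbd_inst.clone(ioctx, 'master', 'qa-snap', ioctx, 'dev-clone',
                   features=rbd.RBD_FEATURE_LAYERING, order=16)

    ioctx.close()
    cluster.shutdown()

If that is a sane way to use the API, the only downside I can see is more objects (and thus more OSD metadata) per clone.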

Your thoughts?

Thanks

Jan


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



