On Fri, Jul 24, 2015 at 11:55 PM, Jason Dillaman <dillaman@xxxxxxxxxx> wrote:
>> Hi all,
>>
>> I have been looking for a way to alleviate the overhead of RBD
>> snapshots/clones for some time.
>>
>> In our scenario there are a few "master" volumes that contain production
>> data and are frequently snapshotted and cloned for dev/QA use. Those
>> snapshots/clones live for a few days to a few weeks before they are
>> dropped, and they sometimes grow very quickly (databases, etc.).
>>
>> With the default 4 MB object size there seems to be huge overhead
>> involved with this; could someone give me some hints on how to solve
>> that?
>
> Do you have any statistics (or can you gather any) that indicate the
> percentage of block-sized, zeroed extents within the clone images' RADOS
> objects? If there is a large amount of waste, it might be possible and
> worthwhile to optimize how RBD handles copy-on-write operations against
> the clone.

I think fiemap/seek_hole would mostly benefit RBD objects after recovery
or backfill.

> --
> Jason Dillaman
> Red Hat
> dillaman@xxxxxxxxxx
> http://www.redhat.com

--
Best Regards,
Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
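For reference, one way to gather the zeroed-extent statistic asked about above: dump a clone's backing objects to files (e.g. with `rados -p <pool> get <object> <file>`) and count the zero-filled blocks in each dump. A minimal sketch; the helper name and the 4 KB block size are my own assumptions, not anything from the thread:

```python
def zeroed_extent_ratio(data: bytes, block_size: int = 4096) -> float:
    """Fraction of block_size-aligned extents in `data` that are entirely zero."""
    if not data:
        # Treating an empty dump as fully sparse is an assumption here.
        return 1.0
    # Split the object into fixed-size extents (the last one may be short).
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # any(b) is True if the extent contains at least one nonzero byte.
    zeroed = sum(1 for b in blocks if not any(b))
    return zeroed / len(blocks)


# Example: check a dumped object file for wasted, zero-filled space.
# with open("rbd_data.obj.dump", "rb") as f:
#     print(f"{zeroed_extent_ratio(f.read()):.1%} of extents are zeroed")
```

Running this across all objects of a clone image would give a rough measure of how much copy-on-write waste the 4 MB object size introduces.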