Re: Best method to limit snapshot/clone space overhead

Hello,

If I understand correctly, you want to look at how many “guest filesystem block size” blocks are empty (all zeroes)?
This might not be very precise because we do not discard blocks inside the guests, but if you tell me how to gather this, I can certainly try it; I’m not sure my bash-fu is up to the task.
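For reference, here is a rough sketch (untested) of one way such numbers could be gathered, driving only the standard rbd/rados CLI from Python. The pool and image names and the 4KiB chunk size are placeholders, and pulling every object down will be slow on large clones:

#!/usr/bin/env python3
# Rough sketch (untested): estimate how much of a clone's existing RADOS
# objects is zero-filled, counted in CHUNK-sized pieces. Pool/image names
# are placeholders; downloading every object is slow on big images.
import json
import subprocess
import tempfile

POOL = "rbd"            # placeholder: pool holding the clone
IMAGE = "clone-image"   # placeholder: clone image to inspect
CHUNK = 4096            # "guest filesystem block size" to test for zeros

# block_name_prefix identifies this image's RADOS objects
info = json.loads(subprocess.check_output(
    ["rbd", "info", "--format", "json", "%s/%s" % (POOL, IMAGE)]))
prefix = info["block_name_prefix"]

# only objects that actually exist (i.e. were copied up / written) are listed
objects = [o for o in subprocess.check_output(
    ["rados", "-p", POOL, "ls"]).decode().splitlines()
    if o.startswith(prefix)]

zero_chunks = total_chunks = 0
for obj in objects:
    with tempfile.NamedTemporaryFile() as tmp:
        subprocess.check_call(["rados", "-p", POOL, "get", obj, tmp.name])
        data = open(tmp.name, "rb").read()
    for off in range(0, len(data), CHUNK):
        piece = data[off:off + CHUNK]
        total_chunks += 1
        if piece.count(0) == len(piece):
            zero_chunks += 1

if total_chunks:
    print("%d objects, %d chunks, %d all-zero chunks (%.1f%% waste)"
          % (len(objects), total_chunks, zero_chunks,
             100.0 * zero_chunks / total_chunks))
else:
    print("no objects found for prefix %s" % prefix)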

Anyway, if I understand how COW works with Ceph clones, a single 1-byte write inside a clone image causes the whole backing object to be copied up, so that 1-byte write eats 4MB of space. I’m not sure whether FIEMAP actually creates sparse files when the source is “empty” or just doesn’t bother reading the holes.
The same granularity probably applies even without clones, though. If I remember correctly, running “mkfs.ext4”, which writes here and there across a volume, cost me ~200GB of space on a 2500GB volume (at least according to the stats). I’m not sure how much data it actually writes from the guest’s perspective (5GB? 20GB? Something like that, spread over the volume).
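Just as a back-of-the-envelope check on those numbers (all figures illustrative guesses, not measurements):

# Back-of-the-envelope check (illustrative figures, not measurements):
object_size = 4 * 2**20            # default 4MB RBD object size
apparent_cost = 200 * 2**30        # ~200GB that mkfs.ext4 appeared to consume
objects_touched = apparent_cost // object_size
print(objects_touched)             # ~51200 objects fully allocated

guest_written = 10 * 2**30         # rough guess at what mkfs really wrote
print(100.0 * guest_written / apparent_cost)  # ~5% of the allocated space is real data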

Thanks
Jan


> On 24 Jul 2015, at 17:55, Jason Dillaman <dillaman@xxxxxxxxxx> wrote:
> 
>> Hi all,
>> I have been looking for a way to alleviate the overhead of RBD snapshots/clones
>> for some time now.
>> 
>> In our scenario there are a few “master” volumes that contain production
>> data, and are frequently snapshotted and cloned for dev/qa use. Those
>> snapshots/clones live for a few days to a few weeks before they get dropped,
>> and they sometimes grow very fast (databases, etc.).
>> 
>> With the default 4MB object size there seems to be huge overhead involved
>> with this, could someone give me some hints on how to solve that?
>> 
> 
> Do you have any statistics (or can you gather any statistics) that indicate the percentage of block-sized, zeroed extents within the clone images' RADOS objects?  If there is a large amount of waste, it might be possible / worthwhile to optimize how RBD handles copy-on-write operations against the clone.
> 
> -- 
> 
> Jason Dillaman 
> Red Hat 
> dillaman@xxxxxxxxxx 
> http://www.redhat.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



