Snapshot size and cluster usage

We have two Ceph (9.2.1) clusters, where one sends snapshots of its
pools to the other for backup purposes.

The snapshots themselves are fine; however, the Ceph pool on the
receiving side gets blown up by far more data than the snapshots contain.

Here's the size of a snapshot and the resulting cluster usage
afterwards: the snapshot diff is ~2 GB, but the cluster usage increases
by ~300 GB (every night).


# rbd diff --from-snap 20161017-010003 pool2/image@20161018-010005 \
    --format plain | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
2.29738 GB
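
One thing worth checking: rbd stores images in 4 MB objects by default,
and writing to an object that is covered by a snapshot makes RADOS keep
a clone of that object, so a small logical diff can pin much more raw
space. Measuring the same diff at object granularity should show whether
that accounts for the ~300 GB; a sketch, assuming the --whole-object
flag is available in the 9.2.1 rbd client:

# rbd diff --from-snap 20161017-010003 pool2/image@20161018-010005 \
    --whole-object --format plain \
    | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'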

--- before snap ---

GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    4947G     2953G        1993G         40.29

--- after snap ---

GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    4947G     2627G        2319G         46.88
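
To narrow down where the ~300 GB lands, the per-pool numbers before and
after the nightly import can be compared; ceph df detail and rados df
should both be available in 9.2.1, and the CLONES column of rados df
shows how many clone objects the snapshots are holding. Note that RAW
USED counts all replicas, so the logical growth is the RAW growth
divided by the pool's replication size. A sketch:

# ceph df detail > /tmp/df.before
# rados df       > /tmp/rados.before
# (run the nightly snapshot transfer here)
# ceph df detail > /tmp/df.after
# rados df       > /tmp/rados.after
# diff /tmp/rados.before /tmp/rados.after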

The originating cluster behaves correctly. (It increases a bit more than
the diff due to other pools and images.)

# rbd diff --from-snap 20161017-010003 pool1/image@20161018-010005 \
    --format plain | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
2.29738 GB

--- before snap ---

GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    2698G     1292G        1405G         52.10

--- after snap ---

GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    2698G     1288G        1409G         52.24
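
It might also be worth confirming on the backup cluster that old
snapshots really get deleted, and seeing how much space each one pins.
A sketch, assuming rbd du is present in this release (without the
fast-diff feature it falls back to scanning the image, which is slow
but still works):

# rbd snap ls pool2/image
# rbd du pool2/image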

Any ideas where I should look?

regards

Stefan

