Hello Ceph users!
I have a question regarding Ceph data usage and RADOS Gateway multisite
replication.
Our test cluster has the following setup:
* 3 monitors
* 12 OSDs (raw size: 5 GB each, journal size: 1 GB, journal colocated on the same drive)
* osd pool default size set to 2, min size set to 1
* osd pool pg_num and pgp_num set to 256 each (a quick way to verify these pool settings is sketched right after this list)
* 2 RADOS Gateway hosts
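For completeness, this is roughly how the pool settings can be double-checked on our side; the pool name is just one of the RGW pools from the ceph df output further down, any of them would do:

  # Verify replication and PG settings on one of the RGW pools
  ceph osd pool get cluster.rgw.buckets.data size       # expected: size: 2
  ceph osd pool get cluster.rgw.buckets.data min_size   # expected: min_size: 1
  ceph osd pool get cluster.rgw.buckets.data pg_num     # expected: pg_num: 256
  ceph osd pool get cluster.rgw.buckets.data pgp_num    # expected: pgp_num: 256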
This test cluster replicates to another cluster using a multisite
configuration (1 master zone, 1 passive secondary zone).
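In case it helps, the replication state on the primary side can be inspected with the usual commands (run on one of the RGW hosts; this is just the generic invocation, not our actual output):

  # Show the zonegroup layout and the current metadata/data sync state
  radosgw-admin zonegroup get
  radosgw-admin sync status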
On the cluster holding the master zone, the Ceph data usage (i.e. the BlueFS
DB used bytes) is growing rapidly, roughly 3-4 MB per minute on some OSDs.
I do not understand why this is happening, since there is no client activity
on this cluster (apart from the multisite replication itself).
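This is how I am reading the "bluefs db used bytes" figure, in case my interpretation is off; osd.0 is only an example id, and the command has to be run on the host carrying that OSD:

  # Dump the BlueFS perf counters of one OSD via its admin socket;
  # the value in question is db_used_bytes (re-running this every
  # minute is what shows the ~3-4 MB/min growth).
  ceph daemon osd.0 perf dump bluefs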
The data usage stopped growing as soon as I stopped the radosgw daemon on
the two radosgw nodes.
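For reference, the gateways were stopped via systemd; assuming the stock packaging, it was something along these lines (the per-instance unit name depends on how the daemon was deployed, so take it as an illustration):

  # Stop every radosgw instance on a gateway node
  systemctl stop ceph-radosgw.target
  # or a single instance, if the unit is named after the host
  systemctl stop ceph-radosgw@rgw.$(hostname -s)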
Some insights (ceph -s and ceph df detail output from the primary cluster):
  cluster:
    id:     2496e97d-4a89-4a3d-82cf-42f3570bf444
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephmon00-staging,cephmon01-staging,cephmon02-staging
    mgr: cephmon00-staging(active), standbys: cephmon01-staging, cephmon02-staging
    osd: 12 osds: 12 up, 12 in
    rgw: 2 daemons active

  data:
    pools:   7 pools, 1792 pgs
    objects: 1206 objects, 2063 MB
    usage:   29898 MB used, 31493 MB / 61392 MB avail
    pgs:     1792 active+clean

  io:
    client:   1278 B/s rd, 1 op/s rd, 0 op/s wr
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED    OBJECTS
    61392M    31508M    29883M      48.68        1206
POOLS:
    NAME                          ID    QUOTA OBJECTS    QUOTA BYTES    USED     %USED    MAX AVAIL    OBJECTS    DIRTY    READ     WRITE    RAW USED
    .rgw.root                     50    N/A              N/A            14852    0        4875M        26         26       213k     153      29704
    cluster.rgw.control           51    N/A              N/A            0        0        4875M        8          8        0        0        0
    cluster.rgw.meta              52    N/A              N/A            2506     0        4875M        11         11       8838     195      5012
    cluster.rgw.log               53    N/A              N/A            5303     0        4875M        611        611      5599k    2092k    10606
    cluster.rgw.buckets.index     54    N/A              N/A            0        0        4875M        5          5        441k     568k     0
    cluster.rgw.buckets.data      55    N/A              N/A            2063M    29.73    4875M        545        545      273k     358k     4126M
    cluster.rgw.buckets.non-ec    56    N/A              N/A            0        0        4875M        0          0        1009     546      0
Thanks for your help, guys.
Florian