There's some metadata on BlueStore OSDs (the RocksDB database); it's usually ~1% of your data.
The DB will start out at a size of around 1GB, so that's expected.
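If you want to confirm that this is just BlueStore/BlueFS overhead rather than object data, the per-OSD view is the easiest check (osd.0 below is only an example id; run the daemon command on the host that carries that OSD):

    ceph osd df                          # per-OSD usage; fresh, empty BlueStore OSDs each show roughly 1 GB used
    ceph daemon osd.0 perf dump bluefs   # BlueFS/RocksDB space accounting for a single OSD

With 6 OSDs at roughly 1 GB of initial RocksDB data each, the ~6 GB you see in ceph -s is about what I'd expect.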
Paul
2018-06-04 15:55 GMT+02:00 Marc-Antoine Desrochers <marc-antoine.desrochers@xxxxxxxxxxx>:
Hi,
I'm not sure if it's normal or not, but each time I add a new OSD with ceph-deploy osd create --data /dev/sdg ceph-n1, it adds 1 GB to my global data usage. I just formatted the drive, so it's supposed to be at 0, right?
So with 6 OSDs in my cluster it already uses 6 GiB.
[root@ceph-n1 ~]# ceph -s
  cluster:
    id:     1d97aa70-2029-463a-b6fa-20e98f3e21fb
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-n1
    mgr: ceph-n1(active)
    mds: cephfs-1/1/1 up {0=ceph-n1=up:active}
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 600 pgs
    objects: 341 objects, 63109 kB
    usage:   6324 MB used, 2782 GB / 2788 GB avail
    pgs:     600 active+clean
So I'm kind of confused...
Thanks for your help.
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90