  cluster:
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum mon1,mon2,mon4,mon3,mon5
    mgr: mon1(active), standbys: mon3, mon2, mon5, mon4
    osd: 289 osds: 289 up, 289 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   323 GB used, 2324 TB / 2324 TB avail
How can I analyze this?
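I guess something like the following would show where those bytes sit, if these are the right commands:

    ceph df detail     # cluster-wide and per-pool usage
    ceph osd df tree   # per-OSD used/available space
    rados df           # per-pool objects and bytes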
_______________________________________________
On 02/03/2018 12:18, Gonzalo Aguilar Delgado wrote:
Hi Max,
No, that's not normal: 9 GB for an empty cluster. Maybe you reserved some space, or some other service is taking it, but it seems way too much to me.
On 02/03/18 at 12:09, Max Cuttins wrote:
I don't care about getting that space back.
And maybe I left something behind along the way.
I just want to know if it's expected or not.
That's because I ran several rados bench tests with the --no-cleanup flag.
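For reference, when the pool still exists, the leftover bench objects can usually be dropped with something like this (the pool name here is just a placeholder):

    rados -p rbd ls | grep benchmark_data   # list objects left behind by rados bench
    rados -p rbd cleanup                    # remove the objects written with --no-cleanup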
On 02/03/2018 11:35, Janne Johansson wrote:
2018-03-02 11:21 GMT+01:00 Max Cuttins <max@xxxxxxxxxxxxx>:
Hi everybody,
I deleted everything from the cluster after some tests with RBD.
Now I see that there is still something in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   9510 MB used, 8038 GB / 8048 GB avail
    pgs:
Is this the overhead of the BlueStore journal/WAL?
Or is something wrong and should this be zero?
People setting up new clusters also see this; there are overhead items and other things that eat some space, so it would never be zero. In your case it seems to be close to 0.1%, so just live with it and move on to using your 8 TB for what you really needed it for.
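If you want to see where it sits, the per-OSD numbers and the BlueFS counters should show it is just the OSDs' own bookkeeping (osd.0 is only an example id, and the daemon command has to be run on the host carrying that OSD):

    ceph osd df                    # the USE column shows the small per-OSD overhead
    ceph daemon osd.0 perf dump    # look at the bluefs section for DB/WAL space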
In almost no case will I think that "if only I could get those 0.1% back, then my cluster would be great again".
Storage clusters should probably have something like 10% "admin" margins, so if ceph warns and whines at OSDs being 85% full, then at 75% you should be writing orders for more disks and/or more storage nodes.
At that point, regardless of where the "miscalculation" is, or where ceph manages to waste 9500 MB while you think it should be zero, it will be all but impossible to do anything worthwhile with it even if you were to get those 0.1% back with some magic command.
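The thresholds themselves are easy to check, and to tighten if you want an earlier warning (0.75 is only an example value):

    ceph osd dump | grep ratio           # shows full_ratio, backfillfull_ratio, nearfull_ratio
    ceph osd set-nearfull-ratio 0.75     # warn earlier than the default 0.85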
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com