Perfect!
On 02/03/2018 19:18, Igor Fedotov wrote:
Yes, by default BlueStore reports 1GB per OSD as used by
BlueFS.
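If you want to see where that space actually sits on a given OSD, something
like this should show it (a rough sketch; osd.0 is just an example id, and the
perf dump has to be run on the node hosting that OSD):

    $ ceph osd df                    # per-OSD used/available, even with no pools
    $ ceph daemon osd.0 perf dump    # the "bluefs" section should list the
                                     # db_used_bytes / wal_used_bytes / slow_used_bytes counters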
On 3/2/2018 8:10 PM, Max Cuttins wrote:
Umh....
Taking a look at your computation, I think the overhead is
really about 1.1GB per OSD. I have 9 NVMe OSDs alive right
now, so about 9.5GB of overhead.
So I guess this is just the correct behaviour.
Fine!
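For what it's worth, a quick back-of-the-envelope check against the figures
quoted below (just rounding the numbers already in this thread):

    323 GB used / 289 OSDs ≈ 1.12 GB per OSD   (David's cluster)
    9 OSDs x ~1.05 GB      ≈ 9.5 GB total      (this cluster)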
On 02/03/2018 15:18, David Turner wrote:
Here is a `ceph -s` [1] on a brand new cluster that has
never had any pools created or any data put into it at all.
323GB used out of 2.3PB; that's 0.01% overhead, but we're
using 10TB disks for this cluster, and the overhead is
more per OSD than per TB. It is 1.1GB overhead per OSD.
34 of the OSDs are pure NVMe and the other 255 have
collocated DBs with their WAL on flash.
The used space you're seeing is most likely just OSD
overhead, but you can double check whether there are any
orphaned rados objects using up space with a `rados ls`.
Another thing to note is that deleting a pool in Ceph is not
instant; it goes into garbage collection and is taken care
of over time. Most likely you're just looking at OSD
overhead, though.
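If you want to rule out leftover objects, a rough check along these lines
should do it (the pool name is just a placeholder):

    $ ceph df                         # per-pool usage and object counts
    $ rados df                        # the same view from the rados side
    $ rados -p <pool-name> ls | head  # list objects in any pool that still shows usage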
[1]
$ ceph -s
  cluster:
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum mon1,mon2,mon4,mon3,mon5
    mgr: mon1(active), standbys: mon3, mon2, mon5, mon4
    osd: 289 osds: 289 up, 289 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   323 GB used, 2324 TB / 2324 TB avail
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com