Hi Yenya,
My guess is that Ceph counts the size of all your block.db devices toward the cluster's total used space.
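One way to check that guess (just a rough sketch; osd.0 is a placeholder id, and the daemon command has to be run on the host where that OSD lives) is to compare the cluster-wide accounting with the BlueFS/BlueStore counters of one OSD:

    # cluster-wide raw vs. pool-level usage
    ceph df detail

    # size and usage of the DB device as seen by BlueFS, plus allocated vs. stored bytes
    ceph daemon osd.0 perf dump | grep -E 'db_total_bytes|db_used_bytes|bluestore_allocated|bluestore_stored'

If db_total_bytes summed over your OSDs roughly matches the extra space you are seeing, that would point in this direction.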
Regards,
Jakub
On Fri, 8 Feb 2019 at 10:11, Jan Kasprzak <kas@xxxxxxxxxx> wrote:
Hello, ceph users,
I moved my cluster to bluestore (Ceph Mimic), and I am now seeing increased
disk usage. From ceph -s:
pools: 8 pools, 3328 pgs
objects: 1.23 M objects, 4.6 TiB
usage: 23 TiB used, 444 TiB / 467 TiB avail
I use 3-way replication of my data, so I would expect the disk usage
to be around 14 TiB, which was indeed the case with my previous filestore-based
Luminous OSDs. Why is the disk usage now 23 TiB?
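In raw numbers: 3 replicas of 4.6 TiB is about 3 x 4.6 = 13.8 TiB, so the
reported 23 TiB leaves roughly 9 TiB I cannot account for.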
If I remember correctly (a big if!), the disk usage was about the same
when I originally moved the data to empty bluestore OSDs by changing the
crush rule, but it went up after I added more bluestore OSDs and the cluster
rebalanced itself.
Could it be some miscalculation of free space in bluestore? Also, could it be
related to the HEALTH_ERR backfill_toofull problem discussed here in the other
thread?
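I can pull the per-OSD picture if that helps; the commands I would use
(a rough sketch, nothing cluster-specific assumed) are:

    # which PGs and OSDs are reported as backfill_toofull
    ceph health detail

    # per-OSD SIZE/USE/%USE, to see whether some OSDs look inflated
    ceph osd df tree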
Thanks,
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
This is the world we live in: the way to deal with computers is to google
the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com