Bluestore increased disk usage

	Hello, ceph users,

I moved my cluster to bluestore (Ceph Mimic), and now I see increased
disk usage. From ceph -s:

    pools:   8 pools, 3328 pgs
    objects: 1.23 M objects, 4.6 TiB
    usage:   23 TiB used, 444 TiB / 467 TiB avail

I use 3-way replication of my data, so I would expect the disk usage
to be around 14 TiB, which was indeed the case when I used filestore-based
Luminous OSDs before. Why is the disk usage now 23 TiB?
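For clarity, here is the arithmetic behind my expectation, as a rough
Python sketch. It uses only the figures from the ceph -s output above and
ignores any bluestore metadata or allocation overhead:

    # Figures taken from the `ceph -s` output above.
    logical_tib = 4.6      # logical object data across all pools
    replication = 3        # 3-way replicated pools
    observed_tib = 23.0    # "usage: 23 TiB used"

    expected_tib = logical_tib * replication
    print(f"expected raw usage: ~{expected_tib:.1f} TiB")    # ~13.8 TiB
    print(f"observed raw usage: {observed_tib:.1f} TiB")     # 23.0 TiB
    print(f"amplification: {observed_tib / logical_tib:.1f}x "
          f"instead of the expected {replication}x")          # 5.0x vs 3x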

If I remember it correctly (a big if!), the disk usage was about the same
when I originally moved the data to empty bluestore OSDs by changing the
crush rule, but went up after I added more bluestore OSDs and the cluster
rebalanced itself.

Could it be some miscalculation of free space in bluestore? Also, could it be
related to the HEALTH_ERR backfill_toofull problem discussed here in another
thread?

Thanks,

-Yenya

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


