Total free space in addition to MAX AVAIL

I recently learned that 'MAX AVAIL' in the 'ceph df' output doesn't
represent what I thought it did.  It actually represents the amount of
data that can be written before the first OSD becomes full, and not the
sum of all free space across a set of OSDs.  This means that balancing
the data with 'ceph osd reweight' will actually increase the value of
'MAX AVAIL'.
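
To make that concrete, here's a toy model with made-up numbers (not
Ceph's exact formula, which also accounts for CRUSH weights,
replication, and the full ratio), showing why 'MAX AVAIL' tracks the
fullest OSD rather than the sum:

    # Toy model, hypothetical numbers: equal-weight OSDs, even
    # data distribution across them.
    free_gib = [400, 300, 100]                 # free space per OSD
    total_free = sum(free_gib)                 # 800 GiB raw free space
    max_avail = len(free_gib) * min(free_gib)  # ~300 GiB before the
                                               # 100 GiB OSD fills up
    print(total_free, max_avail)               # balancing moves
                                               # max_avail toward total_free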

Knowing this, I would like to graph both 'MAX AVAIL' and the total free
space across two different sets of OSDs so I can get an idea of how out
of balance the cluster is.

This is where I'm running into trouble.  I have two different types of
Ceph nodes in my cluster: one with all HDDs and SSD journals, and the
other with all SSDs using co-located journals.  There isn't any cache
tiering going on, so a pool either uses the all-HDD root or the all-SSD
root, but not both.

The only method I can think of to get this information is to walk the
CRUSH tree to figure out which OSDs live under a given root, and then
use the output of 'ceph osd df -f json' to sum up the free space of
those OSDs.
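
For reference, here's a rough sketch of that approach in Python.  It
assumes the 'nodes'/'children' layout that 'ceph osd tree -f json'
emits and the 'kb_avail' field from 'ceph osd df -f json'; field names
may vary between releases, so treat it as a starting point:

    #!/usr/bin/env python
    # Sum the free space of all OSDs under one CRUSH root.
    import json
    import subprocess
    import sys

    def ceph_json(*args):
        # Run a ceph CLI command and parse its JSON output.
        out = subprocess.check_output(("ceph",) + args + ("-f", "json"))
        return json.loads(out)

    def osds_under(root_name):
        # Walk the CRUSH tree down from the named root, collecting OSD ids.
        nodes = {n["id"]: n for n in ceph_json("osd", "tree")["nodes"]}
        roots = [n for n in nodes.values() if n["name"] == root_name]
        if not roots:
            sys.exit("no CRUSH node named %r" % root_name)
        stack, osds = list(roots[0].get("children", [])), set()
        while stack:
            node = nodes[stack.pop()]
            if node["type"] == "osd":
                osds.add(node["id"])
            else:
                stack.extend(node.get("children", []))
        return osds

    def free_kb(osd_ids):
        # Sum 'kb_avail' over the given OSDs from 'ceph osd df'.
        return sum(n["kb_avail"] for n in ceph_json("osd", "df")["nodes"]
                   if n["id"] in osd_ids)

    root = sys.argv[1] if len(sys.argv) > 1 else "default"
    print("%s: %d KiB free" % (root, free_kb(osds_under(root))))

Is there a better way?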

Thanks,
Bryan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


