Hi,

On Thursday, 16.11.2017, at 13:44 +0100, Burkhard Linke wrote:
> > What remains is the growth of used data in the cluster.
> >
> > I put background information on our cluster and some graphs of
> > different metrics on a wiki page:
> >
> > https://wiki.mur.at/Dokumentation/CephCluster
> >
> > Basically we need to reduce the growth in the cluster, but since we
> > are not sure what is causing it, we don't know where to start.
>
> Just a wild guess (wiki page is not accessible yet):

Oh damn, sorry! Fixed that. The wiki page is accessible now.

> Are you sure that the journals were created on the new SSD? If the
> journals were created as files in the OSD directory, their size might
> be accounted for in the cluster size report (assuming OSDs are
> reporting their free space, not a sum of all object sizes).

Yes, I am sure. I just checked, and all the journal symlinks point to
the correct devices (a quick loop for checking all OSDs at once is in
the P.S. below). See OSD 5 as an example:

ls -l /var/lib/ceph/osd/ceph-5
total 64
-rw-r--r--   1 root root   481 Mar 30  2017 activate.monmap
-rw-r--r--   1 ceph ceph     3 Mar 30  2017 active
-rw-r--r--   1 ceph ceph    37 Mar 30  2017 ceph_fsid
drwxr-xr-x 342 ceph ceph 12288 Apr  6  2017 current
-rw-r--r--   1 ceph ceph    37 Mar 30  2017 fsid
lrwxrwxrwx   1 root root    58 Oct 17 14:43 journal -> /dev/disk/by-partuuid/f04832e3-2f09-460e-806f-4a6fe7aa1425
-rw-r--r--   1 ceph ceph    37 Oct 25 11:12 journal_uuid
-rw-------   1 ceph ceph    56 Mar 30  2017 keyring
-rw-r--r--   1 ceph ceph    21 Mar 30  2017 magic
-rw-r--r--   1 ceph ceph     6 Mar 30  2017 ready
-rw-r--r--   1 ceph ceph     4 Mar 30  2017 store_version
-rw-r--r--   1 ceph ceph    53 Mar 30  2017 superblock
-rw-r--r--   1 ceph ceph     0 Nov  7 11:45 systemd
-rw-r--r--   1 ceph ceph    10 Mar 30  2017 type
-rw-r--r--   1 ceph ceph     2 Mar 30  2017 whoami

Regards,
--
J.Hofmüller

Nisiti
  - Abie Nathan, 1927-2008
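P.S.: In case it is useful to anyone else on the list, a quick way to spot-check the journal target of every OSD on a host (just a rough sketch, assuming the standard /var/lib/ceph/osd/ceph-* layout shown above):

    # Print each OSD directory and the device its journal symlink resolves to
    for osd in /var/lib/ceph/osd/ceph-*; do
        printf '%s -> %s\n' "$osd" "$(readlink -f "$osd/journal")"
    done

If any of those targets resolves to a regular file inside the OSD directory rather than a /dev/disk/... block device, that journal's size might indeed show up in the reported cluster usage, as Burkhard suggested.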