Data distribution question

Hi,

We have a cluster with 235 OSDs running version 12.2.11, with a mix of 4 TB and 6 TB drives.  OSD utilization varies from 52% to 94%.

I have been trying to figure out how to get this more balanced, as we are running into 'backfillfull' issues on a regular basis.

I've tried adding more PGs (see below), but this did not seem to do much in terms of the imbalance.
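
For reference, this is roughly what I ran to bump the PG count (pool and PG-count values are placeholders; the point is that pgp_num has to be raised to match pg_num, otherwise the new PGs don't actually remap any data):

    # <poolname> and <new_pg_num> are placeholders for our main pool
    ceph osd pool set <poolname> pg_num <new_pg_num>
    ceph osd pool set <poolname> pgp_num <new_pg_num>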

Here is the summary line from the end of 'ceph osd df':

MIN/MAX VAR: 0.73/1.31  STDDEV: 7.73
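
(To spot the worst OSDs, I've been sorting that output by the %USE column; treating %USE as the 8th whitespace-separated column is an assumption based on the column order in this version's output:)

    # %USE appears to be field 8 in 'ceph osd df' on 12.2.x
    ceph osd df | sort -rnk8 | head -20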

We have 8199 PGs total, with 6775 of them in the pool that holds 97% of the data.

The other pools (data, metadata, .rgw.root, .rgw.control, etc.) are not really used.  I have thought about deleting those unused pools so that most, if not all, of the PGs serve the pool with the majority of the data.
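
If I do go that route, I assume the deletion would look something like this (mon_allow_pool_delete has to be enabled first, and the pool name is given twice as a safety check; 'data' here is just one of the unused pools):

    # temporarily allow pool deletion on the mons
    ceph tell mon.\* injectargs '--mon_allow_pool_delete=true'
    # pool name repeated intentionally as a confirmation
    ceph osd pool delete data data --yes-i-really-really-mean-it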

However, before I do that: is there anything else I can do or try to balance out the data more uniformly?
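
In case it helps frame answers: the two approaches I'm aware of but haven't tried yet are reweight-by-utilization and the mgr balancer module.  If I'm reading the docs right, it would be something like the following (the 110 threshold is just a percent-of-average cutoff I picked as an example, not something we've tuned):

    # dry run first: show what reweight-by-utilization would change
    ceph osd test-reweight-by-utilization 110
    ceph osd reweight-by-utilization 110

    # or the mgr balancer; crush-compat mode should be safe with older clients
    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on

Is one of those the right direction on 12.2.11, or is there a better option?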

Thanks in advance,

Shain

--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



