Re: My new OSDs are not behaving normally?

Hi,

Is the balancer on, and which mode is enabled?

ceph balancer status
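
If it turns out to be off, a minimal sketch for enabling it in upmap mode (my assumption of the mode to use here; upmap needs all clients to be Luminous or newer) would be:

ceph balancer mode upmap
ceph balancer on
ceph balancer eval    # shows the current distribution score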

You should definitely split PGs; aim for 100-150 PGs per OSD at first. I would inspect the PG sizes of the new OSDs:

ceph pg ls-by-osd 288 (column BYTES)

and compare them with the older OSDs. If your PGs are very large, only a few of them can fill up an OSD quite quickly, since your OSDs are "only" 1.7 TB.
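
As a rough sanity check (assuming replicated pools with size 3, which is just my guess): 8192 PGs x 3 replicas / 384 OSDs comes out to about 64 PGs per OSD, so landing in the 100-150 range would mean roughly doubling the pg_num of the largest pool(s). A minimal sketch, with a hypothetical pool name and target value:

ceph osd pool get <pool> pg_num        # check the current value first
ceph osd pool set <pool> pg_num 4096   # example target, pick per pool

As far as I know, since Nautilus the splitting and the matching pgp_num increase happen gradually in the background, so expect some remapping traffic for a while.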

Quoting Yunus Emre Sarıpınar <yunusemresaripinar@xxxxxxxxx>:

I have 6 SATA SSDs and 12 OSDs per server in a 24-server cluster. This environment was created on the Nautilus version.

I upgraded this environment to Octopus 6 months ago. The cluster has been healthy.

I added 8 new servers and set them up the same way, with 6 SATA SSDs and 12 OSDs each.

I did not change the number of PGs in the environment; I have 8192 PGs.

The problem is that in my ceph -s output the remapped PG and misplaced object states are gone, but there is a warning of 6 nearfull OSDs and 4 pools nearfull.

In the ceph df output I saw that my pools are also fuller than normal.

In the output of the ceph osd df tree command, I observed that the occupancy percentages of the newly added OSDs were around 80%, while those of the old OSDs were around 30%.

How do I equalize this situation?

Note: I am attaching the crushmap and osd df tree output.
My new OSDs are 288-384.
My new servers are ekuark13,14,15,16 and bkuark13,14,15,16.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



