Low space hindering backfill and 2 backfillfull osd(s)

Hi,

I've added 5 more nodes to my cluster and now I'm getting this issue:
HEALTH_WARN 2 backfillfull osd(s); 17 pool(s) backfillfull; Low space hindering backfill (add storage if this doesn't resolve itself): 4 pgs backfill_toofull
OSD_BACKFILLFULL 2 backfillfull osd(s)
    osd.150 is backfill full
    osd.178 is backfill full

I read on the mailing list that I might need to increase pg_num on some of the pools to get smaller PGs.
I also read that I might need to reweight the mentioned full OSDs with 1.2 until it's OK, then set them back.
Which would be the best solution?
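
To make it concrete, these are the commands I think the two suggestions translate to (the pool name and pg_num value below are only placeholders, and since ceph osd reweight only accepts values between 0.0 and 1.0 I assume the idea is to lower the reweight of the full OSDs and set it back to 1 afterwards):

    # option A: split the PGs of the largest pool so each PG holds less data
    # (placeholder pool name and pg_num; pgp_num may need to follow on older releases)
    ceph osd pool set <poolname> pg_num 4096
    ceph osd pool set <poolname> pgp_num 4096

    # option B: temporarily move data off the two backfillfull OSDs,
    # then set the reweight back to 1.00000 once the backfill has finished
    ceph osd reweight osd.150 0.9
    ceph osd reweight osd.178 0.9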

There is still about 17% of the rebalance left. Should I leave it and wait until it finishes, or should I take some action now?

  data:
    pools:   17 pools, 2816 pgs
    objects: 87.76M objects, 158 TiB
    usage:   442 TiB used, 424 TiB / 866 TiB avail
    pgs:     31046292/175526278 objects misplaced (17.688%)
             2235 active+clean
             543  active+remapped+backfill_wait
             29   active+remapped+backfilling
             6    active+remapped+backfill_wait+backfill_toofull
             1    active+remapped+backfill_toofull
             1    active+clean+scrubbing+deep
             1    active+clean+scrubbing

  io:
    client:   760 MiB/s rd, 573 MiB/s wr, 26.24k op/s rd, 18.18k op/s wr
    recovery: 10 GiB/s, 2.82k objects/s

These are the fullest OSDs; each NVMe drive has 4 OSDs on it:

 ID CLASS  WEIGHT REWEIGHT    SIZE  RAW USE     DATA    OMAP    META    AVAIL  %USE  VAR PGS STATUS
184  nvme 3.49269  1.00000 3.5 TiB  2.9 TiB  2.9 TiB 145 MiB 5.5 GiB  643 GiB 82.01 1.61  26     up
208  nvme 3.49269  1.00000 3.5 TiB  2.9 TiB  2.8 TiB 152 MiB 5.5 GiB  655 GiB 81.70 1.60  20     up
178  nvme 3.49269  1.00000 3.5 TiB  2.7 TiB  2.7 TiB 134 MiB 5.4 GiB  769 GiB 78.48 1.54  20     up
164  nvme 3.49269  1.00000 3.5 TiB  2.6 TiB  2.6 TiB 123 MiB 5.1 GiB  884 GiB 75.28 1.47  31     up
188  nvme 3.49269  1.00000 3.5 TiB  2.6 TiB  2.6 TiB 143 MiB 5.2 GiB  902 GiB 74.79 1.46  20     up


Side note:

  *   Before, the cluster had only 4 nodes, with 8 drives in each node.
  *   The 5 added nodes have 6 drives each; the plan was to move 1 NVMe out of the existing nodes and add it to the new ones, so the final setup would be 7 drives in each of the 9 nodes.

Thank you for your help and ideas.
