Re: Remove empty orphaned PGs not mapped to a pool


 



Hello Eugen,

yes, we followed the documentation and everything worked fine. The cache is gone.

Removing the pool worked well. Everything is clean.

The PGs are empty and active+clean.
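For reference, this is roughly how we checked that the PGs are empty (OSD id 23 is one of the three remaining OSDs, 3.0 one of the leftover PGs of the removed pool, id 3):

```shell
# List all PGs hosted on one of the remaining SSD OSDs;
# OBJECTS and BYTES are 0 for every PG of the removed pool.
ceph pg ls-by-osd 23

# Cross-check a single PG; "num_objects": 0 confirms it is empty.
ceph pg 3.0 query | grep num_objects
```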

Possible solutions:

1.

ceph pg {pg-id} mark_unfound_lost delete

I do not think this is the right way, since it is meant for PGs with unfound objects, but it might work as well.
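For option 1, the per-PG sequence would look like this (3.0 is an example PG id; since these PGs report no unfound objects, the effect of the delete command on them is uncertain):

```shell
# Check for unfound objects first -- for these PGs the count is 0,
# which is why this command is probably not the intended tool here.
ceph pg 3.0 list_unfound

# The delete variant would then be run per PG:
ceph pg 3.0 mark_unfound_lost delete
```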

2.

Run the following for each of the three disks:

ceph osd lost {osd-id}

I am not sure how the cluster will react to this.
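A sketch of what option 2 would look like for all three OSDs at once (the ids 21-23 are placeholders for the actual three remaining OSDs; the command is destructive):

```shell
# Mark each of the three remaining OSDs as lost so the cluster
# stops expecting data from them. Requires explicit confirmation.
for id in 21 22 23; do
    ceph osd lost $id --yes-i-really-mean-it
done
```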

3.

ceph-objectstore-tool --data-path /path/to/osd --op remove --pgid 3.0 --force

But will the cluster then accept that the PG has been removed?
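For option 3, the tool requires the OSD daemon to be stopped, and the removal would have to be repeated on every OSD holding a replica of the PG. A sketch, assuming a default non-containerized deployment (OSD id 23 and the data path are examples):

```shell
# Stop the OSD before touching its object store.
systemctl stop ceph-osd@23

# Remove the leftover PG from this OSD's store.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-23 \
    --op remove --pgid 3.0 --force

# Bring the OSD back up afterwards.
systemctl start ceph-osd@23
```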

4.

The three disks are still present in the CRUSH map, class ssd, each as a single OSD under its own host entry.

What if I remove them from the CRUSH map?
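A sketch of what option 4 could look like (OSD ids 21-23 are again placeholders; `ceph osd purge` also removes the CRUSH entry and the auth key):

```shell
# Remove the three OSDs from the CRUSH map, mark them out,
# and finally purge them entirely from the cluster.
for id in 21 22 23; do
    ceph osd crush remove osd.$id
    ceph osd out $id
    ceph osd purge $id --yes-i-really-mean-it
done
```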

Do you have a better idea, Eugen?

Best,
Malte

On 04.10.23 at 09:21, Eugen Block wrote:
Hi,

just for clarity, you're actually talking about the cache tier as described in the docs [1]? And you followed the steps until 'ceph osd tier remove cold-storage hot-storage' successfully? And the pool has been really deleted successfully ('ceph osd pool ls detail')?

[1] https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#removing-a-cache-tier

Quoting Malte Stroem <malte.stroem@xxxxxxxxx>:

Hello,

we removed an SSD cache tier and its pool.

The PGs for the pool do still exist.

The cluster is healthy.

The PGs are empty and they reside on the cache tier pool's SSDs.

We would like to take the disks out, but it is not possible: the cluster still sees the PGs and answers with HEALTH_WARN.

Because of the replication factor of three, there are still 128 PGs on three of the 24 OSDs. We were able to remove the other OSDs.

Summary:

- pool removed
- 3 x 128 empty PGs still exist
- 3 of 24 OSDs still exist

How is it possible to remove these empty and healthy PGs?

The only way I found was something like:

ceph pg {pg-id} mark_unfound_lost delete

Is that the right way?

Some output of:

ceph pg ls-by-osd 23

PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION  REPORTED         UP            ACTING        SCRUB_STAMP                      DEEP_SCRUB_STAMP
3.0  0        0         0          0        0      0            0           0    active+clean  27h    0'0      2627265:196316   [15,6,23]p15  [15,6,23]p15  2023-09-28T12:41:52.982955+0200  2023-09-27T06:48:23.265838+0200
3.1  0        0         0          0        0      0            0           0    active+clean  9h     0'0      2627266:19330    [6,23,15]p6   [6,23,15]p6   2023-09-29T06:30:57.630016+0200  2023-09-27T22:58:21.992451+0200
3.2  0        0         0          0        0      0            0           0    active+clean  2h     0'0      2627265:1135185  [23,15,6]p23  [23,15,6]p23  2023-09-29T13:42:07.346658+0200  2023-09-24T14:31:52.844427+0200
3.3  0        0         0          0        0      0            0           0    active+clean  13h    0'0      2627266:193170   [6,15,23]p6   [6,15,23]p6   2023-09-29T01:56:54.517337+0200  2023-09-27T17:47:24.961279+0200
3.4  0        0         0          0        0      0            0           0    active+clean  14h    0'0      2627265:2343551  [23,6,15]p23  [23,6,15]p23  2023-09-29T00:47:47.548860+0200  2023-09-25T09:39:51.259304+0200
3.5  0        0         0          0        0      0            0           0    active+clean  2h     0'0      2627265:194111   [15,6,23]p15  [15,6,23]p15  2023-09-29T13:28:48.879959+0200  2023-09-26T15:35:44.217302+0200
3.6  0        0         0          0        0      0            0           0    active+clean  6h     0'0      2627265:2345717  [23,15,6]p23  [23,15,6]p23  2023-09-29T09:26:02.534825+0200  2023-09-27T21:56:57.500126+0200

Best regards,
Malte
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





